problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_10958 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1760 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] v12.4 breaks PicklePersistence
<!--
Thanks for reporting issues of python-telegram-bot!
Use this template to notify us if you found a bug.
To make it easier for us to help you please enter detailed information below.
Please note, we only support the latest version of python-telegram-bot and
master branch. Please make sure to upgrade & recreate the issue on the latest
version prior to opening an issue.
-->
### Steps to reproduce
1. Have a bot using PicklePersistence with singlefile=True
2. Upgrade to v12.4
3. restart bot
### Expected behaviour
pickled file is read correctly
### Actual behaviour
key error `bot_data` is thrown
### Current workaround:
Add an empty dict `bot_data` to the file manually. Quick and dirty script:
```
import pickle
filename = 'my_pickle_persistence_file'
with (open(filename, 'rb')) as file:
data = pickle.load(file)
data['bot_data'] = {}
with open(filename, 'wb') as f:
pickle.dump(data, f)
```
Will be closed by #1760
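For reference, the fix in #1760 (visible in the diff below) amounts to reading the missing key defensively with `dict.get` instead of patching files by hand. A minimal standalone sketch of the same idea; the `load_persistence` helper here is illustrative and not part of the library:
```
import pickle
from collections import defaultdict

def load_persistence(filename):
    # Tolerate pickle files written before v12.4, which lack the 'bot_data' key.
    with open(filename, 'rb') as f:
        data = pickle.load(f)
    return {
        'user_data': defaultdict(dict, data.get('user_data', {})),
        'chat_data': defaultdict(dict, data.get('chat_data', {})),
        'bot_data': data.get('bot_data', {}),
        'conversations': data.get('conversations', {}),
    }
```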
</issue>
<code>
[start of telegram/ext/picklepersistence.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2020
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the PicklePersistence class."""
20 import pickle
21 from collections import defaultdict
22 from copy import deepcopy
23
24 from telegram.ext import BasePersistence
25
26
27 class PicklePersistence(BasePersistence):
28 """Using python's builtin pickle for making you bot persistent.
29
30 Attributes:
31 filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`
32 is false this will be used as a prefix.
33 store_user_data (:obj:`bool`): Optional. Whether user_data should be saved by this
34 persistence class.
35 store_chat_data (:obj:`bool`): Optional. Whether user_data should be saved by this
36 persistence class.
37 store_bot_data (:obj:`bool`): Optional. Whether bot_data should be saved by this
38 persistence class.
39 single_file (:obj:`bool`): Optional. When ``False`` will store 3 sperate files of
40 `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is
41 ``True``.
42 on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`
43 is called and keep data in memory until that happens. When ``False`` will store data
44 on any transaction *and* on call fo :meth:`flush`. Default is ``False``.
45
46 Args:
47 filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`
48 is false this will be used as a prefix.
49 store_user_data (:obj:`bool`, optional): Whether user_data should be saved by this
50 persistence class. Default is ``True``.
51 store_chat_data (:obj:`bool`, optional): Whether user_data should be saved by this
52 persistence class. Default is ``True``.
53 store_bot_data (:obj:`bool`, optional): Whether bot_data should be saved by this
54 persistence class. Default is ``True`` .
55 single_file (:obj:`bool`, optional): When ``False`` will store 3 sperate files of
56 `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is
57 ``True``.
58 on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`
59 is called and keep data in memory until that happens. When ``False`` will store data
60 on any transaction *and* on call fo :meth:`flush`. Default is ``False``.
61 """
62
63 def __init__(self, filename,
64 store_user_data=True,
65 store_chat_data=True,
66 store_bot_data=True,
67 single_file=True,
68 on_flush=False):
69 super(PicklePersistence, self).__init__(store_user_data=store_user_data,
70 store_chat_data=store_chat_data,
71 store_bot_data=store_bot_data)
72 self.filename = filename
73 self.single_file = single_file
74 self.on_flush = on_flush
75 self.user_data = None
76 self.chat_data = None
77 self.bot_data = None
78 self.conversations = None
79
80 def load_singlefile(self):
81 try:
82 filename = self.filename
83 with open(self.filename, "rb") as f:
84 all = pickle.load(f)
85 self.user_data = defaultdict(dict, all['user_data'])
86 self.chat_data = defaultdict(dict, all['chat_data'])
87 self.bot_data = all['bot_data']
88 self.conversations = all['conversations']
89 except IOError:
90 self.conversations = {}
91 self.user_data = defaultdict(dict)
92 self.chat_data = defaultdict(dict)
93 self.bot_data = {}
94 except pickle.UnpicklingError:
95 raise TypeError("File {} does not contain valid pickle data".format(filename))
96 except Exception:
97 raise TypeError("Something went wrong unpickling {}".format(filename))
98
99 def load_file(self, filename):
100 try:
101 with open(filename, "rb") as f:
102 return pickle.load(f)
103 except IOError:
104 return None
105 except pickle.UnpicklingError:
106 raise TypeError("File {} does not contain valid pickle data".format(filename))
107 except Exception:
108 raise TypeError("Something went wrong unpickling {}".format(filename))
109
110 def dump_singlefile(self):
111 with open(self.filename, "wb") as f:
112 all = {'conversations': self.conversations, 'user_data': self.user_data,
113 'chat_data': self.chat_data, 'bot_data': self.bot_data}
114 pickle.dump(all, f)
115
116 def dump_file(self, filename, data):
117 with open(filename, "wb") as f:
118 pickle.dump(data, f)
119
120 def get_user_data(self):
121 """Returns the user_data from the pickle file if it exsists or an empty defaultdict.
122
123 Returns:
124 :obj:`defaultdict`: The restored user data.
125 """
126 if self.user_data:
127 pass
128 elif not self.single_file:
129 filename = "{}_user_data".format(self.filename)
130 data = self.load_file(filename)
131 if not data:
132 data = defaultdict(dict)
133 else:
134 data = defaultdict(dict, data)
135 self.user_data = data
136 else:
137 self.load_singlefile()
138 return deepcopy(self.user_data)
139
140 def get_chat_data(self):
141 """Returns the chat_data from the pickle file if it exsists or an empty defaultdict.
142
143 Returns:
144 :obj:`defaultdict`: The restored chat data.
145 """
146 if self.chat_data:
147 pass
148 elif not self.single_file:
149 filename = "{}_chat_data".format(self.filename)
150 data = self.load_file(filename)
151 if not data:
152 data = defaultdict(dict)
153 else:
154 data = defaultdict(dict, data)
155 self.chat_data = data
156 else:
157 self.load_singlefile()
158 return deepcopy(self.chat_data)
159
160 def get_bot_data(self):
161 """Returns the bot_data from the pickle file if it exsists or an empty dict.
162
163 Returns:
164 :obj:`defaultdict`: The restored bot data.
165 """
166 if self.bot_data:
167 pass
168 elif not self.single_file:
169 filename = "{}_bot_data".format(self.filename)
170 data = self.load_file(filename)
171 if not data:
172 data = {}
173 self.bot_data = data
174 else:
175 self.load_singlefile()
176 return deepcopy(self.bot_data)
177
178 def get_conversations(self, name):
179 """Returns the conversations from the pickle file if it exsists or an empty defaultdict.
180
181 Args:
182 name (:obj:`str`): The handlers name.
183
184 Returns:
185 :obj:`dict`: The restored conversations for the handler.
186 """
187 if self.conversations:
188 pass
189 elif not self.single_file:
190 filename = "{}_conversations".format(self.filename)
191 data = self.load_file(filename)
192 if not data:
193 data = {name: {}}
194 self.conversations = data
195 else:
196 self.load_singlefile()
197 return self.conversations.get(name, {}).copy()
198
199 def update_conversation(self, name, key, new_state):
200 """Will update the conversations for the given handler and depending on :attr:`on_flush`
201 save the pickle file.
202
203 Args:
204 name (:obj:`str`): The handlers name.
205 key (:obj:`tuple`): The key the state is changed for.
206 new_state (:obj:`tuple` | :obj:`any`): The new state for the given key.
207 """
208 if self.conversations.setdefault(name, {}).get(key) == new_state:
209 return
210 self.conversations[name][key] = new_state
211 if not self.on_flush:
212 if not self.single_file:
213 filename = "{}_conversations".format(self.filename)
214 self.dump_file(filename, self.conversations)
215 else:
216 self.dump_singlefile()
217
218 def update_user_data(self, user_id, data):
219 """Will update the user_data (if changed) and depending on :attr:`on_flush` save the
220 pickle file.
221
222 Args:
223 user_id (:obj:`int`): The user the data might have been changed for.
224 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.user_data` [user_id].
225 """
226 if self.user_data.get(user_id) == data:
227 return
228 self.user_data[user_id] = data
229 if not self.on_flush:
230 if not self.single_file:
231 filename = "{}_user_data".format(self.filename)
232 self.dump_file(filename, self.user_data)
233 else:
234 self.dump_singlefile()
235
236 def update_chat_data(self, chat_id, data):
237 """Will update the chat_data (if changed) and depending on :attr:`on_flush` save the
238 pickle file.
239
240 Args:
241 chat_id (:obj:`int`): The chat the data might have been changed for.
242 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.chat_data` [chat_id].
243 """
244 if self.chat_data.get(chat_id) == data:
245 return
246 self.chat_data[chat_id] = data
247 if not self.on_flush:
248 if not self.single_file:
249 filename = "{}_chat_data".format(self.filename)
250 self.dump_file(filename, self.chat_data)
251 else:
252 self.dump_singlefile()
253
254 def update_bot_data(self, data):
255 """Will update the bot_data (if changed) and depending on :attr:`on_flush` save the
256 pickle file.
257
258 Args:
259 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.bot_data`.
260 """
261 if self.bot_data == data:
262 return
263 self.bot_data = data.copy()
264 if not self.on_flush:
265 if not self.single_file:
266 filename = "{}_bot_data".format(self.filename)
267 self.dump_file(filename, self.bot_data)
268 else:
269 self.dump_singlefile()
270
271 def flush(self):
272 """ Will save all data in memory to pickle file(s).
273 """
274 if self.single_file:
275 if self.user_data or self.chat_data or self.conversations:
276 self.dump_singlefile()
277 else:
278 if self.user_data:
279 self.dump_file("{}_user_data".format(self.filename), self.user_data)
280 if self.chat_data:
281 self.dump_file("{}_chat_data".format(self.filename), self.chat_data)
282 if self.bot_data:
283 self.dump_file("{}_bot_data".format(self.filename), self.bot_data)
284 if self.conversations:
285 self.dump_file("{}_conversations".format(self.filename), self.conversations)
286
[end of telegram/ext/picklepersistence.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/ext/picklepersistence.py b/telegram/ext/picklepersistence.py
--- a/telegram/ext/picklepersistence.py
+++ b/telegram/ext/picklepersistence.py
@@ -84,7 +84,8 @@
all = pickle.load(f)
self.user_data = defaultdict(dict, all['user_data'])
self.chat_data = defaultdict(dict, all['chat_data'])
- self.bot_data = all['bot_data']
+ # For backwards compatibility with files not containing bot data
+ self.bot_data = all.get('bot_data', {})
self.conversations = all['conversations']
except IOError:
self.conversations = {}
| {"golden_diff": "diff --git a/telegram/ext/picklepersistence.py b/telegram/ext/picklepersistence.py\n--- a/telegram/ext/picklepersistence.py\n+++ b/telegram/ext/picklepersistence.py\n@@ -84,7 +84,8 @@\n all = pickle.load(f)\n self.user_data = defaultdict(dict, all['user_data'])\n self.chat_data = defaultdict(dict, all['chat_data'])\n- self.bot_data = all['bot_data']\n+ # For backwards compatibility with files not containing bot data\n+ self.bot_data = all.get('bot_data', {})\n self.conversations = all['conversations']\n except IOError:\n self.conversations = {}\n", "issue": "[BUG] v12.4 breaks PicklePersistence\n<!--\r\nThanks for reporting issues of python-telegram-bot!\r\n\r\nUse this template to notify us if you found a bug.\r\n\r\nTo make it easier for us to help you please enter detailed information below.\r\n\r\nPlease note, we only support the latest version of python-telegram-bot and\r\nmaster branch. Please make sure to upgrade & recreate the issue on the latest\r\nversion prior to opening an issue.\r\n-->\r\n### Steps to reproduce\r\n1. Have a bot using PicklePersistence with singlefile=True\r\n\r\n2. Upgrade to v12.4\r\n\r\n3. restart bot\r\n\r\n### Expected behaviour\r\npickled file is read correctly\r\n\r\n### Actual behaviour\r\nkey error `bot_data` is thrown\r\n\r\n### Current workaround:\r\nAdd an empty dict `bot_data` to the file manually. Quick and dirty script:\r\n```\r\nimport pickle\r\n\r\nfilename = 'my_pickle_persistence_file'\r\n\r\nwith (open(filename, 'rb')) as file:\r\n data = pickle.load(file)\r\n\r\ndata['bot_data'] = {}\r\n\r\nwith open(filename, 'wb') as f:\r\n pickle.dump(data, f)\r\n```\r\n\r\nWill be closed by #1760 \n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2020\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the PicklePersistence class.\"\"\"\nimport pickle\nfrom collections import defaultdict\nfrom copy import deepcopy\n\nfrom telegram.ext import BasePersistence\n\n\nclass PicklePersistence(BasePersistence):\n \"\"\"Using python's builtin pickle for making you bot persistent.\n\n Attributes:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n store_chat_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n store_bot_data (:obj:`bool`): Optional. Whether bot_data should be saved by this\n persistence class.\n single_file (:obj:`bool`): Optional. When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. 
Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n\n Args:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n store_chat_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n store_bot_data (:obj:`bool`, optional): Whether bot_data should be saved by this\n persistence class. Default is ``True`` .\n single_file (:obj:`bool`, optional): When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n \"\"\"\n\n def __init__(self, filename,\n store_user_data=True,\n store_chat_data=True,\n store_bot_data=True,\n single_file=True,\n on_flush=False):\n super(PicklePersistence, self).__init__(store_user_data=store_user_data,\n store_chat_data=store_chat_data,\n store_bot_data=store_bot_data)\n self.filename = filename\n self.single_file = single_file\n self.on_flush = on_flush\n self.user_data = None\n self.chat_data = None\n self.bot_data = None\n self.conversations = None\n\n def load_singlefile(self):\n try:\n filename = self.filename\n with open(self.filename, \"rb\") as f:\n all = pickle.load(f)\n self.user_data = defaultdict(dict, all['user_data'])\n self.chat_data = defaultdict(dict, all['chat_data'])\n self.bot_data = all['bot_data']\n self.conversations = all['conversations']\n except IOError:\n self.conversations = {}\n self.user_data = defaultdict(dict)\n self.chat_data = defaultdict(dict)\n self.bot_data = {}\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def load_file(self, filename):\n try:\n with open(filename, \"rb\") as f:\n return pickle.load(f)\n except IOError:\n return None\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def dump_singlefile(self):\n with open(self.filename, \"wb\") as f:\n all = {'conversations': self.conversations, 'user_data': self.user_data,\n 'chat_data': self.chat_data, 'bot_data': self.bot_data}\n pickle.dump(all, f)\n\n def dump_file(self, filename, data):\n with open(filename, \"wb\") as f:\n pickle.dump(data, f)\n\n def get_user_data(self):\n \"\"\"Returns the user_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored user data.\n \"\"\"\n if self.user_data:\n pass\n elif not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.user_data = data\n else:\n self.load_singlefile()\n return 
deepcopy(self.user_data)\n\n def get_chat_data(self):\n \"\"\"Returns the chat_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored chat data.\n \"\"\"\n if self.chat_data:\n pass\n elif not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.chat_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.chat_data)\n\n def get_bot_data(self):\n \"\"\"Returns the bot_data from the pickle file if it exsists or an empty dict.\n\n Returns:\n :obj:`defaultdict`: The restored bot data.\n \"\"\"\n if self.bot_data:\n pass\n elif not self.single_file:\n filename = \"{}_bot_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = {}\n self.bot_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.bot_data)\n\n def get_conversations(self, name):\n \"\"\"Returns the conversations from the pickle file if it exsists or an empty defaultdict.\n\n Args:\n name (:obj:`str`): The handlers name.\n\n Returns:\n :obj:`dict`: The restored conversations for the handler.\n \"\"\"\n if self.conversations:\n pass\n elif not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = {name: {}}\n self.conversations = data\n else:\n self.load_singlefile()\n return self.conversations.get(name, {}).copy()\n\n def update_conversation(self, name, key, new_state):\n \"\"\"Will update the conversations for the given handler and depending on :attr:`on_flush`\n save the pickle file.\n\n Args:\n name (:obj:`str`): The handlers name.\n key (:obj:`tuple`): The key the state is changed for.\n new_state (:obj:`tuple` | :obj:`any`): The new state for the given key.\n \"\"\"\n if self.conversations.setdefault(name, {}).get(key) == new_state:\n return\n self.conversations[name][key] = new_state\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n self.dump_file(filename, self.conversations)\n else:\n self.dump_singlefile()\n\n def update_user_data(self, user_id, data):\n \"\"\"Will update the user_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n user_id (:obj:`int`): The user the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.user_data` [user_id].\n \"\"\"\n if self.user_data.get(user_id) == data:\n return\n self.user_data[user_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n self.dump_file(filename, self.user_data)\n else:\n self.dump_singlefile()\n\n def update_chat_data(self, chat_id, data):\n \"\"\"Will update the chat_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n chat_id (:obj:`int`): The chat the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.chat_data` [chat_id].\n \"\"\"\n if self.chat_data.get(chat_id) == data:\n return\n self.chat_data[chat_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n self.dump_file(filename, self.chat_data)\n else:\n self.dump_singlefile()\n\n def update_bot_data(self, data):\n \"\"\"Will update the bot_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n data (:obj:`dict`): The 
:attr:`telegram.ext.dispatcher.bot_data`.\n \"\"\"\n if self.bot_data == data:\n return\n self.bot_data = data.copy()\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_bot_data\".format(self.filename)\n self.dump_file(filename, self.bot_data)\n else:\n self.dump_singlefile()\n\n def flush(self):\n \"\"\" Will save all data in memory to pickle file(s).\n \"\"\"\n if self.single_file:\n if self.user_data or self.chat_data or self.conversations:\n self.dump_singlefile()\n else:\n if self.user_data:\n self.dump_file(\"{}_user_data\".format(self.filename), self.user_data)\n if self.chat_data:\n self.dump_file(\"{}_chat_data\".format(self.filename), self.chat_data)\n if self.bot_data:\n self.dump_file(\"{}_bot_data\".format(self.filename), self.bot_data)\n if self.conversations:\n self.dump_file(\"{}_conversations\".format(self.filename), self.conversations)\n", "path": "telegram/ext/picklepersistence.py"}]} | 3,966 | 146 |
gh_patches_debug_33779 | rasdani/github-patches | git_diff | CTFd__CTFd-1911 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IP to City Database
I think we can provide an IP to city database now instead of just showing country.
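For reference, a minimal sketch of what a city-level lookup looks like against a MaxMind-format database; the database filename is a placeholder, while the `["city"]["names"]["en"]` field path matches the change in the diff below:
```
import maxminddb

# Any MaxMind-format IP-to-city database works here; the filename is illustrative.
reader = maxminddb.open_database("GeoLite2-City.mmdb")

def lookup_city(addr):
    try:
        return reader.get(addr)["city"]["names"]["en"]
    except (KeyError, ValueError, TypeError):
        return None
```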
</issue>
<code>
[start of CTFd/utils/initialization/__init__.py]
1 import datetime
2 import logging
3 import os
4 import sys
5
6 from flask import abort, redirect, render_template, request, session, url_for
7 from sqlalchemy.exc import IntegrityError, InvalidRequestError
8 from werkzeug.middleware.dispatcher import DispatcherMiddleware
9
10 from CTFd.cache import clear_user_recent_ips
11 from CTFd.exceptions import UserNotFoundException, UserTokenExpiredException
12 from CTFd.models import Tracking, db
13 from CTFd.utils import config, get_config, markdown
14 from CTFd.utils.config import (
15 can_send_mail,
16 ctf_logo,
17 ctf_name,
18 ctf_theme,
19 integrations,
20 is_setup,
21 )
22 from CTFd.utils.config.pages import get_pages
23 from CTFd.utils.dates import isoformat, unix_time, unix_time_millis
24 from CTFd.utils.events import EventManager, RedisEventManager
25 from CTFd.utils.humanize.words import pluralize
26 from CTFd.utils.modes import generate_account_url, get_mode_as_word
27 from CTFd.utils.plugins import (
28 get_configurable_plugins,
29 get_registered_admin_scripts,
30 get_registered_admin_stylesheets,
31 get_registered_scripts,
32 get_registered_stylesheets,
33 )
34 from CTFd.utils.security.auth import login_user, logout_user, lookup_user_token
35 from CTFd.utils.security.csrf import generate_nonce
36 from CTFd.utils.user import (
37 authed,
38 get_current_team_attrs,
39 get_current_user_attrs,
40 get_current_user_recent_ips,
41 get_ip,
42 is_admin,
43 )
44
45
46 def init_template_filters(app):
47 app.jinja_env.filters["markdown"] = markdown
48 app.jinja_env.filters["unix_time"] = unix_time
49 app.jinja_env.filters["unix_time_millis"] = unix_time_millis
50 app.jinja_env.filters["isoformat"] = isoformat
51 app.jinja_env.filters["pluralize"] = pluralize
52
53
54 def init_template_globals(app):
55 from CTFd.constants import JINJA_ENUMS
56 from CTFd.constants.config import Configs
57 from CTFd.constants.plugins import Plugins
58 from CTFd.constants.sessions import Session
59 from CTFd.constants.static import Static
60 from CTFd.constants.users import User
61 from CTFd.constants.teams import Team
62 from CTFd.forms import Forms
63 from CTFd.utils.config.visibility import (
64 accounts_visible,
65 challenges_visible,
66 registration_visible,
67 scores_visible,
68 )
69 from CTFd.utils.countries import get_countries, lookup_country_code
70 from CTFd.utils.countries.geoip import lookup_ip_address
71
72 app.jinja_env.globals.update(config=config)
73 app.jinja_env.globals.update(get_pages=get_pages)
74 app.jinja_env.globals.update(can_send_mail=can_send_mail)
75 app.jinja_env.globals.update(get_ctf_name=ctf_name)
76 app.jinja_env.globals.update(get_ctf_logo=ctf_logo)
77 app.jinja_env.globals.update(get_ctf_theme=ctf_theme)
78 app.jinja_env.globals.update(get_configurable_plugins=get_configurable_plugins)
79 app.jinja_env.globals.update(get_registered_scripts=get_registered_scripts)
80 app.jinja_env.globals.update(get_registered_stylesheets=get_registered_stylesheets)
81 app.jinja_env.globals.update(
82 get_registered_admin_scripts=get_registered_admin_scripts
83 )
84 app.jinja_env.globals.update(
85 get_registered_admin_stylesheets=get_registered_admin_stylesheets
86 )
87 app.jinja_env.globals.update(get_config=get_config)
88 app.jinja_env.globals.update(generate_account_url=generate_account_url)
89 app.jinja_env.globals.update(get_countries=get_countries)
90 app.jinja_env.globals.update(lookup_country_code=lookup_country_code)
91 app.jinja_env.globals.update(lookup_ip_address=lookup_ip_address)
92 app.jinja_env.globals.update(accounts_visible=accounts_visible)
93 app.jinja_env.globals.update(challenges_visible=challenges_visible)
94 app.jinja_env.globals.update(registration_visible=registration_visible)
95 app.jinja_env.globals.update(scores_visible=scores_visible)
96 app.jinja_env.globals.update(get_mode_as_word=get_mode_as_word)
97 app.jinja_env.globals.update(integrations=integrations)
98 app.jinja_env.globals.update(authed=authed)
99 app.jinja_env.globals.update(is_admin=is_admin)
100 app.jinja_env.globals.update(get_current_user_attrs=get_current_user_attrs)
101 app.jinja_env.globals.update(get_current_team_attrs=get_current_team_attrs)
102 app.jinja_env.globals.update(get_ip=get_ip)
103 app.jinja_env.globals.update(Configs=Configs)
104 app.jinja_env.globals.update(Plugins=Plugins)
105 app.jinja_env.globals.update(Session=Session)
106 app.jinja_env.globals.update(Static=Static)
107 app.jinja_env.globals.update(Forms=Forms)
108 app.jinja_env.globals.update(User=User)
109 app.jinja_env.globals.update(Team=Team)
110
111 # Add in JinjaEnums
112 # The reason this exists is that on double import, JinjaEnums are not reinitialized
113 # Thus, if you try to create two jinja envs (e.g. during testing), sometimes
114 # an Enum will not be available to Jinja.
115 # Instead we can just directly grab them from the persisted global dictionary.
116 for k, v in JINJA_ENUMS.items():
117 # .update() can't be used here because it would use the literal value k
118 app.jinja_env.globals[k] = v
119
120
121 def init_logs(app):
122 logger_submissions = logging.getLogger("submissions")
123 logger_logins = logging.getLogger("logins")
124 logger_registrations = logging.getLogger("registrations")
125
126 logger_submissions.setLevel(logging.INFO)
127 logger_logins.setLevel(logging.INFO)
128 logger_registrations.setLevel(logging.INFO)
129
130 log_dir = app.config["LOG_FOLDER"]
131 if not os.path.exists(log_dir):
132 os.makedirs(log_dir)
133
134 logs = {
135 "submissions": os.path.join(log_dir, "submissions.log"),
136 "logins": os.path.join(log_dir, "logins.log"),
137 "registrations": os.path.join(log_dir, "registrations.log"),
138 }
139
140 try:
141 for log in logs.values():
142 if not os.path.exists(log):
143 open(log, "a").close()
144
145 submission_log = logging.handlers.RotatingFileHandler(
146 logs["submissions"], maxBytes=10485760, backupCount=5
147 )
148 login_log = logging.handlers.RotatingFileHandler(
149 logs["logins"], maxBytes=10485760, backupCount=5
150 )
151 registration_log = logging.handlers.RotatingFileHandler(
152 logs["registrations"], maxBytes=10485760, backupCount=5
153 )
154
155 logger_submissions.addHandler(submission_log)
156 logger_logins.addHandler(login_log)
157 logger_registrations.addHandler(registration_log)
158 except IOError:
159 pass
160
161 stdout = logging.StreamHandler(stream=sys.stdout)
162
163 logger_submissions.addHandler(stdout)
164 logger_logins.addHandler(stdout)
165 logger_registrations.addHandler(stdout)
166
167 logger_submissions.propagate = 0
168 logger_logins.propagate = 0
169 logger_registrations.propagate = 0
170
171
172 def init_events(app):
173 if app.config.get("CACHE_TYPE") == "redis":
174 app.events_manager = RedisEventManager()
175 elif app.config.get("CACHE_TYPE") == "filesystem":
176 app.events_manager = EventManager()
177 else:
178 app.events_manager = EventManager()
179 app.events_manager.listen()
180
181
182 def init_request_processors(app):
183 @app.url_defaults
184 def inject_theme(endpoint, values):
185 if "theme" not in values and app.url_map.is_endpoint_expecting(
186 endpoint, "theme"
187 ):
188 values["theme"] = ctf_theme()
189
190 @app.before_request
191 def needs_setup():
192 if is_setup() is False:
193 if request.endpoint in (
194 "views.setup",
195 "views.integrations",
196 "views.themes",
197 "views.files",
198 ):
199 return
200 else:
201 return redirect(url_for("views.setup"))
202
203 @app.before_request
204 def tracker():
205 if request.endpoint == "views.themes":
206 return
207
208 if authed():
209 user_ips = get_current_user_recent_ips()
210 ip = get_ip()
211
212 track = None
213 if (ip not in user_ips) or (request.method != "GET"):
214 track = Tracking.query.filter_by(
215 ip=get_ip(), user_id=session["id"]
216 ).first()
217
218 if track:
219 track.date = datetime.datetime.utcnow()
220 else:
221 track = Tracking(ip=get_ip(), user_id=session["id"])
222 db.session.add(track)
223
224 if track:
225 try:
226 db.session.commit()
227 except (InvalidRequestError, IntegrityError):
228 db.session.rollback()
229 db.session.close()
230 logout_user()
231 else:
232 clear_user_recent_ips(user_id=session["id"])
233
234 @app.before_request
235 def banned():
236 if request.endpoint == "views.themes":
237 return
238
239 if authed():
240 user = get_current_user_attrs()
241 team = get_current_team_attrs()
242
243 if user and user.banned:
244 return (
245 render_template(
246 "errors/403.html", error="You have been banned from this CTF"
247 ),
248 403,
249 )
250
251 if team and team.banned:
252 return (
253 render_template(
254 "errors/403.html",
255 error="Your team has been banned from this CTF",
256 ),
257 403,
258 )
259
260 @app.before_request
261 def tokens():
262 token = request.headers.get("Authorization")
263 if token and request.content_type == "application/json":
264 try:
265 token_type, token = token.split(" ", 1)
266 user = lookup_user_token(token)
267 except UserNotFoundException:
268 abort(401)
269 except UserTokenExpiredException:
270 abort(401, description="Your access token has expired")
271 except Exception:
272 abort(401)
273 else:
274 login_user(user)
275
276 @app.before_request
277 def csrf():
278 try:
279 func = app.view_functions[request.endpoint]
280 except KeyError:
281 abort(404)
282 if hasattr(func, "_bypass_csrf"):
283 return
284 if request.headers.get("Authorization"):
285 return
286 if not session.get("nonce"):
287 session["nonce"] = generate_nonce()
288 if request.method not in ("GET", "HEAD", "OPTIONS", "TRACE"):
289 if request.content_type == "application/json":
290 if session["nonce"] != request.headers.get("CSRF-Token"):
291 abort(403)
292 if request.content_type != "application/json":
293 if session["nonce"] != request.form.get("nonce"):
294 abort(403)
295
296 application_root = app.config.get("APPLICATION_ROOT")
297 if application_root != "/":
298
299 @app.before_request
300 def force_subdirectory_redirect():
301 if request.path.startswith(application_root) is False:
302 return redirect(
303 application_root + request.script_root + request.full_path
304 )
305
306 app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {application_root: app})
307
[end of CTFd/utils/initialization/__init__.py]
[start of CTFd/utils/countries/geoip.py]
1 import geoacumen
2 import maxminddb
3 from flask import current_app
4
5 IP_ADDR_LOOKUP = maxminddb.open_database(
6 current_app.config.get("GEOIP_DATABASE_PATH", geoacumen.db_path)
7 )
8
9
10 def lookup_ip_address(addr):
11 try:
12 response = IP_ADDR_LOOKUP.get(addr)
13 return response["country"]["iso_code"]
14 except (KeyError, ValueError):
15 return None
16
[end of CTFd/utils/countries/geoip.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/utils/countries/geoip.py b/CTFd/utils/countries/geoip.py
--- a/CTFd/utils/countries/geoip.py
+++ b/CTFd/utils/countries/geoip.py
@@ -1,9 +1,9 @@
-import geoacumen
+import geoacumen_city
import maxminddb
from flask import current_app
IP_ADDR_LOOKUP = maxminddb.open_database(
- current_app.config.get("GEOIP_DATABASE_PATH", geoacumen.db_path)
+ current_app.config.get("GEOIP_DATABASE_PATH", geoacumen_city.db_path)
)
@@ -11,5 +11,13 @@
try:
response = IP_ADDR_LOOKUP.get(addr)
return response["country"]["iso_code"]
- except (KeyError, ValueError):
+ except (KeyError, ValueError, TypeError):
+ return None
+
+
+def lookup_ip_address_city(addr):
+ try:
+ response = IP_ADDR_LOOKUP.get(addr)
+ return response["city"]["names"]["en"]
+ except (KeyError, ValueError, TypeError):
return None
diff --git a/CTFd/utils/initialization/__init__.py b/CTFd/utils/initialization/__init__.py
--- a/CTFd/utils/initialization/__init__.py
+++ b/CTFd/utils/initialization/__init__.py
@@ -67,7 +67,7 @@
scores_visible,
)
from CTFd.utils.countries import get_countries, lookup_country_code
- from CTFd.utils.countries.geoip import lookup_ip_address
+ from CTFd.utils.countries.geoip import lookup_ip_address, lookup_ip_address_city
app.jinja_env.globals.update(config=config)
app.jinja_env.globals.update(get_pages=get_pages)
@@ -89,6 +89,7 @@
app.jinja_env.globals.update(get_countries=get_countries)
app.jinja_env.globals.update(lookup_country_code=lookup_country_code)
app.jinja_env.globals.update(lookup_ip_address=lookup_ip_address)
+ app.jinja_env.globals.update(lookup_ip_address_city=lookup_ip_address_city)
app.jinja_env.globals.update(accounts_visible=accounts_visible)
app.jinja_env.globals.update(challenges_visible=challenges_visible)
app.jinja_env.globals.update(registration_visible=registration_visible)
| {"golden_diff": "diff --git a/CTFd/utils/countries/geoip.py b/CTFd/utils/countries/geoip.py\n--- a/CTFd/utils/countries/geoip.py\n+++ b/CTFd/utils/countries/geoip.py\n@@ -1,9 +1,9 @@\n-import geoacumen\n+import geoacumen_city\n import maxminddb\n from flask import current_app\n \n IP_ADDR_LOOKUP = maxminddb.open_database(\n- current_app.config.get(\"GEOIP_DATABASE_PATH\", geoacumen.db_path)\n+ current_app.config.get(\"GEOIP_DATABASE_PATH\", geoacumen_city.db_path)\n )\n \n \n@@ -11,5 +11,13 @@\n try:\n response = IP_ADDR_LOOKUP.get(addr)\n return response[\"country\"][\"iso_code\"]\n- except (KeyError, ValueError):\n+ except (KeyError, ValueError, TypeError):\n+ return None\n+\n+\n+def lookup_ip_address_city(addr):\n+ try:\n+ response = IP_ADDR_LOOKUP.get(addr)\n+ return response[\"city\"][\"names\"][\"en\"]\n+ except (KeyError, ValueError, TypeError):\n return None\ndiff --git a/CTFd/utils/initialization/__init__.py b/CTFd/utils/initialization/__init__.py\n--- a/CTFd/utils/initialization/__init__.py\n+++ b/CTFd/utils/initialization/__init__.py\n@@ -67,7 +67,7 @@\n scores_visible,\n )\n from CTFd.utils.countries import get_countries, lookup_country_code\n- from CTFd.utils.countries.geoip import lookup_ip_address\n+ from CTFd.utils.countries.geoip import lookup_ip_address, lookup_ip_address_city\n \n app.jinja_env.globals.update(config=config)\n app.jinja_env.globals.update(get_pages=get_pages)\n@@ -89,6 +89,7 @@\n app.jinja_env.globals.update(get_countries=get_countries)\n app.jinja_env.globals.update(lookup_country_code=lookup_country_code)\n app.jinja_env.globals.update(lookup_ip_address=lookup_ip_address)\n+ app.jinja_env.globals.update(lookup_ip_address_city=lookup_ip_address_city)\n app.jinja_env.globals.update(accounts_visible=accounts_visible)\n app.jinja_env.globals.update(challenges_visible=challenges_visible)\n app.jinja_env.globals.update(registration_visible=registration_visible)\n", "issue": "IP to City Database\nI think we can provide an IP to city database now instead of just showing country. 
\n", "before_files": [{"content": "import datetime\nimport logging\nimport os\nimport sys\n\nfrom flask import abort, redirect, render_template, request, session, url_for\nfrom sqlalchemy.exc import IntegrityError, InvalidRequestError\nfrom werkzeug.middleware.dispatcher import DispatcherMiddleware\n\nfrom CTFd.cache import clear_user_recent_ips\nfrom CTFd.exceptions import UserNotFoundException, UserTokenExpiredException\nfrom CTFd.models import Tracking, db\nfrom CTFd.utils import config, get_config, markdown\nfrom CTFd.utils.config import (\n can_send_mail,\n ctf_logo,\n ctf_name,\n ctf_theme,\n integrations,\n is_setup,\n)\nfrom CTFd.utils.config.pages import get_pages\nfrom CTFd.utils.dates import isoformat, unix_time, unix_time_millis\nfrom CTFd.utils.events import EventManager, RedisEventManager\nfrom CTFd.utils.humanize.words import pluralize\nfrom CTFd.utils.modes import generate_account_url, get_mode_as_word\nfrom CTFd.utils.plugins import (\n get_configurable_plugins,\n get_registered_admin_scripts,\n get_registered_admin_stylesheets,\n get_registered_scripts,\n get_registered_stylesheets,\n)\nfrom CTFd.utils.security.auth import login_user, logout_user, lookup_user_token\nfrom CTFd.utils.security.csrf import generate_nonce\nfrom CTFd.utils.user import (\n authed,\n get_current_team_attrs,\n get_current_user_attrs,\n get_current_user_recent_ips,\n get_ip,\n is_admin,\n)\n\n\ndef init_template_filters(app):\n app.jinja_env.filters[\"markdown\"] = markdown\n app.jinja_env.filters[\"unix_time\"] = unix_time\n app.jinja_env.filters[\"unix_time_millis\"] = unix_time_millis\n app.jinja_env.filters[\"isoformat\"] = isoformat\n app.jinja_env.filters[\"pluralize\"] = pluralize\n\n\ndef init_template_globals(app):\n from CTFd.constants import JINJA_ENUMS\n from CTFd.constants.config import Configs\n from CTFd.constants.plugins import Plugins\n from CTFd.constants.sessions import Session\n from CTFd.constants.static import Static\n from CTFd.constants.users import User\n from CTFd.constants.teams import Team\n from CTFd.forms import Forms\n from CTFd.utils.config.visibility import (\n accounts_visible,\n challenges_visible,\n registration_visible,\n scores_visible,\n )\n from CTFd.utils.countries import get_countries, lookup_country_code\n from CTFd.utils.countries.geoip import lookup_ip_address\n\n app.jinja_env.globals.update(config=config)\n app.jinja_env.globals.update(get_pages=get_pages)\n app.jinja_env.globals.update(can_send_mail=can_send_mail)\n app.jinja_env.globals.update(get_ctf_name=ctf_name)\n app.jinja_env.globals.update(get_ctf_logo=ctf_logo)\n app.jinja_env.globals.update(get_ctf_theme=ctf_theme)\n app.jinja_env.globals.update(get_configurable_plugins=get_configurable_plugins)\n app.jinja_env.globals.update(get_registered_scripts=get_registered_scripts)\n app.jinja_env.globals.update(get_registered_stylesheets=get_registered_stylesheets)\n app.jinja_env.globals.update(\n get_registered_admin_scripts=get_registered_admin_scripts\n )\n app.jinja_env.globals.update(\n get_registered_admin_stylesheets=get_registered_admin_stylesheets\n )\n app.jinja_env.globals.update(get_config=get_config)\n app.jinja_env.globals.update(generate_account_url=generate_account_url)\n app.jinja_env.globals.update(get_countries=get_countries)\n app.jinja_env.globals.update(lookup_country_code=lookup_country_code)\n app.jinja_env.globals.update(lookup_ip_address=lookup_ip_address)\n app.jinja_env.globals.update(accounts_visible=accounts_visible)\n 
app.jinja_env.globals.update(challenges_visible=challenges_visible)\n app.jinja_env.globals.update(registration_visible=registration_visible)\n app.jinja_env.globals.update(scores_visible=scores_visible)\n app.jinja_env.globals.update(get_mode_as_word=get_mode_as_word)\n app.jinja_env.globals.update(integrations=integrations)\n app.jinja_env.globals.update(authed=authed)\n app.jinja_env.globals.update(is_admin=is_admin)\n app.jinja_env.globals.update(get_current_user_attrs=get_current_user_attrs)\n app.jinja_env.globals.update(get_current_team_attrs=get_current_team_attrs)\n app.jinja_env.globals.update(get_ip=get_ip)\n app.jinja_env.globals.update(Configs=Configs)\n app.jinja_env.globals.update(Plugins=Plugins)\n app.jinja_env.globals.update(Session=Session)\n app.jinja_env.globals.update(Static=Static)\n app.jinja_env.globals.update(Forms=Forms)\n app.jinja_env.globals.update(User=User)\n app.jinja_env.globals.update(Team=Team)\n\n # Add in JinjaEnums\n # The reason this exists is that on double import, JinjaEnums are not reinitialized\n # Thus, if you try to create two jinja envs (e.g. during testing), sometimes\n # an Enum will not be available to Jinja.\n # Instead we can just directly grab them from the persisted global dictionary.\n for k, v in JINJA_ENUMS.items():\n # .update() can't be used here because it would use the literal value k\n app.jinja_env.globals[k] = v\n\n\ndef init_logs(app):\n logger_submissions = logging.getLogger(\"submissions\")\n logger_logins = logging.getLogger(\"logins\")\n logger_registrations = logging.getLogger(\"registrations\")\n\n logger_submissions.setLevel(logging.INFO)\n logger_logins.setLevel(logging.INFO)\n logger_registrations.setLevel(logging.INFO)\n\n log_dir = app.config[\"LOG_FOLDER\"]\n if not os.path.exists(log_dir):\n os.makedirs(log_dir)\n\n logs = {\n \"submissions\": os.path.join(log_dir, \"submissions.log\"),\n \"logins\": os.path.join(log_dir, \"logins.log\"),\n \"registrations\": os.path.join(log_dir, \"registrations.log\"),\n }\n\n try:\n for log in logs.values():\n if not os.path.exists(log):\n open(log, \"a\").close()\n\n submission_log = logging.handlers.RotatingFileHandler(\n logs[\"submissions\"], maxBytes=10485760, backupCount=5\n )\n login_log = logging.handlers.RotatingFileHandler(\n logs[\"logins\"], maxBytes=10485760, backupCount=5\n )\n registration_log = logging.handlers.RotatingFileHandler(\n logs[\"registrations\"], maxBytes=10485760, backupCount=5\n )\n\n logger_submissions.addHandler(submission_log)\n logger_logins.addHandler(login_log)\n logger_registrations.addHandler(registration_log)\n except IOError:\n pass\n\n stdout = logging.StreamHandler(stream=sys.stdout)\n\n logger_submissions.addHandler(stdout)\n logger_logins.addHandler(stdout)\n logger_registrations.addHandler(stdout)\n\n logger_submissions.propagate = 0\n logger_logins.propagate = 0\n logger_registrations.propagate = 0\n\n\ndef init_events(app):\n if app.config.get(\"CACHE_TYPE\") == \"redis\":\n app.events_manager = RedisEventManager()\n elif app.config.get(\"CACHE_TYPE\") == \"filesystem\":\n app.events_manager = EventManager()\n else:\n app.events_manager = EventManager()\n app.events_manager.listen()\n\n\ndef init_request_processors(app):\n @app.url_defaults\n def inject_theme(endpoint, values):\n if \"theme\" not in values and app.url_map.is_endpoint_expecting(\n endpoint, \"theme\"\n ):\n values[\"theme\"] = ctf_theme()\n\n @app.before_request\n def needs_setup():\n if is_setup() is False:\n if request.endpoint in (\n \"views.setup\",\n 
\"views.integrations\",\n \"views.themes\",\n \"views.files\",\n ):\n return\n else:\n return redirect(url_for(\"views.setup\"))\n\n @app.before_request\n def tracker():\n if request.endpoint == \"views.themes\":\n return\n\n if authed():\n user_ips = get_current_user_recent_ips()\n ip = get_ip()\n\n track = None\n if (ip not in user_ips) or (request.method != \"GET\"):\n track = Tracking.query.filter_by(\n ip=get_ip(), user_id=session[\"id\"]\n ).first()\n\n if track:\n track.date = datetime.datetime.utcnow()\n else:\n track = Tracking(ip=get_ip(), user_id=session[\"id\"])\n db.session.add(track)\n\n if track:\n try:\n db.session.commit()\n except (InvalidRequestError, IntegrityError):\n db.session.rollback()\n db.session.close()\n logout_user()\n else:\n clear_user_recent_ips(user_id=session[\"id\"])\n\n @app.before_request\n def banned():\n if request.endpoint == \"views.themes\":\n return\n\n if authed():\n user = get_current_user_attrs()\n team = get_current_team_attrs()\n\n if user and user.banned:\n return (\n render_template(\n \"errors/403.html\", error=\"You have been banned from this CTF\"\n ),\n 403,\n )\n\n if team and team.banned:\n return (\n render_template(\n \"errors/403.html\",\n error=\"Your team has been banned from this CTF\",\n ),\n 403,\n )\n\n @app.before_request\n def tokens():\n token = request.headers.get(\"Authorization\")\n if token and request.content_type == \"application/json\":\n try:\n token_type, token = token.split(\" \", 1)\n user = lookup_user_token(token)\n except UserNotFoundException:\n abort(401)\n except UserTokenExpiredException:\n abort(401, description=\"Your access token has expired\")\n except Exception:\n abort(401)\n else:\n login_user(user)\n\n @app.before_request\n def csrf():\n try:\n func = app.view_functions[request.endpoint]\n except KeyError:\n abort(404)\n if hasattr(func, \"_bypass_csrf\"):\n return\n if request.headers.get(\"Authorization\"):\n return\n if not session.get(\"nonce\"):\n session[\"nonce\"] = generate_nonce()\n if request.method not in (\"GET\", \"HEAD\", \"OPTIONS\", \"TRACE\"):\n if request.content_type == \"application/json\":\n if session[\"nonce\"] != request.headers.get(\"CSRF-Token\"):\n abort(403)\n if request.content_type != \"application/json\":\n if session[\"nonce\"] != request.form.get(\"nonce\"):\n abort(403)\n\n application_root = app.config.get(\"APPLICATION_ROOT\")\n if application_root != \"/\":\n\n @app.before_request\n def force_subdirectory_redirect():\n if request.path.startswith(application_root) is False:\n return redirect(\n application_root + request.script_root + request.full_path\n )\n\n app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {application_root: app})\n", "path": "CTFd/utils/initialization/__init__.py"}, {"content": "import geoacumen\nimport maxminddb\nfrom flask import current_app\n\nIP_ADDR_LOOKUP = maxminddb.open_database(\n current_app.config.get(\"GEOIP_DATABASE_PATH\", geoacumen.db_path)\n)\n\n\ndef lookup_ip_address(addr):\n try:\n response = IP_ADDR_LOOKUP.get(addr)\n return response[\"country\"][\"iso_code\"]\n except (KeyError, ValueError):\n return None\n", "path": "CTFd/utils/countries/geoip.py"}]} | 3,951 | 520 |
gh_patches_debug_39078 | rasdani/github-patches | git_diff | ranaroussi__yfinance-1297 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scraper error "TypeError: string indices must be integers" - Yahoo decrypt fail
## Updates
### 2023 January 13
By the time of posting the issue (2023 January 12), the issue only occurred sometimes. The library is now (2023 January 13) completely broken and I am unable to retrieve any stock information.
### 2023 January 14
Fix has been merged to the branch `dev`
## Info about your system:
yfinance version: 0.2.3
Operating system: macOS Monterey 12.0.1
### Snippet that can recreate the error
```
import yfinance as yf

stock = yf.Ticker("^GSPC")
info = stock.info
```
## Error
Message:`TypeError: string indices must be integers`
It seems to be a problem where the scraper is not scraping the correct information, leading to a crash.
### Traceback:
```
Traceback (most recent call last):
File "/home/2022/szhang139/.local/lib/python3.10/site-packages/apscheduler/executors/base_py3.py", line 30, in run_coroutine_job
retval = await job.func(*job.args, **job.kwargs)
File "/home/2022/szhang139/repos/STONK/src/main.py", line 61, in notify
market = get_major_index(f'Market Close - {daytime.today_date()}')
File "/home/2022/szhang139/repos/STONK/src/market_info.py", line 63, in get_major_index
sp500 = get_stock('^GSPC')
File "/home/2022/szhang139/repos/STONK/src/market_info.py", line 41, in get_stock
stock_info = get_stock_info(stock_name)
File "/home/2022/szhang139/repos/STONK/src/market_info.py", line 8, in get_stock_info
info = stock.info
File "/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/ticker.py", line 138, in info
return self.get_info()
File "/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/base.py", line 894, in get_info
data = self._quote.info
File "/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/scrapers/quote.py", line 27, in info
self._scrape(self.proxy)
File "/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/scrapers/quote.py", line 58, in _scrape
quote_summary_store = json_data['QuoteSummaryStore']
```
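For context, a hedged reading of the `yfinance/data.py` code included below (not a confirmed root cause): when Yahoo's page stops matching what the scraper expects, `get_json_data_stores` can hand the stores back empty or still encrypted (a `str` rather than a `dict`), and indexing that value is what surfaces as the `TypeError` above. A rough, simplified sketch of the failing path:
```
from yfinance.data import TickerData

# Roughly what the quote scraper does before the crash (simplified):
json_data = TickerData("^GSPC").get_json_data_stores()
# If decryption failed upstream, json_data may be a str instead of a dict,
# so this lookup raises "string indices must be integers".
quote_summary_store = json_data["QuoteSummaryStore"]
```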
### Frequency
The error occurs in no apparent pattern. Every time it occurs, it seems to persist for some range of time before it recovers back to normal.
</issue>
<code>
[start of yfinance/data.py]
1 import functools
2 from functools import lru_cache
3
4 import hashlib
5 from base64 import b64decode
6 usePycryptodome = False # slightly faster
7 # usePycryptodome = True
8 if usePycryptodome:
9 from Crypto.Cipher import AES
10 from Crypto.Util.Padding import unpad
11 else:
12 from cryptography.hazmat.primitives import padding
13 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
14
15 import requests as requests
16 import re
17
18 from frozendict import frozendict
19
20 try:
21 import ujson as json
22 except ImportError:
23 import json as json
24
25 cache_maxsize = 64
26
27
28 def lru_cache_freezeargs(func):
29 """
30 Decorator transforms mutable dictionary and list arguments into immutable types
31 Needed so lru_cache can cache method calls what has dict or list arguments.
32 """
33
34 @functools.wraps(func)
35 def wrapped(*args, **kwargs):
36 args = tuple([frozendict(arg) if isinstance(arg, dict) else arg for arg in args])
37 kwargs = {k: frozendict(v) if isinstance(v, dict) else v for k, v in kwargs.items()}
38 args = tuple([tuple(arg) if isinstance(arg, list) else arg for arg in args])
39 kwargs = {k: tuple(v) if isinstance(v, list) else v for k, v in kwargs.items()}
40 return func(*args, **kwargs)
41
42 # copy over the lru_cache extra methods to this wrapper to be able to access them
43 # after this decorator has been applied
44 wrapped.cache_info = func.cache_info
45 wrapped.cache_clear = func.cache_clear
46 return wrapped
47
48
49 def decrypt_cryptojs_aes(data):
50 encrypted_stores = data['context']['dispatcher']['stores']
51 _cs = data["_cs"]
52 _cr = data["_cr"]
53
54 _cr = b"".join(int.to_bytes(i, length=4, byteorder="big", signed=True) for i in json.loads(_cr)["words"])
55 password = hashlib.pbkdf2_hmac("sha1", _cs.encode("utf8"), _cr, 1, dklen=32).hex()
56
57 encrypted_stores = b64decode(encrypted_stores)
58 assert encrypted_stores[0:8] == b"Salted__"
59 salt = encrypted_stores[8:16]
60 encrypted_stores = encrypted_stores[16:]
61
62 def EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm="md5") -> tuple:
63 """OpenSSL EVP Key Derivation Function
64 Args:
65 password (Union[str, bytes, bytearray]): Password to generate key from.
66 salt (Union[bytes, bytearray]): Salt to use.
67 keySize (int, optional): Output key length in bytes. Defaults to 32.
68 ivSize (int, optional): Output Initialization Vector (IV) length in bytes. Defaults to 16.
69 iterations (int, optional): Number of iterations to perform. Defaults to 1.
70 hashAlgorithm (str, optional): Hash algorithm to use for the KDF. Defaults to 'md5'.
71 Returns:
72 key, iv: Derived key and Initialization Vector (IV) bytes.
73
74 Taken from: https://gist.github.com/rafiibrahim8/0cd0f8c46896cafef6486cb1a50a16d3
75 OpenSSL original code: https://github.com/openssl/openssl/blob/master/crypto/evp/evp_key.c#L78
76 """
77
78 assert iterations > 0, "Iterations can not be less than 1."
79
80 if isinstance(password, str):
81 password = password.encode("utf-8")
82
83 final_length = keySize + ivSize
84 key_iv = b""
85 block = None
86
87 while len(key_iv) < final_length:
88 hasher = hashlib.new(hashAlgorithm)
89 if block:
90 hasher.update(block)
91 hasher.update(password)
92 hasher.update(salt)
93 block = hasher.digest()
94 for _ in range(1, iterations):
95 block = hashlib.new(hashAlgorithm, block).digest()
96 key_iv += block
97
98 key, iv = key_iv[:keySize], key_iv[keySize:final_length]
99 return key, iv
100
101 key, iv = EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm="md5")
102
103 if usePycryptodome:
104 cipher = AES.new(key, AES.MODE_CBC, iv=iv)
105 plaintext = cipher.decrypt(encrypted_stores)
106 plaintext = unpad(plaintext, 16, style="pkcs7")
107 else:
108 cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
109 decryptor = cipher.decryptor()
110 plaintext = decryptor.update(encrypted_stores) + decryptor.finalize()
111 unpadder = padding.PKCS7(128).unpadder()
112 plaintext = unpadder.update(plaintext) + unpadder.finalize()
113 plaintext = plaintext.decode("utf-8")
114
115 decoded_stores = json.loads(plaintext)
116 return decoded_stores
117
118
119 _SCRAPE_URL_ = 'https://finance.yahoo.com/quote'
120
121
122 class TickerData:
123 """
124 Have one place to retrieve data from Yahoo API in order to ease caching and speed up operations
125 """
126 user_agent_headers = {
127 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
128
129 def __init__(self, ticker: str, session=None):
130 self.ticker = ticker
131 self._session = session or requests
132
133 def get(self, url, user_agent_headers=None, params=None, proxy=None, timeout=30):
134 proxy = self._get_proxy(proxy)
135 response = self._session.get(
136 url=url,
137 params=params,
138 proxies=proxy,
139 timeout=timeout,
140 headers=user_agent_headers or self.user_agent_headers)
141 return response
142
143 @lru_cache_freezeargs
144 @lru_cache(maxsize=cache_maxsize)
145 def cache_get(self, url, user_agent_headers=None, params=None, proxy=None, timeout=30):
146 return self.get(url, user_agent_headers, params, proxy, timeout)
147
148 def _get_proxy(self, proxy):
149 # setup proxy in requests format
150 if proxy is not None:
151 if isinstance(proxy, dict) and "https" in proxy:
152 proxy = proxy["https"]
153 proxy = {"https": proxy}
154 return proxy
155
156 @lru_cache_freezeargs
157 @lru_cache(maxsize=cache_maxsize)
158 def get_json_data_stores(self, sub_page: str = None, proxy=None) -> dict:
159 '''
160 get_json_data_stores returns a python dictionary of the data stores in yahoo finance web page.
161 '''
162 if sub_page:
163 ticker_url = "{}/{}/{}".format(_SCRAPE_URL_, self.ticker, sub_page)
164 else:
165 ticker_url = "{}/{}".format(_SCRAPE_URL_, self.ticker)
166
167 html = self.get(url=ticker_url, proxy=proxy).text
168
169 # The actual json-data for stores is in a javascript assignment in the webpage
170 try:
171 json_str = html.split('root.App.main =')[1].split(
172 '(this)')[0].split(';\n}')[0].strip()
173 except IndexError:
174 # Fetch failed, probably because Yahoo spam triggered
175 return {}
176
177 data = json.loads(json_str)
178
179 if "_cs" in data and "_cr" in data:
180 data = decrypt_cryptojs_aes(data)
181
182 if "context" in data and "dispatcher" in data["context"]:
183 # Keep old code, just in case
184 data = data['context']['dispatcher']['stores']
185
186 # return data
187 new_data = json.dumps(data).replace('{}', 'null')
188 new_data = re.sub(
189 r'{[\'|\"]raw[\'|\"]:(.*?),(.*?)}', r'\1', new_data)
190
191 return json.loads(new_data)
192
[end of yfinance/data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
 | diff --git a/yfinance/data.py b/yfinance/data.py
--- a/yfinance/data.py
+++ b/yfinance/data.py
@@ -46,13 +46,33 @@
return wrapped
-def decrypt_cryptojs_aes(data):
+def decrypt_cryptojs_aes_stores(data):
encrypted_stores = data['context']['dispatcher']['stores']
- _cs = data["_cs"]
- _cr = data["_cr"]
- _cr = b"".join(int.to_bytes(i, length=4, byteorder="big", signed=True) for i in json.loads(_cr)["words"])
- password = hashlib.pbkdf2_hmac("sha1", _cs.encode("utf8"), _cr, 1, dklen=32).hex()
+ if "_cs" in data and "_cr" in data:
+ _cs = data["_cs"]
+ _cr = data["_cr"]
+ _cr = b"".join(int.to_bytes(i, length=4, byteorder="big", signed=True) for i in json.loads(_cr)["words"])
+ password = hashlib.pbkdf2_hmac("sha1", _cs.encode("utf8"), _cr, 1, dklen=32).hex()
+ else:
+ # Currently assume one extra key in dict, which is password. Print error if
+ # more extra keys detected.
+ new_keys = [k for k in data.keys() if k not in ["context", "plugins"]]
+ l = len(new_keys)
+ if l == 0:
+ return None
+ elif l == 1 and isinstance(data[new_keys[0]], str):
+ password_key = new_keys[0]
+ else:
+ msg = "Yahoo has again changed data format, yfinance now unsure which key(s) is for decryption:"
+ k = new_keys[0]
+ k_str = k if len(k) < 32 else k[:32-3]+"..."
+ msg += f" '{k_str}'->{type(data[k])}"
+ for i in range(1, len(new_keys)):
+ msg += f" , '{k_str}'->{type(data[k])}"
+ raise Exception(msg)
+ password_key = new_keys[0]
+ password = data[password_key]
encrypted_stores = b64decode(encrypted_stores)
assert encrypted_stores[0:8] == b"Salted__"
@@ -98,7 +118,10 @@
key, iv = key_iv[:keySize], key_iv[keySize:final_length]
return key, iv
- key, iv = EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm="md5")
+ try:
+ key, iv = EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm="md5")
+ except:
+ raise Exception("yfinance failed to decrypt Yahoo data response")
if usePycryptodome:
cipher = AES.new(key, AES.MODE_CBC, iv=iv)
@@ -176,15 +199,16 @@
data = json.loads(json_str)
- if "_cs" in data and "_cr" in data:
- data = decrypt_cryptojs_aes(data)
-
- if "context" in data and "dispatcher" in data["context"]:
- # Keep old code, just in case
- data = data['context']['dispatcher']['stores']
+ stores = decrypt_cryptojs_aes_stores(data)
+ if stores is None:
+ # Maybe Yahoo returned old format, not encrypted
+ if "context" in data and "dispatcher" in data["context"]:
+ stores = data['context']['dispatcher']['stores']
+ if stores is None:
+ raise Exception(f"{self.ticker}: Failed to extract data stores from web request")
# return data
- new_data = json.dumps(data).replace('{}', 'null')
+ new_data = json.dumps(stores).replace('{}', 'null')
new_data = re.sub(
r'{[\'|\"]raw[\'|\"]:(.*?),(.*?)}', r'\1', new_data)
| {"golden_diff": "diff --git a/yfinance/data.py b/yfinance/data.py\n--- a/yfinance/data.py\n+++ b/yfinance/data.py\n@@ -46,13 +46,33 @@\n return wrapped\n \n \n-def decrypt_cryptojs_aes(data):\n+def decrypt_cryptojs_aes_stores(data):\n encrypted_stores = data['context']['dispatcher']['stores']\n- _cs = data[\"_cs\"]\n- _cr = data[\"_cr\"]\n \n- _cr = b\"\".join(int.to_bytes(i, length=4, byteorder=\"big\", signed=True) for i in json.loads(_cr)[\"words\"])\n- password = hashlib.pbkdf2_hmac(\"sha1\", _cs.encode(\"utf8\"), _cr, 1, dklen=32).hex()\n+ if \"_cs\" in data and \"_cr\" in data:\n+ _cs = data[\"_cs\"]\n+ _cr = data[\"_cr\"]\n+ _cr = b\"\".join(int.to_bytes(i, length=4, byteorder=\"big\", signed=True) for i in json.loads(_cr)[\"words\"])\n+ password = hashlib.pbkdf2_hmac(\"sha1\", _cs.encode(\"utf8\"), _cr, 1, dklen=32).hex()\n+ else:\n+ # Currently assume one extra key in dict, which is password. Print error if \n+ # more extra keys detected.\n+ new_keys = [k for k in data.keys() if k not in [\"context\", \"plugins\"]]\n+ l = len(new_keys)\n+ if l == 0:\n+ return None\n+ elif l == 1 and isinstance(data[new_keys[0]], str):\n+ password_key = new_keys[0]\n+ else:\n+ msg = \"Yahoo has again changed data format, yfinance now unsure which key(s) is for decryption:\"\n+ k = new_keys[0]\n+ k_str = k if len(k) < 32 else k[:32-3]+\"...\"\n+ msg += f\" '{k_str}'->{type(data[k])}\"\n+ for i in range(1, len(new_keys)):\n+ msg += f\" , '{k_str}'->{type(data[k])}\"\n+ raise Exception(msg)\n+ password_key = new_keys[0]\n+ password = data[password_key]\n \n encrypted_stores = b64decode(encrypted_stores)\n assert encrypted_stores[0:8] == b\"Salted__\"\n@@ -98,7 +118,10 @@\n key, iv = key_iv[:keySize], key_iv[keySize:final_length]\n return key, iv\n \n- key, iv = EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm=\"md5\")\n+ try:\n+ key, iv = EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm=\"md5\")\n+ except:\n+ raise Exception(\"yfinance failed to decrypt Yahoo data response\")\n \n if usePycryptodome:\n cipher = AES.new(key, AES.MODE_CBC, iv=iv)\n@@ -176,15 +199,16 @@\n \n data = json.loads(json_str)\n \n- if \"_cs\" in data and \"_cr\" in data:\n- data = decrypt_cryptojs_aes(data)\n-\n- if \"context\" in data and \"dispatcher\" in data[\"context\"]:\n- # Keep old code, just in case\n- data = data['context']['dispatcher']['stores']\n+ stores = decrypt_cryptojs_aes_stores(data)\n+ if stores is None:\n+ # Maybe Yahoo returned old format, not encrypted\n+ if \"context\" in data and \"dispatcher\" in data[\"context\"]:\n+ stores = data['context']['dispatcher']['stores']\n+ if stores is None:\n+ raise Exception(f\"{self.ticker}: Failed to extract data stores from web request\")\n \n # return data\n- new_data = json.dumps(data).replace('{}', 'null')\n+ new_data = json.dumps(stores).replace('{}', 'null')\n new_data = re.sub(\n r'{[\\'|\\\"]raw[\\'|\\\"]:(.*?),(.*?)}', r'\\1', new_data)\n", "issue": "Scraper error \"TypeError: string indices must be integers\" - Yahoo decrypt fail\n## Updates\r\n### 2023 January 13\r\nBy the time of posting the issue (2023 January 12), the issue only occured sometimes. 
The library is now (2023 January 13) completely broken and I am unable to retrieve any stock informatio\r\n### 2023 January 14\r\nFix has been merged to the branch `dev`\r\n\r\n## Info about your system:\r\nyfinance version: 0.2.3\r\nOperating system: macOS Monteray 12.0.1\r\n### Snippet that can recreate the error\r\n```\r\nstock = yf.Ticker(\"^GSPC\")\r\ninfo = stock.info\r\n```\r\n## Error\r\nMessage:`TypeError: string indices must be integers`\r\nIt seems to be a problem where the scraper is not scraping the correct information, leading to a crash.\r\n### Traceback:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/2022/szhang139/.local/lib/python3.10/site-packages/apscheduler/executors/base_py3.py\", line 30, in run_coroutine_job\r\n retval = await job.func(*job.args, **job.kwargs)\r\n File \"/home/2022/szhang139/repos/STONK/src/main.py\", line 61, in notify\r\n market = get_major_index(f'Market Close - {daytime.today_date()}')\r\n File \"/home/2022/szhang139/repos/STONK/src/market_info.py\", line 63, in get_major_index\r\n sp500 = get_stock('^GSPC')\r\n File \"/home/2022/szhang139/repos/STONK/src/market_info.py\", line 41, in get_stock\r\n stock_info = get_stock_info(stock_name)\r\n File \"/home/2022/szhang139/repos/STONK/src/market_info.py\", line 8, in get_stock_info\r\n info = stock.info\r\n File \"/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/ticker.py\", line 138, in info\r\n return self.get_info()\r\n File \"/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/base.py\", line 894, in get_info\r\n data = self._quote.info\r\n File \"/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/scrapers/quote.py\", line 27, in info\r\n self._scrape(self.proxy)\r\n File \"/home/2022/szhang139/.local/lib/python3.10/site-packages/yfinance/scrapers/quote.py\", line 58, in _scrape\r\n quote_summary_store = json_data['QuoteSummaryStore']\r\n```\r\n### Frequency\r\nThe error occurs in no apparent pattern. Every time it occurs, it seem to persist for some range of time before it recovers back to normal.\r\nn. 
\r\n\n", "before_files": [{"content": "import functools\nfrom functools import lru_cache\n\nimport hashlib\nfrom base64 import b64decode\nusePycryptodome = False # slightly faster\n# usePycryptodome = True\nif usePycryptodome:\n from Crypto.Cipher import AES\n from Crypto.Util.Padding import unpad\nelse:\n from cryptography.hazmat.primitives import padding\n from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes\n\nimport requests as requests\nimport re\n\nfrom frozendict import frozendict\n\ntry:\n import ujson as json\nexcept ImportError:\n import json as json\n\ncache_maxsize = 64\n\n\ndef lru_cache_freezeargs(func):\n \"\"\"\n Decorator transforms mutable dictionary and list arguments into immutable types\n Needed so lru_cache can cache method calls what has dict or list arguments.\n \"\"\"\n\n @functools.wraps(func)\n def wrapped(*args, **kwargs):\n args = tuple([frozendict(arg) if isinstance(arg, dict) else arg for arg in args])\n kwargs = {k: frozendict(v) if isinstance(v, dict) else v for k, v in kwargs.items()}\n args = tuple([tuple(arg) if isinstance(arg, list) else arg for arg in args])\n kwargs = {k: tuple(v) if isinstance(v, list) else v for k, v in kwargs.items()}\n return func(*args, **kwargs)\n\n # copy over the lru_cache extra methods to this wrapper to be able to access them\n # after this decorator has been applied\n wrapped.cache_info = func.cache_info\n wrapped.cache_clear = func.cache_clear\n return wrapped\n\n\ndef decrypt_cryptojs_aes(data):\n encrypted_stores = data['context']['dispatcher']['stores']\n _cs = data[\"_cs\"]\n _cr = data[\"_cr\"]\n\n _cr = b\"\".join(int.to_bytes(i, length=4, byteorder=\"big\", signed=True) for i in json.loads(_cr)[\"words\"])\n password = hashlib.pbkdf2_hmac(\"sha1\", _cs.encode(\"utf8\"), _cr, 1, dklen=32).hex()\n\n encrypted_stores = b64decode(encrypted_stores)\n assert encrypted_stores[0:8] == b\"Salted__\"\n salt = encrypted_stores[8:16]\n encrypted_stores = encrypted_stores[16:]\n\n def EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm=\"md5\") -> tuple:\n \"\"\"OpenSSL EVP Key Derivation Function\n Args:\n password (Union[str, bytes, bytearray]): Password to generate key from.\n salt (Union[bytes, bytearray]): Salt to use.\n keySize (int, optional): Output key length in bytes. Defaults to 32.\n ivSize (int, optional): Output Initialization Vector (IV) length in bytes. Defaults to 16.\n iterations (int, optional): Number of iterations to perform. Defaults to 1.\n hashAlgorithm (str, optional): Hash algorithm to use for the KDF. 
Defaults to 'md5'.\n Returns:\n key, iv: Derived key and Initialization Vector (IV) bytes.\n\n Taken from: https://gist.github.com/rafiibrahim8/0cd0f8c46896cafef6486cb1a50a16d3\n OpenSSL original code: https://github.com/openssl/openssl/blob/master/crypto/evp/evp_key.c#L78\n \"\"\"\n\n assert iterations > 0, \"Iterations can not be less than 1.\"\n\n if isinstance(password, str):\n password = password.encode(\"utf-8\")\n\n final_length = keySize + ivSize\n key_iv = b\"\"\n block = None\n\n while len(key_iv) < final_length:\n hasher = hashlib.new(hashAlgorithm)\n if block:\n hasher.update(block)\n hasher.update(password)\n hasher.update(salt)\n block = hasher.digest()\n for _ in range(1, iterations):\n block = hashlib.new(hashAlgorithm, block).digest()\n key_iv += block\n\n key, iv = key_iv[:keySize], key_iv[keySize:final_length]\n return key, iv\n\n key, iv = EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm=\"md5\")\n\n if usePycryptodome:\n cipher = AES.new(key, AES.MODE_CBC, iv=iv)\n plaintext = cipher.decrypt(encrypted_stores)\n plaintext = unpad(plaintext, 16, style=\"pkcs7\")\n else:\n cipher = Cipher(algorithms.AES(key), modes.CBC(iv))\n decryptor = cipher.decryptor()\n plaintext = decryptor.update(encrypted_stores) + decryptor.finalize()\n unpadder = padding.PKCS7(128).unpadder()\n plaintext = unpadder.update(plaintext) + unpadder.finalize()\n plaintext = plaintext.decode(\"utf-8\")\n\n decoded_stores = json.loads(plaintext)\n return decoded_stores\n\n\n_SCRAPE_URL_ = 'https://finance.yahoo.com/quote'\n\n\nclass TickerData:\n \"\"\"\n Have one place to retrieve data from Yahoo API in order to ease caching and speed up operations\n \"\"\"\n user_agent_headers = {\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}\n\n def __init__(self, ticker: str, session=None):\n self.ticker = ticker\n self._session = session or requests\n\n def get(self, url, user_agent_headers=None, params=None, proxy=None, timeout=30):\n proxy = self._get_proxy(proxy)\n response = self._session.get(\n url=url,\n params=params,\n proxies=proxy,\n timeout=timeout,\n headers=user_agent_headers or self.user_agent_headers)\n return response\n\n @lru_cache_freezeargs\n @lru_cache(maxsize=cache_maxsize)\n def cache_get(self, url, user_agent_headers=None, params=None, proxy=None, timeout=30):\n return self.get(url, user_agent_headers, params, proxy, timeout)\n\n def _get_proxy(self, proxy):\n # setup proxy in requests format\n if proxy is not None:\n if isinstance(proxy, dict) and \"https\" in proxy:\n proxy = proxy[\"https\"]\n proxy = {\"https\": proxy}\n return proxy\n\n @lru_cache_freezeargs\n @lru_cache(maxsize=cache_maxsize)\n def get_json_data_stores(self, sub_page: str = None, proxy=None) -> dict:\n '''\n get_json_data_stores returns a python dictionary of the data stores in yahoo finance web page.\n '''\n if sub_page:\n ticker_url = \"{}/{}/{}\".format(_SCRAPE_URL_, self.ticker, sub_page)\n else:\n ticker_url = \"{}/{}\".format(_SCRAPE_URL_, self.ticker)\n\n html = self.get(url=ticker_url, proxy=proxy).text\n\n # The actual json-data for stores is in a javascript assignment in the webpage\n try:\n json_str = html.split('root.App.main =')[1].split(\n '(this)')[0].split(';\\n}')[0].strip()\n except IndexError:\n # Fetch failed, probably because Yahoo spam triggered\n return {}\n\n data = json.loads(json_str)\n\n if \"_cs\" in data and \"_cr\" in data:\n data = decrypt_cryptojs_aes(data)\n\n if 
\"context\" in data and \"dispatcher\" in data[\"context\"]:\n # Keep old code, just in case\n data = data['context']['dispatcher']['stores']\n\n # return data\n new_data = json.dumps(data).replace('{}', 'null')\n new_data = re.sub(\n r'{[\\'|\\\"]raw[\\'|\\\"]:(.*?),(.*?)}', r'\\1', new_data)\n\n return json.loads(new_data)\n", "path": "yfinance/data.py"}]} | 3,524 | 941 |
gh_patches_debug_2937 | rasdani/github-patches | git_diff | openai__gym-1708 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in PixelObservationWrapper
Error log
```
env = PixelObservationWrapper(env, pixels_only=True)
File "/home/tsan/Desktop/gym/gym/wrappers/pixel_observation.py", line 89, in __init__
pixels = self.env.render(**render_kwargs)
File "/home/tsan/Desktop/gym/gym/core.py", line 233, in render
return self.env.render(mode, **kwargs)
TypeError: render() got an unexpected keyword argument 'pixels'
```
Can be reproduced by running
```
import gym
from gym.wrappers.pixel_observation import PixelObservationWrapper # pylint: disable=E0401
env = gym.make('Acrobot-v1')
env.reset()
env = PixelObservationWrapper(env, pixels_only=True)
env.step(0)
```
</issue>
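For context on the traceback above: the wrapper builds `render_kwargs` keyed by pixel key (lines 50-55 of the listing below) and then unpacks the whole dict at line 89, so `render()` receives `pixels={...}`. A tiny illustration of the mismatch; it assumes any environment with an `rgb_array` render mode and a working render backend:

```python
import gym

env = gym.make("Acrobot-v1")
env.reset()

# What PixelObservationWrapper builds: a dict keyed by the pixel key.
render_kwargs = {"pixels": {"mode": "rgb_array"}}

# What the wrapper currently does (line 89): render(pixels={...}) -> TypeError.
# env.render(**render_kwargs)

# What it needs to do: unpack only the inner dict for this pixel key.
frame = env.render(**render_kwargs["pixels"])  # render(mode="rgb_array")
print(frame.shape)
```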
<code>
[start of gym/wrappers/pixel_observation.py]
1 """An observation wrapper that augments observations by pixel values."""
2
3 import collections
4 import copy
5
6 import numpy as np
7
8 from gym import spaces
9 from gym import ObservationWrapper
10
11 STATE_KEY = 'state'
12
13
14 class PixelObservationWrapper(ObservationWrapper):
15 """Augment observations by pixel values."""
16
17 def __init__(self,
18 env,
19 pixels_only=True,
20 render_kwargs=None,
21 pixel_keys=('pixels', )):
22 """Initializes a new pixel Wrapper.
23
24 Args:
25 env: The environment to wrap.
26 pixels_only: If `True` (default), the original observation returned
27 by the wrapped environment will be discarded, and a dictionary
28 observation will only include pixels. If `False`, the
29 observation dictionary will contain both the original
30 observations and the pixel observations.
31 render_kwargs: Optional `dict` containing keyword arguments passed
32 to the `self.render` method.
33 pixel_keys: Optional custom string specifying the pixel
34 observation's key in the `OrderedDict` of observations.
35 Defaults to 'pixels'.
36
37 Raises:
38 ValueError: If `env`'s observation spec is not compatible with the
39 wrapper. Supported formats are a single array, or a dict of
40 arrays.
41 ValueError: If `env`'s observation already contains any of the
42 specified `pixel_keys`.
43 """
44
45 super(PixelObservationWrapper, self).__init__(env)
46
47 if render_kwargs is None:
48 render_kwargs = {}
49
50 for key in pixel_keys:
51 render_kwargs.setdefault(key, {})
52
53 render_mode = render_kwargs[key].pop('mode', 'rgb_array')
54 assert render_mode == 'rgb_array', render_mode
55 render_kwargs[key]['mode'] = 'rgb_array'
56
57 wrapped_observation_space = env.observation_space
58
59 if isinstance(wrapped_observation_space, spaces.Box):
60 self._observation_is_dict = False
61 invalid_keys = set([STATE_KEY])
62 elif isinstance(wrapped_observation_space,
63 (spaces.Dict, collections.MutableMapping)):
64 self._observation_is_dict = True
65 invalid_keys = set(wrapped_observation_space.spaces.keys())
66 else:
67 raise ValueError("Unsupported observation space structure.")
68
69 if not pixels_only:
70 # Make sure that now keys in the `pixel_keys` overlap with
71 # `observation_keys`
72 overlapping_keys = set(pixel_keys) & set(invalid_keys)
73 if overlapping_keys:
74 raise ValueError("Duplicate or reserved pixel keys {!r}."
75 .format(overlapping_keys))
76
77 if pixels_only:
78 self.observation_space = spaces.Dict()
79 elif self._observation_is_dict:
80 self.observation_space = copy.deepcopy(wrapped_observation_space)
81 else:
82 self.observation_space = spaces.Dict()
83 self.observation_space.spaces[STATE_KEY] = wrapped_observation_space
84
85 # Extend observation space with pixels.
86
87 pixels_spaces = {}
88 for pixel_key in pixel_keys:
89 pixels = self.env.render(**render_kwargs)
90
91 if np.issubdtype(pixels.dtype, np.integer):
92 low, high = (0, 255)
93 elif np.issubdtype(pixels.dtype, np.float):
94 low, high = (-float('inf'), float('inf'))
95 else:
96 raise TypeError(pixels.dtype)
97
98 pixels_space = spaces.Box(
99 shape=pixels.shape, low=low, high=high, dtype=pixels.dtype)
100 pixels_spaces[pixel_key] = pixels_space
101
102 self.observation_space.spaces.update(pixels_spaces)
103
104 self._env = env
105 self._pixels_only = pixels_only
106 self._render_kwargs = render_kwargs
107 self._pixel_keys = pixel_keys
108
109 def observation(self, observation):
110 pixel_observation = self._add_pixel_observation(observation)
111 return pixel_observation
112
113 def _add_pixel_observation(self, observation):
114 if self._pixels_only:
115 observation = collections.OrderedDict()
116 elif self._observation_is_dict:
117 observation = type(observation)(observation)
118 else:
119 observation = collections.OrderedDict()
120 observation[STATE_KEY] = observation
121
122 pixel_observations = {
123 pixel_key: self.env.render(**self._render_kwargs[pixel_key])
124 for pixel_key in self._pixel_keys
125 }
126
127 observation.update(pixel_observations)
128
129 return observation
130
[end of gym/wrappers/pixel_observation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
 | diff --git a/gym/wrappers/pixel_observation.py b/gym/wrappers/pixel_observation.py
--- a/gym/wrappers/pixel_observation.py
+++ b/gym/wrappers/pixel_observation.py
@@ -86,7 +86,7 @@
pixels_spaces = {}
for pixel_key in pixel_keys:
- pixels = self.env.render(**render_kwargs)
+ pixels = self.env.render(**render_kwargs[pixel_key])
if np.issubdtype(pixels.dtype, np.integer):
low, high = (0, 255)
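With `render_kwargs[pixel_key]` unpacked instead of the outer dict, the reproduction from the issue should run cleanly. A quick check, assuming a Gym version from this era whose `step` still returns the classic 4-tuple and whose classic-control renderer is available:

```python
import gym
from gym.wrappers.pixel_observation import PixelObservationWrapper

env = gym.make("Acrobot-v1")
env.reset()
env = PixelObservationWrapper(env, pixels_only=True)

obs, reward, done, info = env.step(0)
print(obs["pixels"].shape, obs["pixels"].dtype)  # e.g. (500, 500, 3) uint8
```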
| {"golden_diff": "diff --git a/gym/wrappers/pixel_observation.py b/gym/wrappers/pixel_observation.py\n--- a/gym/wrappers/pixel_observation.py\n+++ b/gym/wrappers/pixel_observation.py\n@@ -86,7 +86,7 @@\n \n pixels_spaces = {}\n for pixel_key in pixel_keys:\n- pixels = self.env.render(**render_kwargs)\n+ pixels = self.env.render(**render_kwargs[pixel_key])\n \n if np.issubdtype(pixels.dtype, np.integer):\n low, high = (0, 255)\n", "issue": "Bug in PixelObservationWrapper \nError log\r\n```\r\n env = PixelObservationWrapper(env, pixels_only=True)\r\n File \"/home/tsan/Desktop/gym/gym/wrappers/pixel_observation.py\", line 89, in __init__\r\n pixels = self.env.render(**render_kwargs)\r\n File \"/home/tsan/Desktop/gym/gym/core.py\", line 233, in render\r\n return self.env.render(mode, **kwargs)\r\nTypeError: render() got an unexpected keyword argument 'pixels'\r\n```\r\n\r\nCan be reproduced by running\r\n```\r\nimport gym\r\nfrom gym.wrappers.pixel_observation import PixelObservationWrapper # pylint: disable=E0401\r\n\r\nenv = gym.make('Acrobot-v1')\r\nenv.reset()\r\nenv = PixelObservationWrapper(env, pixels_only=True)\r\nenv.step(0)\r\n```\n", "before_files": [{"content": "\"\"\"An observation wrapper that augments observations by pixel values.\"\"\"\n\nimport collections\nimport copy\n\nimport numpy as np\n\nfrom gym import spaces\nfrom gym import ObservationWrapper\n\nSTATE_KEY = 'state'\n\n\nclass PixelObservationWrapper(ObservationWrapper):\n \"\"\"Augment observations by pixel values.\"\"\"\n\n def __init__(self,\n env,\n pixels_only=True,\n render_kwargs=None,\n pixel_keys=('pixels', )):\n \"\"\"Initializes a new pixel Wrapper.\n\n Args:\n env: The environment to wrap.\n pixels_only: If `True` (default), the original observation returned\n by the wrapped environment will be discarded, and a dictionary\n observation will only include pixels. If `False`, the\n observation dictionary will contain both the original\n observations and the pixel observations.\n render_kwargs: Optional `dict` containing keyword arguments passed\n to the `self.render` method.\n pixel_keys: Optional custom string specifying the pixel\n observation's key in the `OrderedDict` of observations.\n Defaults to 'pixels'.\n\n Raises:\n ValueError: If `env`'s observation spec is not compatible with the\n wrapper. 
Supported formats are a single array, or a dict of\n arrays.\n ValueError: If `env`'s observation already contains any of the\n specified `pixel_keys`.\n \"\"\"\n\n super(PixelObservationWrapper, self).__init__(env)\n\n if render_kwargs is None:\n render_kwargs = {}\n\n for key in pixel_keys:\n render_kwargs.setdefault(key, {})\n\n render_mode = render_kwargs[key].pop('mode', 'rgb_array')\n assert render_mode == 'rgb_array', render_mode\n render_kwargs[key]['mode'] = 'rgb_array'\n\n wrapped_observation_space = env.observation_space\n\n if isinstance(wrapped_observation_space, spaces.Box):\n self._observation_is_dict = False\n invalid_keys = set([STATE_KEY])\n elif isinstance(wrapped_observation_space,\n (spaces.Dict, collections.MutableMapping)):\n self._observation_is_dict = True\n invalid_keys = set(wrapped_observation_space.spaces.keys())\n else:\n raise ValueError(\"Unsupported observation space structure.\")\n\n if not pixels_only:\n # Make sure that now keys in the `pixel_keys` overlap with\n # `observation_keys`\n overlapping_keys = set(pixel_keys) & set(invalid_keys)\n if overlapping_keys:\n raise ValueError(\"Duplicate or reserved pixel keys {!r}.\"\n .format(overlapping_keys))\n\n if pixels_only:\n self.observation_space = spaces.Dict()\n elif self._observation_is_dict:\n self.observation_space = copy.deepcopy(wrapped_observation_space)\n else:\n self.observation_space = spaces.Dict()\n self.observation_space.spaces[STATE_KEY] = wrapped_observation_space\n\n # Extend observation space with pixels.\n\n pixels_spaces = {}\n for pixel_key in pixel_keys:\n pixels = self.env.render(**render_kwargs)\n\n if np.issubdtype(pixels.dtype, np.integer):\n low, high = (0, 255)\n elif np.issubdtype(pixels.dtype, np.float):\n low, high = (-float('inf'), float('inf'))\n else:\n raise TypeError(pixels.dtype)\n\n pixels_space = spaces.Box(\n shape=pixels.shape, low=low, high=high, dtype=pixels.dtype)\n pixels_spaces[pixel_key] = pixels_space\n\n self.observation_space.spaces.update(pixels_spaces)\n\n self._env = env\n self._pixels_only = pixels_only\n self._render_kwargs = render_kwargs\n self._pixel_keys = pixel_keys\n\n def observation(self, observation):\n pixel_observation = self._add_pixel_observation(observation)\n return pixel_observation\n\n def _add_pixel_observation(self, observation):\n if self._pixels_only:\n observation = collections.OrderedDict()\n elif self._observation_is_dict:\n observation = type(observation)(observation)\n else:\n observation = collections.OrderedDict()\n observation[STATE_KEY] = observation\n\n pixel_observations = {\n pixel_key: self.env.render(**self._render_kwargs[pixel_key])\n for pixel_key in self._pixel_keys\n }\n\n observation.update(pixel_observations)\n\n return observation\n", "path": "gym/wrappers/pixel_observation.py"}]} | 1,930 | 131 |
gh_patches_debug_908 | rasdani/github-patches | git_diff | mlflow__mlflow-9827 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC-FIX] Doc for Run.inputs erroneously refers to Run.data
### Willingness to contribute
No. I cannot contribute a documentation fix at this time.
### URL(s) with the issue
https://www.mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run
### Description of proposal (what needs changing)
In the Run doc page, the documentation for `Run.inputs` gives its return type as `mlflow.entities.RunData` when it should be `mlflow.entities.RunInputs`. The rendered doc currently reads:

    property inputs
        The run inputs, including dataset inputs

        Return type: mlflow.entities.RunData
</issue>
<code>
[start of mlflow/entities/run.py]
1 from typing import Any, Dict, Optional
2
3 from mlflow.entities._mlflow_object import _MLflowObject
4 from mlflow.entities.run_data import RunData
5 from mlflow.entities.run_info import RunInfo
6 from mlflow.entities.run_inputs import RunInputs
7 from mlflow.exceptions import MlflowException
8 from mlflow.protos.service_pb2 import Run as ProtoRun
9
10
11 class Run(_MLflowObject):
12 """
13 Run object.
14 """
15
16 def __init__(
17 self, run_info: RunInfo, run_data: RunData, run_inputs: Optional[RunInputs] = None
18 ) -> None:
19 if run_info is None:
20 raise MlflowException("run_info cannot be None")
21 self._info = run_info
22 self._data = run_data
23 self._inputs = run_inputs
24
25 @property
26 def info(self) -> RunInfo:
27 """
28 The run metadata, such as the run id, start time, and status.
29
30 :rtype: :py:class:`mlflow.entities.RunInfo`
31 """
32 return self._info
33
34 @property
35 def data(self) -> RunData:
36 """
37 The run data, including metrics, parameters, and tags.
38
39 :rtype: :py:class:`mlflow.entities.RunData`
40 """
41 return self._data
42
43 @property
44 def inputs(self) -> RunInputs:
45 """
46 The run inputs, including dataset inputs
47
48 :rtype: :py:class:`mlflow.entities.RunData`
49 """
50 return self._inputs
51
52 def to_proto(self):
53 run = ProtoRun()
54 run.info.MergeFrom(self.info.to_proto())
55 if self.data:
56 run.data.MergeFrom(self.data.to_proto())
57 if self.inputs:
58 run.inputs.MergeFrom(self.inputs.to_proto())
59 return run
60
61 @classmethod
62 def from_proto(cls, proto):
63 return cls(
64 RunInfo.from_proto(proto.info),
65 RunData.from_proto(proto.data),
66 RunInputs.from_proto(proto.inputs),
67 )
68
69 def to_dictionary(self) -> Dict[Any, Any]:
70 run_dict = {
71 "info": dict(self.info),
72 }
73 if self.data:
74 run_dict["data"] = self.data.to_dictionary()
75 if self.inputs:
76 run_dict["inputs"] = self.inputs.to_dictionary()
77 return run_dict
78
[end of mlflow/entities/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
 | diff --git a/mlflow/entities/run.py b/mlflow/entities/run.py
--- a/mlflow/entities/run.py
+++ b/mlflow/entities/run.py
@@ -45,7 +45,7 @@
"""
The run inputs, including dataset inputs
- :rtype: :py:class:`mlflow.entities.RunData`
+ :rtype: :py:class:`mlflow.entities.RunInputs`
"""
return self._inputs
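Since the patch only corrects the documented return type, a small sanity check is enough to confirm the property's real class. The run ID below is a placeholder and the snippet assumes an existing tracking run that logged a dataset input:

```python
import mlflow
from mlflow.entities import RunInputs

run = mlflow.get_run("0123456789abcdef")  # placeholder run ID

# Run.inputs yields RunInputs (dataset inputs), not RunData, matching the fixed docstring.
assert isinstance(run.inputs, RunInputs)
print(run.inputs.dataset_inputs)
```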
| {"golden_diff": "diff --git a/mlflow/entities/run.py b/mlflow/entities/run.py\n--- a/mlflow/entities/run.py\n+++ b/mlflow/entities/run.py\n@@ -45,7 +45,7 @@\n \"\"\"\n The run inputs, including dataset inputs\n \n- :rtype: :py:class:`mlflow.entities.RunData`\n+ :rtype: :py:class:`mlflow.entities.RunInputs`\n \"\"\"\n return self._inputs\n", "issue": "[DOC-FIX] Doc for Run.inputs erroneously refers to Run.data\n### Willingness to contribute\n\nNo. I cannot contribute a documentation fix at this time.\n\n### URL(s) with the issue\n\nhttps://www.mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run\n\n### Description of proposal (what needs changing)\n\nIn the Run doc page, the doc for Run.inputs refers to Run.data instead of Run.input.\r\n\r\n\r\nproperty inputs\r\nThe run inputs, including dataset inputs\r\n\r\nReturn type\r\nmlflow.entities.RunData\r\n\r\n\n", "before_files": [{"content": "from typing import Any, Dict, Optional\n\nfrom mlflow.entities._mlflow_object import _MLflowObject\nfrom mlflow.entities.run_data import RunData\nfrom mlflow.entities.run_info import RunInfo\nfrom mlflow.entities.run_inputs import RunInputs\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.protos.service_pb2 import Run as ProtoRun\n\n\nclass Run(_MLflowObject):\n \"\"\"\n Run object.\n \"\"\"\n\n def __init__(\n self, run_info: RunInfo, run_data: RunData, run_inputs: Optional[RunInputs] = None\n ) -> None:\n if run_info is None:\n raise MlflowException(\"run_info cannot be None\")\n self._info = run_info\n self._data = run_data\n self._inputs = run_inputs\n\n @property\n def info(self) -> RunInfo:\n \"\"\"\n The run metadata, such as the run id, start time, and status.\n\n :rtype: :py:class:`mlflow.entities.RunInfo`\n \"\"\"\n return self._info\n\n @property\n def data(self) -> RunData:\n \"\"\"\n The run data, including metrics, parameters, and tags.\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._data\n\n @property\n def inputs(self) -> RunInputs:\n \"\"\"\n The run inputs, including dataset inputs\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._inputs\n\n def to_proto(self):\n run = ProtoRun()\n run.info.MergeFrom(self.info.to_proto())\n if self.data:\n run.data.MergeFrom(self.data.to_proto())\n if self.inputs:\n run.inputs.MergeFrom(self.inputs.to_proto())\n return run\n\n @classmethod\n def from_proto(cls, proto):\n return cls(\n RunInfo.from_proto(proto.info),\n RunData.from_proto(proto.data),\n RunInputs.from_proto(proto.inputs),\n )\n\n def to_dictionary(self) -> Dict[Any, Any]:\n run_dict = {\n \"info\": dict(self.info),\n }\n if self.data:\n run_dict[\"data\"] = self.data.to_dictionary()\n if self.inputs:\n run_dict[\"inputs\"] = self.inputs.to_dictionary()\n return run_dict\n", "path": "mlflow/entities/run.py"}]} | 1,296 | 93 |
gh_patches_debug_23900 | rasdani/github-patches | git_diff | CTFd__CTFd-1823 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Submissions should link directly to the user that submitted
Submissions don't link directly to the user in team mode, which means you need to search to see which user submitted for a given team.
</issue>
<code>
[start of CTFd/constants/config.py]
1 import json
2
3 from flask import url_for
4
5 from CTFd.constants import JinjaEnum, RawEnum
6 from CTFd.utils import get_config
7
8
9 class ConfigTypes(str, RawEnum):
10 CHALLENGE_VISIBILITY = "challenge_visibility"
11 SCORE_VISIBILITY = "score_visibility"
12 ACCOUNT_VISIBILITY = "account_visibility"
13 REGISTRATION_VISIBILITY = "registration_visibility"
14
15
16 @JinjaEnum
17 class ChallengeVisibilityTypes(str, RawEnum):
18 PUBLIC = "public"
19 PRIVATE = "private"
20 ADMINS = "admins"
21
22
23 @JinjaEnum
24 class ScoreVisibilityTypes(str, RawEnum):
25 PUBLIC = "public"
26 PRIVATE = "private"
27 HIDDEN = "hidden"
28 ADMINS = "admins"
29
30
31 @JinjaEnum
32 class AccountVisibilityTypes(str, RawEnum):
33 PUBLIC = "public"
34 PRIVATE = "private"
35 ADMINS = "admins"
36
37
38 @JinjaEnum
39 class RegistrationVisibilityTypes(str, RawEnum):
40 PUBLIC = "public"
41 PRIVATE = "private"
42
43
44 class _ConfigsWrapper:
45 def __getattr__(self, attr):
46 return get_config(attr)
47
48 @property
49 def ctf_name(self):
50 return get_config("ctf_name", default="CTFd")
51
52 @property
53 def ctf_small_icon(self):
54 icon = get_config("ctf_small_icon")
55 if icon:
56 return url_for("views.files", path=icon)
57 return url_for("views.themes", path="img/favicon.ico")
58
59 @property
60 def theme_header(self):
61 from CTFd.utils.helpers import markup
62
63 return markup(get_config("theme_header", default=""))
64
65 @property
66 def theme_footer(self):
67 from CTFd.utils.helpers import markup
68
69 return markup(get_config("theme_footer", default=""))
70
71 @property
72 def theme_settings(self):
73 return json.loads(get_config("theme_settings", default="null"))
74
75 @property
76 def tos_or_privacy(self):
77 tos = bool(get_config("tos_url") or get_config("tos_text"))
78 privacy = bool(get_config("privacy_url") or get_config("privacy_text"))
79 return tos or privacy
80
81 @property
82 def tos_link(self):
83 return get_config("tos_url", default=url_for("views.tos"))
84
85 @property
86 def privacy_link(self):
87 return get_config("privacy_url", default=url_for("views.privacy"))
88
89
90 Configs = _ConfigsWrapper()
91
[end of CTFd/constants/config.py]
[start of CTFd/utils/modes/__init__.py]
1 from flask import url_for
2
3 from CTFd.models import Teams, Users
4 from CTFd.utils import get_config
5
6 USERS_MODE = "users"
7 TEAMS_MODE = "teams"
8
9
10 def generate_account_url(account_id, admin=False):
11 if get_config("user_mode") == USERS_MODE:
12 if admin:
13 return url_for("admin.users_detail", user_id=account_id)
14 else:
15 return url_for("users.public", user_id=account_id)
16 elif get_config("user_mode") == TEAMS_MODE:
17 if admin:
18 return url_for("admin.teams_detail", team_id=account_id)
19 else:
20 return url_for("teams.public", team_id=account_id)
21
22
23 def get_model():
24 if get_config("user_mode") == USERS_MODE:
25 return Users
26 elif get_config("user_mode") == TEAMS_MODE:
27 return Teams
28
29
30 def get_mode_as_word(plural=False, capitalize=False):
31 if get_config("user_mode") == USERS_MODE:
32 word = "user"
33 else:
34 word = "team"
35
36 if plural:
37 word += "s"
38 if capitalize:
39 word = word.title()
40 return word
41
[end of CTFd/utils/modes/__init__.py]
[start of CTFd/admin/submissions.py]
1 from flask import render_template, request, url_for
2
3 from CTFd.admin import admin
4 from CTFd.models import Challenges, Submissions
5 from CTFd.utils.decorators import admins_only
6 from CTFd.utils.helpers.models import build_model_filters
7 from CTFd.utils.modes import get_model
8
9
10 @admin.route("/admin/submissions", defaults={"submission_type": None})
11 @admin.route("/admin/submissions/<submission_type>")
12 @admins_only
13 def submissions_listing(submission_type):
14 filters_by = {}
15 if submission_type:
16 filters_by["type"] = submission_type
17 filters = []
18
19 q = request.args.get("q")
20 field = request.args.get("field")
21 page = abs(request.args.get("page", 1, type=int))
22
23 filters = build_model_filters(
24 model=Submissions,
25 query=q,
26 field=field,
27 extra_columns={
28 "challenge_name": Challenges.name,
29 "account_id": Submissions.account_id,
30 },
31 )
32
33 Model = get_model()
34
35 submissions = (
36 Submissions.query.add_columns(
37 Submissions.id,
38 Submissions.type,
39 Submissions.challenge_id,
40 Submissions.provided,
41 Submissions.account_id,
42 Submissions.date,
43 Challenges.name.label("challenge_name"),
44 Model.name.label("account_name"),
45 )
46 .filter_by(**filters_by)
47 .filter(*filters)
48 .join(Challenges)
49 .join(Model)
50 .order_by(Submissions.date.desc())
51 .paginate(page=page, per_page=50)
52 )
53
54 args = dict(request.args)
55 args.pop("page", 1)
56
57 return render_template(
58 "admin/submissions.html",
59 submissions=submissions,
60 prev_page=url_for(
61 request.endpoint,
62 submission_type=submission_type,
63 page=submissions.prev_num,
64 **args
65 ),
66 next_page=url_for(
67 request.endpoint,
68 submission_type=submission_type,
69 page=submissions.next_num,
70 **args
71 ),
72 type=submission_type,
73 q=q,
74 field=field,
75 )
76
[end of CTFd/admin/submissions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
 | diff --git a/CTFd/admin/submissions.py b/CTFd/admin/submissions.py
--- a/CTFd/admin/submissions.py
+++ b/CTFd/admin/submissions.py
@@ -33,17 +33,7 @@
Model = get_model()
submissions = (
- Submissions.query.add_columns(
- Submissions.id,
- Submissions.type,
- Submissions.challenge_id,
- Submissions.provided,
- Submissions.account_id,
- Submissions.date,
- Challenges.name.label("challenge_name"),
- Model.name.label("account_name"),
- )
- .filter_by(**filters_by)
+ Submissions.query.filter_by(**filters_by)
.filter(*filters)
.join(Challenges)
.join(Model)
diff --git a/CTFd/constants/config.py b/CTFd/constants/config.py
--- a/CTFd/constants/config.py
+++ b/CTFd/constants/config.py
@@ -13,6 +13,12 @@
REGISTRATION_VISIBILITY = "registration_visibility"
+@JinjaEnum
+class UserModeTypes(str, RawEnum):
+ USERS = "users"
+ TEAMS = "teams"
+
+
@JinjaEnum
class ChallengeVisibilityTypes(str, RawEnum):
PUBLIC = "public"
diff --git a/CTFd/utils/modes/__init__.py b/CTFd/utils/modes/__init__.py
--- a/CTFd/utils/modes/__init__.py
+++ b/CTFd/utils/modes/__init__.py
@@ -3,6 +3,7 @@
from CTFd.models import Teams, Users
from CTFd.utils import get_config
+# TODO: Replace these constants with the UserModeTypes enum
USERS_MODE = "users"
TEAMS_MODE = "teams"
| {"golden_diff": "diff --git a/CTFd/admin/submissions.py b/CTFd/admin/submissions.py\n--- a/CTFd/admin/submissions.py\n+++ b/CTFd/admin/submissions.py\n@@ -33,17 +33,7 @@\n Model = get_model()\n \n submissions = (\n- Submissions.query.add_columns(\n- Submissions.id,\n- Submissions.type,\n- Submissions.challenge_id,\n- Submissions.provided,\n- Submissions.account_id,\n- Submissions.date,\n- Challenges.name.label(\"challenge_name\"),\n- Model.name.label(\"account_name\"),\n- )\n- .filter_by(**filters_by)\n+ Submissions.query.filter_by(**filters_by)\n .filter(*filters)\n .join(Challenges)\n .join(Model)\ndiff --git a/CTFd/constants/config.py b/CTFd/constants/config.py\n--- a/CTFd/constants/config.py\n+++ b/CTFd/constants/config.py\n@@ -13,6 +13,12 @@\n REGISTRATION_VISIBILITY = \"registration_visibility\"\n \n \n+@JinjaEnum\n+class UserModeTypes(str, RawEnum):\n+ USERS = \"users\"\n+ TEAMS = \"teams\"\n+\n+\n @JinjaEnum\n class ChallengeVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\ndiff --git a/CTFd/utils/modes/__init__.py b/CTFd/utils/modes/__init__.py\n--- a/CTFd/utils/modes/__init__.py\n+++ b/CTFd/utils/modes/__init__.py\n@@ -3,6 +3,7 @@\n from CTFd.models import Teams, Users\n from CTFd.utils import get_config\n \n+# TODO: Replace these constants with the UserModeTypes enum\n USERS_MODE = \"users\"\n TEAMS_MODE = \"teams\"\n", "issue": "Submissions should link directly to the user that submitted\nSubmissions don't link directly to the user in team mode which means you need to search to see what user submitted for a given team.\r\n\r\n\n", "before_files": [{"content": "import json\n\nfrom flask import url_for\n\nfrom CTFd.constants import JinjaEnum, RawEnum\nfrom CTFd.utils import get_config\n\n\nclass ConfigTypes(str, RawEnum):\n CHALLENGE_VISIBILITY = \"challenge_visibility\"\n SCORE_VISIBILITY = \"score_visibility\"\n ACCOUNT_VISIBILITY = \"account_visibility\"\n REGISTRATION_VISIBILITY = \"registration_visibility\"\n\n\n@JinjaEnum\nclass ChallengeVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass ScoreVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n HIDDEN = \"hidden\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass AccountVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass RegistrationVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n\n\nclass _ConfigsWrapper:\n def __getattr__(self, attr):\n return get_config(attr)\n\n @property\n def ctf_name(self):\n return get_config(\"ctf_name\", default=\"CTFd\")\n\n @property\n def ctf_small_icon(self):\n icon = get_config(\"ctf_small_icon\")\n if icon:\n return url_for(\"views.files\", path=icon)\n return url_for(\"views.themes\", path=\"img/favicon.ico\")\n\n @property\n def theme_header(self):\n from CTFd.utils.helpers import markup\n\n return markup(get_config(\"theme_header\", default=\"\"))\n\n @property\n def theme_footer(self):\n from CTFd.utils.helpers import markup\n\n return markup(get_config(\"theme_footer\", default=\"\"))\n\n @property\n def theme_settings(self):\n return json.loads(get_config(\"theme_settings\", default=\"null\"))\n\n @property\n def tos_or_privacy(self):\n tos = bool(get_config(\"tos_url\") or get_config(\"tos_text\"))\n privacy = bool(get_config(\"privacy_url\") or get_config(\"privacy_text\"))\n return tos or privacy\n\n @property\n def tos_link(self):\n return get_config(\"tos_url\", 
default=url_for(\"views.tos\"))\n\n @property\n def privacy_link(self):\n return get_config(\"privacy_url\", default=url_for(\"views.privacy\"))\n\n\nConfigs = _ConfigsWrapper()\n", "path": "CTFd/constants/config.py"}, {"content": "from flask import url_for\n\nfrom CTFd.models import Teams, Users\nfrom CTFd.utils import get_config\n\nUSERS_MODE = \"users\"\nTEAMS_MODE = \"teams\"\n\n\ndef generate_account_url(account_id, admin=False):\n if get_config(\"user_mode\") == USERS_MODE:\n if admin:\n return url_for(\"admin.users_detail\", user_id=account_id)\n else:\n return url_for(\"users.public\", user_id=account_id)\n elif get_config(\"user_mode\") == TEAMS_MODE:\n if admin:\n return url_for(\"admin.teams_detail\", team_id=account_id)\n else:\n return url_for(\"teams.public\", team_id=account_id)\n\n\ndef get_model():\n if get_config(\"user_mode\") == USERS_MODE:\n return Users\n elif get_config(\"user_mode\") == TEAMS_MODE:\n return Teams\n\n\ndef get_mode_as_word(plural=False, capitalize=False):\n if get_config(\"user_mode\") == USERS_MODE:\n word = \"user\"\n else:\n word = \"team\"\n\n if plural:\n word += \"s\"\n if capitalize:\n word = word.title()\n return word\n", "path": "CTFd/utils/modes/__init__.py"}, {"content": "from flask import render_template, request, url_for\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Challenges, Submissions\nfrom CTFd.utils.decorators import admins_only\nfrom CTFd.utils.helpers.models import build_model_filters\nfrom CTFd.utils.modes import get_model\n\n\[email protected](\"/admin/submissions\", defaults={\"submission_type\": None})\[email protected](\"/admin/submissions/<submission_type>\")\n@admins_only\ndef submissions_listing(submission_type):\n filters_by = {}\n if submission_type:\n filters_by[\"type\"] = submission_type\n filters = []\n\n q = request.args.get(\"q\")\n field = request.args.get(\"field\")\n page = abs(request.args.get(\"page\", 1, type=int))\n\n filters = build_model_filters(\n model=Submissions,\n query=q,\n field=field,\n extra_columns={\n \"challenge_name\": Challenges.name,\n \"account_id\": Submissions.account_id,\n },\n )\n\n Model = get_model()\n\n submissions = (\n Submissions.query.add_columns(\n Submissions.id,\n Submissions.type,\n Submissions.challenge_id,\n Submissions.provided,\n Submissions.account_id,\n Submissions.date,\n Challenges.name.label(\"challenge_name\"),\n Model.name.label(\"account_name\"),\n )\n .filter_by(**filters_by)\n .filter(*filters)\n .join(Challenges)\n .join(Model)\n .order_by(Submissions.date.desc())\n .paginate(page=page, per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"admin/submissions.html\",\n submissions=submissions,\n prev_page=url_for(\n request.endpoint,\n submission_type=submission_type,\n page=submissions.prev_num,\n **args\n ),\n next_page=url_for(\n request.endpoint,\n submission_type=submission_type,\n page=submissions.next_num,\n **args\n ),\n type=submission_type,\n q=q,\n field=field,\n )\n", "path": "CTFd/admin/submissions.py"}]} | 2,252 | 391 |
gh_patches_debug_12551 | rasdani/github-patches | git_diff | quantumlib__Cirq-5211 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cirq-web doesn't support LineQubit
```python
import cirq
import cirq_web
cirq_circuit = cirq.Circuit(cirq.H(cirq.LineQubit(0)))
cirq_web.Circuit3D(cirq_circuit).generate_html_file(
file_name="circuit_viewer.html",
open_in_browser=True,
)
```
results in
```
AttributeError: 'LineQubit' object has no attribute 'row'
```
</issue>
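For context on the error above: `_build_3D_symbol` (line 90 of the listing below) reads `qubit.row` and `qubit.col`, which exist on `cirq.GridQubit` but not on `cirq.LineQubit`, whose coordinate is `x`. A tiny illustration:

```python
import cirq

gq = cirq.GridQubit(1, 2)
lq = cirq.LineQubit(3)

print(gq.row, gq.col)  # 1 2  (what the serializer expects)
print(lq.x)            # 3    (LineQubit has no .row/.col, hence the AttributeError)
```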
<code>
[start of cirq-web/cirq_web/circuits/circuit.py]
1 # Copyright 2021 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Iterable
15 import cirq
16 from cirq_web import widget
17 from cirq_web.circuits.symbols import (
18 Operation3DSymbol,
19 SymbolResolver,
20 resolve_operation,
21 DEFAULT_SYMBOL_RESOLVERS,
22 )
23
24
25 class Circuit3D(widget.Widget):
26 """Takes cirq.Circuit objects and displays them in 3D."""
27
28 def __init__(
29 self,
30 circuit: cirq.Circuit,
31 resolvers: Iterable[SymbolResolver] = DEFAULT_SYMBOL_RESOLVERS,
32 padding_factor: float = 1,
33 ):
34 """Initializes a Circuit instance.
35
36 Args:
37 circuit: The `cirq.Circuit` to be represented in 3D.
38 resolvers: The symbol resolve for how to show symbols in 3D.
39 padding_factor: The distance between meshes.
40 """
41 super().__init__()
42 self.circuit = circuit
43 self._resolvers = resolvers
44 self.padding_factor = padding_factor
45
46 def get_client_code(self) -> str:
47 # Remove hyphens from the id so that we can use
48 # it as the variable name in TS.
49 # It's important that we assign the circuit to a variable
50 # for animation purposes. Alternatively, there may be ways
51 # to select/manipulate elements on the screen from three.js
52 stripped_id = self.id.replace('-', '')
53 moments = len(self.circuit.moments)
54 self.serialized_circuit = self._serialize_circuit()
55
56 return f"""
57 <button id="camera-reset">Reset Camera</button>
58 <button id="camera-toggle">Toggle Camera Type</button>
59 <script>
60 let viz_{stripped_id} = createGridCircuit({self.serialized_circuit}, {moments}, "{self.id}", {self.padding_factor});
61
62 document.getElementById("camera-reset").addEventListener('click', () => {{
63 viz_{stripped_id}.scene.setCameraAndControls(viz_{stripped_id}.circuit);
64 }});
65
66 document.getElementById("camera-toggle").addEventListener('click', () => {{
67 viz_{stripped_id}.scene.toggleCamera(viz_{stripped_id}.circuit);
68 }});
69 </script>
70 """
71
72 def get_widget_bundle_name(self) -> str:
73 return 'circuit.bundle.js'
74
75 def _serialize_circuit(self) -> str:
76 args = []
77 moments = self.circuit.moments
78 for moment_id, moment in enumerate(moments):
79 for item in moment:
80 symbol = self._build_3D_symbol(item, moment_id)
81 args.append(symbol.to_typescript())
82
83 argument_str = ','.join(str(item) for item in args)
84 return f'[{argument_str}]'
85
86 def _build_3D_symbol(self, operation, moment) -> Operation3DSymbol:
87 symbol_info = resolve_operation(operation, self._resolvers)
88 location_info = []
89 for qubit in operation.qubits:
90 location_info.append({'row': qubit.row, 'col': qubit.col})
91 return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)
92
[end of cirq-web/cirq_web/circuits/circuit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
 | diff --git a/cirq-web/cirq_web/circuits/circuit.py b/cirq-web/cirq_web/circuits/circuit.py
--- a/cirq-web/cirq_web/circuits/circuit.py
+++ b/cirq-web/cirq_web/circuits/circuit.py
@@ -87,5 +87,10 @@
symbol_info = resolve_operation(operation, self._resolvers)
location_info = []
for qubit in operation.qubits:
- location_info.append({'row': qubit.row, 'col': qubit.col})
+ if isinstance(qubit, cirq.GridQubit):
+ location_info.append({'row': qubit.row, 'col': qubit.col})
+ elif isinstance(qubit, cirq.LineQubit):
+ location_info.append({'row': qubit.x, 'col': 0})
+ else:
+ raise ValueError('Unsupported qubit type')
return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)
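A quick way to sanity-check the patch above is to re-run the reproduction from the issue, which previously raised `AttributeError: 'LineQubit' object has no attribute 'row'`. Sketch only, assuming `cirq` and `cirq-web` with this patch are installed:

```python
import cirq
import cirq_web

# With the patch, a LineQubit is mapped to row=qubit.x, col=0 instead of
# unconditionally reading .row/.col, so this no longer raises.
circuit = cirq.Circuit(cirq.H(cirq.LineQubit(0)))
cirq_web.Circuit3D(circuit).generate_html_file(
    file_name="circuit_viewer.html",
    open_in_browser=False,
)
```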
| {"golden_diff": "diff --git a/cirq-web/cirq_web/circuits/circuit.py b/cirq-web/cirq_web/circuits/circuit.py\n--- a/cirq-web/cirq_web/circuits/circuit.py\n+++ b/cirq-web/cirq_web/circuits/circuit.py\n@@ -87,5 +87,10 @@\n symbol_info = resolve_operation(operation, self._resolvers)\n location_info = []\n for qubit in operation.qubits:\n- location_info.append({'row': qubit.row, 'col': qubit.col})\n+ if isinstance(qubit, cirq.GridQubit):\n+ location_info.append({'row': qubit.row, 'col': qubit.col})\n+ elif isinstance(qubit, cirq.LineQubit):\n+ location_info.append({'row': qubit.x, 'col': 0})\n+ else:\n+ raise ValueError('Unsupported qubit type')\n return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)\n", "issue": "cirq-web doesn't support LineQubit\n```python\r\nimport cirq\r\nimport cirq_web\r\n\r\ncirq_circuit = cirq.Circuit(cirq.H(cirq.LineQubit(0)))\r\ncirq_web.Circuit3D(cirq_circuit).generate_html_file(\r\n file_name=\"circuit_viewer.html\",\r\n open_in_browser=True,\r\n)\r\n```\r\n\r\nresults in\r\n\r\n```\r\nAttributeError: 'LineQubit' object has no attribute 'row'\r\n```\n", "before_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Iterable\nimport cirq\nfrom cirq_web import widget\nfrom cirq_web.circuits.symbols import (\n Operation3DSymbol,\n SymbolResolver,\n resolve_operation,\n DEFAULT_SYMBOL_RESOLVERS,\n)\n\n\nclass Circuit3D(widget.Widget):\n \"\"\"Takes cirq.Circuit objects and displays them in 3D.\"\"\"\n\n def __init__(\n self,\n circuit: cirq.Circuit,\n resolvers: Iterable[SymbolResolver] = DEFAULT_SYMBOL_RESOLVERS,\n padding_factor: float = 1,\n ):\n \"\"\"Initializes a Circuit instance.\n\n Args:\n circuit: The `cirq.Circuit` to be represented in 3D.\n resolvers: The symbol resolve for how to show symbols in 3D.\n padding_factor: The distance between meshes.\n \"\"\"\n super().__init__()\n self.circuit = circuit\n self._resolvers = resolvers\n self.padding_factor = padding_factor\n\n def get_client_code(self) -> str:\n # Remove hyphens from the id so that we can use\n # it as the variable name in TS.\n # It's important that we assign the circuit to a variable\n # for animation purposes. 
Alternatively, there may be ways\n # to select/manipulate elements on the screen from three.js\n stripped_id = self.id.replace('-', '')\n moments = len(self.circuit.moments)\n self.serialized_circuit = self._serialize_circuit()\n\n return f\"\"\"\n <button id=\"camera-reset\">Reset Camera</button>\n <button id=\"camera-toggle\">Toggle Camera Type</button>\n <script>\n let viz_{stripped_id} = createGridCircuit({self.serialized_circuit}, {moments}, \"{self.id}\", {self.padding_factor});\n\n document.getElementById(\"camera-reset\").addEventListener('click', () => {{\n viz_{stripped_id}.scene.setCameraAndControls(viz_{stripped_id}.circuit);\n }});\n\n document.getElementById(\"camera-toggle\").addEventListener('click', () => {{\n viz_{stripped_id}.scene.toggleCamera(viz_{stripped_id}.circuit);\n }});\n </script>\n \"\"\"\n\n def get_widget_bundle_name(self) -> str:\n return 'circuit.bundle.js'\n\n def _serialize_circuit(self) -> str:\n args = []\n moments = self.circuit.moments\n for moment_id, moment in enumerate(moments):\n for item in moment:\n symbol = self._build_3D_symbol(item, moment_id)\n args.append(symbol.to_typescript())\n\n argument_str = ','.join(str(item) for item in args)\n return f'[{argument_str}]'\n\n def _build_3D_symbol(self, operation, moment) -> Operation3DSymbol:\n symbol_info = resolve_operation(operation, self._resolvers)\n location_info = []\n for qubit in operation.qubits:\n location_info.append({'row': qubit.row, 'col': qubit.col})\n return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)\n", "path": "cirq-web/cirq_web/circuits/circuit.py"}]} | 1,622 | 215 |
gh_patches_debug_38232 | rasdani/github-patches | git_diff | freqtrade__freqtrade-2217 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plot-scripts require --datadir
## Describe your environment
* Python Version: 3.7
* Branch: Develop
* Last Commit ID: 962d487edb0d28f95d6395c09189a333c436fd20
## Describe the problem:
Currently, `freqtrade plot-dataframe` requires either a valid configuration (`--config`, or a `config.json` in the cwd) or `--datadir user_data/data/bittrex` to find the backtest data.
This is because, without one of these, the exchange is not known, and the exchange is required to locate the data in the datadir.
## Possible fixes
* Error out and point out that one of the two conditions has to be met (see the sketch after this list)
* Add an `--exchange` parameter as an alternative (including the above)
* ... other ideas?
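A rough sketch of the first option, i.e. failing fast with a clear message when neither a config nor a datadir is given (names are placeholders, not final code):

```python
from argparse import Namespace

from freqtrade import OperationalException


def validate_plot_args(args: Namespace) -> None:
    # Hypothetical guard called from the plot entrypoints before building the config.
    args_tmp = vars(args)
    if not args_tmp.get('datadir') and not args_tmp.get('config'):
        raise OperationalException(
            "You need to specify either `--datadir` or `--config` "
            "for plot-profit and plot-dataframe.")
```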
</issue>
<code>
[start of freqtrade/configuration/arguments.py]
1 """
2 This module contains the argument manager class
3 """
4 import argparse
5 from typing import List, Optional
6
7 from freqtrade.configuration.cli_options import AVAILABLE_CLI_OPTIONS
8 from freqtrade import constants
9
10 ARGS_COMMON = ["verbosity", "logfile", "version", "config", "datadir", "user_data_dir"]
11
12 ARGS_STRATEGY = ["strategy", "strategy_path"]
13
14 ARGS_MAIN = ARGS_COMMON + ARGS_STRATEGY + ["db_url", "sd_notify"]
15
16 ARGS_COMMON_OPTIMIZE = ["ticker_interval", "timerange",
17 "max_open_trades", "stake_amount", "refresh_pairs"]
18
19 ARGS_BACKTEST = ARGS_COMMON_OPTIMIZE + ["position_stacking", "use_max_market_positions",
20 "strategy_list", "export", "exportfilename"]
21
22 ARGS_HYPEROPT = ARGS_COMMON_OPTIMIZE + ["hyperopt", "hyperopt_path",
23 "position_stacking", "epochs", "spaces",
24 "use_max_market_positions", "print_all",
25 "print_colorized", "print_json", "hyperopt_jobs",
26 "hyperopt_random_state", "hyperopt_min_trades",
27 "hyperopt_continue", "hyperopt_loss"]
28
29 ARGS_EDGE = ARGS_COMMON_OPTIMIZE + ["stoploss_range"]
30
31 ARGS_LIST_EXCHANGES = ["print_one_column"]
32
33 ARGS_CREATE_USERDIR = ["user_data_dir"]
34
35 ARGS_DOWNLOAD_DATA = ["pairs", "pairs_file", "days", "exchange", "timeframes", "erase"]
36
37 ARGS_PLOT_DATAFRAME = ["pairs", "indicators1", "indicators2", "plot_limit", "db_url",
38 "trade_source", "export", "exportfilename", "timerange", "ticker_interval"]
39
40 ARGS_PLOT_PROFIT = ["pairs", "timerange", "export", "exportfilename", "db_url",
41 "trade_source", "ticker_interval"]
42
43 NO_CONF_REQURIED = ["download-data", "plot-dataframe", "plot-profit"]
44
45
46 class Arguments(object):
47 """
48 Arguments Class. Manage the arguments received by the cli
49 """
50 def __init__(self, args: Optional[List[str]]) -> None:
51 self.args = args
52 self._parsed_arg: Optional[argparse.Namespace] = None
53 self.parser = argparse.ArgumentParser(description='Free, open source crypto trading bot')
54
55 def _load_args(self) -> None:
56 self._build_args(optionlist=ARGS_MAIN)
57 self._build_subcommands()
58
59 def get_parsed_arg(self) -> argparse.Namespace:
60 """
61 Return the list of arguments
62 :return: List[str] List of arguments
63 """
64 if self._parsed_arg is None:
65 self._load_args()
66 self._parsed_arg = self._parse_args()
67
68 return self._parsed_arg
69
70 def _parse_args(self) -> argparse.Namespace:
71 """
72 Parses given arguments and returns an argparse Namespace instance.
73 """
74 parsed_arg = self.parser.parse_args(self.args)
75
76 # Workaround issue in argparse with action='append' and default value
77 # (see https://bugs.python.org/issue16399)
78 # Allow no-config for certain commands (like downloading / plotting)
79 if (parsed_arg.config is None
80 and not ('subparser' in parsed_arg and parsed_arg.subparser in NO_CONF_REQURIED)):
81 parsed_arg.config = [constants.DEFAULT_CONFIG]
82
83 return parsed_arg
84
85 def _build_args(self, optionlist, parser=None):
86 parser = parser or self.parser
87
88 for val in optionlist:
89 opt = AVAILABLE_CLI_OPTIONS[val]
90 parser.add_argument(*opt.cli, dest=val, **opt.kwargs)
91
92 def _build_subcommands(self) -> None:
93 """
94 Builds and attaches all subcommands.
95 :return: None
96 """
97 from freqtrade.optimize import start_backtesting, start_hyperopt, start_edge
98 from freqtrade.utils import start_create_userdir, start_download_data, start_list_exchanges
99
100 subparsers = self.parser.add_subparsers(dest='subparser')
101
102 # Add backtesting subcommand
103 backtesting_cmd = subparsers.add_parser('backtesting', help='Backtesting module.')
104 backtesting_cmd.set_defaults(func=start_backtesting)
105 self._build_args(optionlist=ARGS_BACKTEST, parser=backtesting_cmd)
106
107 # Add edge subcommand
108 edge_cmd = subparsers.add_parser('edge', help='Edge module.')
109 edge_cmd.set_defaults(func=start_edge)
110 self._build_args(optionlist=ARGS_EDGE, parser=edge_cmd)
111
112 # Add hyperopt subcommand
113 hyperopt_cmd = subparsers.add_parser('hyperopt', help='Hyperopt module.')
114 hyperopt_cmd.set_defaults(func=start_hyperopt)
115 self._build_args(optionlist=ARGS_HYPEROPT, parser=hyperopt_cmd)
116
117 # add create-userdir subcommand
118 create_userdir_cmd = subparsers.add_parser('create-userdir',
119 help="Create user-data directory.")
120 create_userdir_cmd.set_defaults(func=start_create_userdir)
121 self._build_args(optionlist=ARGS_CREATE_USERDIR, parser=create_userdir_cmd)
122
123 # Add list-exchanges subcommand
124 list_exchanges_cmd = subparsers.add_parser(
125 'list-exchanges',
126 help='Print available exchanges.'
127 )
128 list_exchanges_cmd.set_defaults(func=start_list_exchanges)
129 self._build_args(optionlist=ARGS_LIST_EXCHANGES, parser=list_exchanges_cmd)
130
131 # Add download-data subcommand
132 download_data_cmd = subparsers.add_parser(
133 'download-data',
134 help='Download backtesting data.'
135 )
136 download_data_cmd.set_defaults(func=start_download_data)
137 self._build_args(optionlist=ARGS_DOWNLOAD_DATA, parser=download_data_cmd)
138
139 # Add Plotting subcommand
140 from freqtrade.plot.plot_utils import start_plot_dataframe, start_plot_profit
141 plot_dataframe_cmd = subparsers.add_parser(
142 'plot-dataframe',
143 help='Plot candles with indicators.'
144 )
145 plot_dataframe_cmd.set_defaults(func=start_plot_dataframe)
146 self._build_args(optionlist=ARGS_PLOT_DATAFRAME, parser=plot_dataframe_cmd)
147
148 # Plot profit
149 plot_profit_cmd = subparsers.add_parser(
150 'plot-profit',
151 help='Generate plot showing profits.'
152 )
153 plot_profit_cmd.set_defaults(func=start_plot_profit)
154 self._build_args(optionlist=ARGS_PLOT_PROFIT, parser=plot_profit_cmd)
155
[end of freqtrade/configuration/arguments.py]
[start of freqtrade/plot/plot_utils.py]
1 from argparse import Namespace
2
3 from freqtrade.state import RunMode
4 from freqtrade.utils import setup_utils_configuration
5
6
7 def start_plot_dataframe(args: Namespace) -> None:
8 """
9 Entrypoint for dataframe plotting
10 """
11 # Import here to avoid errors if plot-dependencies are not installed.
12 from freqtrade.plot.plotting import analyse_and_plot_pairs
13 config = setup_utils_configuration(args, RunMode.PLOT)
14
15 analyse_and_plot_pairs(config)
16
17
18 def start_plot_profit(args: Namespace) -> None:
19 """
20 Entrypoint for plot_profit
21 """
22 # Import here to avoid errors if plot-dependencies are not installed.
23 from freqtrade.plot.plotting import plot_profit
24 config = setup_utils_configuration(args, RunMode.PLOT)
25
26 plot_profit(config)
27
[end of freqtrade/plot/plot_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/freqtrade/configuration/arguments.py b/freqtrade/configuration/arguments.py
--- a/freqtrade/configuration/arguments.py
+++ b/freqtrade/configuration/arguments.py
@@ -3,6 +3,7 @@
"""
import argparse
from typing import List, Optional
+from pathlib import Path
from freqtrade.configuration.cli_options import AVAILABLE_CLI_OPTIONS
from freqtrade import constants
@@ -73,11 +74,13 @@
"""
parsed_arg = self.parser.parse_args(self.args)
+ # When no config is provided, but a config exists, use that configuration!
+
# Workaround issue in argparse with action='append' and default value
# (see https://bugs.python.org/issue16399)
# Allow no-config for certain commands (like downloading / plotting)
- if (parsed_arg.config is None
- and not ('subparser' in parsed_arg and parsed_arg.subparser in NO_CONF_REQURIED)):
+ if (parsed_arg.config is None and ((Path.cwd() / constants.DEFAULT_CONFIG).is_file() or
+ not ('subparser' in parsed_arg and parsed_arg.subparser in NO_CONF_REQURIED))):
parsed_arg.config = [constants.DEFAULT_CONFIG]
return parsed_arg
diff --git a/freqtrade/plot/plot_utils.py b/freqtrade/plot/plot_utils.py
--- a/freqtrade/plot/plot_utils.py
+++ b/freqtrade/plot/plot_utils.py
@@ -1,15 +1,24 @@
from argparse import Namespace
-
+from freqtrade import OperationalException
from freqtrade.state import RunMode
from freqtrade.utils import setup_utils_configuration
+def validate_plot_args(args: Namespace):
+ args_tmp = vars(args)
+ if not args_tmp.get('datadir') and not args_tmp.get('config'):
+ raise OperationalException(
+ "You need to specify either `--datadir` or `--config` "
+ "for plot-profit and plot-dataframe.")
+
+
def start_plot_dataframe(args: Namespace) -> None:
"""
Entrypoint for dataframe plotting
"""
# Import here to avoid errors if plot-dependencies are not installed.
from freqtrade.plot.plotting import analyse_and_plot_pairs
+ validate_plot_args(args)
config = setup_utils_configuration(args, RunMode.PLOT)
analyse_and_plot_pairs(config)
@@ -21,6 +30,7 @@
"""
# Import here to avoid errors if plot-dependencies are not installed.
from freqtrade.plot.plotting import plot_profit
+ validate_plot_args(args)
config = setup_utils_configuration(args, RunMode.PLOT)
plot_profit(config)
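Side note on the shape of the fix: putting the guard in `plot_utils` lets both plot entrypoints share it, while the `Path.cwd()` check in `Arguments` preserves the old behaviour whenever a `config.json` is present. A hedged illustration of the new guard (assumes the patch above is applied):

```python
from argparse import Namespace

from freqtrade.plot.plot_utils import validate_plot_args

# Neither --datadir nor --config given: raises OperationalException with
# "You need to specify either `--datadir` or `--config` ..."
validate_plot_args(Namespace(datadir=None, config=None))
```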
| {"golden_diff": "diff --git a/freqtrade/configuration/arguments.py b/freqtrade/configuration/arguments.py\n--- a/freqtrade/configuration/arguments.py\n+++ b/freqtrade/configuration/arguments.py\n@@ -3,6 +3,7 @@\n \"\"\"\n import argparse\n from typing import List, Optional\n+from pathlib import Path\n \n from freqtrade.configuration.cli_options import AVAILABLE_CLI_OPTIONS\n from freqtrade import constants\n@@ -73,11 +74,13 @@\n \"\"\"\n parsed_arg = self.parser.parse_args(self.args)\n \n+ # When no config is provided, but a config exists, use that configuration!\n+\n # Workaround issue in argparse with action='append' and default value\n # (see https://bugs.python.org/issue16399)\n # Allow no-config for certain commands (like downloading / plotting)\n- if (parsed_arg.config is None\n- and not ('subparser' in parsed_arg and parsed_arg.subparser in NO_CONF_REQURIED)):\n+ if (parsed_arg.config is None and ((Path.cwd() / constants.DEFAULT_CONFIG).is_file() or\n+ not ('subparser' in parsed_arg and parsed_arg.subparser in NO_CONF_REQURIED))):\n parsed_arg.config = [constants.DEFAULT_CONFIG]\n \n return parsed_arg\ndiff --git a/freqtrade/plot/plot_utils.py b/freqtrade/plot/plot_utils.py\n--- a/freqtrade/plot/plot_utils.py\n+++ b/freqtrade/plot/plot_utils.py\n@@ -1,15 +1,24 @@\n from argparse import Namespace\n-\n+from freqtrade import OperationalException\n from freqtrade.state import RunMode\n from freqtrade.utils import setup_utils_configuration\n \n \n+def validate_plot_args(args: Namespace):\n+ args_tmp = vars(args)\n+ if not args_tmp.get('datadir') and not args_tmp.get('config'):\n+ raise OperationalException(\n+ \"You need to specify either `--datadir` or `--config` \"\n+ \"for plot-profit and plot-dataframe.\")\n+\n+\n def start_plot_dataframe(args: Namespace) -> None:\n \"\"\"\n Entrypoint for dataframe plotting\n \"\"\"\n # Import here to avoid errors if plot-dependencies are not installed.\n from freqtrade.plot.plotting import analyse_and_plot_pairs\n+ validate_plot_args(args)\n config = setup_utils_configuration(args, RunMode.PLOT)\n \n analyse_and_plot_pairs(config)\n@@ -21,6 +30,7 @@\n \"\"\"\n # Import here to avoid errors if plot-dependencies are not installed.\n from freqtrade.plot.plotting import plot_profit\n+ validate_plot_args(args)\n config = setup_utils_configuration(args, RunMode.PLOT)\n \n plot_profit(config)\n", "issue": "Plot-scripts require --datadir\n## Describe your environment\r\n\r\n * Python Version: 3.7\r\n * Branch: Develop\r\n * Last Commit ID: 962d487edb0d28f95d6395c09189a333c436fd20\r\n \r\n## Describe the problem:\r\n\r\nCurrently, `freqtrade plot-dataframe` does require either a valid configuration (`--config` or `config.json` in cwd - or `--datadir user_data/data/bittrex` to find the backtest data.\r\nThis is because without one of these, the exchange is not known, which is a requirement to find the data in the datadir.\r\n\r\n## Possible fixes\r\n\r\n* Error and point out that one of the 2 conditions have to be met\r\n* add `--exchange` parameter as alternative (including the above)\r\n\r\n* ... 
other ideas?\n", "before_files": [{"content": "\"\"\"\nThis module contains the argument manager class\n\"\"\"\nimport argparse\nfrom typing import List, Optional\n\nfrom freqtrade.configuration.cli_options import AVAILABLE_CLI_OPTIONS\nfrom freqtrade import constants\n\nARGS_COMMON = [\"verbosity\", \"logfile\", \"version\", \"config\", \"datadir\", \"user_data_dir\"]\n\nARGS_STRATEGY = [\"strategy\", \"strategy_path\"]\n\nARGS_MAIN = ARGS_COMMON + ARGS_STRATEGY + [\"db_url\", \"sd_notify\"]\n\nARGS_COMMON_OPTIMIZE = [\"ticker_interval\", \"timerange\",\n \"max_open_trades\", \"stake_amount\", \"refresh_pairs\"]\n\nARGS_BACKTEST = ARGS_COMMON_OPTIMIZE + [\"position_stacking\", \"use_max_market_positions\",\n \"strategy_list\", \"export\", \"exportfilename\"]\n\nARGS_HYPEROPT = ARGS_COMMON_OPTIMIZE + [\"hyperopt\", \"hyperopt_path\",\n \"position_stacking\", \"epochs\", \"spaces\",\n \"use_max_market_positions\", \"print_all\",\n \"print_colorized\", \"print_json\", \"hyperopt_jobs\",\n \"hyperopt_random_state\", \"hyperopt_min_trades\",\n \"hyperopt_continue\", \"hyperopt_loss\"]\n\nARGS_EDGE = ARGS_COMMON_OPTIMIZE + [\"stoploss_range\"]\n\nARGS_LIST_EXCHANGES = [\"print_one_column\"]\n\nARGS_CREATE_USERDIR = [\"user_data_dir\"]\n\nARGS_DOWNLOAD_DATA = [\"pairs\", \"pairs_file\", \"days\", \"exchange\", \"timeframes\", \"erase\"]\n\nARGS_PLOT_DATAFRAME = [\"pairs\", \"indicators1\", \"indicators2\", \"plot_limit\", \"db_url\",\n \"trade_source\", \"export\", \"exportfilename\", \"timerange\", \"ticker_interval\"]\n\nARGS_PLOT_PROFIT = [\"pairs\", \"timerange\", \"export\", \"exportfilename\", \"db_url\",\n \"trade_source\", \"ticker_interval\"]\n\nNO_CONF_REQURIED = [\"download-data\", \"plot-dataframe\", \"plot-profit\"]\n\n\nclass Arguments(object):\n \"\"\"\n Arguments Class. 
Manage the arguments received by the cli\n \"\"\"\n def __init__(self, args: Optional[List[str]]) -> None:\n self.args = args\n self._parsed_arg: Optional[argparse.Namespace] = None\n self.parser = argparse.ArgumentParser(description='Free, open source crypto trading bot')\n\n def _load_args(self) -> None:\n self._build_args(optionlist=ARGS_MAIN)\n self._build_subcommands()\n\n def get_parsed_arg(self) -> argparse.Namespace:\n \"\"\"\n Return the list of arguments\n :return: List[str] List of arguments\n \"\"\"\n if self._parsed_arg is None:\n self._load_args()\n self._parsed_arg = self._parse_args()\n\n return self._parsed_arg\n\n def _parse_args(self) -> argparse.Namespace:\n \"\"\"\n Parses given arguments and returns an argparse Namespace instance.\n \"\"\"\n parsed_arg = self.parser.parse_args(self.args)\n\n # Workaround issue in argparse with action='append' and default value\n # (see https://bugs.python.org/issue16399)\n # Allow no-config for certain commands (like downloading / plotting)\n if (parsed_arg.config is None\n and not ('subparser' in parsed_arg and parsed_arg.subparser in NO_CONF_REQURIED)):\n parsed_arg.config = [constants.DEFAULT_CONFIG]\n\n return parsed_arg\n\n def _build_args(self, optionlist, parser=None):\n parser = parser or self.parser\n\n for val in optionlist:\n opt = AVAILABLE_CLI_OPTIONS[val]\n parser.add_argument(*opt.cli, dest=val, **opt.kwargs)\n\n def _build_subcommands(self) -> None:\n \"\"\"\n Builds and attaches all subcommands.\n :return: None\n \"\"\"\n from freqtrade.optimize import start_backtesting, start_hyperopt, start_edge\n from freqtrade.utils import start_create_userdir, start_download_data, start_list_exchanges\n\n subparsers = self.parser.add_subparsers(dest='subparser')\n\n # Add backtesting subcommand\n backtesting_cmd = subparsers.add_parser('backtesting', help='Backtesting module.')\n backtesting_cmd.set_defaults(func=start_backtesting)\n self._build_args(optionlist=ARGS_BACKTEST, parser=backtesting_cmd)\n\n # Add edge subcommand\n edge_cmd = subparsers.add_parser('edge', help='Edge module.')\n edge_cmd.set_defaults(func=start_edge)\n self._build_args(optionlist=ARGS_EDGE, parser=edge_cmd)\n\n # Add hyperopt subcommand\n hyperopt_cmd = subparsers.add_parser('hyperopt', help='Hyperopt module.')\n hyperopt_cmd.set_defaults(func=start_hyperopt)\n self._build_args(optionlist=ARGS_HYPEROPT, parser=hyperopt_cmd)\n\n # add create-userdir subcommand\n create_userdir_cmd = subparsers.add_parser('create-userdir',\n help=\"Create user-data directory.\")\n create_userdir_cmd.set_defaults(func=start_create_userdir)\n self._build_args(optionlist=ARGS_CREATE_USERDIR, parser=create_userdir_cmd)\n\n # Add list-exchanges subcommand\n list_exchanges_cmd = subparsers.add_parser(\n 'list-exchanges',\n help='Print available exchanges.'\n )\n list_exchanges_cmd.set_defaults(func=start_list_exchanges)\n self._build_args(optionlist=ARGS_LIST_EXCHANGES, parser=list_exchanges_cmd)\n\n # Add download-data subcommand\n download_data_cmd = subparsers.add_parser(\n 'download-data',\n help='Download backtesting data.'\n )\n download_data_cmd.set_defaults(func=start_download_data)\n self._build_args(optionlist=ARGS_DOWNLOAD_DATA, parser=download_data_cmd)\n\n # Add Plotting subcommand\n from freqtrade.plot.plot_utils import start_plot_dataframe, start_plot_profit\n plot_dataframe_cmd = subparsers.add_parser(\n 'plot-dataframe',\n help='Plot candles with indicators.'\n )\n plot_dataframe_cmd.set_defaults(func=start_plot_dataframe)\n 
self._build_args(optionlist=ARGS_PLOT_DATAFRAME, parser=plot_dataframe_cmd)\n\n # Plot profit\n plot_profit_cmd = subparsers.add_parser(\n 'plot-profit',\n help='Generate plot showing profits.'\n )\n plot_profit_cmd.set_defaults(func=start_plot_profit)\n self._build_args(optionlist=ARGS_PLOT_PROFIT, parser=plot_profit_cmd)\n", "path": "freqtrade/configuration/arguments.py"}, {"content": "from argparse import Namespace\n\nfrom freqtrade.state import RunMode\nfrom freqtrade.utils import setup_utils_configuration\n\n\ndef start_plot_dataframe(args: Namespace) -> None:\n \"\"\"\n Entrypoint for dataframe plotting\n \"\"\"\n # Import here to avoid errors if plot-dependencies are not installed.\n from freqtrade.plot.plotting import analyse_and_plot_pairs\n config = setup_utils_configuration(args, RunMode.PLOT)\n\n analyse_and_plot_pairs(config)\n\n\ndef start_plot_profit(args: Namespace) -> None:\n \"\"\"\n Entrypoint for plot_profit\n \"\"\"\n # Import here to avoid errors if plot-dependencies are not installed.\n from freqtrade.plot.plotting import plot_profit\n config = setup_utils_configuration(args, RunMode.PLOT)\n\n plot_profit(config)\n", "path": "freqtrade/plot/plot_utils.py"}]} | 2,670 | 584 |
gh_patches_debug_27850 | rasdani/github-patches | git_diff | streamlit__streamlit-5021 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
st.radio with DataFrame fails on rerun
### Summary
When you pass a DataFrame as the options in an st.radio, every rerun throws an error (but the first run works).
### Steps to reproduce
Code snippet:
```
import streamlit as st
import pandas as pd
df = pd.DataFrame({'foo': ['one', 'two']})
st.radio('Foo', df)
```
1. Run the code above.
2. Press "R" to rerun the code above.
**Expected behavior:**
The rerun works, just like the first run.
**Actual behavior:**
The app hangs (stays in running state forever) and shows the error below in the terminal:
```
Exception in thread ScriptRunner.scriptThread:
Traceback (most recent call last):
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
File "/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/script_runner.py", line 210, in _process_request_queue
widget_states = self._session_state.as_widget_states()
File "/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py", line 560, in as_widget_states
return self._new_widget_state.as_widget_states()
File "/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py", line 211, in as_widget_states
states = [
File "/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py", line 214, in <listcomp>
if self.get_serialized(widget_id)
File "/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py", line 190, in get_serialized
serialized = metadata.serializer(item.value)
File "/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/elements/radio.py", line 136, in serialize_radio
return index_(options, v)
File "/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/util.py", line 129, in index_
raise ValueError("{} is not in iterable".format(str(x)))
ValueError: one is not in iterable
```
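For what it's worth, the `ValueError` makes sense once you note that iterating over a DataFrame yields its column labels, not the cell values, so the stored selection is never found again. Minimal illustration (plain pandas, nothing Streamlit-specific):

```python
import pandas as pd

df = pd.DataFrame({'foo': ['one', 'two']})

# Iterating the DataFrame gives the column labels...
print(list(df))           # ['foo']

# ...so searching it for the previously selected value fails, which is
# roughly what index_(options, v) does when the widget state is serialized.
print('one' in list(df))  # False
```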
### Is this a regression?
yes
Previous known working version = 0.84.0
### Debug info
- Streamlit version: 1.4.0
- Python version: 3.9.5
### Additional information
A meta-bug related to this: I'm not sure why this error is thrown in the terminal rather than inside the Streamlit app. Previously, our goal was to have _every_ error appear in the app, so you never had to check the terminal. It would be great to see if some code change unexpectedly changed this behavior.
</issue>
<code>
[start of e2e/scripts/st_radio.py]
1 # Copyright 2018-2022 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 options = ("female", "male")
18 i1 = st.radio("radio 1", options, 1)
19 st.write("value 1:", i1)
20
21 i2 = st.radio("radio 2", options, 0, format_func=lambda x: x.capitalize())
22 st.write("value 2:", i2)
23
24 i3 = st.radio("radio 3", [])
25 st.write("value 3:", i3)
26
27 i4 = st.radio("radio 4", options, disabled=True)
28 st.write("value 4:", i4)
29
30 i5 = st.radio("radio 5", options, horizontal=True)
31 st.write("value 5:", i5)
32
33 if st._is_running_with_streamlit:
34
35 def on_change():
36 st.session_state.radio_changed = True
37
38 st.radio("radio 6", options, 1, key="radio6", on_change=on_change)
39 st.write("value 6:", st.session_state.radio6)
40 st.write("radio changed:", "radio_changed" in st.session_state)
41
[end of e2e/scripts/st_radio.py]
[start of lib/streamlit/elements/radio.py]
1 # Copyright 2018-2022 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from textwrap import dedent
16 from typing import Any, Callable, Optional, cast
17
18 import streamlit
19 from streamlit.errors import StreamlitAPIException
20 from streamlit.proto.Radio_pb2 import Radio as RadioProto
21 from streamlit.scriptrunner import ScriptRunContext, get_script_run_ctx
22 from streamlit.state import (
23 register_widget,
24 WidgetArgs,
25 WidgetCallback,
26 WidgetKwargs,
27 )
28 from streamlit.type_util import Key, OptionSequence, ensure_indexable, to_key
29 from streamlit.util import index_
30 from .form import current_form_id
31 from .utils import check_callback_rules, check_session_state_rules
32
33
34 class RadioMixin:
35 def radio(
36 self,
37 label: str,
38 options: OptionSequence,
39 index: int = 0,
40 format_func: Callable[[Any], Any] = str,
41 key: Optional[Key] = None,
42 help: Optional[str] = None,
43 on_change: Optional[WidgetCallback] = None,
44 args: Optional[WidgetArgs] = None,
45 kwargs: Optional[WidgetKwargs] = None,
46 *, # keyword-only args:
47 disabled: bool = False,
48 horizontal: bool = False,
49 ) -> Any:
50 """Display a radio button widget.
51
52 Parameters
53 ----------
54 label : str
55 A short label explaining to the user what this radio group is for.
56 options : Sequence, numpy.ndarray, pandas.Series, pandas.DataFrame, or pandas.Index
57 Labels for the radio options. This will be cast to str internally
58 by default. For pandas.DataFrame, the first column is selected.
59 index : int
60 The index of the preselected option on first render.
61 format_func : function
62 Function to modify the display of radio options. It receives
63 the raw option as an argument and should output the label to be
64 shown for that option. This has no impact on the return value of
65 the radio.
66 key : str or int
67 An optional string or integer to use as the unique key for the widget.
68 If this is omitted, a key will be generated for the widget
69 based on its content. Multiple widgets of the same type may
70 not share the same key.
71 help : str
72 An optional tooltip that gets displayed next to the radio.
73 on_change : callable
74 An optional callback invoked when this radio's value changes.
75 args : tuple
76 An optional tuple of args to pass to the callback.
77 kwargs : dict
78 An optional dict of kwargs to pass to the callback.
79 disabled : bool
80 An optional boolean, which disables the radio button if set to
81 True. The default is False. This argument can only be supplied by
82 keyword.
83 horizontal : bool
84 An optional boolean, which orients the radio group horizontally.
85 The default is false (vertical buttons). This argument can only
86 be supplied by keyword.
87
88 Returns
89 -------
90 any
91 The selected option.
92
93 Example
94 -------
95 >>> genre = st.radio(
96 ... "What\'s your favorite movie genre",
97 ... ('Comedy', 'Drama', 'Documentary'))
98 >>>
99 >>> if genre == 'Comedy':
100 ... st.write('You selected comedy.')
101 ... else:
102 ... st.write("You didn\'t select comedy.")
103
104 .. output::
105 https://doc-radio.streamlitapp.com/
106 height: 260px
107
108 """
109 ctx = get_script_run_ctx()
110 return self._radio(
111 label=label,
112 options=options,
113 index=index,
114 format_func=format_func,
115 key=key,
116 help=help,
117 on_change=on_change,
118 args=args,
119 kwargs=kwargs,
120 disabled=disabled,
121 horizontal=horizontal,
122 ctx=ctx,
123 )
124
125 def _radio(
126 self,
127 label: str,
128 options: OptionSequence,
129 index: int = 0,
130 format_func: Callable[[Any], Any] = str,
131 key: Optional[Key] = None,
132 help: Optional[str] = None,
133 on_change: Optional[WidgetCallback] = None,
134 args: Optional[WidgetArgs] = None,
135 kwargs: Optional[WidgetKwargs] = None,
136 *, # keyword-only args:
137 disabled: bool = False,
138 horizontal: bool = False,
139 ctx: Optional[ScriptRunContext],
140 ) -> Any:
141 key = to_key(key)
142 check_callback_rules(self.dg, on_change)
143 check_session_state_rules(default_value=None if index == 0 else index, key=key)
144
145 opt = ensure_indexable(options)
146
147 if not isinstance(index, int):
148 raise StreamlitAPIException(
149 "Radio Value has invalid type: %s" % type(index).__name__
150 )
151
152 if len(opt) > 0 and not 0 <= index < len(opt):
153 raise StreamlitAPIException(
154 "Radio index must be between 0 and length of options"
155 )
156
157 radio_proto = RadioProto()
158 radio_proto.label = label
159 radio_proto.default = index
160 radio_proto.options[:] = [str(format_func(option)) for option in opt]
161 radio_proto.form_id = current_form_id(self.dg)
162 radio_proto.horizontal = horizontal
163 if help is not None:
164 radio_proto.help = dedent(help)
165
166 def deserialize_radio(ui_value, widget_id=""):
167 idx = ui_value if ui_value is not None else index
168
169 return opt[idx] if len(opt) > 0 and opt[idx] is not None else None
170
171 def serialize_radio(v):
172 if len(options) == 0:
173 return 0
174 return index_(options, v)
175
176 widget_state = register_widget(
177 "radio",
178 radio_proto,
179 user_key=key,
180 on_change_handler=on_change,
181 args=args,
182 kwargs=kwargs,
183 deserializer=deserialize_radio,
184 serializer=serialize_radio,
185 ctx=ctx,
186 )
187
188 # This needs to be done after register_widget because we don't want
189 # the following proto fields to affect a widget's ID.
190 radio_proto.disabled = disabled
191 if widget_state.value_changed:
192 radio_proto.value = serialize_radio(widget_state.value)
193 radio_proto.set_value = True
194
195 self.dg._enqueue("radio", radio_proto)
196 return widget_state.value
197
198 @property
199 def dg(self) -> "streamlit.delta_generator.DeltaGenerator":
200 """Get our DeltaGenerator."""
201 return cast("streamlit.delta_generator.DeltaGenerator", self)
202
[end of lib/streamlit/elements/radio.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/e2e/scripts/st_radio.py b/e2e/scripts/st_radio.py
--- a/e2e/scripts/st_radio.py
+++ b/e2e/scripts/st_radio.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import pandas as pd
import streamlit as st
options = ("female", "male")
@@ -30,11 +31,15 @@
i5 = st.radio("radio 5", options, horizontal=True)
st.write("value 5:", i5)
+i6 = st.radio("radio 6", pd.DataFrame({"foo": list(options)}))
+st.write("value 6:", i6)
+
+
if st._is_running_with_streamlit:
def on_change():
st.session_state.radio_changed = True
- st.radio("radio 6", options, 1, key="radio6", on_change=on_change)
- st.write("value 6:", st.session_state.radio6)
+ st.radio("radio 7", options, 1, key="radio7", on_change=on_change)
+ st.write("value 7:", st.session_state.radio7)
st.write("radio changed:", "radio_changed" in st.session_state)
diff --git a/lib/streamlit/elements/radio.py b/lib/streamlit/elements/radio.py
--- a/lib/streamlit/elements/radio.py
+++ b/lib/streamlit/elements/radio.py
@@ -169,9 +169,9 @@
return opt[idx] if len(opt) > 0 and opt[idx] is not None else None
def serialize_radio(v):
- if len(options) == 0:
+ if len(opt) == 0:
return 0
- return index_(options, v)
+ return index_(opt, v)
widget_state = register_widget(
"radio",
| {"golden_diff": "diff --git a/e2e/scripts/st_radio.py b/e2e/scripts/st_radio.py\n--- a/e2e/scripts/st_radio.py\n+++ b/e2e/scripts/st_radio.py\n@@ -12,6 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import pandas as pd\n import streamlit as st\n \n options = (\"female\", \"male\")\n@@ -30,11 +31,15 @@\n i5 = st.radio(\"radio 5\", options, horizontal=True)\n st.write(\"value 5:\", i5)\n \n+i6 = st.radio(\"radio 6\", pd.DataFrame({\"foo\": list(options)}))\n+st.write(\"value 6:\", i6)\n+\n+\n if st._is_running_with_streamlit:\n \n def on_change():\n st.session_state.radio_changed = True\n \n- st.radio(\"radio 6\", options, 1, key=\"radio6\", on_change=on_change)\n- st.write(\"value 6:\", st.session_state.radio6)\n+ st.radio(\"radio 7\", options, 1, key=\"radio7\", on_change=on_change)\n+ st.write(\"value 7:\", st.session_state.radio7)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\ndiff --git a/lib/streamlit/elements/radio.py b/lib/streamlit/elements/radio.py\n--- a/lib/streamlit/elements/radio.py\n+++ b/lib/streamlit/elements/radio.py\n@@ -169,9 +169,9 @@\n return opt[idx] if len(opt) > 0 and opt[idx] is not None else None\n \n def serialize_radio(v):\n- if len(options) == 0:\n+ if len(opt) == 0:\n return 0\n- return index_(options, v)\n+ return index_(opt, v)\n \n widget_state = register_widget(\n \"radio\",\n", "issue": "st.radio with DataFrame fails on rerun\n### Summary\r\n\r\nWhen you pass a DataFrame as the options in an st.radio, every rerun throws an error (but the first run works).\r\n\r\n### Steps to reproduce\r\n\r\nCode snippet:\r\n\r\n```\r\nimport streamlit as st\r\nimport pandas as pd\r\n\r\ndf = pd.DataFrame({'foo': ['one', 'two']})\r\nst.radio('Foo', df)\r\n```\r\n\r\n1. Run the code above.\r\n2. 
Press \"R\" to rerun the code above.\r\n\r\n**Expected behavior:**\r\n\r\nThe rerun works, just like the first run.\r\n\r\n**Actual behavior:**\r\n\r\nThe app hangs (stays in running state forever) and shows the error below in the terminal:\r\n\r\n```\r\nException in thread ScriptRunner.scriptThread:\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py\", line 954, in _bootstrap_inner\r\n self.run()\r\n File \"/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py\", line 892, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/script_runner.py\", line 210, in _process_request_queue\r\n widget_states = self._session_state.as_widget_states()\r\n File \"/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py\", line 560, in as_widget_states\r\n return self._new_widget_state.as_widget_states()\r\n File \"/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py\", line 211, in as_widget_states\r\n states = [\r\n File \"/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py\", line 214, in <listcomp>\r\n if self.get_serialized(widget_id)\r\n File \"/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/state/session_state.py\", line 190, in get_serialized\r\n serialized = metadata.serializer(item.value)\r\n File \"/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/elements/radio.py\", line 136, in serialize_radio\r\n return index_(options, v)\r\n File \"/Users/[HIDDEN]/.venv/lib/python3.9/site-packages/streamlit/util.py\", line 129, in index_\r\n raise ValueError(\"{} is not in iterable\".format(str(x)))\r\nValueError: one is not in iterable\r\n```\r\n\r\n\r\n### Is this a regression?\r\n\r\nyes \r\n\r\nPrevious known working version = 0.84.0\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 1.4.0\r\n- Python version: 3.9.5\r\n\r\n### Additional information\r\n\r\nA meta-bug related to this: I'm not sure why this error is thrown in the terminal rather than inside the Streamlit app. Previously, our goal was to have _every_ error appear in the app, so you never had to check the terminal. 
It would be great to see if some code change unexpectedly changed this behavior.\n", "before_files": [{"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\noptions = (\"female\", \"male\")\ni1 = st.radio(\"radio 1\", options, 1)\nst.write(\"value 1:\", i1)\n\ni2 = st.radio(\"radio 2\", options, 0, format_func=lambda x: x.capitalize())\nst.write(\"value 2:\", i2)\n\ni3 = st.radio(\"radio 3\", [])\nst.write(\"value 3:\", i3)\n\ni4 = st.radio(\"radio 4\", options, disabled=True)\nst.write(\"value 4:\", i4)\n\ni5 = st.radio(\"radio 5\", options, horizontal=True)\nst.write(\"value 5:\", i5)\n\nif st._is_running_with_streamlit:\n\n def on_change():\n st.session_state.radio_changed = True\n\n st.radio(\"radio 6\", options, 1, key=\"radio6\", on_change=on_change)\n st.write(\"value 6:\", st.session_state.radio6)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\n", "path": "e2e/scripts/st_radio.py"}, {"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom textwrap import dedent\nfrom typing import Any, Callable, Optional, cast\n\nimport streamlit\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.Radio_pb2 import Radio as RadioProto\nfrom streamlit.scriptrunner import ScriptRunContext, get_script_run_ctx\nfrom streamlit.state import (\n register_widget,\n WidgetArgs,\n WidgetCallback,\n WidgetKwargs,\n)\nfrom streamlit.type_util import Key, OptionSequence, ensure_indexable, to_key\nfrom streamlit.util import index_\nfrom .form import current_form_id\nfrom .utils import check_callback_rules, check_session_state_rules\n\n\nclass RadioMixin:\n def radio(\n self,\n label: str,\n options: OptionSequence,\n index: int = 0,\n format_func: Callable[[Any], Any] = str,\n key: Optional[Key] = None,\n help: Optional[str] = None,\n on_change: Optional[WidgetCallback] = None,\n args: Optional[WidgetArgs] = None,\n kwargs: Optional[WidgetKwargs] = None,\n *, # keyword-only args:\n disabled: bool = False,\n horizontal: bool = False,\n ) -> Any:\n \"\"\"Display a radio button widget.\n\n Parameters\n ----------\n label : str\n A short label explaining to the user what this radio group is for.\n options : Sequence, numpy.ndarray, pandas.Series, pandas.DataFrame, or pandas.Index\n Labels for the radio options. This will be cast to str internally\n by default. 
For pandas.DataFrame, the first column is selected.\n index : int\n The index of the preselected option on first render.\n format_func : function\n Function to modify the display of radio options. It receives\n the raw option as an argument and should output the label to be\n shown for that option. This has no impact on the return value of\n the radio.\n key : str or int\n An optional string or integer to use as the unique key for the widget.\n If this is omitted, a key will be generated for the widget\n based on its content. Multiple widgets of the same type may\n not share the same key.\n help : str\n An optional tooltip that gets displayed next to the radio.\n on_change : callable\n An optional callback invoked when this radio's value changes.\n args : tuple\n An optional tuple of args to pass to the callback.\n kwargs : dict\n An optional dict of kwargs to pass to the callback.\n disabled : bool\n An optional boolean, which disables the radio button if set to\n True. The default is False. This argument can only be supplied by\n keyword.\n horizontal : bool\n An optional boolean, which orients the radio group horizontally.\n The default is false (vertical buttons). This argument can only\n be supplied by keyword.\n\n Returns\n -------\n any\n The selected option.\n\n Example\n -------\n >>> genre = st.radio(\n ... \"What\\'s your favorite movie genre\",\n ... ('Comedy', 'Drama', 'Documentary'))\n >>>\n >>> if genre == 'Comedy':\n ... st.write('You selected comedy.')\n ... else:\n ... st.write(\"You didn\\'t select comedy.\")\n\n .. output::\n https://doc-radio.streamlitapp.com/\n height: 260px\n\n \"\"\"\n ctx = get_script_run_ctx()\n return self._radio(\n label=label,\n options=options,\n index=index,\n format_func=format_func,\n key=key,\n help=help,\n on_change=on_change,\n args=args,\n kwargs=kwargs,\n disabled=disabled,\n horizontal=horizontal,\n ctx=ctx,\n )\n\n def _radio(\n self,\n label: str,\n options: OptionSequence,\n index: int = 0,\n format_func: Callable[[Any], Any] = str,\n key: Optional[Key] = None,\n help: Optional[str] = None,\n on_change: Optional[WidgetCallback] = None,\n args: Optional[WidgetArgs] = None,\n kwargs: Optional[WidgetKwargs] = None,\n *, # keyword-only args:\n disabled: bool = False,\n horizontal: bool = False,\n ctx: Optional[ScriptRunContext],\n ) -> Any:\n key = to_key(key)\n check_callback_rules(self.dg, on_change)\n check_session_state_rules(default_value=None if index == 0 else index, key=key)\n\n opt = ensure_indexable(options)\n\n if not isinstance(index, int):\n raise StreamlitAPIException(\n \"Radio Value has invalid type: %s\" % type(index).__name__\n )\n\n if len(opt) > 0 and not 0 <= index < len(opt):\n raise StreamlitAPIException(\n \"Radio index must be between 0 and length of options\"\n )\n\n radio_proto = RadioProto()\n radio_proto.label = label\n radio_proto.default = index\n radio_proto.options[:] = [str(format_func(option)) for option in opt]\n radio_proto.form_id = current_form_id(self.dg)\n radio_proto.horizontal = horizontal\n if help is not None:\n radio_proto.help = dedent(help)\n\n def deserialize_radio(ui_value, widget_id=\"\"):\n idx = ui_value if ui_value is not None else index\n\n return opt[idx] if len(opt) > 0 and opt[idx] is not None else None\n\n def serialize_radio(v):\n if len(options) == 0:\n return 0\n return index_(options, v)\n\n widget_state = register_widget(\n \"radio\",\n radio_proto,\n user_key=key,\n on_change_handler=on_change,\n args=args,\n kwargs=kwargs,\n deserializer=deserialize_radio,\n 
serializer=serialize_radio,\n ctx=ctx,\n )\n\n # This needs to be done after register_widget because we don't want\n # the following proto fields to affect a widget's ID.\n radio_proto.disabled = disabled\n if widget_state.value_changed:\n radio_proto.value = serialize_radio(widget_state.value)\n radio_proto.set_value = True\n\n self.dg._enqueue(\"radio\", radio_proto)\n return widget_state.value\n\n @property\n def dg(self) -> \"streamlit.delta_generator.DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"streamlit.delta_generator.DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/radio.py"}]} | 3,756 | 424 |
gh_patches_debug_15832 | rasdani/github-patches | git_diff | conan-io__conan-2963 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error with v1.4
Hello,
I have the following Conan recipe
```
# cat conanfile.txt
[requires]
bitprim-node-cint/0.10.0@bitprim/testing
[generators]
cmake
[options]
bitprim-node-cint:shared=True
bitprim-node-cint:currency=BCH
[imports]
bin, *.dll -> .
lib, *.so -> .
lib, *.dylib -> .
```
When I execute: `conan install .`
I get the following errors:
```
...
PROJECT: Generator txt created conanbuildinfo.txt
PROJECT: Generated conaninfo.txt
Traceback (most recent call last):
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/command.py", line 1182, in run
method(args[0][1:])
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/command.py", line 325, in install
install_folder=args.install_folder)
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/conan_api.py", line 77, in wrapper
return f(*args, **kwargs)
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/conan_api.py", line 465, in install
no_imports=no_imports)
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/manager.py", line 344, in install
run_imports(conanfile, install_folder, output)
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/importer.py", line 82, in run_imports
conanfile.imports()
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/loader_parse.py", line 184, in imports
conan_file.copy(*import_params)
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/importer.py", line 160, in __call__
excludes=excludes, keep_path=keep_path)
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/file_copier.py", line 83, in __call__
self._link_folders(src, dst, link_folders)
File "/home/fernando/.local/lib/python2.7/site-packages/conans/client/file_copier.py", line 149, in _link_folders
os.symlink(link, dst_link)
OSError: [Errno 2] No such file or directory
ERROR: [Errno 2] No such file or directory
```
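For reference, `os.symlink` raises exactly this error when the parent directory of the link it is being asked to create does not exist yet, which appears to be the case for the `dst_link` built in `_link_folders` (tiny standalone illustration, made-up paths):

```python
import os

# './does-not-exist/' was never created, so creating the symlink fails with
# OSError: [Errno 2] No such file or directory
os.symlink('lib/bar', './does-not-exist/foo_link')
```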
```
$ conan --version
Conan version 1.4.0
$ python --version
Python 2.7.15
$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 28 (Twenty Eight)
Release: 28
Codename: TwentyEight
```
It works fine with Conan 1.3.3.
Thanks and regards,
Fernando.
</issue>
<code>
[start of conans/client/file_copier.py]
1 import os
2 import fnmatch
3 import shutil
4 from collections import defaultdict
5
6 from conans import tools
7
8
9 def report_copied_files(copied, output):
10 ext_files = defaultdict(list)
11 for f in copied:
12 _, ext = os.path.splitext(f)
13 ext_files[ext].append(os.path.basename(f))
14
15 if not ext_files:
16 return False
17
18 for ext, files in ext_files.items():
19 files_str = (", ".join(files)) if len(files) < 5 else ""
20 file_or_files = "file" if len(files) == 1 else "files"
21 if not ext:
22 output.info("Copied %d %s: %s" % (len(files), file_or_files, files_str))
23 else:
24 output.info("Copied %d '%s' %s: %s" % (len(files), ext, file_or_files, files_str))
25 return True
26
27
28 class FileCopier(object):
29     """ main class responsible for copying files from place to place:
30 package: build folder -> package folder
31 imports: package folder -> user folder
32 export: user folder -> store "export" folder
33 """
34 def __init__(self, root_source_folder, root_destination_folder, excluded=None):
35 """
36 Takes the base folders to copy resources src -> dst. These folders names
37 will not be used in the relative names while copying
38 param root_source_folder: The base folder to copy things from, typically the
39 store build folder
40         param root_destination_folder: The base folder to copy things to, typically the
41 store package folder
42 """
43 self._base_src = root_source_folder
44 self._base_dst = root_destination_folder
45 self._copied = []
46 self._excluded = [root_destination_folder]
47 if excluded:
48 self._excluded.append(excluded)
49
50 def report(self, output):
51 return report_copied_files(self._copied, output)
52
53 def __call__(self, pattern, dst="", src="", keep_path=True, links=False, symlinks=None,
54 excludes=None, ignore_case=False):
55 """
56 param pattern: an fnmatch file pattern of the files that should be copied. Eg. *.dll
57 param dst: the destination local folder, wrt to current conanfile dir, to which
58 the files will be copied. Eg: "bin"
59 param src: the source folder in which those files will be searched. This folder
60 will be stripped from the dst name. Eg.: lib/Debug/x86
61 param keep_path: False if you want the relative paths to be maintained from
62 src to dst folders, or just drop. False is useful if you want
63 to collect e.g. many *.libs among many dirs into a single
64 lib dir
65 return: list of copied files
66 """
67 if symlinks is not None:
68 links = symlinks
69 # Check for ../ patterns and allow them
70 if pattern.startswith(".."):
71 rel_dir = os.path.abspath(os.path.join(self._base_src, pattern))
72 base_src = os.path.dirname(rel_dir)
73 pattern = os.path.basename(rel_dir)
74 else:
75 base_src = self._base_src
76
77 src = os.path.join(base_src, src)
78 dst = os.path.join(self._base_dst, dst)
79
80 files_to_copy, link_folders = self._filter_files(src, pattern, links, excludes,
81 ignore_case)
82 copied_files = self._copy_files(files_to_copy, src, dst, keep_path, links)
83 self._link_folders(src, dst, link_folders)
84 self._copied.extend(files_to_copy)
85 return copied_files
86
87 def _filter_files(self, src, pattern, links, excludes, ignore_case):
88
89 """ return a list of the files matching the patterns
90 The list will be relative path names wrt to the root src folder
91 """
92 filenames = []
93 linked_folders = []
94 for root, subfolders, files in os.walk(src, followlinks=True):
95 if root in self._excluded:
96 subfolders[:] = []
97 continue
98
99 if links and os.path.islink(root):
100 linked_folders.append(os.path.relpath(root, src))
101 subfolders[:] = []
102 continue
103 basename = os.path.basename(root)
104 # Skip git or svn subfolders
105 if basename in [".git", ".svn"]:
106 subfolders[:] = []
107 continue
108 if basename == "test_package": # DO NOT export test_package/build folder
109 try:
110 subfolders.remove("build")
111 except:
112 pass
113
114 relative_path = os.path.relpath(root, src)
115 for f in files:
116 relative_name = os.path.normpath(os.path.join(relative_path, f))
117 filenames.append(relative_name)
118
119 if ignore_case:
120 filenames = {f.lower(): f for f in filenames}
121 pattern = pattern.lower()
122
123 files_to_copy = fnmatch.filter(filenames, pattern)
124 if excludes:
125 if not isinstance(excludes, (tuple, list)):
126 excludes = (excludes, )
127 if ignore_case:
128 excludes = [e.lower() for e in excludes]
129 for exclude in excludes:
130 files_to_copy = [f for f in files_to_copy if not fnmatch.fnmatch(f, exclude)]
131
132 if ignore_case:
133 files_to_copy = [filenames[f] for f in files_to_copy]
134
135 return files_to_copy, linked_folders
136
137 @staticmethod
138 def _link_folders(src, dst, linked_folders):
139 for linked_folder in linked_folders:
140 link = os.readlink(os.path.join(src, linked_folder))
141 dst_link = os.path.join(dst, linked_folder)
142 try:
143 # Remove the previous symlink
144 os.remove(dst_link)
145 except OSError:
146 pass
147 # link is a string relative to linked_folder
148 # e.j: os.symlink("test/bar", "./foo/test_link") will create a link to foo/test/bar in ./foo/test_link
149 os.symlink(link, dst_link)
150 # Remove empty links
151 for linked_folder in linked_folders:
152 dst_link = os.path.join(dst, linked_folder)
153 abs_path = os.path.realpath(dst_link)
154 if not os.path.exists(abs_path):
155 os.remove(dst_link)
156
157 @staticmethod
158 def _copy_files(files, src, dst, keep_path, symlinks):
159 """ executes a multiple file copy from [(src_file, dst_file), (..)]
160 managing symlinks if necessary
161 """
162 copied_files = []
163 for filename in files:
164 abs_src_name = os.path.join(src, filename)
165 filename = filename if keep_path else os.path.basename(filename)
166 abs_dst_name = os.path.normpath(os.path.join(dst, filename))
167 try:
168 os.makedirs(os.path.dirname(abs_dst_name))
169 except:
170 pass
171 if symlinks and os.path.islink(abs_src_name):
172 linkto = os.readlink(abs_src_name) # @UndefinedVariable
173 try:
174 os.remove(abs_dst_name)
175 except OSError:
176 pass
177 os.symlink(linkto, abs_dst_name) # @UndefinedVariable
178 else:
179 shutil.copy2(abs_src_name, abs_dst_name)
180 copied_files.append(abs_dst_name)
181 return copied_files
182
[end of conans/client/file_copier.py]
</code>
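An editorial usage sketch (not repository code) of the `FileCopier` class listed above; the folder paths are invented for illustration:

```python
# Copy every DLL found under <build>/bin into <package>/bin, keeping the
# relative layout. Paths below are placeholders, not real Conan folders.
from conans.client.file_copier import FileCopier

copier = FileCopier("/tmp/build", "/tmp/package")
copied = copier("*.dll", dst="bin", src="bin", keep_path=True)
print(copied)  # absolute destination paths of the files that were copied
```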
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/file_copier.py b/conans/client/file_copier.py
--- a/conans/client/file_copier.py
+++ b/conans/client/file_copier.py
@@ -3,7 +3,7 @@
import shutil
from collections import defaultdict
-from conans import tools
+from conans.util.files import mkdir
def report_copied_files(copied, output):
@@ -146,6 +146,7 @@
pass
# link is a string relative to linked_folder
# e.j: os.symlink("test/bar", "./foo/test_link") will create a link to foo/test/bar in ./foo/test_link
+ mkdir(os.path.dirname(dst_link))
os.symlink(link, dst_link)
# Remove empty links
for linked_folder in linked_folders:
| {"golden_diff": "diff --git a/conans/client/file_copier.py b/conans/client/file_copier.py\n--- a/conans/client/file_copier.py\n+++ b/conans/client/file_copier.py\n@@ -3,7 +3,7 @@\n import shutil\n from collections import defaultdict\n \n-from conans import tools\n+from conans.util.files import mkdir\n \n \n def report_copied_files(copied, output):\n@@ -146,6 +146,7 @@\n pass\n # link is a string relative to linked_folder\n # e.j: os.symlink(\"test/bar\", \"./foo/test_link\") will create a link to foo/test/bar in ./foo/test_link\n+ mkdir(os.path.dirname(dst_link))\n os.symlink(link, dst_link)\n # Remove empty links\n for linked_folder in linked_folders:\n", "issue": "Error with v1.4\nHello,\r\n\r\nI have the following Conan recipe\r\n\r\n```\r\n# cat conanfile.txt \r\n\r\n[requires]\r\nbitprim-node-cint/0.10.0@bitprim/testing\r\n[generators]\r\ncmake\r\n[options]\r\nbitprim-node-cint:shared=True\r\nbitprim-node-cint:currency=BCH\r\n[imports]\r\nbin, *.dll -> .\r\nlib, *.so -> .\r\nlib, *.dylib -> .\r\n``` \r\n\r\nWhen I execute: `conan install .` \r\nI get the following errors:\r\n\r\n```\r\n...\r\nPROJECT: Generator txt created conanbuildinfo.txt\r\nPROJECT: Generated conaninfo.txt\r\nTraceback (most recent call last):\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/command.py\", line 1182, in run\r\n method(args[0][1:])\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/command.py\", line 325, in install\r\n install_folder=args.install_folder)\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/conan_api.py\", line 77, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/conan_api.py\", line 465, in install\r\n no_imports=no_imports)\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/manager.py\", line 344, in install\r\n run_imports(conanfile, install_folder, output)\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/importer.py\", line 82, in run_imports\r\n conanfile.imports()\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/loader_parse.py\", line 184, in imports\r\n conan_file.copy(*import_params)\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/importer.py\", line 160, in __call__\r\n excludes=excludes, keep_path=keep_path)\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/file_copier.py\", line 83, in __call__\r\n self._link_folders(src, dst, link_folders)\r\n File \"/home/fernando/.local/lib/python2.7/site-packages/conans/client/file_copier.py\", line 149, in _link_folders\r\n os.symlink(link, dst_link)\r\nOSError: [Errno 2] No such file or directory\r\n\r\nERROR: [Errno 2] No such file or directory\r\n```\r\n\r\n```\r\n$ conan --version\r\nConan version 1.4.0\r\n\r\n$ python --version\r\nPython 2.7.15\r\n\r\n$ lsb_release -a\r\nLSB Version:\t:core-4.1-amd64:core-4.1-noarch\r\nDistributor ID:\tFedora\r\nDescription:\tFedora release 28 (Twenty Eight)\r\nRelease:\t28\r\nCodename:\tTwentyEight\r\n\r\n```\r\n\r\nIt works fine with Conan 1.3.3.\r\n\r\nThanks and regards,\r\nFernando.\n", "before_files": [{"content": "import os\nimport fnmatch\nimport shutil\nfrom collections import defaultdict\n\nfrom conans import tools\n\n\ndef report_copied_files(copied, output):\n ext_files = defaultdict(list)\n for f in copied:\n _, ext = os.path.splitext(f)\n ext_files[ext].append(os.path.basename(f))\n\n if not ext_files:\n return 
False\n\n for ext, files in ext_files.items():\n files_str = (\", \".join(files)) if len(files) < 5 else \"\"\n file_or_files = \"file\" if len(files) == 1 else \"files\"\n if not ext:\n output.info(\"Copied %d %s: %s\" % (len(files), file_or_files, files_str))\n else:\n output.info(\"Copied %d '%s' %s: %s\" % (len(files), ext, file_or_files, files_str))\n return True\n\n\nclass FileCopier(object):\n \"\"\" main responsible of copying files from place to place:\n package: build folder -> package folder\n imports: package folder -> user folder\n export: user folder -> store \"export\" folder\n \"\"\"\n def __init__(self, root_source_folder, root_destination_folder, excluded=None):\n \"\"\"\n Takes the base folders to copy resources src -> dst. These folders names\n will not be used in the relative names while copying\n param root_source_folder: The base folder to copy things from, typically the\n store build folder\n param root_destination_folder: The base folder to copy things to, typicall the\n store package folder\n \"\"\"\n self._base_src = root_source_folder\n self._base_dst = root_destination_folder\n self._copied = []\n self._excluded = [root_destination_folder]\n if excluded:\n self._excluded.append(excluded)\n\n def report(self, output):\n return report_copied_files(self._copied, output)\n\n def __call__(self, pattern, dst=\"\", src=\"\", keep_path=True, links=False, symlinks=None,\n excludes=None, ignore_case=False):\n \"\"\"\n param pattern: an fnmatch file pattern of the files that should be copied. Eg. *.dll\n param dst: the destination local folder, wrt to current conanfile dir, to which\n the files will be copied. Eg: \"bin\"\n param src: the source folder in which those files will be searched. This folder\n will be stripped from the dst name. Eg.: lib/Debug/x86\n param keep_path: False if you want the relative paths to be maintained from\n src to dst folders, or just drop. False is useful if you want\n to collect e.g. 
many *.libs among many dirs into a single\n lib dir\n return: list of copied files\n \"\"\"\n if symlinks is not None:\n links = symlinks\n # Check for ../ patterns and allow them\n if pattern.startswith(\"..\"):\n rel_dir = os.path.abspath(os.path.join(self._base_src, pattern))\n base_src = os.path.dirname(rel_dir)\n pattern = os.path.basename(rel_dir)\n else:\n base_src = self._base_src\n\n src = os.path.join(base_src, src)\n dst = os.path.join(self._base_dst, dst)\n\n files_to_copy, link_folders = self._filter_files(src, pattern, links, excludes,\n ignore_case)\n copied_files = self._copy_files(files_to_copy, src, dst, keep_path, links)\n self._link_folders(src, dst, link_folders)\n self._copied.extend(files_to_copy)\n return copied_files\n\n def _filter_files(self, src, pattern, links, excludes, ignore_case):\n\n \"\"\" return a list of the files matching the patterns\n The list will be relative path names wrt to the root src folder\n \"\"\"\n filenames = []\n linked_folders = []\n for root, subfolders, files in os.walk(src, followlinks=True):\n if root in self._excluded:\n subfolders[:] = []\n continue\n\n if links and os.path.islink(root):\n linked_folders.append(os.path.relpath(root, src))\n subfolders[:] = []\n continue\n basename = os.path.basename(root)\n # Skip git or svn subfolders\n if basename in [\".git\", \".svn\"]:\n subfolders[:] = []\n continue\n if basename == \"test_package\": # DO NOT export test_package/build folder\n try:\n subfolders.remove(\"build\")\n except:\n pass\n\n relative_path = os.path.relpath(root, src)\n for f in files:\n relative_name = os.path.normpath(os.path.join(relative_path, f))\n filenames.append(relative_name)\n\n if ignore_case:\n filenames = {f.lower(): f for f in filenames}\n pattern = pattern.lower()\n\n files_to_copy = fnmatch.filter(filenames, pattern)\n if excludes:\n if not isinstance(excludes, (tuple, list)):\n excludes = (excludes, )\n if ignore_case:\n excludes = [e.lower() for e in excludes]\n for exclude in excludes:\n files_to_copy = [f for f in files_to_copy if not fnmatch.fnmatch(f, exclude)]\n\n if ignore_case:\n files_to_copy = [filenames[f] for f in files_to_copy]\n\n return files_to_copy, linked_folders\n\n @staticmethod\n def _link_folders(src, dst, linked_folders):\n for linked_folder in linked_folders:\n link = os.readlink(os.path.join(src, linked_folder))\n dst_link = os.path.join(dst, linked_folder)\n try:\n # Remove the previous symlink\n os.remove(dst_link)\n except OSError:\n pass\n # link is a string relative to linked_folder\n # e.j: os.symlink(\"test/bar\", \"./foo/test_link\") will create a link to foo/test/bar in ./foo/test_link\n os.symlink(link, dst_link)\n # Remove empty links\n for linked_folder in linked_folders:\n dst_link = os.path.join(dst, linked_folder)\n abs_path = os.path.realpath(dst_link)\n if not os.path.exists(abs_path):\n os.remove(dst_link)\n\n @staticmethod\n def _copy_files(files, src, dst, keep_path, symlinks):\n \"\"\" executes a multiple file copy from [(src_file, dst_file), (..)]\n managing symlinks if necessary\n \"\"\"\n copied_files = []\n for filename in files:\n abs_src_name = os.path.join(src, filename)\n filename = filename if keep_path else os.path.basename(filename)\n abs_dst_name = os.path.normpath(os.path.join(dst, filename))\n try:\n os.makedirs(os.path.dirname(abs_dst_name))\n except:\n pass\n if symlinks and os.path.islink(abs_src_name):\n linkto = os.readlink(abs_src_name) # @UndefinedVariable\n try:\n os.remove(abs_dst_name)\n except OSError:\n pass\n os.symlink(linkto, 
abs_dst_name) # @UndefinedVariable\n else:\n shutil.copy2(abs_src_name, abs_dst_name)\n copied_files.append(abs_dst_name)\n return copied_files\n", "path": "conans/client/file_copier.py"}]} | 3,261 | 181 |
gh_patches_debug_16494 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-1083 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add exception handling to docs
Hi :)
I was reading the [docs](https://github.com/googleapis/google-api-python-client/blob/master/docs/start.md) looking for an example of how to handle exceptions when request.execute() goes wrong, e.g. a 403 due to
exceeding quota limits.
I would like for the docs to be updated with a try: and except: like this
``` python
try:
response = request.execute()
except HttpError as e:
logger.error('Error response status code %d, reason %s:', e.resp.status, e.content)
return {'error': 403, 'body' : 'YouTube API Data v3 qouta limit exceeded'}
```
or something else in the `except` block
If you're happy with this, I'd like to contribute it as a first-timer to open source?
</issue>
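For illustration only (an editorial sketch, not official library documentation), a self-contained variant of the snippet above; it relies only on `HttpError` attributes that appear in the `googleapiclient/errors.py` listing below (`resp`, `content`):

```python
import logging
from googleapiclient.errors import HttpError

logger = logging.getLogger(__name__)

def execute_safely(request):
    """Run a prepared API request and turn HttpError into a plain dict."""
    try:
        return request.execute()
    except HttpError as e:
        # HttpError carries the httplib2 response and the raw body (see errors.py below).
        logger.error("API call failed with status %d: %s", e.resp.status, e.content)
        return {"error": e.resp.status, "body": "request failed (e.g. quota exceeded)"}
```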
<code>
[start of googleapiclient/errors.py]
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Errors for the library.
16
17 All exceptions defined by the library
18 should be defined in this file.
19 """
20 from __future__ import absolute_import
21
22 __author__ = "[email protected] (Joe Gregorio)"
23
24 import json
25
26 from googleapiclient import _helpers as util
27
28
29 class Error(Exception):
30 """Base error for this module."""
31
32 pass
33
34
35 class HttpError(Error):
36 """HTTP data was invalid or unexpected."""
37
38 @util.positional(3)
39 def __init__(self, resp, content, uri=None):
40 self.resp = resp
41 if not isinstance(content, bytes):
42 raise TypeError("HTTP content should be bytes")
43 self.content = content
44 self.uri = uri
45 self.error_details = ""
46
47 def _get_reason(self):
48 """Calculate the reason for the error from the response content."""
49 reason = self.resp.reason
50 try:
51 data = json.loads(self.content.decode("utf-8"))
52 if isinstance(data, dict):
53 reason = data["error"]["message"]
54 if "details" in data["error"]:
55 self.error_details = data["error"]["details"]
56 elif "detail" in data["error"]:
57 self.error_details = data["error"]["detail"]
58 elif isinstance(data, list) and len(data) > 0:
59 first_error = data[0]
60 reason = first_error["error"]["message"]
61 if "details" in first_error["error"]:
62 self.error_details = first_error["error"]["details"]
63 except (ValueError, KeyError, TypeError):
64 pass
65 if reason is None:
66 reason = ""
67 return reason
68
69 def __repr__(self):
70 reason = self._get_reason()
71 if self.error_details:
72 return '<HttpError %s when requesting %s returned "%s". Details: "%s">' % (
73 self.resp.status,
74 self.uri,
75 reason.strip(),
76 self.error_details,
77 )
78 elif self.uri:
79 return '<HttpError %s when requesting %s returned "%s">' % (
80 self.resp.status,
81 self.uri,
82 self._get_reason().strip(),
83 )
84 else:
85 return '<HttpError %s "%s">' % (self.resp.status, self._get_reason())
86
87 __str__ = __repr__
88
89
90 class InvalidJsonError(Error):
91 """The JSON returned could not be parsed."""
92
93 pass
94
95
96 class UnknownFileType(Error):
97 """File type unknown or unexpected."""
98
99 pass
100
101
102 class UnknownLinkType(Error):
103 """Link type unknown or unexpected."""
104
105 pass
106
107
108 class UnknownApiNameOrVersion(Error):
109 """No API with that name and version exists."""
110
111 pass
112
113
114 class UnacceptableMimeTypeError(Error):
115 """That is an unacceptable mimetype for this operation."""
116
117 pass
118
119
120 class MediaUploadSizeError(Error):
121 """Media is larger than the method can accept."""
122
123 pass
124
125
126 class ResumableUploadError(HttpError):
127 """Error occurred during resumable upload."""
128
129 pass
130
131
132 class InvalidChunkSizeError(Error):
133 """The given chunksize is not valid."""
134
135 pass
136
137
138 class InvalidNotificationError(Error):
139 """The channel Notification is invalid."""
140
141 pass
142
143
144 class BatchError(HttpError):
145 """Error occurred during batch operations."""
146
147 @util.positional(2)
148 def __init__(self, reason, resp=None, content=None):
149 self.resp = resp
150 self.content = content
151 self.reason = reason
152
153 def __repr__(self):
154 if getattr(self.resp, "status", None) is None:
155 return '<BatchError "%s">' % (self.reason)
156 else:
157 return '<BatchError %s "%s">' % (self.resp.status, self.reason)
158
159 __str__ = __repr__
160
161
162 class UnexpectedMethodError(Error):
163 """Exception raised by RequestMockBuilder on unexpected calls."""
164
165 @util.positional(1)
166 def __init__(self, methodId=None):
167 """Constructor for an UnexpectedMethodError."""
168 super(UnexpectedMethodError, self).__init__(
169 "Received unexpected call %s" % methodId
170 )
171
172
173 class UnexpectedBodyError(Error):
174 """Exception raised by RequestMockBuilder on unexpected bodies."""
175
176 def __init__(self, expected, provided):
177 """Constructor for an UnexpectedMethodError."""
178 super(UnexpectedBodyError, self).__init__(
179 "Expected: [%s] - Provided: [%s]" % (expected, provided)
180 )
181
[end of googleapiclient/errors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py
--- a/googleapiclient/errors.py
+++ b/googleapiclient/errors.py
@@ -51,10 +51,9 @@
data = json.loads(self.content.decode("utf-8"))
if isinstance(data, dict):
reason = data["error"]["message"]
- if "details" in data["error"]:
- self.error_details = data["error"]["details"]
- elif "detail" in data["error"]:
- self.error_details = data["error"]["detail"]
+ error_detail_keyword = next((kw for kw in ["detail", "details", "message"] if kw in data["error"]), "")
+ if error_detail_keyword:
+ self.error_details = data["error"][error_detail_keyword]
elif isinstance(data, list) and len(data) > 0:
first_error = data[0]
reason = first_error["error"]["message"]
| {"golden_diff": "diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py\n--- a/googleapiclient/errors.py\n+++ b/googleapiclient/errors.py\n@@ -51,10 +51,9 @@\n data = json.loads(self.content.decode(\"utf-8\"))\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n- if \"details\" in data[\"error\"]:\n- self.error_details = data[\"error\"][\"details\"]\n- elif \"detail\" in data[\"error\"]:\n- self.error_details = data[\"error\"][\"detail\"]\n+ error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n+ if error_detail_keyword:\n+ self.error_details = data[\"error\"][error_detail_keyword]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n", "issue": "Add exception handling to docs\nHi :) \r\n\r\n\r\nI was reading the [docs](https://github.com/googleapis/google-api-python-client/blob/master/docs/start.md) looking for an example to handle exceptions from when request.execute() goes wrong e.g. a 403 due to \r\nExceeding qouta limits.\r\n\r\n\r\nI would like for the docs to be updated with a try: and except: like this\r\n``` python\r\n try:\r\n response = request.execute()\r\n except HttpError as e:\r\n logger.error('Error response status code %d, reason %s:', e.resp.status, e.content)\r\n return {'error': 403, 'body' : 'YouTube API Data v3 qouta limit exceeded'}\r\n```\r\nor something else in the `except` block\r\n \r\nIf you're happy with this I'd like to contribute this as a first timer to open source?\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = \"[email protected] (Joe Gregorio)\"\n\nimport json\n\nfrom googleapiclient import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n data = json.loads(self.content.decode(\"utf-8\"))\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n if \"details\" in data[\"error\"]:\n self.error_details = data[\"error\"][\"details\"]\n elif \"detail\" in data[\"error\"]:\n self.error_details = data[\"error\"][\"detail\"]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = 
first_error[\"error\"][\"details\"]\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = \"\"\n return reason\n\n def __repr__(self):\n reason = self._get_reason()\n if self.error_details:\n return '<HttpError %s when requesting %s returned \"%s\". Details: \"%s\">' % (\n self.resp.status,\n self.uri,\n reason.strip(),\n self.error_details,\n )\n elif self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status,\n self.uri,\n self._get_reason().strip(),\n )\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occurred during resumable upload.\"\"\"\n\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n\n pass\n\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n\n pass\n\n\nclass BatchError(HttpError):\n \"\"\"Error occurred during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, \"status\", None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n \"Received unexpected call %s\" % methodId\n )\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n \"Expected: [%s] - Provided: [%s]\" % (expected, provided)\n )\n", "path": "googleapiclient/errors.py"}]} | 2,254 | 207 |
gh_patches_debug_14573 | rasdani/github-patches | git_diff | ethereum__web3.py-996 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add python 3.7 to CI tests
### What was wrong?
python 3.7 is out, and we should include it in our testing.
### How can it be fixed?
add python 3.7 to our tox.ini & circleci config
</issue>
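As a rough illustration (an editorial sketch, not necessarily the change applied in this record), declaring 3.7 support in `setup.py` would add one classifier next to the existing ones, alongside the tox.ini and CircleCI updates mentioned above:

```python
# Fragment of setup.py's classifiers list with the new 3.7 entry added.
classifiers = [
    'Programming Language :: Python :: 3',
    'Programming Language :: Python :: 3.5',
    'Programming Language :: Python :: 3.6',
    'Programming Language :: Python :: 3.7',  # new entry for Python 3.7 support
]
```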
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from setuptools import (
4 find_packages,
5 setup,
6 )
7
8
9 setup(
10 name='web3',
11 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
12 version='4.5.0',
13 description="""Web3.py""",
14 long_description_markdown_filename='README.md',
15 author='Piper Merriam',
16 author_email='[email protected]',
17 url='https://github.com/ethereum/web3.py',
18 include_package_data=True,
19 install_requires=[
20 "toolz>=0.9.0,<1.0.0;implementation_name=='pypy'",
21 "cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'",
22 "eth-abi>=1.1.1,<2",
23 "eth-account>=0.2.1,<0.4.0",
24 "eth-utils>=1.0.1,<2.0.0",
25 "hexbytes>=0.1.0,<1.0.0",
26 "lru-dict>=1.1.6,<2.0.0",
27 "eth-hash[pycryptodome]",
28 "requests>=2.16.0,<3.0.0",
29 "websockets>=5.0.1,<6.0.0",
30 "pypiwin32>=223;platform_system=='Windows'",
31 ],
32 setup_requires=['setuptools-markdown'],
33 python_requires='>=3.5, <4',
34 extras_require={
35 'tester': [
36 "eth-tester[py-evm]==0.1.0-beta.30",
37 "py-geth>=2.0.1,<3.0.0",
38 ],
39 'testrpc': ["eth-testrpc>=1.3.3,<2.0.0"],
40 'linter': [
41 "flake8==3.4.1",
42 "isort>=4.2.15,<5",
43 ],
44 },
45 py_modules=['web3', 'ens'],
46 license="MIT",
47 zip_safe=False,
48 keywords='ethereum',
49 packages=find_packages(exclude=["tests", "tests.*"]),
50 classifiers=[
51 'Development Status :: 5 - Production/Stable',
52 'Intended Audience :: Developers',
53 'License :: OSI Approved :: MIT License',
54 'Natural Language :: English',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.5',
57 'Programming Language :: Python :: 3.6',
58 ],
59 )
60
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,14 +26,14 @@
"lru-dict>=1.1.6,<2.0.0",
"eth-hash[pycryptodome]",
"requests>=2.16.0,<3.0.0",
- "websockets>=5.0.1,<6.0.0",
+ "websockets>=6.0.0,<7.0.0",
"pypiwin32>=223;platform_system=='Windows'",
],
setup_requires=['setuptools-markdown'],
python_requires='>=3.5, <4',
extras_require={
'tester': [
- "eth-tester[py-evm]==0.1.0-beta.30",
+ "eth-tester[py-evm]==0.1.0-beta.31",
"py-geth>=2.0.1,<3.0.0",
],
'testrpc': ["eth-testrpc>=1.3.3,<2.0.0"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,14 +26,14 @@\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]\",\n \"requests>=2.16.0,<3.0.0\",\n- \"websockets>=5.0.1,<6.0.0\",\n+ \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require={\n 'tester': [\n- \"eth-tester[py-evm]==0.1.0-beta.30\",\n+ \"eth-tester[py-evm]==0.1.0-beta.31\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n", "issue": "Add python 3.7 to CI tests\n### What was wrong?\r\npython 3.7 is out, and we should include it in our testing.\r\n\r\n\r\n### How can it be fixed?\r\n\r\nadd python 3.7 to our tox.ini & circleci config\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.5.0',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.1.1,<2\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.0.1,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=5.0.1,<6.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require={\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.30\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n },\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}]} | 1,262 | 249 |
gh_patches_debug_187 | rasdani/github-patches | git_diff | CTFd__CTFd-863 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
get_config return default
get_config(key) should probably be get_config(key, default=None). This helps in cases where you want to implement different behavior if get_config returns None.
</issue>
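A minimal sketch of the suggested signature (editorial illustration only); the in-memory `_CONFIG` dict is a stand-in for CTFd's real database-backed config store:

```python
_CONFIG = {"ctf_theme": "core"}  # stand-in for the database-backed config table

def get_config(key, default=None):
    """Return the stored config value for `key`, or `default` when it is unset."""
    value = _CONFIG.get(key)
    return default if value is None else value

print(get_config("ctf_theme", default="core"))  # "core"
print(get_config("ctf_start", default=0))       # 0 instead of None
```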
<code>
[start of CTFd/__init__.py]
1 import sys
2 import os
3
4 from distutils.version import StrictVersion
5 from flask import Flask, Request
6 from werkzeug.utils import cached_property
7 from werkzeug.contrib.fixers import ProxyFix
8 from jinja2 import FileSystemLoader
9 from jinja2.sandbox import SandboxedEnvironment
10 from six.moves import input
11
12 from CTFd import utils
13 from CTFd.utils.migrations import migrations, migrate, upgrade, stamp, create_database
14 from CTFd.utils.sessions import CachingSessionInterface
15 from CTFd.utils.updates import update_check
16 from CTFd.utils.initialization import init_request_processors, init_template_filters, init_template_globals, init_logs
17 from CTFd.utils.events import socketio
18 from CTFd.plugins import init_plugins
19
20 # Hack to support Unicode in Python 2 properly
21 if sys.version_info[0] < 3:
22 reload(sys)
23 sys.setdefaultencoding("utf-8")
24
25 __version__ = '2.0.3'
26
27
28 class CTFdRequest(Request):
29 @cached_property
30 def path(self):
31 """
32 Hijack the original Flask request path because it does not account for subdirectory deployments in an intuitive
33 manner. We append script_root so that the path always points to the full path as seen in the browser.
34 e.g. /subdirectory/path/route vs /path/route
35
36 :return: string
37 """
38 return self.script_root + super(CTFdRequest, self).path
39
40
41 class CTFdFlask(Flask):
42 def __init__(self, *args, **kwargs):
43 """Overriden Jinja constructor setting a custom jinja_environment"""
44 self.jinja_environment = SandboxedBaseEnvironment
45 self.session_interface = CachingSessionInterface(key_prefix='session')
46 self.request_class = CTFdRequest
47 Flask.__init__(self, *args, **kwargs)
48
49 def create_jinja_environment(self):
50 """Overridden jinja environment constructor"""
51 return super(CTFdFlask, self).create_jinja_environment()
52
53
54 class SandboxedBaseEnvironment(SandboxedEnvironment):
55 """SandboxEnvironment that mimics the Flask BaseEnvironment"""
56 def __init__(self, app, **options):
57 if 'loader' not in options:
58 options['loader'] = app.create_global_jinja_loader()
59 # Disable cache entirely so that themes can be switched (#662)
60 # If the cache is enabled, switching themes will cause odd rendering errors
61 SandboxedEnvironment.__init__(self, cache_size=0, **options)
62 self.app = app
63
64
65 class ThemeLoader(FileSystemLoader):
66 """Custom FileSystemLoader that switches themes based on the configuration value"""
67 def __init__(self, searchpath, encoding='utf-8', followlinks=False):
68 super(ThemeLoader, self).__init__(searchpath, encoding, followlinks)
69 self.overriden_templates = {}
70
71 def get_source(self, environment, template):
72 # Check if the template has been overriden
73 if template in self.overriden_templates:
74 return self.overriden_templates[template], template, True
75
76 # Check if the template requested is for the admin panel
77 if template.startswith('admin/'):
78 template = template[6:] # Strip out admin/
79 template = "/".join(['admin', 'templates', template])
80 return super(ThemeLoader, self).get_source(environment, template)
81
82 # Load regular theme data
83 theme = utils.get_config('ctf_theme')
84 template = "/".join([theme, 'templates', template])
85 return super(ThemeLoader, self).get_source(environment, template)
86
87
88 def confirm_upgrade():
89 if sys.stdin.isatty():
90 print("/*\\ CTFd has updated and must update the database! /*\\")
91 print("/*\\ Please backup your database before proceeding! /*\\")
92 print("/*\\ CTFd maintainers are not responsible for any data loss! /*\\")
93 if input('Run database migrations (Y/N)').lower().strip() == 'y':
94 return True
95 else:
96 print('/*\\ Ignored database migrations... /*\\')
97 return False
98 else:
99 return True
100
101
102 def run_upgrade():
103 upgrade()
104 utils.set_config('ctf_version', __version__)
105
106
107 def create_app(config='CTFd.config.Config'):
108 app = CTFdFlask(__name__)
109 with app.app_context():
110 app.config.from_object(config)
111
112 theme_loader = ThemeLoader(os.path.join(app.root_path, 'themes'), followlinks=True)
113 app.jinja_loader = theme_loader
114
115 from CTFd.models import db, Teams, Solves, Challenges, Fails, Flags, Tags, Files, Tracking
116
117 url = create_database()
118
119 # This allows any changes to the SQLALCHEMY_DATABASE_URI to get pushed back in
120 # This is mostly so we can force MySQL's charset
121 app.config['SQLALCHEMY_DATABASE_URI'] = str(url)
122
123 # Register database
124 db.init_app(app)
125
126 # Register Flask-Migrate
127 migrations.init_app(app, db)
128
129 # Alembic sqlite support is lacking so we should just create_all anyway
130 if url.drivername.startswith('sqlite'):
131 db.create_all()
132 stamp()
133 else:
134 # This creates tables instead of db.create_all()
135 # Allows migrations to happen properly
136 upgrade()
137
138 from CTFd.models import ma
139
140 ma.init_app(app)
141
142 app.db = db
143 app.VERSION = __version__
144
145 from CTFd.cache import cache
146
147 cache.init_app(app)
148 app.cache = cache
149
150 # If you have multiple workers you must have a shared cache
151 socketio.init_app(
152 app,
153 async_mode=app.config.get('SOCKETIO_ASYNC_MODE'),
154 message_queue=app.config.get('CACHE_REDIS_URL')
155 )
156
157 if app.config.get('REVERSE_PROXY'):
158 app.wsgi_app = ProxyFix(app.wsgi_app)
159
160 version = utils.get_config('ctf_version')
161
162 # Upgrading from an older version of CTFd
163 if version and (StrictVersion(version) < StrictVersion(__version__)):
164 if confirm_upgrade():
165 run_upgrade()
166 else:
167 exit()
168
169 if not version:
170 utils.set_config('ctf_version', __version__)
171
172 if not utils.get_config('ctf_theme'):
173 utils.set_config('ctf_theme', 'core')
174
175 update_check(force=True)
176
177 init_request_processors(app)
178 init_template_filters(app)
179 init_template_globals(app)
180
181 # Importing here allows tests to use sensible names (e.g. api instead of api_bp)
182 from CTFd.views import views
183 from CTFd.teams import teams
184 from CTFd.users import users
185 from CTFd.challenges import challenges
186 from CTFd.scoreboard import scoreboard
187 from CTFd.auth import auth
188 from CTFd.admin import admin
189 from CTFd.api import api
190 from CTFd.events import events
191 from CTFd.errors import page_not_found, forbidden, general_error, gateway_error
192
193 app.register_blueprint(views)
194 app.register_blueprint(teams)
195 app.register_blueprint(users)
196 app.register_blueprint(challenges)
197 app.register_blueprint(scoreboard)
198 app.register_blueprint(auth)
199 app.register_blueprint(api)
200 app.register_blueprint(events)
201
202 app.register_blueprint(admin)
203
204 app.register_error_handler(404, page_not_found)
205 app.register_error_handler(403, forbidden)
206 app.register_error_handler(500, general_error)
207 app.register_error_handler(502, gateway_error)
208
209 init_logs(app)
210 init_plugins(app)
211
212 return app
213
[end of CTFd/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/__init__.py b/CTFd/__init__.py
--- a/CTFd/__init__.py
+++ b/CTFd/__init__.py
@@ -22,7 +22,7 @@
reload(sys)
sys.setdefaultencoding("utf-8")
-__version__ = '2.0.3'
+__version__ = '2.0.4'
class CTFdRequest(Request):
| {"golden_diff": "diff --git a/CTFd/__init__.py b/CTFd/__init__.py\n--- a/CTFd/__init__.py\n+++ b/CTFd/__init__.py\n@@ -22,7 +22,7 @@\n reload(sys)\n sys.setdefaultencoding(\"utf-8\")\n \n-__version__ = '2.0.3'\n+__version__ = '2.0.4'\n \n \n class CTFdRequest(Request):\n", "issue": "get_config return default\nget_config(key) should probably be get_config(key, default=None). This helps in some ideas where you want to do different behavior if get_config returns None. \n", "before_files": [{"content": "import sys\nimport os\n\nfrom distutils.version import StrictVersion\nfrom flask import Flask, Request\nfrom werkzeug.utils import cached_property\nfrom werkzeug.contrib.fixers import ProxyFix\nfrom jinja2 import FileSystemLoader\nfrom jinja2.sandbox import SandboxedEnvironment\nfrom six.moves import input\n\nfrom CTFd import utils\nfrom CTFd.utils.migrations import migrations, migrate, upgrade, stamp, create_database\nfrom CTFd.utils.sessions import CachingSessionInterface\nfrom CTFd.utils.updates import update_check\nfrom CTFd.utils.initialization import init_request_processors, init_template_filters, init_template_globals, init_logs\nfrom CTFd.utils.events import socketio\nfrom CTFd.plugins import init_plugins\n\n# Hack to support Unicode in Python 2 properly\nif sys.version_info[0] < 3:\n reload(sys)\n sys.setdefaultencoding(\"utf-8\")\n\n__version__ = '2.0.3'\n\n\nclass CTFdRequest(Request):\n @cached_property\n def path(self):\n \"\"\"\n Hijack the original Flask request path because it does not account for subdirectory deployments in an intuitive\n manner. We append script_root so that the path always points to the full path as seen in the browser.\n e.g. /subdirectory/path/route vs /path/route\n\n :return: string\n \"\"\"\n return self.script_root + super(CTFdRequest, self).path\n\n\nclass CTFdFlask(Flask):\n def __init__(self, *args, **kwargs):\n \"\"\"Overriden Jinja constructor setting a custom jinja_environment\"\"\"\n self.jinja_environment = SandboxedBaseEnvironment\n self.session_interface = CachingSessionInterface(key_prefix='session')\n self.request_class = CTFdRequest\n Flask.__init__(self, *args, **kwargs)\n\n def create_jinja_environment(self):\n \"\"\"Overridden jinja environment constructor\"\"\"\n return super(CTFdFlask, self).create_jinja_environment()\n\n\nclass SandboxedBaseEnvironment(SandboxedEnvironment):\n \"\"\"SandboxEnvironment that mimics the Flask BaseEnvironment\"\"\"\n def __init__(self, app, **options):\n if 'loader' not in options:\n options['loader'] = app.create_global_jinja_loader()\n # Disable cache entirely so that themes can be switched (#662)\n # If the cache is enabled, switching themes will cause odd rendering errors\n SandboxedEnvironment.__init__(self, cache_size=0, **options)\n self.app = app\n\n\nclass ThemeLoader(FileSystemLoader):\n \"\"\"Custom FileSystemLoader that switches themes based on the configuration value\"\"\"\n def __init__(self, searchpath, encoding='utf-8', followlinks=False):\n super(ThemeLoader, self).__init__(searchpath, encoding, followlinks)\n self.overriden_templates = {}\n\n def get_source(self, environment, template):\n # Check if the template has been overriden\n if template in self.overriden_templates:\n return self.overriden_templates[template], template, True\n\n # Check if the template requested is for the admin panel\n if template.startswith('admin/'):\n template = template[6:] # Strip out admin/\n template = \"/\".join(['admin', 'templates', template])\n return super(ThemeLoader, self).get_source(environment, 
template)\n\n # Load regular theme data\n theme = utils.get_config('ctf_theme')\n template = \"/\".join([theme, 'templates', template])\n return super(ThemeLoader, self).get_source(environment, template)\n\n\ndef confirm_upgrade():\n if sys.stdin.isatty():\n print(\"/*\\\\ CTFd has updated and must update the database! /*\\\\\")\n print(\"/*\\\\ Please backup your database before proceeding! /*\\\\\")\n print(\"/*\\\\ CTFd maintainers are not responsible for any data loss! /*\\\\\")\n if input('Run database migrations (Y/N)').lower().strip() == 'y':\n return True\n else:\n print('/*\\\\ Ignored database migrations... /*\\\\')\n return False\n else:\n return True\n\n\ndef run_upgrade():\n upgrade()\n utils.set_config('ctf_version', __version__)\n\n\ndef create_app(config='CTFd.config.Config'):\n app = CTFdFlask(__name__)\n with app.app_context():\n app.config.from_object(config)\n\n theme_loader = ThemeLoader(os.path.join(app.root_path, 'themes'), followlinks=True)\n app.jinja_loader = theme_loader\n\n from CTFd.models import db, Teams, Solves, Challenges, Fails, Flags, Tags, Files, Tracking\n\n url = create_database()\n\n # This allows any changes to the SQLALCHEMY_DATABASE_URI to get pushed back in\n # This is mostly so we can force MySQL's charset\n app.config['SQLALCHEMY_DATABASE_URI'] = str(url)\n\n # Register database\n db.init_app(app)\n\n # Register Flask-Migrate\n migrations.init_app(app, db)\n\n # Alembic sqlite support is lacking so we should just create_all anyway\n if url.drivername.startswith('sqlite'):\n db.create_all()\n stamp()\n else:\n # This creates tables instead of db.create_all()\n # Allows migrations to happen properly\n upgrade()\n\n from CTFd.models import ma\n\n ma.init_app(app)\n\n app.db = db\n app.VERSION = __version__\n\n from CTFd.cache import cache\n\n cache.init_app(app)\n app.cache = cache\n\n # If you have multiple workers you must have a shared cache\n socketio.init_app(\n app,\n async_mode=app.config.get('SOCKETIO_ASYNC_MODE'),\n message_queue=app.config.get('CACHE_REDIS_URL')\n )\n\n if app.config.get('REVERSE_PROXY'):\n app.wsgi_app = ProxyFix(app.wsgi_app)\n\n version = utils.get_config('ctf_version')\n\n # Upgrading from an older version of CTFd\n if version and (StrictVersion(version) < StrictVersion(__version__)):\n if confirm_upgrade():\n run_upgrade()\n else:\n exit()\n\n if not version:\n utils.set_config('ctf_version', __version__)\n\n if not utils.get_config('ctf_theme'):\n utils.set_config('ctf_theme', 'core')\n\n update_check(force=True)\n\n init_request_processors(app)\n init_template_filters(app)\n init_template_globals(app)\n\n # Importing here allows tests to use sensible names (e.g. 
api instead of api_bp)\n from CTFd.views import views\n from CTFd.teams import teams\n from CTFd.users import users\n from CTFd.challenges import challenges\n from CTFd.scoreboard import scoreboard\n from CTFd.auth import auth\n from CTFd.admin import admin\n from CTFd.api import api\n from CTFd.events import events\n from CTFd.errors import page_not_found, forbidden, general_error, gateway_error\n\n app.register_blueprint(views)\n app.register_blueprint(teams)\n app.register_blueprint(users)\n app.register_blueprint(challenges)\n app.register_blueprint(scoreboard)\n app.register_blueprint(auth)\n app.register_blueprint(api)\n app.register_blueprint(events)\n\n app.register_blueprint(admin)\n\n app.register_error_handler(404, page_not_found)\n app.register_error_handler(403, forbidden)\n app.register_error_handler(500, general_error)\n app.register_error_handler(502, gateway_error)\n\n init_logs(app)\n init_plugins(app)\n\n return app\n", "path": "CTFd/__init__.py"}]} | 2,765 | 98 |
gh_patches_debug_16110 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] [recycleapp_be] UnboundLocalError: local variable 'streetId' referenced before assignment
Hello,
I have a problem with the recycleapp_be source.
I get the following error in the log:
```
Logger: waste_collection_schedule.scraper
Source: custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py:143
Integration: waste_collection_schedule ([documentation](https://github.com/mampfes/hacs_waste_collection_schedule#readme))
First occurred: 15:24:43 (1 occurrences)
Last logged: 15:24:43
fetch failed for source Recycle!: Traceback (most recent call last):
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py", line 141, in fetch
    entries = self._source.fetch()
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py", line 66, in fetch
    if streetId is None:
UnboundLocalError: local variable 'streetId' referenced before assignment
```
</issue>
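A stripped-down illustration of the failure mode (editorial, not repository code): `streetId` is only assigned inside the loop, so when no item matches the requested street the later `if streetId is None` check reads a local name that was never bound. Initialising it before the loop, as the patch below does, avoids the crash:

```python
def pick_street_id(items, street):
    streetId = None  # without this line, a non-matching street raises UnboundLocalError below
    for item in items:
        if item["name"] == street:
            streetId = item["id"]
    if streetId is None:
        streetId = items[0]["id"]  # fall back to the first result, as the source does
    return streetId

print(pick_street_id([{"name": "Bazellaan", "id": 42}], "Waversebaan"))  # 42 via the fallback
```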
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py]
1 import logging
2 from datetime import datetime, timedelta
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "Recycle!"
8 DESCRIPTION = "Source for RecycleApp.be"
9 URL = "https://www.recycleapp.be"
10 TEST_CASES = {
11 "1140 Evere, Bazellaan 1": {
12 "postcode": 1140,
13 "street": "Bazellaan",
14 "house_number": 1,
15 },
16 "3001, Waversebaan 276 with events": {
17 "postcode": 3001,
18 "street": "Waversebaan",
19 "house_number": 276,
20 },
21 "3001, Waversebaan 276 without events": {
22 "postcode": 3001,
23 "street": "Waversebaan",
24 "house_number": 276,
25 "add_events": False,
26 },
27 }
28
29 _LOGGER = logging.getLogger(__name__)
30
31
32 class Source:
33 def __init__(self, postcode, street, house_number, add_events=True):
34 self._postcode = postcode
35 self._street = street
36 self._house_number = house_number
37 self._add_events = add_events
38
39 def fetch(self):
40 url = "https://api.recycleapp.be/api/app/v1"
41 headers = {
42 "x-secret": "Crgja3EGWe8jdapyr4EEoMBgZACYYjRRcRpaMQrLDW9HJBvmgkfGQyYqLgeXPavAGvnJqkV87PBB2b8zx43q46sUgzqio4yRZbABhtKeagkVKypTEDjKfPgGycjLyJTtLHYpzwJgp4YmmCuJZN9ZmJY8CGEoFs8MKfdJpU9RjkEVfngmmk2LYD4QzFegLNKUbcCeAdEW",
43 "x-consumer": "recycleapp.be",
44 "User-Agent": "",
45 "Authorization": "",
46 }
47 r = requests.get(f"{url}/access-token", headers=headers)
48 headers["Authorization"] = r.json()["accessToken"]
49
50 params = {"q": self._postcode}
51 r = requests.get(f"{url}/zipcodes", params=params, headers=headers)
52 if r.status_code != 200:
53 _LOGGER.error("Get zip code failed")
54 return []
55 zipcodeId = r.json()["items"][0]["id"]
56
57 params = {"q": self._street, "zipcodes": zipcodeId}
58 r = requests.get(f"{url}/streets", params=params, headers=headers)
59 if r.status_code != 200:
60 _LOGGER.error("Get street id failed")
61 return []
62
63 for item in r.json()["items"]:
64 if item["name"] == self._street:
65 streetId = item["id"]
66 if streetId is None:
67 streetId = r.json()["items"][0]["id"]
68
69 now = datetime.now()
70 fromDate = now.strftime("%Y-%m-%d")
71 untilDate = (now + timedelta(days=365)).strftime("%Y-%m-%d")
72 params = {
73 "zipcodeId": zipcodeId,
74 "streetId": streetId,
75 "houseNumber": self._house_number,
76 "fromDate": fromDate,
77 "untilDate": untilDate,
78 # "size":100,
79 }
80 r = requests.get(f"{url}/collections", params=params, headers=headers)
81 if r.status_code != 200:
82 _LOGGER.error("Get data failed")
83 return []
84
85 entries = []
86 for item in r.json()["items"]:
87 if "exception" in item and "replacedBy" in item["exception"]:
88 continue
89
90 date = datetime.strptime(item["timestamp"], "%Y-%m-%dT%H:%M:%S.000Z").date()
91 if item["type"] == "collection":
92 entries.append(Collection(date, item["fraction"]["name"]["en"]))
93 elif item["type"] == "event" and self._add_events:
94 entries.append(Collection(date, item["event"]["title"]["en"]))
95
96 return entries
97
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
@@ -24,6 +24,12 @@
"house_number": 276,
"add_events": False,
},
+ "1400, Rue de namur 1 with events": {
+ "postcode": 1400,
+ "street": "Rue de namur",
+ "house_number": 1,
+ "add_events": True,
+ },
}
_LOGGER = logging.getLogger(__name__)
@@ -60,6 +66,7 @@
_LOGGER.error("Get street id failed")
return []
+ streetId = None
for item in r.json()["items"]:
if item["name"] == self._street:
streetId = item["id"]
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n@@ -24,6 +24,12 @@\n \"house_number\": 276,\n \"add_events\": False,\n },\n+ \"1400, Rue de namur 1 with events\": {\n+ \"postcode\": 1400,\n+ \"street\": \"Rue de namur\",\n+ \"house_number\": 1,\n+ \"add_events\": True,\n+ },\n }\n \n _LOGGER = logging.getLogger(__name__)\n@@ -60,6 +66,7 @@\n _LOGGER.error(\"Get street id failed\")\n return []\n \n+ streetId = None\n for item in r.json()[\"items\"]:\n if item[\"name\"] == self._street:\n streetId = item[\"id\"]\n", "issue": "[bug] [recycleapp_be] UnboundLocalError: local variable 'streetId' referenced before assignment\nHello, \r\n\r\nI have a problem with the recycleapp_be source.\r\n\r\nI have the error into the log : \r\n\r\n```\r\nLogger: waste_collection_schedule.scraper\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py:143\r\nIntegration: waste_collection_schedule ([documentation](https://github.com/mampfes/hacs_waste_collection_schedule#readme))\r\nFirst occurred: 15:24:43 (1 occurrences)\r\nLast logged: 15:24:43\r\n\r\nfetch failed for source Recycle!: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py\", line 141, in fetch entries = self._source.fetch() File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\", line 66, in fetch if streetId is None: UnboundLocalError: local variable 'streetId' referenced before assignment\r\n` ``\r\n\n", "before_files": [{"content": "import logging\nfrom datetime import datetime, timedelta\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Recycle!\"\nDESCRIPTION = \"Source for RecycleApp.be\"\nURL = \"https://www.recycleapp.be\"\nTEST_CASES = {\n \"1140 Evere, Bazellaan 1\": {\n \"postcode\": 1140,\n \"street\": \"Bazellaan\",\n \"house_number\": 1,\n },\n \"3001, Waversebaan 276 with events\": {\n \"postcode\": 3001,\n \"street\": \"Waversebaan\",\n \"house_number\": 276,\n },\n \"3001, Waversebaan 276 without events\": {\n \"postcode\": 3001,\n \"street\": \"Waversebaan\",\n \"house_number\": 276,\n \"add_events\": False,\n },\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, postcode, street, house_number, add_events=True):\n self._postcode = postcode\n self._street = street\n self._house_number = house_number\n self._add_events = add_events\n\n def fetch(self):\n url = \"https://api.recycleapp.be/api/app/v1\"\n headers = {\n \"x-secret\": \"Crgja3EGWe8jdapyr4EEoMBgZACYYjRRcRpaMQrLDW9HJBvmgkfGQyYqLgeXPavAGvnJqkV87PBB2b8zx43q46sUgzqio4yRZbABhtKeagkVKypTEDjKfPgGycjLyJTtLHYpzwJgp4YmmCuJZN9ZmJY8CGEoFs8MKfdJpU9RjkEVfngmmk2LYD4QzFegLNKUbcCeAdEW\",\n \"x-consumer\": \"recycleapp.be\",\n \"User-Agent\": \"\",\n \"Authorization\": \"\",\n }\n r = requests.get(f\"{url}/access-token\", headers=headers)\n headers[\"Authorization\"] = r.json()[\"accessToken\"]\n\n params = {\"q\": self._postcode}\n r = requests.get(f\"{url}/zipcodes\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get zip code 
failed\")\n return []\n zipcodeId = r.json()[\"items\"][0][\"id\"]\n\n params = {\"q\": self._street, \"zipcodes\": zipcodeId}\n r = requests.get(f\"{url}/streets\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get street id failed\")\n return []\n\n for item in r.json()[\"items\"]:\n if item[\"name\"] == self._street:\n streetId = item[\"id\"]\n if streetId is None:\n streetId = r.json()[\"items\"][0][\"id\"]\n\n now = datetime.now()\n fromDate = now.strftime(\"%Y-%m-%d\")\n untilDate = (now + timedelta(days=365)).strftime(\"%Y-%m-%d\")\n params = {\n \"zipcodeId\": zipcodeId,\n \"streetId\": streetId,\n \"houseNumber\": self._house_number,\n \"fromDate\": fromDate,\n \"untilDate\": untilDate,\n # \"size\":100,\n }\n r = requests.get(f\"{url}/collections\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get data failed\")\n return []\n\n entries = []\n for item in r.json()[\"items\"]:\n if \"exception\" in item and \"replacedBy\" in item[\"exception\"]:\n continue\n\n date = datetime.strptime(item[\"timestamp\"], \"%Y-%m-%dT%H:%M:%S.000Z\").date()\n if item[\"type\"] == \"collection\":\n entries.append(Collection(date, item[\"fraction\"][\"name\"][\"en\"]))\n elif item[\"type\"] == \"event\" and self._add_events:\n entries.append(Collection(date, item[\"event\"][\"title\"][\"en\"]))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py"}]} | 1,914 | 240 |
gh_patches_debug_58681 | rasdani/github-patches | git_diff | lightly-ai__lightly-1009 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Loss stuck
Hi, I am trying to run the tutorial posted here:
https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_moco_memory_bank.html
But my loss is stuck at 8.32 after 100 epochs.
python 3.9
pytorch-lightning 1.8.1
lightly 1.2.38
Any suggestions on how I should troubleshoot this?
Thanks in advance!
</issue>
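A plateau at 8.32 is suspicious in itself: it is close to ln(4097) ≈ 8.318, the cross-entropy of a chance-level classifier scoring one positive against a 4096-entry memory bank, which points at the bank entries never being normalized. One contributing detail is that `torch.nn.functional.normalize` is not an in-place operation, so its return value has to be assigned back. A minimal sketch in plain PyTorch, independent of lightly (the 4096 bank size and 128-dimensional features are assumptions chosen to mirror the tutorial's defaults):

```python
import math
import torch
import torch.nn.functional as F

# Chance-level InfoNCE loss with one positive and 4096 negatives.
print(math.log(4096 + 1))        # ~8.318, i.e. roughly the reported plateau

bank = torch.randn(128, 4096)
F.normalize(bank, dim=0)         # returns a new tensor; `bank` is unchanged
print(bank.norm(dim=0)[:3])      # columns are not unit norm
bank = F.normalize(bank, dim=0)  # assigning the result back fixes it
print(bank.norm(dim=0)[:3])      # ~1.0 everywhere
```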
<code>
[start of lightly/loss/memory_bank.py]
1 """ Memory Bank Wrapper """
2
3 # Copyright (c) 2020. Lightly AG and its affiliates.
4 # All Rights Reserved
5
6 import torch
7 import functools
8
9 class MemoryBankModule(torch.nn.Module):
10 """Memory bank implementation
11
12 This is a parent class to all loss functions implemented by the lightly
13 Python package. This way, any loss can be used with a memory bank if
14 desired.
15
16 Attributes:
17 size:
18 Number of keys the memory bank can store. If set to 0,
19 memory bank is not used.
20
21 Examples:
22 >>> class MyLossFunction(MemoryBankModule):
23 >>>
24 >>> def __init__(self, memory_bank_size: int = 2 ** 16):
25 >>> super(MyLossFunction, self).__init__(memory_bank_size)
26 >>>
27 >>> def forward(self, output: torch.Tensor,
28 >>> labels: torch.Tensor = None):
29 >>>
30 >>> output, negatives = super(
31 >>> MyLossFunction, self).forward(output)
32 >>>
33 >>> if negatives is not None:
34 >>> # evaluate loss with negative samples
35 >>> else:
36 >>> # evaluate loss without negative samples
37
38 """
39
40 def __init__(self, size: int = 2 ** 16):
41
42 super(MemoryBankModule, self).__init__()
43
44 if size < 0:
45 msg = f'Illegal memory bank size {size}, must be non-negative.'
46 raise ValueError(msg)
47
48 self.size = size
49 self.register_buffer("bank", tensor=torch.empty(0, dtype=torch.float), persistent=False)
50 self.register_buffer("bank_ptr", tensor=torch.empty(0, dtype=torch.long), persistent=False)
51
52 @torch.no_grad()
53 def _init_memory_bank(self, dim: int):
54 """Initialize the memory bank if it's empty
55
56 Args:
57 dim:
58 The dimension of the which are stored in the bank.
59
60 """
61 # create memory bank
62 # we could use register buffers like in the moco repo
63 # https://github.com/facebookresearch/moco but we don't
64 # want to pollute our checkpoints
65 self.bank = torch.randn(dim, self.size).type_as(self.bank)
66 torch.nn.functional.normalize(self.bank, dim=0)
67 self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)
68
69 @torch.no_grad()
70 def _dequeue_and_enqueue(self, batch: torch.Tensor):
71 """Dequeue the oldest batch and add the latest one
72
73 Args:
74 batch:
75 The latest batch of keys to add to the memory bank.
76
77 """
78 batch_size = batch.shape[0]
79 ptr = int(self.bank_ptr)
80
81 if ptr + batch_size >= self.size:
82 self.bank[:, ptr:] = batch[:self.size - ptr].T.detach()
83 self.bank_ptr[0] = 0
84 else:
85 self.bank[:, ptr:ptr + batch_size] = batch.T.detach()
86 self.bank_ptr[0] = ptr + batch_size
87
88 def forward(self,
89 output: torch.Tensor,
90 labels: torch.Tensor = None,
91 update: bool = False):
92 """Query memory bank for additional negative samples
93
94 Args:
95 output:
96 The output of the model.
97 labels:
98 Should always be None, will be ignored.
99
100 Returns:
101 The output if the memory bank is of size 0, otherwise the output
102 and the entries from the memory bank.
103
104 """
105
106 # no memory bank, return the output
107 if self.size == 0:
108 return output, None
109
110 _, dim = output.shape
111
112 # initialize the memory bank if it is not already done
113 if self.bank.nelement() == 0:
114 self._init_memory_bank(dim)
115
116 # query and update memory bank
117 bank = self.bank.clone().detach()
118
119 # only update memory bank if we later do backward pass (gradient)
120 if update:
121 self._dequeue_and_enqueue(output)
122
123 return output, bank
124
[end of lightly/loss/memory_bank.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lightly/loss/memory_bank.py b/lightly/loss/memory_bank.py
--- a/lightly/loss/memory_bank.py
+++ b/lightly/loss/memory_bank.py
@@ -63,7 +63,7 @@
# https://github.com/facebookresearch/moco but we don't
# want to pollute our checkpoints
self.bank = torch.randn(dim, self.size).type_as(self.bank)
- torch.nn.functional.normalize(self.bank, dim=0)
+ self.bank = torch.nn.functional.normalize(self.bank, dim=0)
self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)
@torch.no_grad()
| {"golden_diff": "diff --git a/lightly/loss/memory_bank.py b/lightly/loss/memory_bank.py\n--- a/lightly/loss/memory_bank.py\n+++ b/lightly/loss/memory_bank.py\n@@ -63,7 +63,7 @@\n # https://github.com/facebookresearch/moco but we don't\n # want to pollute our checkpoints\n self.bank = torch.randn(dim, self.size).type_as(self.bank)\n- torch.nn.functional.normalize(self.bank, dim=0)\n+ self.bank = torch.nn.functional.normalize(self.bank, dim=0)\n self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)\n \n @torch.no_grad()\n", "issue": "Loss stuck\nHi, I am trying to run the tutorial posted here \r\nhttps://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_moco_memory_bank.html\r\nBut my loss is stuck at 8.32 after 100 epochs\r\npython 3.9\r\npytorch-lightning 1.8.1 \r\nlightly 1.2.38\r\n\r\nAny suggestions on how I should troubleshoot this?\r\nThanks in advance!\n", "before_files": [{"content": "\"\"\" Memory Bank Wrapper \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport torch\nimport functools\n\nclass MemoryBankModule(torch.nn.Module):\n \"\"\"Memory bank implementation\n\n This is a parent class to all loss functions implemented by the lightly\n Python package. This way, any loss can be used with a memory bank if \n desired.\n\n Attributes:\n size:\n Number of keys the memory bank can store. If set to 0,\n memory bank is not used.\n\n Examples:\n >>> class MyLossFunction(MemoryBankModule):\n >>>\n >>> def __init__(self, memory_bank_size: int = 2 ** 16):\n >>> super(MyLossFunction, self).__init__(memory_bank_size)\n >>>\n >>> def forward(self, output: torch.Tensor,\n >>> labels: torch.Tensor = None):\n >>>\n >>> output, negatives = super(\n >>> MyLossFunction, self).forward(output)\n >>>\n >>> if negatives is not None:\n >>> # evaluate loss with negative samples\n >>> else:\n >>> # evaluate loss without negative samples\n\n \"\"\"\n\n def __init__(self, size: int = 2 ** 16):\n\n super(MemoryBankModule, self).__init__()\n\n if size < 0:\n msg = f'Illegal memory bank size {size}, must be non-negative.'\n raise ValueError(msg)\n\n self.size = size\n self.register_buffer(\"bank\", tensor=torch.empty(0, dtype=torch.float), persistent=False)\n self.register_buffer(\"bank_ptr\", tensor=torch.empty(0, dtype=torch.long), persistent=False)\n\n @torch.no_grad()\n def _init_memory_bank(self, dim: int):\n \"\"\"Initialize the memory bank if it's empty\n\n Args:\n dim:\n The dimension of the which are stored in the bank.\n\n \"\"\"\n # create memory bank\n # we could use register buffers like in the moco repo\n # https://github.com/facebookresearch/moco but we don't\n # want to pollute our checkpoints\n self.bank = torch.randn(dim, self.size).type_as(self.bank)\n torch.nn.functional.normalize(self.bank, dim=0)\n self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)\n\n @torch.no_grad()\n def _dequeue_and_enqueue(self, batch: torch.Tensor):\n \"\"\"Dequeue the oldest batch and add the latest one\n\n Args:\n batch:\n The latest batch of keys to add to the memory bank.\n\n \"\"\"\n batch_size = batch.shape[0]\n ptr = int(self.bank_ptr)\n\n if ptr + batch_size >= self.size:\n self.bank[:, ptr:] = batch[:self.size - ptr].T.detach()\n self.bank_ptr[0] = 0\n else:\n self.bank[:, ptr:ptr + batch_size] = batch.T.detach()\n self.bank_ptr[0] = ptr + batch_size\n\n def forward(self,\n output: torch.Tensor,\n labels: torch.Tensor = None,\n update: bool = False):\n \"\"\"Query memory bank for additional negative samples\n\n Args:\n output:\n The output 
of the model.\n labels:\n Should always be None, will be ignored.\n\n Returns:\n The output if the memory bank is of size 0, otherwise the output\n and the entries from the memory bank.\n\n \"\"\"\n\n # no memory bank, return the output\n if self.size == 0:\n return output, None\n\n _, dim = output.shape\n\n # initialize the memory bank if it is not already done\n if self.bank.nelement() == 0:\n self._init_memory_bank(dim)\n\n # query and update memory bank\n bank = self.bank.clone().detach()\n\n # only update memory bank if we later do backward pass (gradient)\n if update:\n self._dequeue_and_enqueue(output)\n\n return output, bank\n", "path": "lightly/loss/memory_bank.py"}]} | 1,768 | 144 |
gh_patches_debug_45351 | rasdani/github-patches | git_diff | quantumlib__Cirq-5261 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs: Filter out TYPE_CHECKING from public docs
**Description of the issue**
The `TYPE_CHECKING` variable imported from `typing` shows up in API docs (example: https://github.com/quantumlib/Cirq/issues/5150). We should filter it out, since it's not part of the cirq API. Per @dabacon's [comment](https://github.com/quantumlib/Cirq/pull/5229#issuecomment-1093080151), we should be able to do this in `dev_tools/docs/build_api_docs.py`.
</issue>
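The doc generator already routes every module's children through the callbacks listed in `callbacks=[...]` below, so one more callback with the same `(path, parent, children)` signature as the existing `filter_unwanted_inherited_methods` is enough. A hedged, dependency-free sketch of such a filter (the child list here is fabricated for illustration; tensorflow_docs supplies the real one):

```python
from typing import TYPE_CHECKING

def filter_type_checking(path, parent, children):
    # Drop the TYPE_CHECKING constant that `from typing import TYPE_CHECKING`
    # leaves behind in a module's namespace.
    return [(name, obj) for name, obj in children if name != "TYPE_CHECKING"]

children = [("TYPE_CHECKING", TYPE_CHECKING), ("Circuit", type("Circuit", (), {}))]
print(filter_type_checking(("cirq",), None, children))  # only ('Circuit', ...) remains
```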
<code>
[start of dev_tools/docs/build_api_docs.py]
1 # Copyright 2021 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Tool to generate external api_docs for Cirq.
16
17 In order to publish to our site, devsite runs two jobs for us: stable and nightly.
18 The stable one downloads the latest cirq release from pypi and uses that to generate the reference
19 API docs.
20 The nightly one downloads the latest cirq pre-release (pip install cirq --pre) and uses that to
21 generate the "nightly diff".
22
23 This script needs to cater for both of these cases.
24 """
25
26 import os
27 import types
28
29 import networkx
30 from absl import app
31 from absl import flags
32 from tensorflow_docs.api_generator import doc_controls
33 from tensorflow_docs.api_generator import generate_lib
34 from tensorflow_docs.api_generator import public_api
35
36 import cirq
37 import cirq_aqt
38 import cirq_google
39 import cirq_ionq
40 import cirq_pasqal
41 import cirq_rigetti
42 import cirq_web
43
44 from cirq import _doc
45
46 flags.DEFINE_string("output_dir", "docs/api_docs", "Where to output the docs")
47
48 flags.DEFINE_string(
49 "code_url_prefix",
50 "https://github.com/quantumlib/Cirq/blob/master",
51 "The url prefix for links to code.",
52 )
53
54 flags.DEFINE_bool("search_hints", True, "Include metadata search hints in the generated files")
55
56 flags.DEFINE_string("site_path", "reference/python", "Path prefix in the _toc.yaml")
57
58 FLAGS = flags.FLAGS
59
60
61 def filter_unwanted_inherited_methods(path, parent, children):
62 """Filter the unwanted inherited methods.
63
64 CircuitDag inherits a lot of methods from `networkx.DiGraph` and `Graph`.
65 This filter removes these, as it creates a lot of noise in the API docs.
66 """
67 if parent.__name__ != "CircuitDag":
68 return children
69
70 filtered_children = []
71 for name, obj in children:
72 if isinstance(obj, types.FunctionType):
73 if obj.__module__.startswith('cirq'):
74 filtered_children.append((name, obj))
75 return filtered_children
76
77
78 def main(unused_argv):
79 generate_cirq()
80 generate_cirq_google()
81 generate_cirq_aqt()
82 generate_cirq_ionq()
83 generate_cirq_pasqal()
84 generate_cirq_rigetti()
85 generate_cirq_web()
86
87
88 def generate_cirq():
89 doc_generator = generate_lib.DocGenerator(
90 root_title="Cirq",
91 py_modules=[("cirq", cirq)],
92 base_dir=os.path.dirname(cirq.__file__),
93 code_url_prefix=FLAGS.code_url_prefix + "/cirq-core/cirq",
94 search_hints=FLAGS.search_hints,
95 site_path=FLAGS.site_path,
96 callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
97 extra_docs=_doc.RECORDED_CONST_DOCS,
98 )
99 doc_controls.decorate_all_class_attributes(
100 doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]
101 )
102 doc_generator.build(output_dir=FLAGS.output_dir)
103
104
105 def generate_cirq_aqt():
106 doc_generator = generate_lib.DocGenerator(
107 root_title="Cirq-aqt",
108 py_modules=[("cirq_aqt", cirq_aqt)],
109 base_dir=os.path.dirname(cirq_aqt.__file__),
110 code_url_prefix=FLAGS.code_url_prefix + "/cirq-aqt/cirq_aqt",
111 search_hints=FLAGS.search_hints,
112 site_path=FLAGS.site_path,
113 callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
114 extra_docs=_doc.RECORDED_CONST_DOCS,
115 )
116 doc_controls.decorate_all_class_attributes(
117 doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]
118 )
119
120 doc_generator.build(output_dir=FLAGS.output_dir)
121
122
123 def generate_cirq_ionq():
124 doc_generator = generate_lib.DocGenerator(
125 root_title="Cirq_ionq",
126 py_modules=[("cirq_ionq", cirq_ionq)],
127 base_dir=os.path.dirname(cirq_ionq.__file__),
128 code_url_prefix=FLAGS.code_url_prefix + "/cirq-ionq/cirq_ionq",
129 search_hints=FLAGS.search_hints,
130 site_path=FLAGS.site_path,
131 callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
132 extra_docs=_doc.RECORDED_CONST_DOCS,
133 )
134 doc_controls.decorate_all_class_attributes(
135 doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]
136 )
137
138 doc_generator.build(output_dir=FLAGS.output_dir)
139
140
141 def generate_cirq_pasqal():
142 doc_generator = generate_lib.DocGenerator(
143 root_title="Cirq-pasqal",
144 py_modules=[("cirq_pasqal", cirq_pasqal)],
145 base_dir=os.path.dirname(cirq_pasqal.__file__),
146 code_url_prefix=FLAGS.code_url_prefix + "/cirq-pasqal/cirq_pasqal",
147 search_hints=FLAGS.search_hints,
148 site_path=FLAGS.site_path,
149 callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
150 extra_docs=_doc.RECORDED_CONST_DOCS,
151 )
152 doc_controls.decorate_all_class_attributes(
153 doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]
154 )
155
156 doc_generator.build(output_dir=FLAGS.output_dir)
157
158
159 def generate_cirq_rigetti():
160 doc_generator = generate_lib.DocGenerator(
161 root_title="Cirq_rigetti",
162 py_modules=[("cirq_rigetti", cirq_rigetti)],
163 base_dir=os.path.dirname(cirq_rigetti.__file__),
164 code_url_prefix=FLAGS.code_url_prefix + "/cirq-rigetti/cirq_rigetti",
165 search_hints=FLAGS.search_hints,
166 site_path=FLAGS.site_path,
167 callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
168 extra_docs=_doc.RECORDED_CONST_DOCS,
169 )
170 doc_controls.decorate_all_class_attributes(
171 doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]
172 )
173
174 doc_generator.build(output_dir=FLAGS.output_dir)
175
176
177 def generate_cirq_google():
178 doc_generator = generate_lib.DocGenerator(
179 root_title="Cirq-google",
180 py_modules=[("cirq_google", cirq_google)],
181 base_dir=os.path.dirname(cirq_google.__file__),
182 code_url_prefix=FLAGS.code_url_prefix + "/cirq-google/cirq_google",
183 search_hints=FLAGS.search_hints,
184 site_path=FLAGS.site_path,
185 callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
186 private_map={
187 # Opt to not build docs for these paths for now since they error.
188 "cirq_google.cloud.quantum.QuantumEngineServiceClient": ["enums"],
189 "cirq_google.cloud.quantum_v1alpha1.QuantumEngineServiceClient": ["enums"],
190 "cirq_google.api": ["v1"],
191 },
192 extra_docs=_doc.RECORDED_CONST_DOCS,
193 )
194 doc_generator.build(output_dir=FLAGS.output_dir)
195
196
197 def generate_cirq_web():
198 doc_generator = generate_lib.DocGenerator(
199 root_title="Cirq_web",
200 py_modules=[("cirq_web", cirq_web)],
201 base_dir=os.path.dirname(cirq_web.__file__),
202 code_url_prefix=FLAGS.code_url_prefix + "/cirq-web/cirq_web",
203 search_hints=FLAGS.search_hints,
204 site_path=FLAGS.site_path,
205 callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
206 extra_docs=_doc.RECORDED_CONST_DOCS,
207 )
208 doc_controls.decorate_all_class_attributes(
209 doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]
210 )
211
212 doc_generator.build(output_dir=FLAGS.output_dir)
213
214
215 if __name__ == "__main__":
216 app.run(main)
217
[end of dev_tools/docs/build_api_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dev_tools/docs/build_api_docs.py b/dev_tools/docs/build_api_docs.py
--- a/dev_tools/docs/build_api_docs.py
+++ b/dev_tools/docs/build_api_docs.py
@@ -75,6 +75,14 @@
return filtered_children
+def filter_type_checking(path, parent, children):
+ filtered_children = []
+ for name, obj in children:
+ if name != 'TYPE_CHECKING':
+ filtered_children.append((name, obj))
+ return filtered_children
+
+
def main(unused_argv):
generate_cirq()
generate_cirq_google()
@@ -93,7 +101,11 @@
code_url_prefix=FLAGS.code_url_prefix + "/cirq-core/cirq",
search_hints=FLAGS.search_hints,
site_path=FLAGS.site_path,
- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
+ callbacks=[
+ public_api.local_definitions_filter,
+ filter_unwanted_inherited_methods,
+ filter_type_checking,
+ ],
extra_docs=_doc.RECORDED_CONST_DOCS,
)
doc_controls.decorate_all_class_attributes(
@@ -110,7 +122,11 @@
code_url_prefix=FLAGS.code_url_prefix + "/cirq-aqt/cirq_aqt",
search_hints=FLAGS.search_hints,
site_path=FLAGS.site_path,
- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
+ callbacks=[
+ public_api.local_definitions_filter,
+ filter_unwanted_inherited_methods,
+ filter_type_checking,
+ ],
extra_docs=_doc.RECORDED_CONST_DOCS,
)
doc_controls.decorate_all_class_attributes(
@@ -128,7 +144,11 @@
code_url_prefix=FLAGS.code_url_prefix + "/cirq-ionq/cirq_ionq",
search_hints=FLAGS.search_hints,
site_path=FLAGS.site_path,
- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
+ callbacks=[
+ public_api.local_definitions_filter,
+ filter_unwanted_inherited_methods,
+ filter_type_checking,
+ ],
extra_docs=_doc.RECORDED_CONST_DOCS,
)
doc_controls.decorate_all_class_attributes(
@@ -146,7 +166,11 @@
code_url_prefix=FLAGS.code_url_prefix + "/cirq-pasqal/cirq_pasqal",
search_hints=FLAGS.search_hints,
site_path=FLAGS.site_path,
- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
+ callbacks=[
+ public_api.local_definitions_filter,
+ filter_unwanted_inherited_methods,
+ filter_type_checking,
+ ],
extra_docs=_doc.RECORDED_CONST_DOCS,
)
doc_controls.decorate_all_class_attributes(
@@ -164,7 +188,11 @@
code_url_prefix=FLAGS.code_url_prefix + "/cirq-rigetti/cirq_rigetti",
search_hints=FLAGS.search_hints,
site_path=FLAGS.site_path,
- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
+ callbacks=[
+ public_api.local_definitions_filter,
+ filter_unwanted_inherited_methods,
+ filter_type_checking,
+ ],
extra_docs=_doc.RECORDED_CONST_DOCS,
)
doc_controls.decorate_all_class_attributes(
@@ -182,7 +210,11 @@
code_url_prefix=FLAGS.code_url_prefix + "/cirq-google/cirq_google",
search_hints=FLAGS.search_hints,
site_path=FLAGS.site_path,
- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
+ callbacks=[
+ public_api.local_definitions_filter,
+ filter_unwanted_inherited_methods,
+ filter_type_checking,
+ ],
private_map={
# Opt to not build docs for these paths for now since they error.
"cirq_google.cloud.quantum.QuantumEngineServiceClient": ["enums"],
@@ -202,7 +234,11 @@
code_url_prefix=FLAGS.code_url_prefix + "/cirq-web/cirq_web",
search_hints=FLAGS.search_hints,
site_path=FLAGS.site_path,
- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],
+ callbacks=[
+ public_api.local_definitions_filter,
+ filter_unwanted_inherited_methods,
+ filter_type_checking,
+ ],
extra_docs=_doc.RECORDED_CONST_DOCS,
)
doc_controls.decorate_all_class_attributes(
| {"golden_diff": "diff --git a/dev_tools/docs/build_api_docs.py b/dev_tools/docs/build_api_docs.py\n--- a/dev_tools/docs/build_api_docs.py\n+++ b/dev_tools/docs/build_api_docs.py\n@@ -75,6 +75,14 @@\n return filtered_children\n \n \n+def filter_type_checking(path, parent, children):\n+ filtered_children = []\n+ for name, obj in children:\n+ if name != 'TYPE_CHECKING':\n+ filtered_children.append((name, obj))\n+ return filtered_children\n+\n+\n def main(unused_argv):\n generate_cirq()\n generate_cirq_google()\n@@ -93,7 +101,11 @@\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-core/cirq\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n+ callbacks=[\n+ public_api.local_definitions_filter,\n+ filter_unwanted_inherited_methods,\n+ filter_type_checking,\n+ ],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n@@ -110,7 +122,11 @@\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-aqt/cirq_aqt\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n+ callbacks=[\n+ public_api.local_definitions_filter,\n+ filter_unwanted_inherited_methods,\n+ filter_type_checking,\n+ ],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n@@ -128,7 +144,11 @@\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-ionq/cirq_ionq\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n+ callbacks=[\n+ public_api.local_definitions_filter,\n+ filter_unwanted_inherited_methods,\n+ filter_type_checking,\n+ ],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n@@ -146,7 +166,11 @@\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-pasqal/cirq_pasqal\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n+ callbacks=[\n+ public_api.local_definitions_filter,\n+ filter_unwanted_inherited_methods,\n+ filter_type_checking,\n+ ],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n@@ -164,7 +188,11 @@\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-rigetti/cirq_rigetti\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n+ callbacks=[\n+ public_api.local_definitions_filter,\n+ filter_unwanted_inherited_methods,\n+ filter_type_checking,\n+ ],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n@@ -182,7 +210,11 @@\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-google/cirq_google\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n+ callbacks=[\n+ public_api.local_definitions_filter,\n+ filter_unwanted_inherited_methods,\n+ filter_type_checking,\n+ ],\n private_map={\n # Opt to not build docs for these paths for now since they error.\n \"cirq_google.cloud.quantum.QuantumEngineServiceClient\": [\"enums\"],\n@@ -202,7 +234,11 @@\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-web/cirq_web\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n- callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n+ 
callbacks=[\n+ public_api.local_definitions_filter,\n+ filter_unwanted_inherited_methods,\n+ filter_type_checking,\n+ ],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n", "issue": "Docs: Filter out TYPE_CHECKING from public docs\n**Description of the issue**\r\n\r\nThe `TYPE_CHECKING` variable imported from `typing` shows up in API docs (example: https://github.com/quantumlib/Cirq/issues/5150). We should filter it out, since it's not part of the cirq API. Per @dabacon's [comment](https://github.com/quantumlib/Cirq/pull/5229#issuecomment-1093080151), we should be able to do this in `dev_tools/docs/build_api_docs.py`.\r\n\n", "before_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Tool to generate external api_docs for Cirq.\n\nIn order to publish to our site, devsite runs two jobs for us: stable and nightly.\nThe stable one downloads the latest cirq release from pypi and uses that to generate the reference\nAPI docs.\nThe nightly one downloads the latest cirq pre-release (pip install cirq --pre) and uses that to\ngenerate the \"nightly diff\".\n\nThis script needs to cater for both of these cases.\n\"\"\"\n\nimport os\nimport types\n\nimport networkx\nfrom absl import app\nfrom absl import flags\nfrom tensorflow_docs.api_generator import doc_controls\nfrom tensorflow_docs.api_generator import generate_lib\nfrom tensorflow_docs.api_generator import public_api\n\nimport cirq\nimport cirq_aqt\nimport cirq_google\nimport cirq_ionq\nimport cirq_pasqal\nimport cirq_rigetti\nimport cirq_web\n\nfrom cirq import _doc\n\nflags.DEFINE_string(\"output_dir\", \"docs/api_docs\", \"Where to output the docs\")\n\nflags.DEFINE_string(\n \"code_url_prefix\",\n \"https://github.com/quantumlib/Cirq/blob/master\",\n \"The url prefix for links to code.\",\n)\n\nflags.DEFINE_bool(\"search_hints\", True, \"Include metadata search hints in the generated files\")\n\nflags.DEFINE_string(\"site_path\", \"reference/python\", \"Path prefix in the _toc.yaml\")\n\nFLAGS = flags.FLAGS\n\n\ndef filter_unwanted_inherited_methods(path, parent, children):\n \"\"\"Filter the unwanted inherited methods.\n\n CircuitDag inherits a lot of methods from `networkx.DiGraph` and `Graph`.\n This filter removes these, as it creates a lot of noise in the API docs.\n \"\"\"\n if parent.__name__ != \"CircuitDag\":\n return children\n\n filtered_children = []\n for name, obj in children:\n if isinstance(obj, types.FunctionType):\n if obj.__module__.startswith('cirq'):\n filtered_children.append((name, obj))\n return filtered_children\n\n\ndef main(unused_argv):\n generate_cirq()\n generate_cirq_google()\n generate_cirq_aqt()\n generate_cirq_ionq()\n generate_cirq_pasqal()\n generate_cirq_rigetti()\n generate_cirq_web()\n\n\ndef generate_cirq():\n doc_generator = generate_lib.DocGenerator(\n root_title=\"Cirq\",\n py_modules=[(\"cirq\", cirq)],\n 
base_dir=os.path.dirname(cirq.__file__),\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-core/cirq\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]\n )\n doc_generator.build(output_dir=FLAGS.output_dir)\n\n\ndef generate_cirq_aqt():\n doc_generator = generate_lib.DocGenerator(\n root_title=\"Cirq-aqt\",\n py_modules=[(\"cirq_aqt\", cirq_aqt)],\n base_dir=os.path.dirname(cirq_aqt.__file__),\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-aqt/cirq_aqt\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]\n )\n\n doc_generator.build(output_dir=FLAGS.output_dir)\n\n\ndef generate_cirq_ionq():\n doc_generator = generate_lib.DocGenerator(\n root_title=\"Cirq_ionq\",\n py_modules=[(\"cirq_ionq\", cirq_ionq)],\n base_dir=os.path.dirname(cirq_ionq.__file__),\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-ionq/cirq_ionq\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]\n )\n\n doc_generator.build(output_dir=FLAGS.output_dir)\n\n\ndef generate_cirq_pasqal():\n doc_generator = generate_lib.DocGenerator(\n root_title=\"Cirq-pasqal\",\n py_modules=[(\"cirq_pasqal\", cirq_pasqal)],\n base_dir=os.path.dirname(cirq_pasqal.__file__),\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-pasqal/cirq_pasqal\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]\n )\n\n doc_generator.build(output_dir=FLAGS.output_dir)\n\n\ndef generate_cirq_rigetti():\n doc_generator = generate_lib.DocGenerator(\n root_title=\"Cirq_rigetti\",\n py_modules=[(\"cirq_rigetti\", cirq_rigetti)],\n base_dir=os.path.dirname(cirq_rigetti.__file__),\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-rigetti/cirq_rigetti\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]\n )\n\n doc_generator.build(output_dir=FLAGS.output_dir)\n\n\ndef generate_cirq_google():\n doc_generator = generate_lib.DocGenerator(\n root_title=\"Cirq-google\",\n py_modules=[(\"cirq_google\", cirq_google)],\n base_dir=os.path.dirname(cirq_google.__file__),\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-google/cirq_google\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n private_map={\n # Opt to not build docs for these paths for now since they error.\n 
\"cirq_google.cloud.quantum.QuantumEngineServiceClient\": [\"enums\"],\n \"cirq_google.cloud.quantum_v1alpha1.QuantumEngineServiceClient\": [\"enums\"],\n \"cirq_google.api\": [\"v1\"],\n },\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_generator.build(output_dir=FLAGS.output_dir)\n\n\ndef generate_cirq_web():\n doc_generator = generate_lib.DocGenerator(\n root_title=\"Cirq_web\",\n py_modules=[(\"cirq_web\", cirq_web)],\n base_dir=os.path.dirname(cirq_web.__file__),\n code_url_prefix=FLAGS.code_url_prefix + \"/cirq-web/cirq_web\",\n search_hints=FLAGS.search_hints,\n site_path=FLAGS.site_path,\n callbacks=[public_api.local_definitions_filter, filter_unwanted_inherited_methods],\n extra_docs=_doc.RECORDED_CONST_DOCS,\n )\n doc_controls.decorate_all_class_attributes(\n doc_controls.do_not_doc_inheritable, networkx.DiGraph, skip=[]\n )\n\n doc_generator.build(output_dir=FLAGS.output_dir)\n\n\nif __name__ == \"__main__\":\n app.run(main)\n", "path": "dev_tools/docs/build_api_docs.py"}]} | 3,032 | 1,006 |
gh_patches_debug_23108 | rasdani/github-patches | git_diff | keras-team__autokeras-568 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Evaluation criteria for MLP
Are there any evaluation criteria for the MLP module in Autokeras?
</issue>
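The module is already constructed with a `loss` and a `metric` (see the listing below), so the natural evaluation criterion is that same metric applied to held-out data after `final_fit`. A hedged sketch of such an evaluation loop in plain PyTorch (`metric().compute(predictions, targets)` mirrors the Metric interface used elsewhere in the package; treat the exact call as an assumption):

```python
import torch

def evaluate(model, test_loader, metric, device="cpu"):
    """Collect predictions and targets batch by batch, then score them."""
    model.eval()
    predictions, targets = [], []
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            predictions.append(model(x))
            targets.append(y)
    return metric().compute(predictions, targets)
```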
<code>
[start of autokeras/net_module.py]
1 from functools import reduce
2
3 import torch
4 import numpy as np
5
6 import os
7 import time
8
9 from autokeras.constant import Constant
10 from autokeras.search import BayesianSearcher, train
11
12 from autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir
13 from autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator
14
15
16 class NetworkModule:
17 """ Class to create a network module.
18
19 Attributes:
20 loss: A function taking two parameters, the predictions and the ground truth.
21 metric: An instance of the Metric subclasses.
22 searcher_args: A dictionary containing the parameters for the searcher's __init__ function.
23 searcher: An instance of the Searcher class.
24 path: A string. The path to the directory to save the searcher.
25 verbose: A boolean. Setting it to true prints to stdout.
26 generators: A list of instances of the NetworkGenerator class or its subclasses.
27 search_type: A constant denoting the type of hyperparameter search algorithm that must be used.
28 """
29
30 def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False, search_type=BayesianSearcher):
31 self.searcher_args = searcher_args if searcher_args is not None else {}
32 self.searcher = None
33 self.path = path if path is not None else rand_temp_folder_generator()
34 ensure_dir(self.path)
35 if verbose:
36 print('Saving Directory:', self.path)
37 self.verbose = verbose
38 self.loss = loss
39 self.metric = metric
40 self.generators = []
41 self.search_type = search_type
42
43 def fit(self, n_output_node, input_shape, train_data, test_data, time_limit=24 * 60 * 60):
44 """ Search the best network.
45
46 Args:
47 n_output_node: A integer value represent the number of output node in the final layer.
48 input_shape: A tuple to express the shape of every train entry. For example,
49 MNIST dataset would be (28,28,1).
50 train_data: A PyTorch DataLoader instance representing the training data.
51 test_data: A PyTorch DataLoader instance representing the testing data.
52 time_limit: A integer value represents the time limit on searching for models.
53 """
54 # Create the searcher and save on disk
55
56 if not self.searcher:
57 input_shape = input_shape[1:]
58 self.searcher_args['n_output_node'] = n_output_node
59 self.searcher_args['input_shape'] = input_shape
60 self.searcher_args['path'] = self.path
61 self.searcher_args['metric'] = self.metric
62 self.searcher_args['loss'] = self.loss
63 self.searcher_args['generators'] = self.generators
64 self.searcher_args['verbose'] = self.verbose
65 pickle_to_file(self, os.path.join(self.path, 'module'))
66 self.searcher = self.search_type(**self.searcher_args)
67
68 start_time = time.time()
69 time_remain = time_limit
70 try:
71 while time_remain > 0:
72 self.searcher.search(train_data, test_data, int(time_remain))
73 pickle_to_file(self, os.path.join(self.path, 'module'))
74 if len(self.searcher.history) >= Constant.MAX_MODEL_NUM:
75 break
76 time_elapsed = time.time() - start_time
77 time_remain = time_limit - time_elapsed
78 # if no search executed during the time_limit, then raise an error
79 if time_remain <= 0:
80 raise TimeoutError
81 except TimeoutError:
82 if len(self.searcher.history) == 0:
83 raise TimeoutError("Search Time too short. No model was found during the search time.")
84 elif self.verbose:
85 print('Time is out.')
86
87 def final_fit(self, train_data, test_data, trainer_args=None, retrain=False):
88 """Final training after found the best architecture.
89
90 Args:
91 train_data: A DataLoader instance representing the training data.
92 test_data: A DataLoader instance representing the testing data.
93 trainer_args: A dictionary containing the parameters of the ModelTrainer constructor.
94 retrain: A boolean of whether reinitialize the weights of the model.
95 """
96 graph = self.searcher.load_best_model()
97
98 if retrain:
99 graph.weighted = False
100 _, _1, graph = train(None, graph,
101 train_data,
102 test_data,
103 trainer_args,
104 self.metric,
105 self.loss,
106 self.verbose,
107 self.path)
108 self.searcher.replace_model(graph, self.searcher.get_best_model_id())
109 pickle_to_file(self, os.path.join(self.path, 'module'))
110
111 @property
112 def best_model(self):
113 return self.searcher.load_best_model()
114
115 def predict(self, test_loader):
116 model = self.best_model.produce_model()
117 model.eval()
118
119 outputs = []
120 with torch.no_grad():
121 for index, inputs in enumerate(test_loader):
122 outputs.append(model(inputs).numpy())
123 output = reduce(lambda x, y: np.concatenate((x, y)), outputs)
124 return output
125
126
127 class CnnModule(NetworkModule):
128 """ Class to create a CNN module."""
129
130 def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False,
131 search_type=BayesianSearcher):
132 super(CnnModule, self).__init__(loss, metric, searcher_args, path, verbose, search_type)
133 self.generators.append(CnnGenerator)
134 self.generators.append(ResNetGenerator)
135 self.generators.append(DenseNetGenerator)
136
137
138 class MlpModule(NetworkModule):
139 """ Class to create an MLP module."""
140
141 def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False):
142 super(MlpModule, self).__init__(loss, metric, searcher_args, path, verbose)
143 self.generators.extend([MlpGenerator] * 2)
144
[end of autokeras/net_module.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/autokeras/net_module.py b/autokeras/net_module.py
--- a/autokeras/net_module.py
+++ b/autokeras/net_module.py
@@ -11,6 +11,7 @@
from autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir
from autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator
+from autokeras.utils import get_device
class NetworkModule:
@@ -123,6 +124,24 @@
output = reduce(lambda x, y: np.concatenate((x, y)), outputs)
return output
+ def evaluate(self, test_data):
+ """Evaluate the performance of the best architecture in terms of the loss.
+
+ Args:
+ test_data: A DataLoader instance representing the testing data.
+ """
+ model = self.best_model.produce_model()
+ model.eval()
+ device = get_device()
+ target, prediction = [], []
+
+ with torch.no_grad():
+ for _, (x, y) in enumerate(test_data):
+ x, y = x.to(device), y.to(device)
+ prediction.append(model(x))
+ target.append(y)
+ return self.metric().compute(prediction, target)
+
class CnnModule(NetworkModule):
""" Class to create a CNN module."""
| {"golden_diff": "diff --git a/autokeras/net_module.py b/autokeras/net_module.py\n--- a/autokeras/net_module.py\n+++ b/autokeras/net_module.py\n@@ -11,6 +11,7 @@\n \n from autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir\n from autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator\n+from autokeras.utils import get_device\n \n \n class NetworkModule:\n@@ -123,6 +124,24 @@\n output = reduce(lambda x, y: np.concatenate((x, y)), outputs)\n return output\n \n+ def evaluate(self, test_data):\n+ \"\"\"Evaluate the performance of the best architecture in terms of the loss.\n+\n+ Args:\n+ test_data: A DataLoader instance representing the testing data.\n+ \"\"\"\n+ model = self.best_model.produce_model()\n+ model.eval()\n+ device = get_device()\n+ target, prediction = [], []\n+\n+ with torch.no_grad():\n+ for _, (x, y) in enumerate(test_data):\n+ x, y = x.to(device), y.to(device)\n+ prediction.append(model(x))\n+ target.append(y)\n+ return self.metric().compute(prediction, target)\n+\n \n class CnnModule(NetworkModule):\n \"\"\" Class to create a CNN module.\"\"\"\n", "issue": "Evaluation criteria for MLP\nAre there any evaluation criterias for MLP module in Autokeras? \r\n\n", "before_files": [{"content": "from functools import reduce\n\nimport torch\nimport numpy as np\n\nimport os\nimport time\n\nfrom autokeras.constant import Constant\nfrom autokeras.search import BayesianSearcher, train\n\nfrom autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir\nfrom autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator\n\n\nclass NetworkModule:\n \"\"\" Class to create a network module.\n\n Attributes:\n loss: A function taking two parameters, the predictions and the ground truth.\n metric: An instance of the Metric subclasses.\n searcher_args: A dictionary containing the parameters for the searcher's __init__ function.\n searcher: An instance of the Searcher class.\n path: A string. The path to the directory to save the searcher.\n verbose: A boolean. Setting it to true prints to stdout.\n generators: A list of instances of the NetworkGenerator class or its subclasses.\n search_type: A constant denoting the type of hyperparameter search algorithm that must be used.\n \"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False, search_type=BayesianSearcher):\n self.searcher_args = searcher_args if searcher_args is not None else {}\n self.searcher = None\n self.path = path if path is not None else rand_temp_folder_generator()\n ensure_dir(self.path)\n if verbose:\n print('Saving Directory:', self.path)\n self.verbose = verbose\n self.loss = loss\n self.metric = metric\n self.generators = []\n self.search_type = search_type\n\n def fit(self, n_output_node, input_shape, train_data, test_data, time_limit=24 * 60 * 60):\n \"\"\" Search the best network.\n\n Args:\n n_output_node: A integer value represent the number of output node in the final layer.\n input_shape: A tuple to express the shape of every train entry. 
For example,\n MNIST dataset would be (28,28,1).\n train_data: A PyTorch DataLoader instance representing the training data.\n test_data: A PyTorch DataLoader instance representing the testing data.\n time_limit: A integer value represents the time limit on searching for models.\n \"\"\"\n # Create the searcher and save on disk\n\n if not self.searcher:\n input_shape = input_shape[1:]\n self.searcher_args['n_output_node'] = n_output_node\n self.searcher_args['input_shape'] = input_shape\n self.searcher_args['path'] = self.path\n self.searcher_args['metric'] = self.metric\n self.searcher_args['loss'] = self.loss\n self.searcher_args['generators'] = self.generators\n self.searcher_args['verbose'] = self.verbose\n pickle_to_file(self, os.path.join(self.path, 'module'))\n self.searcher = self.search_type(**self.searcher_args)\n\n start_time = time.time()\n time_remain = time_limit\n try:\n while time_remain > 0:\n self.searcher.search(train_data, test_data, int(time_remain))\n pickle_to_file(self, os.path.join(self.path, 'module'))\n if len(self.searcher.history) >= Constant.MAX_MODEL_NUM:\n break\n time_elapsed = time.time() - start_time\n time_remain = time_limit - time_elapsed\n # if no search executed during the time_limit, then raise an error\n if time_remain <= 0:\n raise TimeoutError\n except TimeoutError:\n if len(self.searcher.history) == 0:\n raise TimeoutError(\"Search Time too short. No model was found during the search time.\")\n elif self.verbose:\n print('Time is out.')\n\n def final_fit(self, train_data, test_data, trainer_args=None, retrain=False):\n \"\"\"Final training after found the best architecture.\n\n Args:\n train_data: A DataLoader instance representing the training data.\n test_data: A DataLoader instance representing the testing data.\n trainer_args: A dictionary containing the parameters of the ModelTrainer constructor.\n retrain: A boolean of whether reinitialize the weights of the model.\n \"\"\"\n graph = self.searcher.load_best_model()\n\n if retrain:\n graph.weighted = False\n _, _1, graph = train(None, graph,\n train_data,\n test_data,\n trainer_args,\n self.metric,\n self.loss,\n self.verbose,\n self.path)\n self.searcher.replace_model(graph, self.searcher.get_best_model_id())\n pickle_to_file(self, os.path.join(self.path, 'module'))\n\n @property\n def best_model(self):\n return self.searcher.load_best_model()\n\n def predict(self, test_loader):\n model = self.best_model.produce_model()\n model.eval()\n\n outputs = []\n with torch.no_grad():\n for index, inputs in enumerate(test_loader):\n outputs.append(model(inputs).numpy())\n output = reduce(lambda x, y: np.concatenate((x, y)), outputs)\n return output\n\n\nclass CnnModule(NetworkModule):\n \"\"\" Class to create a CNN module.\"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False,\n search_type=BayesianSearcher):\n super(CnnModule, self).__init__(loss, metric, searcher_args, path, verbose, search_type)\n self.generators.append(CnnGenerator)\n self.generators.append(ResNetGenerator)\n self.generators.append(DenseNetGenerator)\n\n\nclass MlpModule(NetworkModule):\n \"\"\" Class to create an MLP module.\"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False):\n super(MlpModule, self).__init__(loss, metric, searcher_args, path, verbose)\n self.generators.extend([MlpGenerator] * 2)\n", "path": "autokeras/net_module.py"}]} | 2,147 | 301 |
gh_patches_debug_23667 | rasdani/github-patches | git_diff | pandas-dev__pandas-6803 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
numexpr 2.3.1 error with pandas 0.13.1
I just installed numexpr 2.3.1 with pandas 0.13.1 and got the following error:
File "C:\Python27\lib\site-packages\pandas\core\ops.py", line 496, in wrapper
arr = na_op(lvalues, rvalues)
File "C:\Python27\lib\site-packages\pandas\core\ops.py", line 443, in na_op
raise_on_error=True, **eval_kwargs)
File "C:\Python27\lib\site-packages\pandas\computation\expressions.py", line 176, in evaluate
**eval_kwargs)
File "C:\Python27\lib\site-packages\pandas\computation\expressions.py", line 104, in _evaluate_numexpr
**eval_kwargs)
File "C:\Python27\lib\site-packages\numexpr\necompiler.py", line 738, in evaluate
NumExpr(ex, signature, **context)
File "C:\Python27\lib\site-packages\numexpr\necompiler.py", line 554, in NumExpr
precompile(ex, signature, context)
File "C:\Python27\lib\site-packages\numexpr\necompiler.py", line 498, in precompile
ast = typeCompileAst(ast)
File "C:\Python27\lib\site-packages\numexpr\necompiler.py", line 163, in typeCompileAst
% (ast.value + '_' + retsig+basesig))
NotImplementedError: couldn't find matching opcode for 'mul_bbb'
</issue>
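The final frame is the telling one: `mul_bbb` is numexpr's opcode name for a multiply whose operands and result are all boolean, an opcode it simply does not provide, and pandas only hands the operation to numexpr once the frames exceed the 10,000-element `_MIN_ELEMENTS` threshold visible in the listing below. A hedged reproduction sketch (exact behaviour assumed from the traceback and the listed code path; small frames will not trigger it):

```python
import numpy as np
import pandas as pd

# Large all-boolean frames: np.prod(shape) > 10000, so the elementwise
# product is routed through numexpr, which lacks a bool multiply kernel.
df = pd.DataFrame(np.random.rand(20000, 2) > 0.5)
df * df  # NotImplementedError: ... 'mul_bbb' on pandas 0.13.1 + numexpr 2.3.x
```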
<code>
[start of pandas/computation/expressions.py]
1 """
2 Expressions
3 -----------
4
5 Offer fast expression evaluation through numexpr
6
7 """
8
9 import numpy as np
10 from pandas.core.common import _values_from_object
11 from distutils.version import LooseVersion
12
13 try:
14 import numexpr as ne
15 _NUMEXPR_INSTALLED = ne.__version__ >= LooseVersion('2.0')
16 except ImportError: # pragma: no cover
17 _NUMEXPR_INSTALLED = False
18
19 _TEST_MODE = None
20 _TEST_RESULT = None
21 _USE_NUMEXPR = _NUMEXPR_INSTALLED
22 _evaluate = None
23 _where = None
24
25 # the set of dtypes that we will allow pass to numexpr
26 _ALLOWED_DTYPES = {
27 'evaluate': set(['int64', 'int32', 'float64', 'float32', 'bool']),
28 'where': set(['int64', 'float64', 'bool'])
29 }
30
31 # the minimum prod shape that we will use numexpr
32 _MIN_ELEMENTS = 10000
33
34
35 def set_use_numexpr(v=True):
36 # set/unset to use numexpr
37 global _USE_NUMEXPR
38 if _NUMEXPR_INSTALLED:
39 _USE_NUMEXPR = v
40
41 # choose what we are going to do
42 global _evaluate, _where
43 if not _USE_NUMEXPR:
44 _evaluate = _evaluate_standard
45 _where = _where_standard
46 else:
47 _evaluate = _evaluate_numexpr
48 _where = _where_numexpr
49
50
51 def set_numexpr_threads(n=None):
52 # if we are using numexpr, set the threads to n
53 # otherwise reset
54 if _NUMEXPR_INSTALLED and _USE_NUMEXPR:
55 if n is None:
56 n = ne.detect_number_of_cores()
57 ne.set_num_threads(n)
58
59
60 def _evaluate_standard(op, op_str, a, b, raise_on_error=True, **eval_kwargs):
61 """ standard evaluation """
62 if _TEST_MODE:
63 _store_test_result(False)
64 return op(a, b)
65
66
67 def _can_use_numexpr(op, op_str, a, b, dtype_check):
68 """ return a boolean if we WILL be using numexpr """
69 if op_str is not None:
70
71 # required min elements (otherwise we are adding overhead)
72 if np.prod(a.shape) > _MIN_ELEMENTS:
73
74 # check for dtype compatiblity
75 dtypes = set()
76 for o in [a, b]:
77 if hasattr(o, 'get_dtype_counts'):
78 s = o.get_dtype_counts()
79 if len(s) > 1:
80 return False
81 dtypes |= set(s.index)
82 elif isinstance(o, np.ndarray):
83 dtypes |= set([o.dtype.name])
84
85 # allowed are a superset
86 if not len(dtypes) or _ALLOWED_DTYPES[dtype_check] >= dtypes:
87 return True
88
89 return False
90
91
92 def _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, truediv=True,
93 **eval_kwargs):
94 result = None
95
96 if _can_use_numexpr(op, op_str, a, b, 'evaluate'):
97 try:
98 a_value = getattr(a, "values", a)
99 b_value = getattr(b, "values", b)
100 result = ne.evaluate('a_value %s b_value' % op_str,
101 local_dict={'a_value': a_value,
102 'b_value': b_value},
103 casting='safe', truediv=truediv,
104 **eval_kwargs)
105 except ValueError as detail:
106 if 'unknown type object' in str(detail):
107 pass
108 except Exception as detail:
109 if raise_on_error:
110 raise
111
112 if _TEST_MODE:
113 _store_test_result(result is not None)
114
115 if result is None:
116 result = _evaluate_standard(op, op_str, a, b, raise_on_error)
117
118 return result
119
120
121 def _where_standard(cond, a, b, raise_on_error=True):
122 return np.where(_values_from_object(cond), _values_from_object(a),
123 _values_from_object(b))
124
125
126 def _where_numexpr(cond, a, b, raise_on_error=False):
127 result = None
128
129 if _can_use_numexpr(None, 'where', a, b, 'where'):
130
131 try:
132 cond_value = getattr(cond, 'values', cond)
133 a_value = getattr(a, 'values', a)
134 b_value = getattr(b, 'values', b)
135 result = ne.evaluate('where(cond_value, a_value, b_value)',
136 local_dict={'cond_value': cond_value,
137 'a_value': a_value,
138 'b_value': b_value},
139 casting='safe')
140 except ValueError as detail:
141 if 'unknown type object' in str(detail):
142 pass
143 except Exception as detail:
144 if raise_on_error:
145 raise TypeError(str(detail))
146
147 if result is None:
148 result = _where_standard(cond, a, b, raise_on_error)
149
150 return result
151
152
153 # turn myself on
154 set_use_numexpr(True)
155
156
157 def evaluate(op, op_str, a, b, raise_on_error=False, use_numexpr=True,
158 **eval_kwargs):
159 """ evaluate and return the expression of the op on a and b
160
161 Parameters
162 ----------
163
164 op : the actual operand
165 op_str: the string version of the op
166 a : left operand
167 b : right operand
168 raise_on_error : pass the error to the higher level if indicated
169 (default is False), otherwise evaluate the op with and
170 return the results
171 use_numexpr : whether to try to use numexpr (default True)
172 """
173
174 if use_numexpr:
175 return _evaluate(op, op_str, a, b, raise_on_error=raise_on_error,
176 **eval_kwargs)
177 return _evaluate_standard(op, op_str, a, b, raise_on_error=raise_on_error)
178
179
180 def where(cond, a, b, raise_on_error=False, use_numexpr=True):
181 """ evaluate the where condition cond on a and b
182
183 Parameters
184 ----------
185
186 cond : a boolean array
187 a : return if cond is True
188 b : return if cond is False
189 raise_on_error : pass the error to the higher level if indicated
190 (default is False), otherwise evaluate the op with and
191 return the results
192 use_numexpr : whether to try to use numexpr (default True)
193 """
194
195 if use_numexpr:
196 return _where(cond, a, b, raise_on_error=raise_on_error)
197 return _where_standard(cond, a, b, raise_on_error=raise_on_error)
198
199
200 def set_test_mode(v=True):
201 """
202 Keeps track of whether numexpr was used. Stores an additional ``True``
203 for every successful use of evaluate with numexpr since the last
204 ``get_test_result``
205 """
206 global _TEST_MODE, _TEST_RESULT
207 _TEST_MODE = v
208 _TEST_RESULT = []
209
210
211 def _store_test_result(used_numexpr):
212 global _TEST_RESULT
213 if used_numexpr:
214 _TEST_RESULT.append(used_numexpr)
215
216
217 def get_test_result():
218 """get test result and reset test_results"""
219 global _TEST_RESULT
220 res = _TEST_RESULT
221 _TEST_RESULT = []
222 return res
223
[end of pandas/computation/expressions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandas/computation/expressions.py b/pandas/computation/expressions.py
--- a/pandas/computation/expressions.py
+++ b/pandas/computation/expressions.py
@@ -154,6 +154,20 @@
set_use_numexpr(True)
+def _has_bool_dtype(x):
+ try:
+ return x.dtype == bool
+ except AttributeError:
+ return 'bool' in x.blocks
+
+
+def _bool_arith_check(op_str, a, b, not_allowed=frozenset(('+', '*', '-', '/',
+ '//', '**'))):
+ if op_str in not_allowed and _has_bool_dtype(a) and _has_bool_dtype(b):
+ raise NotImplementedError("operator %r not implemented for bool "
+ "dtypes" % op_str)
+
+
def evaluate(op, op_str, a, b, raise_on_error=False, use_numexpr=True,
**eval_kwargs):
""" evaluate and return the expression of the op on a and b
@@ -170,7 +184,7 @@
return the results
use_numexpr : whether to try to use numexpr (default True)
"""
-
+ _bool_arith_check(op_str, a, b)
if use_numexpr:
return _evaluate(op, op_str, a, b, raise_on_error=raise_on_error,
**eval_kwargs)
| {"golden_diff": "diff --git a/pandas/computation/expressions.py b/pandas/computation/expressions.py\n--- a/pandas/computation/expressions.py\n+++ b/pandas/computation/expressions.py\n@@ -154,6 +154,20 @@\n set_use_numexpr(True)\n \n \n+def _has_bool_dtype(x):\n+ try:\n+ return x.dtype == bool\n+ except AttributeError:\n+ return 'bool' in x.blocks\n+\n+\n+def _bool_arith_check(op_str, a, b, not_allowed=frozenset(('+', '*', '-', '/',\n+ '//', '**'))):\n+ if op_str in not_allowed and _has_bool_dtype(a) and _has_bool_dtype(b):\n+ raise NotImplementedError(\"operator %r not implemented for bool \"\n+ \"dtypes\" % op_str)\n+\n+\n def evaluate(op, op_str, a, b, raise_on_error=False, use_numexpr=True,\n **eval_kwargs):\n \"\"\" evaluate and return the expression of the op on a and b\n@@ -170,7 +184,7 @@\n return the results\n use_numexpr : whether to try to use numexpr (default True)\n \"\"\"\n-\n+ _bool_arith_check(op_str, a, b)\n if use_numexpr:\n return _evaluate(op, op_str, a, b, raise_on_error=raise_on_error,\n **eval_kwargs)\n", "issue": "numexpr 2.3.1 error with pandas 0.13.1\nI just installed numexpr 2.3.1 with pandas 0.13.1 and got the following error:\n\n File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\ops.py\", line 496, in wrapper\n arr = na_op(lvalues, rvalues)\n File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\ops.py\", line 443, in na_op\n raise_on_error=True, *_eval_kwargs)\n File \"C:\\Python27\\lib\\site-packages\\pandas\\computation\\expressions.py\", line 176, in evaluate\n *_eval_kwargs)\n File \"C:\\Python27\\lib\\site-packages\\pandas\\computation\\expressions.py\", line 104, in _evaluate_numexpr\n *_eval_kwargs)\n File \"C:\\Python27\\lib\\site-packages\\numexpr\\necompiler.py\", line 738, in evaluate\n NumExpr(ex, signature, *_context)\n File \"C:\\Python27\\lib\\site-packages\\numexpr\\necompiler.py\", line 554, in NumExpr\n precompile(ex, signature, context)\n File \"C:\\Python27\\lib\\site-packages\\numexpr\\necompiler.py\", line 498, in precompile\n ast = typeCompileAst(ast)\n File \"C:\\Python27\\lib\\site-packages\\numexpr\\necompiler.py\", line 163, in typeCompileAst\n % (ast.value + '_' + retsig+basesig))\nNotImplementedError: couldn't find matching opcode for 'mul_bbb'\n\n", "before_files": [{"content": "\"\"\"\nExpressions\n-----------\n\nOffer fast expression evaluation through numexpr\n\n\"\"\"\n\nimport numpy as np\nfrom pandas.core.common import _values_from_object\nfrom distutils.version import LooseVersion\n\ntry:\n import numexpr as ne\n _NUMEXPR_INSTALLED = ne.__version__ >= LooseVersion('2.0')\nexcept ImportError: # pragma: no cover\n _NUMEXPR_INSTALLED = False\n\n_TEST_MODE = None\n_TEST_RESULT = None\n_USE_NUMEXPR = _NUMEXPR_INSTALLED\n_evaluate = None\n_where = None\n\n# the set of dtypes that we will allow pass to numexpr\n_ALLOWED_DTYPES = {\n 'evaluate': set(['int64', 'int32', 'float64', 'float32', 'bool']),\n 'where': set(['int64', 'float64', 'bool'])\n}\n\n# the minimum prod shape that we will use numexpr\n_MIN_ELEMENTS = 10000\n\n\ndef set_use_numexpr(v=True):\n # set/unset to use numexpr\n global _USE_NUMEXPR\n if _NUMEXPR_INSTALLED:\n _USE_NUMEXPR = v\n\n # choose what we are going to do\n global _evaluate, _where\n if not _USE_NUMEXPR:\n _evaluate = _evaluate_standard\n _where = _where_standard\n else:\n _evaluate = _evaluate_numexpr\n _where = _where_numexpr\n\n\ndef set_numexpr_threads(n=None):\n # if we are using numexpr, set the threads to n\n # otherwise reset\n if _NUMEXPR_INSTALLED and _USE_NUMEXPR:\n if n is None:\n n = 
ne.detect_number_of_cores()\n ne.set_num_threads(n)\n\n\ndef _evaluate_standard(op, op_str, a, b, raise_on_error=True, **eval_kwargs):\n \"\"\" standard evaluation \"\"\"\n if _TEST_MODE:\n _store_test_result(False)\n return op(a, b)\n\n\ndef _can_use_numexpr(op, op_str, a, b, dtype_check):\n \"\"\" return a boolean if we WILL be using numexpr \"\"\"\n if op_str is not None:\n\n # required min elements (otherwise we are adding overhead)\n if np.prod(a.shape) > _MIN_ELEMENTS:\n\n # check for dtype compatiblity\n dtypes = set()\n for o in [a, b]:\n if hasattr(o, 'get_dtype_counts'):\n s = o.get_dtype_counts()\n if len(s) > 1:\n return False\n dtypes |= set(s.index)\n elif isinstance(o, np.ndarray):\n dtypes |= set([o.dtype.name])\n\n # allowed are a superset\n if not len(dtypes) or _ALLOWED_DTYPES[dtype_check] >= dtypes:\n return True\n\n return False\n\n\ndef _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, truediv=True,\n **eval_kwargs):\n result = None\n\n if _can_use_numexpr(op, op_str, a, b, 'evaluate'):\n try:\n a_value = getattr(a, \"values\", a)\n b_value = getattr(b, \"values\", b)\n result = ne.evaluate('a_value %s b_value' % op_str,\n local_dict={'a_value': a_value,\n 'b_value': b_value},\n casting='safe', truediv=truediv,\n **eval_kwargs)\n except ValueError as detail:\n if 'unknown type object' in str(detail):\n pass\n except Exception as detail:\n if raise_on_error:\n raise\n\n if _TEST_MODE:\n _store_test_result(result is not None)\n\n if result is None:\n result = _evaluate_standard(op, op_str, a, b, raise_on_error)\n\n return result\n\n\ndef _where_standard(cond, a, b, raise_on_error=True):\n return np.where(_values_from_object(cond), _values_from_object(a),\n _values_from_object(b))\n\n\ndef _where_numexpr(cond, a, b, raise_on_error=False):\n result = None\n\n if _can_use_numexpr(None, 'where', a, b, 'where'):\n\n try:\n cond_value = getattr(cond, 'values', cond)\n a_value = getattr(a, 'values', a)\n b_value = getattr(b, 'values', b)\n result = ne.evaluate('where(cond_value, a_value, b_value)',\n local_dict={'cond_value': cond_value,\n 'a_value': a_value,\n 'b_value': b_value},\n casting='safe')\n except ValueError as detail:\n if 'unknown type object' in str(detail):\n pass\n except Exception as detail:\n if raise_on_error:\n raise TypeError(str(detail))\n\n if result is None:\n result = _where_standard(cond, a, b, raise_on_error)\n\n return result\n\n\n# turn myself on\nset_use_numexpr(True)\n\n\ndef evaluate(op, op_str, a, b, raise_on_error=False, use_numexpr=True,\n **eval_kwargs):\n \"\"\" evaluate and return the expression of the op on a and b\n\n Parameters\n ----------\n\n op : the actual operand\n op_str: the string version of the op\n a : left operand\n b : right operand\n raise_on_error : pass the error to the higher level if indicated\n (default is False), otherwise evaluate the op with and\n return the results\n use_numexpr : whether to try to use numexpr (default True)\n \"\"\"\n\n if use_numexpr:\n return _evaluate(op, op_str, a, b, raise_on_error=raise_on_error,\n **eval_kwargs)\n return _evaluate_standard(op, op_str, a, b, raise_on_error=raise_on_error)\n\n\ndef where(cond, a, b, raise_on_error=False, use_numexpr=True):\n \"\"\" evaluate the where condition cond on a and b\n\n Parameters\n ----------\n\n cond : a boolean array\n a : return if cond is True\n b : return if cond is False\n raise_on_error : pass the error to the higher level if indicated\n (default is False), otherwise evaluate the op with and\n return the results\n use_numexpr : 
whether to try to use numexpr (default True)\n \"\"\"\n\n if use_numexpr:\n return _where(cond, a, b, raise_on_error=raise_on_error)\n return _where_standard(cond, a, b, raise_on_error=raise_on_error)\n\n\ndef set_test_mode(v=True):\n \"\"\"\n Keeps track of whether numexpr was used. Stores an additional ``True``\n for every successful use of evaluate with numexpr since the last\n ``get_test_result``\n \"\"\"\n global _TEST_MODE, _TEST_RESULT\n _TEST_MODE = v\n _TEST_RESULT = []\n\n\ndef _store_test_result(used_numexpr):\n global _TEST_RESULT\n if used_numexpr:\n _TEST_RESULT.append(used_numexpr)\n\n\ndef get_test_result():\n \"\"\"get test result and reset test_results\"\"\"\n global _TEST_RESULT\n res = _TEST_RESULT\n _TEST_RESULT = []\n return res\n", "path": "pandas/computation/expressions.py"}]} | 3,078 | 307 |
gh_patches_debug_770 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1653 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider renaming Resource.create_empty() to Resource.get_empty()
Especially given the fact that a cached instance is returned, i.e. no actual creation happens.
</issue>
<code>
[start of opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This package implements `OpenTelemetry Resources
17 <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk>`_:
18
19 *A Resource is an immutable representation of the entity producing
20 telemetry. For example, a process producing telemetry that is running in
21 a container on Kubernetes has a Pod name, it is in a namespace and
22 possibly is part of a Deployment which also has a name. All three of
23 these attributes can be included in the Resource.*
24
25 Resource objects are created with `Resource.create`, which accepts attributes
26 (key-values). Resource attributes can also be passed at process invocation in
27 the :envvar:`OTEL_RESOURCE_ATTRIBUTES` environment variable. You should
28 register your resource with the `opentelemetry.sdk.trace.TracerProvider` by
29 passing them into their constructors. The `Resource` passed to a provider is
30 available to the exporter, which can send on this information as it sees fit.
31
32 .. code-block:: python
33
34 trace.set_tracer_provider(
35 TracerProvider(
36 resource=Resource.create({
37 "service.name": "shoppingcart",
38 "service.instance.id": "instance-12",
39 }),
40 ),
41 )
42 print(trace.get_tracer_provider().resource.attributes)
43
44 {'telemetry.sdk.language': 'python',
45 'telemetry.sdk.name': 'opentelemetry',
46 'telemetry.sdk.version': '0.13.dev0',
47 'service.name': 'shoppingcart',
48 'service.instance.id': 'instance-12'}
49
50 Note that the OpenTelemetry project documents certain `"standard attributes"
51 <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md>`_
52 that have prescribed semantic meanings, for example ``service.name`` in the
53 above example.
54
55 .. envvar:: OTEL_RESOURCE_ATTRIBUTES
56
57 The :envvar:`OTEL_RESOURCE_ATTRIBUTES` environment variable allows resource
58 attributes to be passed to the SDK at process invocation. The attributes from
59 :envvar:`OTEL_RESOURCE_ATTRIBUTES` are merged with those passed to
60 `Resource.create`, meaning :envvar:`OTEL_RESOURCE_ATTRIBUTES` takes *lower*
61 priority. Attributes should be in the format ``key1=value1,key2=value2``.
62 Additional details are available `in the specification
63 <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#specifying-resource-information-via-an-environment-variable>`_.
64
65 .. code-block:: console
66
67 $ OTEL_RESOURCE_ATTRIBUTES="service.name=shoppingcard,will_be_overridden=foo" python - <<EOF
68 import pprint
69 from opentelemetry.sdk.resources import Resource
70 pprint.pprint(Resource.create({"will_be_overridden": "bar"}).attributes)
71 EOF
72 {'service.name': 'shoppingcard',
73 'telemetry.sdk.language': 'python',
74 'telemetry.sdk.name': 'opentelemetry',
75 'telemetry.sdk.version': '0.13.dev0',
76 'will_be_overridden': 'bar'}
77 """
78
79 import abc
80 import concurrent.futures
81 import logging
82 import os
83 import typing
84 from json import dumps
85
86 import pkg_resources
87
88 from opentelemetry.sdk.environment_variables import OTEL_RESOURCE_ATTRIBUTES
89
90 LabelValue = typing.Union[str, bool, int, float]
91 Attributes = typing.Dict[str, LabelValue]
92 logger = logging.getLogger(__name__)
93
94
95 CLOUD_PROVIDER = "cloud.provider"
96 CLOUD_ACCOUNT_ID = "cloud.account.id"
97 CLOUD_REGION = "cloud.region"
98 CLOUD_ZONE = "cloud.zone"
99 CONTAINER_NAME = "container.name"
100 CONTAINER_ID = "container.id"
101 CONTAINER_IMAGE_NAME = "container.image.name"
102 CONTAINER_IMAGE_TAG = "container.image.tag"
103 DEPLOYMENT_ENVIRONMENT = "deployment.environment"
104 FAAS_NAME = "faas.name"
105 FAAS_ID = "faas.id"
106 FAAS_VERSION = "faas.version"
107 FAAS_INSTANCE = "faas.instance"
108 HOST_NAME = "host.name"
109 HOST_TYPE = "host.type"
110 HOST_IMAGE_NAME = "host.image.name"
111 HOST_IMAGE_ID = "host.image.id"
112 HOST_IMAGE_VERSION = "host.image.version"
113 KUBERNETES_CLUSTER_NAME = "k8s.cluster.name"
114 KUBERNETES_NAMESPACE_NAME = "k8s.namespace.name"
115 KUBERNETES_POD_UID = "k8s.pod.uid"
116 KUBERNETES_POD_NAME = "k8s.pod.name"
117 KUBERNETES_CONTAINER_NAME = "k8s.container.name"
118 KUBERNETES_REPLICA_SET_UID = "k8s.replicaset.uid"
119 KUBERNETES_REPLICA_SET_NAME = "k8s.replicaset.name"
120 KUBERNETES_DEPLOYMENT_UID = "k8s.deployment.uid"
121 KUBERNETES_DEPLOYMENT_NAME = "k8s.deployment.name"
122 KUBERNETES_STATEFUL_SET_UID = "k8s.statefulset.uid"
123 KUBERNETES_STATEFUL_SET_NAME = "k8s.statefulset.name"
124 KUBERNETES_DAEMON_SET_UID = "k8s.daemonset.uid"
125 KUBERNETES_DAEMON_SET_NAME = "k8s.daemonset.name"
126 KUBERNETES_JOB_UID = "k8s.job.uid"
127 KUBERNETES_JOB_NAME = "k8s.job.name"
128 KUBERNETES_CRON_JOB_UID = "k8s.cronjob.uid"
129 KUBERNETES_CRON_JOB_NAME = "k8s.cronjob.name"
130 OS_TYPE = "os.type"
131 OS_DESCRIPTION = "os.description"
132 PROCESS_PID = "process.pid"
133 PROCESS_EXECUTABLE_NAME = "process.executable.name"
134 PROCESS_EXECUTABLE_PATH = "process.executable.path"
135 PROCESS_COMMAND = "process.command"
136 PROCESS_COMMAND_LINE = "process.command_line"
137 PROCESS_COMMAND_ARGS = "process.command_args"
138 PROCESS_OWNER = "process.owner"
139 PROCESS_RUNTIME_NAME = "process.runtime.name"
140 PROCESS_RUNTIME_VERSION = "process.runtime.version"
141 PROCESS_RUNTIME_DESCRIPTION = "process.runtime.description"
142 SERVICE_NAME = "service.name"
143 SERVICE_NAMESPACE = "service.namespace"
144 SERVICE_INSTANCE_ID = "service.instance.id"
145 SERVICE_VERSION = "service.version"
146 TELEMETRY_SDK_NAME = "telemetry.sdk.name"
147 TELEMETRY_SDK_VERSION = "telemetry.sdk.version"
148 TELEMETRY_AUTO_VERSION = "telemetry.auto.version"
149 TELEMETRY_SDK_LANGUAGE = "telemetry.sdk.language"
150
151
152 OPENTELEMETRY_SDK_VERSION = pkg_resources.get_distribution(
153 "opentelemetry-sdk"
154 ).version
155
156
157 class Resource:
158 """A Resource is an immutable representation of the entity producing telemetry as Attributes."""
159
160 def __init__(self, attributes: Attributes):
161 self._attributes = attributes.copy()
162
163 @staticmethod
164 def create(attributes: typing.Optional[Attributes] = None) -> "Resource":
165 """Creates a new `Resource` from attributes.
166
167 Args:
168 attributes: Optional zero or more key-value pairs.
169
170 Returns:
171 The newly-created Resource.
172 """
173 if not attributes:
174 attributes = {}
175 resource = _DEFAULT_RESOURCE.merge(
176 OTELResourceDetector().detect()
177 ).merge(Resource(attributes))
178 if not resource.attributes.get(SERVICE_NAME, None):
179 default_service_name = "unknown_service"
180 process_executable_name = resource.attributes.get(
181 PROCESS_EXECUTABLE_NAME, None
182 )
183 if process_executable_name:
184 default_service_name += ":" + process_executable_name
185 resource = resource.merge(
186 Resource({SERVICE_NAME: default_service_name})
187 )
188 return resource
189
190 @staticmethod
191 def create_empty() -> "Resource":
192 return _EMPTY_RESOURCE
193
194 @property
195 def attributes(self) -> Attributes:
196 return self._attributes.copy()
197
198 def merge(self, other: "Resource") -> "Resource":
199 """Merges this resource and an updating resource into a new `Resource`.
200
201 If a key exists on both the old and updating resource, the value of the
202 updating resource will override the old resource value.
203
204 Args:
205 other: The other resource to be merged.
206
207 Returns:
208 The newly-created Resource.
209 """
210 merged_attributes = self.attributes
211 merged_attributes.update(other.attributes)
212 return Resource(merged_attributes)
213
214 def __eq__(self, other: object) -> bool:
215 if not isinstance(other, Resource):
216 return False
217 return self._attributes == other._attributes
218
219 def __hash__(self):
220 return hash(dumps(self._attributes, sort_keys=True))
221
222
223 _EMPTY_RESOURCE = Resource({})
224 _DEFAULT_RESOURCE = Resource(
225 {
226 TELEMETRY_SDK_LANGUAGE: "python",
227 TELEMETRY_SDK_NAME: "opentelemetry",
228 TELEMETRY_SDK_VERSION: OPENTELEMETRY_SDK_VERSION,
229 }
230 )
231
232
233 class ResourceDetector(abc.ABC):
234 def __init__(self, raise_on_error=False):
235 self.raise_on_error = raise_on_error
236
237 @abc.abstractmethod
238 def detect(self) -> "Resource":
239 raise NotImplementedError()
240
241
242 class OTELResourceDetector(ResourceDetector):
243 # pylint: disable=no-self-use
244 def detect(self) -> "Resource":
245 env_resources_items = os.environ.get(OTEL_RESOURCE_ATTRIBUTES)
246 env_resource_map = {}
247 if env_resources_items:
248 env_resource_map = {
249 key.strip(): value.strip()
250 for key, value in (
251 item.split("=") for item in env_resources_items.split(",")
252 )
253 }
254 return Resource(env_resource_map)
255
256
257 def get_aggregated_resources(
258 detectors: typing.List["ResourceDetector"],
259 initial_resource: typing.Optional[Resource] = None,
260 timeout=5,
261 ) -> "Resource":
262 """Retrieves resources from detectors in the order that they were passed
263
264 :param detectors: List of resources in order of priority
265 :param initial_resource: Static resource. This has highest priority
266 :param timeout: Number of seconds to wait for each detector to return
267 :return:
268 """
269 final_resource = initial_resource or _EMPTY_RESOURCE
270 detectors = [OTELResourceDetector()] + detectors
271
272 with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
273 futures = [executor.submit(detector.detect) for detector in detectors]
274 for detector_ind, future in enumerate(futures):
275 detector = detectors[detector_ind]
276 try:
277 detected_resources = future.result(timeout=timeout)
278 # pylint: disable=broad-except
279 except Exception as ex:
280 if detector.raise_on_error:
281 raise ex
282 logger.warning(
283 "Exception %s in detector %s, ignoring", ex, detector
284 )
285 detected_resources = _EMPTY_RESOURCE
286 finally:
287 final_resource = final_resource.merge(detected_resources)
288 return final_resource
289
[end of opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py
@@ -188,7 +188,7 @@
return resource
@staticmethod
- def create_empty() -> "Resource":
+ def get_empty() -> "Resource":
return _EMPTY_RESOURCE
@property
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py\n@@ -188,7 +188,7 @@\n return resource\n \n @staticmethod\n- def create_empty() -> \"Resource\":\n+ def get_empty() -> \"Resource\":\n return _EMPTY_RESOURCE\n \n @property\n", "issue": "Consider renaming Resource.create_empty() to Resource.get_empty()\nSpecially given the fact a cached instance is returned, i.e. no actual creation happens.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis package implements `OpenTelemetry Resources\n<https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk>`_:\n\n *A Resource is an immutable representation of the entity producing\n telemetry. For example, a process producing telemetry that is running in\n a container on Kubernetes has a Pod name, it is in a namespace and\n possibly is part of a Deployment which also has a name. All three of\n these attributes can be included in the Resource.*\n\nResource objects are created with `Resource.create`, which accepts attributes\n(key-values). Resource attributes can also be passed at process invocation in\nthe :envvar:`OTEL_RESOURCE_ATTRIBUTES` environment variable. You should\nregister your resource with the `opentelemetry.sdk.trace.TracerProvider` by\npassing them into their constructors. The `Resource` passed to a provider is\navailable to the exporter, which can send on this information as it sees fit.\n\n.. code-block:: python\n\n trace.set_tracer_provider(\n TracerProvider(\n resource=Resource.create({\n \"service.name\": \"shoppingcart\",\n \"service.instance.id\": \"instance-12\",\n }),\n ),\n )\n print(trace.get_tracer_provider().resource.attributes)\n\n {'telemetry.sdk.language': 'python',\n 'telemetry.sdk.name': 'opentelemetry',\n 'telemetry.sdk.version': '0.13.dev0',\n 'service.name': 'shoppingcart',\n 'service.instance.id': 'instance-12'}\n\nNote that the OpenTelemetry project documents certain `\"standard attributes\"\n<https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md>`_\nthat have prescribed semantic meanings, for example ``service.name`` in the\nabove example.\n\n.. envvar:: OTEL_RESOURCE_ATTRIBUTES\n\nThe :envvar:`OTEL_RESOURCE_ATTRIBUTES` environment variable allows resource\nattributes to be passed to the SDK at process invocation. The attributes from\n:envvar:`OTEL_RESOURCE_ATTRIBUTES` are merged with those passed to\n`Resource.create`, meaning :envvar:`OTEL_RESOURCE_ATTRIBUTES` takes *lower*\npriority. 
Attributes should be in the format ``key1=value1,key2=value2``.\nAdditional details are available `in the specification\n<https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#specifying-resource-information-via-an-environment-variable>`_.\n\n.. code-block:: console\n\n $ OTEL_RESOURCE_ATTRIBUTES=\"service.name=shoppingcard,will_be_overridden=foo\" python - <<EOF\n import pprint\n from opentelemetry.sdk.resources import Resource\n pprint.pprint(Resource.create({\"will_be_overridden\": \"bar\"}).attributes)\n EOF\n {'service.name': 'shoppingcard',\n 'telemetry.sdk.language': 'python',\n 'telemetry.sdk.name': 'opentelemetry',\n 'telemetry.sdk.version': '0.13.dev0',\n 'will_be_overridden': 'bar'}\n \"\"\"\n\nimport abc\nimport concurrent.futures\nimport logging\nimport os\nimport typing\nfrom json import dumps\n\nimport pkg_resources\n\nfrom opentelemetry.sdk.environment_variables import OTEL_RESOURCE_ATTRIBUTES\n\nLabelValue = typing.Union[str, bool, int, float]\nAttributes = typing.Dict[str, LabelValue]\nlogger = logging.getLogger(__name__)\n\n\nCLOUD_PROVIDER = \"cloud.provider\"\nCLOUD_ACCOUNT_ID = \"cloud.account.id\"\nCLOUD_REGION = \"cloud.region\"\nCLOUD_ZONE = \"cloud.zone\"\nCONTAINER_NAME = \"container.name\"\nCONTAINER_ID = \"container.id\"\nCONTAINER_IMAGE_NAME = \"container.image.name\"\nCONTAINER_IMAGE_TAG = \"container.image.tag\"\nDEPLOYMENT_ENVIRONMENT = \"deployment.environment\"\nFAAS_NAME = \"faas.name\"\nFAAS_ID = \"faas.id\"\nFAAS_VERSION = \"faas.version\"\nFAAS_INSTANCE = \"faas.instance\"\nHOST_NAME = \"host.name\"\nHOST_TYPE = \"host.type\"\nHOST_IMAGE_NAME = \"host.image.name\"\nHOST_IMAGE_ID = \"host.image.id\"\nHOST_IMAGE_VERSION = \"host.image.version\"\nKUBERNETES_CLUSTER_NAME = \"k8s.cluster.name\"\nKUBERNETES_NAMESPACE_NAME = \"k8s.namespace.name\"\nKUBERNETES_POD_UID = \"k8s.pod.uid\"\nKUBERNETES_POD_NAME = \"k8s.pod.name\"\nKUBERNETES_CONTAINER_NAME = \"k8s.container.name\"\nKUBERNETES_REPLICA_SET_UID = \"k8s.replicaset.uid\"\nKUBERNETES_REPLICA_SET_NAME = \"k8s.replicaset.name\"\nKUBERNETES_DEPLOYMENT_UID = \"k8s.deployment.uid\"\nKUBERNETES_DEPLOYMENT_NAME = \"k8s.deployment.name\"\nKUBERNETES_STATEFUL_SET_UID = \"k8s.statefulset.uid\"\nKUBERNETES_STATEFUL_SET_NAME = \"k8s.statefulset.name\"\nKUBERNETES_DAEMON_SET_UID = \"k8s.daemonset.uid\"\nKUBERNETES_DAEMON_SET_NAME = \"k8s.daemonset.name\"\nKUBERNETES_JOB_UID = \"k8s.job.uid\"\nKUBERNETES_JOB_NAME = \"k8s.job.name\"\nKUBERNETES_CRON_JOB_UID = \"k8s.cronjob.uid\"\nKUBERNETES_CRON_JOB_NAME = \"k8s.cronjob.name\"\nOS_TYPE = \"os.type\"\nOS_DESCRIPTION = \"os.description\"\nPROCESS_PID = \"process.pid\"\nPROCESS_EXECUTABLE_NAME = \"process.executable.name\"\nPROCESS_EXECUTABLE_PATH = \"process.executable.path\"\nPROCESS_COMMAND = \"process.command\"\nPROCESS_COMMAND_LINE = \"process.command_line\"\nPROCESS_COMMAND_ARGS = \"process.command_args\"\nPROCESS_OWNER = \"process.owner\"\nPROCESS_RUNTIME_NAME = \"process.runtime.name\"\nPROCESS_RUNTIME_VERSION = \"process.runtime.version\"\nPROCESS_RUNTIME_DESCRIPTION = \"process.runtime.description\"\nSERVICE_NAME = \"service.name\"\nSERVICE_NAMESPACE = \"service.namespace\"\nSERVICE_INSTANCE_ID = \"service.instance.id\"\nSERVICE_VERSION = \"service.version\"\nTELEMETRY_SDK_NAME = \"telemetry.sdk.name\"\nTELEMETRY_SDK_VERSION = \"telemetry.sdk.version\"\nTELEMETRY_AUTO_VERSION = \"telemetry.auto.version\"\nTELEMETRY_SDK_LANGUAGE = \"telemetry.sdk.language\"\n\n\nOPENTELEMETRY_SDK_VERSION = 
pkg_resources.get_distribution(\n \"opentelemetry-sdk\"\n).version\n\n\nclass Resource:\n \"\"\"A Resource is an immutable representation of the entity producing telemetry as Attributes.\"\"\"\n\n def __init__(self, attributes: Attributes):\n self._attributes = attributes.copy()\n\n @staticmethod\n def create(attributes: typing.Optional[Attributes] = None) -> \"Resource\":\n \"\"\"Creates a new `Resource` from attributes.\n\n Args:\n attributes: Optional zero or more key-value pairs.\n\n Returns:\n The newly-created Resource.\n \"\"\"\n if not attributes:\n attributes = {}\n resource = _DEFAULT_RESOURCE.merge(\n OTELResourceDetector().detect()\n ).merge(Resource(attributes))\n if not resource.attributes.get(SERVICE_NAME, None):\n default_service_name = \"unknown_service\"\n process_executable_name = resource.attributes.get(\n PROCESS_EXECUTABLE_NAME, None\n )\n if process_executable_name:\n default_service_name += \":\" + process_executable_name\n resource = resource.merge(\n Resource({SERVICE_NAME: default_service_name})\n )\n return resource\n\n @staticmethod\n def create_empty() -> \"Resource\":\n return _EMPTY_RESOURCE\n\n @property\n def attributes(self) -> Attributes:\n return self._attributes.copy()\n\n def merge(self, other: \"Resource\") -> \"Resource\":\n \"\"\"Merges this resource and an updating resource into a new `Resource`.\n\n If a key exists on both the old and updating resource, the value of the\n updating resource will override the old resource value.\n\n Args:\n other: The other resource to be merged.\n\n Returns:\n The newly-created Resource.\n \"\"\"\n merged_attributes = self.attributes\n merged_attributes.update(other.attributes)\n return Resource(merged_attributes)\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, Resource):\n return False\n return self._attributes == other._attributes\n\n def __hash__(self):\n return hash(dumps(self._attributes, sort_keys=True))\n\n\n_EMPTY_RESOURCE = Resource({})\n_DEFAULT_RESOURCE = Resource(\n {\n TELEMETRY_SDK_LANGUAGE: \"python\",\n TELEMETRY_SDK_NAME: \"opentelemetry\",\n TELEMETRY_SDK_VERSION: OPENTELEMETRY_SDK_VERSION,\n }\n)\n\n\nclass ResourceDetector(abc.ABC):\n def __init__(self, raise_on_error=False):\n self.raise_on_error = raise_on_error\n\n @abc.abstractmethod\n def detect(self) -> \"Resource\":\n raise NotImplementedError()\n\n\nclass OTELResourceDetector(ResourceDetector):\n # pylint: disable=no-self-use\n def detect(self) -> \"Resource\":\n env_resources_items = os.environ.get(OTEL_RESOURCE_ATTRIBUTES)\n env_resource_map = {}\n if env_resources_items:\n env_resource_map = {\n key.strip(): value.strip()\n for key, value in (\n item.split(\"=\") for item in env_resources_items.split(\",\")\n )\n }\n return Resource(env_resource_map)\n\n\ndef get_aggregated_resources(\n detectors: typing.List[\"ResourceDetector\"],\n initial_resource: typing.Optional[Resource] = None,\n timeout=5,\n) -> \"Resource\":\n \"\"\"Retrieves resources from detectors in the order that they were passed\n\n :param detectors: List of resources in order of priority\n :param initial_resource: Static resource. 
This has highest priority\n :param timeout: Number of seconds to wait for each detector to return\n :return:\n \"\"\"\n final_resource = initial_resource or _EMPTY_RESOURCE\n detectors = [OTELResourceDetector()] + detectors\n\n with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:\n futures = [executor.submit(detector.detect) for detector in detectors]\n for detector_ind, future in enumerate(futures):\n detector = detectors[detector_ind]\n try:\n detected_resources = future.result(timeout=timeout)\n # pylint: disable=broad-except\n except Exception as ex:\n if detector.raise_on_error:\n raise ex\n logger.warning(\n \"Exception %s in detector %s, ignoring\", ex, detector\n )\n detected_resources = _EMPTY_RESOURCE\n finally:\n final_resource = final_resource.merge(detected_resources)\n return final_resource\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py"}]} | 3,753 | 120 |
gh_patches_debug_11990 | rasdani/github-patches | git_diff | kivy__python-for-android-1513 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Didn't find any valid dependency graphs. - Flask and websocket-client
In my app I use both flask and websocket-client. However, when I try to add both of these dependencies to my app, p4a fails. When I build my app with only `flask`, or with only `websocket-client`, p4a works correctly.
```
p4a apk --private /home/user/sample/ --package=samplepackage --name="Sample app" --version 0.1 --bootstrap=sdl2 --requirements=python2,websocket-client,flask
[ERROR]: Didn't find any valid dependency graphs.
[ERROR]: This means that some of your requirements pull in conflicting dependencies.
[ERROR]: Exiting.
```
</issue>
<code>
[start of pythonforandroid/recipes/websocket-client/__init__.py]
1 from pythonforandroid.toolchain import Recipe
2
3 # if android app crashes on start with "ImportError: No module named websocket"
4 #
5 # copy the 'websocket' directory into your app directory to force inclusion.
6 #
7 # see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe
8 #
9 # If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname
10 # in the buildozer requirements, since Kivy targets python 2.7.x
11 #
12 # You may also need sslopt={"cert_reqs": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to
13 # host verification
14
15
16 class WebSocketClient(Recipe):
17
18 url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'
19
20 version = '0.40.0'
21 # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'
22
23 # patches = ['websocket.patch'] # Paths relative to the recipe dir
24
25 depends = ['kivy', 'python2', 'android', 'pyjnius',
26 'cryptography', 'pyasn1', 'pyopenssl']
27
28
29 recipe = WebSocketClient()
30
[end of pythonforandroid/recipes/websocket-client/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pythonforandroid/recipes/websocket-client/__init__.py b/pythonforandroid/recipes/websocket-client/__init__.py
--- a/pythonforandroid/recipes/websocket-client/__init__.py
+++ b/pythonforandroid/recipes/websocket-client/__init__.py
@@ -15,15 +15,13 @@
class WebSocketClient(Recipe):
- url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'
+ url = 'https://github.com/websocket-client/websocket-client/archive/v{version}.tar.gz'
version = '0.40.0'
- # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'
# patches = ['websocket.patch'] # Paths relative to the recipe dir
- depends = ['kivy', 'python2', 'android', 'pyjnius',
- 'cryptography', 'pyasn1', 'pyopenssl']
+ depends = ['python2', 'android', 'pyjnius', 'cryptography', 'pyasn1', 'pyopenssl']
recipe = WebSocketClient()
| {"golden_diff": "diff --git a/pythonforandroid/recipes/websocket-client/__init__.py b/pythonforandroid/recipes/websocket-client/__init__.py\n--- a/pythonforandroid/recipes/websocket-client/__init__.py\n+++ b/pythonforandroid/recipes/websocket-client/__init__.py\n@@ -15,15 +15,13 @@\n \n class WebSocketClient(Recipe):\n \n- url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'\n+ url = 'https://github.com/websocket-client/websocket-client/archive/v{version}.tar.gz'\n \n version = '0.40.0'\n- # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'\n \n # patches = ['websocket.patch'] # Paths relative to the recipe dir\n \n- depends = ['kivy', 'python2', 'android', 'pyjnius',\n- 'cryptography', 'pyasn1', 'pyopenssl']\n+ depends = ['python2', 'android', 'pyjnius', 'cryptography', 'pyasn1', 'pyopenssl']\n \n \n recipe = WebSocketClient()\n", "issue": "Didn't find any valid dependency graphs. - Flask and websocket-client\nIn my app I use both flask and websocket-client. However, when i try to add both of these dependencies to my app, p4a fails. However, when I build my app only with `flask`, or only with `websocket-client` p4a works correctly.\r\n```\r\np4a apk --private /home/user/sample/ --package=samplepackage --name=\"Sample app\" --version 0.1 --bootstrap=sdl2 --requirements=python2,websocket-client,flask\r\n[ERROR]: Didn't find any valid dependency graphs.\r\n[ERROR]: This means that some of your requirements pull in conflicting dependencies.\r\n[ERROR]: Exiting.```\r\n\r\n\n", "before_files": [{"content": "from pythonforandroid.toolchain import Recipe\n\n# if android app crashes on start with \"ImportError: No module named websocket\"\n#\n# copy the 'websocket' directory into your app directory to force inclusion.\n#\n# see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe\n#\n# If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname\n# in the buildozer requirements, since Kivy targets python 2.7.x\n#\n# You may also need sslopt={\"cert_reqs\": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to\n# host verification\n\n\nclass WebSocketClient(Recipe):\n\n url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'\n\n version = '0.40.0'\n # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'\n\n # patches = ['websocket.patch'] # Paths relative to the recipe dir\n\n depends = ['kivy', 'python2', 'android', 'pyjnius',\n 'cryptography', 'pyasn1', 'pyopenssl']\n\n\nrecipe = WebSocketClient()\n", "path": "pythonforandroid/recipes/websocket-client/__init__.py"}]} | 1,056 | 275 |
gh_patches_debug_5446 | rasdani/github-patches | git_diff | xonsh__xonsh-3964 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'Window' object has no attribute 'children'
<!--- Provide a general summary of the issue in the Title above -->
<!--- If you have a question along the lines of "How do I do this Bash command in xonsh"
please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html
If you don't find an answer there, please do open an issue! -->
## xonfig
<details>
```
+------------------+----------------------+
| xonsh | 0.9.24 |
| Git SHA | 74543ea9 |
| Commit Date | Oct 10 15:12:47 2020 |
| Python | 3.8.6 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.3 |
| shell type | prompt_toolkit |
| pygments | 2.7.2 |
| on posix | True |
| on linux | True |
| distro | manjaro |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| on jupyter | False |
| jupyter kernel | None |
| xontrib 1 | abbrevs |
| xontrib 2 | argcomplete |
| xontrib 3 | autovox |
| xontrib 4 | back2dir |
| xontrib 5 | cmd_done |
| xontrib 6 | hist_navigator |
| xontrib 7 | jedi |
| xontrib 8 | kitty |
| xontrib 9 | pdb |
| xontrib 10 | prompt_ret_code |
| xontrib 11 | vox |
| xontrib 12 | voxapi |
+------------------+----------------------+
```
</details>
## Expected Behavior
The interactive shell runs without any errors.
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error
To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`.
On Linux and OSX, an easy way to do this is to run `env XONSH_DEBUG=1 xonsh` -->
I get the above error randomly when `$UPDATE_COMPLETIONS_ON_KEYPRESS = True`
### Traceback (if applicable)
<details>
```
2020-11-08 22:06:05.995 | INFO | xonsh.ptk_shell.completer:reserve_space:118 - 8735829483909
2020-11-08 22:06:06.000 | ERROR | xonsh.ptk_shell.completer:reserve_space:126 - 'Window' object has no attribute 'children'
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
│ └ <function Thread._bootstrap_inner at 0x7f1f895e01f0>
└ <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
│ └ <function Thread.run at 0x7f1f895dfee0>
└ <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
│ │ │ │ │ └ {}
│ │ │ │ └ <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>
│ │ │ └ (<weakref at 0x7f1f803ed270; to 'ThreadPoolExecutor' at 0x7f1f81e857c0>, <_queue.SimpleQueue object at 0x7f1f82a8b2c0>, None,...
│ │ └ <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>
│ └ <function _worker at 0x7f1f81edb670>
└ <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 80, in _worker
work_item.run()
│ └ <function _WorkItem.run at 0x7f1f81edb790>
└ <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
│ │ │ │ │ └ {}
│ │ │ │ └ <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>
│ │ │ └ [<function generator_to_async_generator.<locals>.runner at 0x7f1f81f13e50>]
│ │ └ <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>
│ └ <built-in method run of Context object at 0x7f1f8039c9c0>
└ <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>
File "/home/noor/.config/xonsh/.venv/lib/python3.8/site-packages/prompt_toolkit/eventloop/async_generator.py", line 43, in runner
for item in get_iterable():
└ <function ThreadedCompleter.get_completions_async.<locals>.<lambda> at 0x7f1f81f13ee0>
File "/home/noor/.config/xonsh/xsh-src/xonsh/ptk_shell/completer.py", line 73, in get_completions
self.reserve_space()
│ └ <function PromptToolkitCompleter.reserve_space at 0x7f1f82aa4430>
└ <xonsh.ptk_shell.completer.PromptToolkitCompleter object at 0x7f1f82aed8e0>
> File "/home/noor/.config/xonsh/xsh-src/xonsh/ptk_shell/completer.py", line 123, in reserve_space
hash(app.layout.container.children[0].content.children[1].content)
│ │ └ Window(content=FormattedTextControl(HTML('No layout specified. Press <reverse>ENTER</reverse> to quit.')))
│ └ Layout(Window(content=FormattedTextControl(HTML('No layout specified. Press <reverse>ENTER</reverse> to quit.'))), current_wi...
└ <prompt_toolkit.application.dummy.DummyApplication object at 0x7f1f803eb8b0>
AttributeError: 'Window' object has no attribute 'children'
```
</details>
## Steps to Reproduce
<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->
1. It happens randomly; sometimes even a simple `ls -a` triggers the error.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
</issue>
<code>
[start of xonsh/ptk_shell/completer.py]
1 # -*- coding: utf-8 -*-
2 """Completer implementation to use with prompt_toolkit."""
3 import os
4 import builtins
5
6 from prompt_toolkit.completion import Completer, Completion
7 from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
8 from prompt_toolkit.application.current import get_app
9
10 from xonsh.completers.tools import RichCompletion
11
12
13 class PromptToolkitCompleter(Completer):
14 """Simple prompt_toolkit Completer object.
15
16 It just redirects requests to normal Xonsh completer.
17 """
18
19 def __init__(self, completer, ctx, shell):
20 """Takes instance of xonsh.completer.Completer, the xonsh execution
21 context, and the shell instance itself.
22 """
23 self.completer = completer
24 self.ctx = ctx
25 self.shell = shell
26 self.hist_suggester = AutoSuggestFromHistory()
27 self.current_document = None
28
29 def get_completions(self, document, complete_event):
30 """Returns a generator for list of completions."""
31 env = builtins.__xonsh__.env
32 should_complete = complete_event.completion_requested or env.get(
33 "UPDATE_COMPLETIONS_ON_KEYPRESS"
34 )
35 # Only generate completions when the user hits tab.
36 if not should_complete or self.completer is None:
37 return
38 # generate actual completions
39 line = document.current_line.lstrip()
40 line_ex = builtins.aliases.expand_alias(line)
41
42 endidx = document.cursor_position_col
43 begidx = line[:endidx].rfind(" ") + 1 if line[:endidx].rfind(" ") >= 0 else 0
44 prefix = line[begidx:endidx]
45 expand_offset = len(line_ex) - len(line)
46
47 # enable completers to access entire document
48 self.current_document = document
49
50 # get normal completions
51 completions, l = self.completer.complete(
52 prefix, line_ex, begidx + expand_offset, endidx + expand_offset, self.ctx
53 )
54
55 self.current_document = None
56
57 # completions from auto suggest
58 sug_comp = None
59 if env.get("AUTO_SUGGEST") and env.get("AUTO_SUGGEST_IN_COMPLETIONS"):
60 sug_comp = self.suggestion_completion(document, line)
61 if sug_comp is None:
62 pass
63 elif len(completions) == 0:
64 completions = (sug_comp,)
65 else:
66 completions = set(completions)
67 completions.discard(sug_comp)
68 completions = (sug_comp,) + tuple(sorted(completions))
69 # reserve space, if needed.
70 if len(completions) <= 1:
71 pass
72 elif len(os.path.commonprefix(completions)) <= len(prefix):
73 self.reserve_space()
74 # Find common prefix (strip quoting)
75 c_prefix = os.path.commonprefix([a.strip("'\"") for a in completions])
76 # Find last split symbol, do not trim the last part
77 while c_prefix:
78 if c_prefix[-1] in r"/\.:@,":
79 break
80 c_prefix = c_prefix[:-1]
81 # yield completions
82 if sug_comp is None:
83 pre = min(document.cursor_position_col - begidx, len(c_prefix))
84 else:
85 pre = len(c_prefix)
86 for comp in completions:
87 # do not display quote
88 if isinstance(comp, RichCompletion):
89 yield Completion(
90 comp,
91 -comp.prefix_len if comp.prefix_len is not None else -l,
92 display=comp.display,
93 display_meta=comp.description or None,
94 )
95 else:
96 disp = comp[pre:].strip("'\"")
97 yield Completion(comp, -l, display=disp)
98
99 def suggestion_completion(self, document, line):
100 """Provides a completion based on the current auto-suggestion."""
101 app = self.shell.prompter.app
102 sug = self.hist_suggester.get_suggestion(app.current_buffer, document)
103 if sug is None:
104 return None
105 comp, _, _ = sug.text.partition(" ")
106 _, _, prev = line.rpartition(" ")
107 return prev + comp
108
109 def reserve_space(self):
110 """Adjust the height for showing autocompletion menu."""
111 app = get_app()
112 render = app.renderer
113 window = app.layout.container.children[0].content.children[1].content
114
115 if window and window.render_info:
116 h = window.render_info.content_height
117 r = builtins.__xonsh__.env.get("COMPLETIONS_MENU_ROWS")
118 size = h + r
119 last_h = render._last_screen.height if render._last_screen else 0
120 last_h = max(render._min_available_height, last_h)
121 if last_h < size:
122 if render._last_screen:
123 render._last_screen.height = size
124
[end of xonsh/ptk_shell/completer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/ptk_shell/completer.py b/xonsh/ptk_shell/completer.py
--- a/xonsh/ptk_shell/completer.py
+++ b/xonsh/ptk_shell/completer.py
@@ -110,7 +110,7 @@
"""Adjust the height for showing autocompletion menu."""
app = get_app()
render = app.renderer
- window = app.layout.container.children[0].content.children[1].content
+ window = app.layout.current_window
if window and window.render_info:
h = window.render_info.content_height
| {"golden_diff": "diff --git a/xonsh/ptk_shell/completer.py b/xonsh/ptk_shell/completer.py\n--- a/xonsh/ptk_shell/completer.py\n+++ b/xonsh/ptk_shell/completer.py\n@@ -110,7 +110,7 @@\n \"\"\"Adjust the height for showing autocompletion menu.\"\"\"\n app = get_app()\n render = app.renderer\n- window = app.layout.container.children[0].content.children[1].content\n+ window = app.layout.current_window\n \n if window and window.render_info:\n h = window.render_info.content_height\n", "issue": "'Window' object has no attribute 'children'\n<!--- Provide a general summary of the issue in the Title above -->\r\n<!--- If you have a question along the lines of \"How do I do this Bash command in xonsh\"\r\nplease first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html\r\nIf you don't find an answer there, please do open an issue! -->\r\n\r\n## xonfig\r\n\r\n<details>\r\n\r\n```\r\n+------------------+----------------------+\r\n| xonsh | 0.9.24 |\r\n| Git SHA | 74543ea9 |\r\n| Commit Date | Oct 10 15:12:47 2020 |\r\n| Python | 3.8.6 |\r\n| PLY | 3.11 |\r\n| have readline | True |\r\n| prompt toolkit | 3.0.3 |\r\n| shell type | prompt_toolkit |\r\n| pygments | 2.7.2 |\r\n| on posix | True |\r\n| on linux | True |\r\n| distro | manjaro |\r\n| on darwin | False |\r\n| on windows | False |\r\n| on cygwin | False |\r\n| on msys2 | False |\r\n| is superuser | False |\r\n| default encoding | utf-8 |\r\n| xonsh encoding | utf-8 |\r\n| encoding errors | surrogateescape |\r\n| on jupyter | False |\r\n| jupyter kernel | None |\r\n| xontrib 1 | abbrevs |\r\n| xontrib 2 | argcomplete |\r\n| xontrib 3 | autovox |\r\n| xontrib 4 | back2dir |\r\n| xontrib 5 | cmd_done |\r\n| xontrib 6 | hist_navigator |\r\n| xontrib 7 | jedi |\r\n| xontrib 8 | kitty |\r\n| xontrib 9 | pdb |\r\n| xontrib 10 | prompt_ret_code |\r\n| xontrib 11 | vox |\r\n| xontrib 12 | voxapi |\r\n+------------------+----------------------+\r\n```\r\n\r\n</details>\r\n\r\n## Expected Behavior\r\ninteractive shell runs without any error\r\n\r\n## Current Behavior\r\n<!--- Tell us what happens instead of the expected behavior -->\r\n<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error\r\nTo enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`.\r\nOn Linux and OSX, an easy way to to do this is to run `env XONSH_DEBUG=1 xonsh` -->\r\nI get the above error randomly when `$UPDATE_COMPLETIONS_ON_KEYPRESS = True`\r\n\r\n### Traceback (if applicable)\r\n\r\n<details>\r\n\r\n```\r\n2020-11-08 22:06:05.995 | INFO | xonsh.ptk_shell.completer:reserve_space:118 - 8735829483909\r\n2020-11-08 22:06:06.000 | ERROR | xonsh.ptk_shell.completer:reserve_space:126 - 'Window' object has no attribute 'children'\r\nTraceback (most recent call last):\r\n\r\n File \"/usr/lib/python3.8/threading.py\", line 890, in _bootstrap\r\n self._bootstrap_inner()\r\n \u2502 \u2514 <function Thread._bootstrap_inner at 0x7f1f895e01f0>\r\n \u2514 <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>\r\n File \"/usr/lib/python3.8/threading.py\", line 932, in _bootstrap_inner\r\n self.run()\r\n \u2502 \u2514 <function Thread.run at 0x7f1f895dfee0>\r\n \u2514 <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>\r\n File \"/usr/lib/python3.8/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n \u2502 \u2502 \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2502 \u2502 \u2514 <Thread(ThreadPoolExecutor-1_0, started daemon 
139773312693824)>\r\n \u2502 \u2502 \u2502 \u2514 (<weakref at 0x7f1f803ed270; to 'ThreadPoolExecutor' at 0x7f1f81e857c0>, <_queue.SimpleQueue object at 0x7f1f82a8b2c0>, None,...\r\n \u2502 \u2502 \u2514 <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>\r\n \u2502 \u2514 <function _worker at 0x7f1f81edb670>\r\n \u2514 <Thread(ThreadPoolExecutor-1_0, started daemon 139773312693824)>\r\n File \"/usr/lib/python3.8/concurrent/futures/thread.py\", line 80, in _worker\r\n work_item.run()\r\n \u2502 \u2514 <function _WorkItem.run at 0x7f1f81edb790>\r\n \u2514 <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>\r\n File \"/usr/lib/python3.8/concurrent/futures/thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n \u2502 \u2502 \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2502 \u2502 \u2514 <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>\r\n \u2502 \u2502 \u2502 \u2514 [<function generator_to_async_generator.<locals>.runner at 0x7f1f81f13e50>]\r\n \u2502 \u2502 \u2514 <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>\r\n \u2502 \u2514 <built-in method run of Context object at 0x7f1f8039c9c0>\r\n \u2514 <concurrent.futures.thread._WorkItem object at 0x7f1f803eb460>\r\n File \"/home/noor/.config/xonsh/.venv/lib/python3.8/site-packages/prompt_toolkit/eventloop/async_generator.py\", line 43, in runner\r\n for item in get_iterable():\r\n \u2514 <function ThreadedCompleter.get_completions_async.<locals>.<lambda> at 0x7f1f81f13ee0>\r\n\r\n File \"/home/noor/.config/xonsh/xsh-src/xonsh/ptk_shell/completer.py\", line 73, in get_completions\r\n self.reserve_space()\r\n \u2502 \u2514 <function PromptToolkitCompleter.reserve_space at 0x7f1f82aa4430>\r\n \u2514 <xonsh.ptk_shell.completer.PromptToolkitCompleter object at 0x7f1f82aed8e0>\r\n\r\n> File \"/home/noor/.config/xonsh/xsh-src/xonsh/ptk_shell/completer.py\", line 123, in reserve_space\r\n hash(app.layout.container.children[0].content.children[1].content)\r\n \u2502 \u2502 \u2514 Window(content=FormattedTextControl(HTML('No layout specified. Press <reverse>ENTER</reverse> to quit.')))\r\n \u2502 \u2514 Layout(Window(content=FormattedTextControl(HTML('No layout specified. Press <reverse>ENTER</reverse> to quit.'))), current_wi...\r\n \u2514 <prompt_toolkit.application.dummy.DummyApplication object at 0x7f1f803eb8b0>\r\n\r\nAttributeError: 'Window' object has no attribute 'children'\r\n\r\n```\r\n\r\n</details>\r\n\r\n## Steps to Reproduce\r\n<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->\r\n1. it happens randomly. 
sometime doing simple `ls -a` triggers the error\r\n\r\n## For community\r\n\u2b07\ufe0f **Please click the \ud83d\udc4d reaction instead of leaving a `+1` or \ud83d\udc4d comment**\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Completer implementation to use with prompt_toolkit.\"\"\"\nimport os\nimport builtins\n\nfrom prompt_toolkit.completion import Completer, Completion\nfrom prompt_toolkit.auto_suggest import AutoSuggestFromHistory\nfrom prompt_toolkit.application.current import get_app\n\nfrom xonsh.completers.tools import RichCompletion\n\n\nclass PromptToolkitCompleter(Completer):\n \"\"\"Simple prompt_toolkit Completer object.\n\n It just redirects requests to normal Xonsh completer.\n \"\"\"\n\n def __init__(self, completer, ctx, shell):\n \"\"\"Takes instance of xonsh.completer.Completer, the xonsh execution\n context, and the shell instance itself.\n \"\"\"\n self.completer = completer\n self.ctx = ctx\n self.shell = shell\n self.hist_suggester = AutoSuggestFromHistory()\n self.current_document = None\n\n def get_completions(self, document, complete_event):\n \"\"\"Returns a generator for list of completions.\"\"\"\n env = builtins.__xonsh__.env\n should_complete = complete_event.completion_requested or env.get(\n \"UPDATE_COMPLETIONS_ON_KEYPRESS\"\n )\n # Only generate completions when the user hits tab.\n if not should_complete or self.completer is None:\n return\n # generate actual completions\n line = document.current_line.lstrip()\n line_ex = builtins.aliases.expand_alias(line)\n\n endidx = document.cursor_position_col\n begidx = line[:endidx].rfind(\" \") + 1 if line[:endidx].rfind(\" \") >= 0 else 0\n prefix = line[begidx:endidx]\n expand_offset = len(line_ex) - len(line)\n\n # enable completers to access entire document\n self.current_document = document\n\n # get normal completions\n completions, l = self.completer.complete(\n prefix, line_ex, begidx + expand_offset, endidx + expand_offset, self.ctx\n )\n\n self.current_document = None\n\n # completions from auto suggest\n sug_comp = None\n if env.get(\"AUTO_SUGGEST\") and env.get(\"AUTO_SUGGEST_IN_COMPLETIONS\"):\n sug_comp = self.suggestion_completion(document, line)\n if sug_comp is None:\n pass\n elif len(completions) == 0:\n completions = (sug_comp,)\n else:\n completions = set(completions)\n completions.discard(sug_comp)\n completions = (sug_comp,) + tuple(sorted(completions))\n # reserve space, if needed.\n if len(completions) <= 1:\n pass\n elif len(os.path.commonprefix(completions)) <= len(prefix):\n self.reserve_space()\n # Find common prefix (strip quoting)\n c_prefix = os.path.commonprefix([a.strip(\"'\\\"\") for a in completions])\n # Find last split symbol, do not trim the last part\n while c_prefix:\n if c_prefix[-1] in r\"/\\.:@,\":\n break\n c_prefix = c_prefix[:-1]\n # yield completions\n if sug_comp is None:\n pre = min(document.cursor_position_col - begidx, len(c_prefix))\n else:\n pre = len(c_prefix)\n for comp in completions:\n # do not display quote\n if isinstance(comp, RichCompletion):\n yield Completion(\n comp,\n -comp.prefix_len if comp.prefix_len is not None else -l,\n display=comp.display,\n display_meta=comp.description or None,\n )\n else:\n disp = comp[pre:].strip(\"'\\\"\")\n yield Completion(comp, -l, display=disp)\n\n def suggestion_completion(self, document, line):\n \"\"\"Provides a completion based on the current auto-suggestion.\"\"\"\n app = self.shell.prompter.app\n sug = self.hist_suggester.get_suggestion(app.current_buffer, document)\n if sug is 
None:\n return None\n comp, _, _ = sug.text.partition(\" \")\n _, _, prev = line.rpartition(\" \")\n return prev + comp\n\n def reserve_space(self):\n \"\"\"Adjust the height for showing autocompletion menu.\"\"\"\n app = get_app()\n render = app.renderer\n window = app.layout.container.children[0].content.children[1].content\n\n if window and window.render_info:\n h = window.render_info.content_height\n r = builtins.__xonsh__.env.get(\"COMPLETIONS_MENU_ROWS\")\n size = h + r\n last_h = render._last_screen.height if render._last_screen else 0\n last_h = max(render._min_available_height, last_h)\n if last_h < size:\n if render._last_screen:\n render._last_screen.height = size\n", "path": "xonsh/ptk_shell/completer.py"}]} | 3,793 | 138 |
gh_patches_debug_35836 | rasdani/github-patches | git_diff | pyca__cryptography-1532 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for loading DSA OpenSSH public keys
Should be straightforward to add support to the existing code.
</issue>
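For background on the request above: RFC 4253 defines the `ssh-dss` public key body as a base64-encoded blob holding the string `ssh-dss` followed by the mpints `p`, `q`, `g` and `y`. A minimal standalone sketch of splitting such a blob into its integers is shown below; it duplicates the module's RFC 4251 helpers, assumes Python 3 for `int.from_bytes`, and the `parse_ssh_dss_blob` name is purely illustrative.
```python
import base64
import struct


def _read_next_string(data):
    # RFC 4251 string: 4-byte big-endian length prefix followed by the payload.
    str_len, = struct.unpack('>I', data[:4])
    return data[4:4 + str_len], data[4 + str_len:]


def _read_next_mpint(data):
    # mpints reuse the string framing; interpret the payload as unsigned.
    mpint_data, rest = _read_next_string(data)
    return int.from_bytes(mpint_data, byteorder='big'), rest


def parse_ssh_dss_blob(key_body):
    """Split a base64 'ssh-dss' key body into its p, q, g and y integers."""
    data = base64.b64decode(key_body)
    key_type, rest = _read_next_string(data)
    if key_type != b'ssh-dss':
        raise ValueError('Not an ssh-dss key blob.')
    p, rest = _read_next_mpint(rest)
    q, rest = _read_next_mpint(rest)
    g, rest = _read_next_mpint(rest)
    y, rest = _read_next_mpint(rest)
    if rest:
        raise ValueError('Key body contains extra bytes.')
    return p, q, g, y
```
The integers recovered this way are what a backend-specific loader would then turn into DSA parameter and public-number objects.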
<code>
[start of src/cryptography/hazmat/primitives/serialization.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import base64
8 import struct
9 import warnings
10
11 from cryptography import utils
12 from cryptography.exceptions import UnsupportedAlgorithm
13 from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers
14
15
16 def load_pem_traditional_openssl_private_key(data, password, backend):
17 warnings.warn(
18 "load_pem_traditional_openssl_private_key is deprecated and will be "
19 "removed in a future version, use load_pem_private_key instead.",
20 utils.DeprecatedIn06,
21 stacklevel=2
22 )
23
24 return backend.load_traditional_openssl_pem_private_key(
25 data, password
26 )
27
28
29 def load_pem_pkcs8_private_key(data, password, backend):
30 warnings.warn(
31 "load_pem_pkcs8_private_key is deprecated and will be removed in a "
32 "future version, use load_pem_private_key instead.",
33 utils.DeprecatedIn06,
34 stacklevel=2
35 )
36
37 return backend.load_pkcs8_pem_private_key(data, password)
38
39
40 def load_pem_private_key(data, password, backend):
41 return backend.load_pem_private_key(data, password)
42
43
44 def load_pem_public_key(data, backend):
45 return backend.load_pem_public_key(data)
46
47
48 def load_ssh_public_key(data, backend):
49 key_parts = data.split(b' ')
50
51 if len(key_parts) != 2 and len(key_parts) != 3:
52 raise ValueError(
53 'Key is not in the proper format or contains extra data.')
54
55 key_type = key_parts[0]
56 key_body = key_parts[1]
57
58 if not key_type.startswith(b'ssh-'):
59 raise ValueError('SSH-formatted keys must begin with \'ssh-\'.')
60
61 if not key_type.startswith(b'ssh-rsa'):
62 raise UnsupportedAlgorithm('Only RSA keys are currently supported.')
63
64 return _load_ssh_rsa_public_key(key_body, backend)
65
66
67 def _load_ssh_rsa_public_key(key_body, backend):
68 data = base64.b64decode(key_body)
69
70 key_type, rest = _read_next_string(data)
71 e, rest = _read_next_mpint(rest)
72 n, rest = _read_next_mpint(rest)
73
74 if key_type != b'ssh-rsa':
75 raise ValueError(
76 'Key header and key body contain different key type values.')
77
78 if rest:
79 raise ValueError('Key body contains extra bytes.')
80
81 return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))
82
83
84 def _read_next_string(data):
85 """Retrieves the next RFC 4251 string value from the data."""
86 str_len, = struct.unpack('>I', data[:4])
87 return data[4:4 + str_len], data[4 + str_len:]
88
89
90 def _read_next_mpint(data):
91 """
92 Reads the next mpint from the data.
93
94 Currently, all mpints are interpreted as unsigned.
95 """
96 mpint_data, rest = _read_next_string(data)
97
98 return _int_from_bytes(mpint_data, byteorder='big', signed=False), rest
99
100
101 if hasattr(int, "from_bytes"):
102 _int_from_bytes = int.from_bytes
103 else:
104 def _int_from_bytes(data, byteorder, signed=False):
105 assert byteorder == 'big'
106 assert not signed
107
108 if len(data) % 4 != 0:
109 data = (b'\x00' * (4 - (len(data) % 4))) + data
110
111 result = 0
112
113 while len(data) > 0:
114 digit, = struct.unpack('>I', data[:4])
115 result = (result << 32) + digit
116 data = data[4:]
117
118 return result
119
[end of src/cryptography/hazmat/primitives/serialization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/hazmat/primitives/serialization.py b/src/cryptography/hazmat/primitives/serialization.py
--- a/src/cryptography/hazmat/primitives/serialization.py
+++ b/src/cryptography/hazmat/primitives/serialization.py
@@ -10,6 +10,9 @@
from cryptography import utils
from cryptography.exceptions import UnsupportedAlgorithm
+from cryptography.hazmat.primitives.asymmetric.dsa import (
+ DSAParameterNumbers, DSAPublicNumbers
+)
from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers
@@ -55,19 +58,23 @@
key_type = key_parts[0]
key_body = key_parts[1]
- if not key_type.startswith(b'ssh-'):
- raise ValueError('SSH-formatted keys must begin with \'ssh-\'.')
+ try:
+ decoded_data = base64.b64decode(key_body)
+ except TypeError:
+ raise ValueError('Key is not in the proper format.')
- if not key_type.startswith(b'ssh-rsa'):
- raise UnsupportedAlgorithm('Only RSA keys are currently supported.')
+ if key_type == b'ssh-rsa':
+ return _load_ssh_rsa_public_key(decoded_data, backend)
+ elif key_type == b'ssh-dss':
+ return _load_ssh_dss_public_key(decoded_data, backend)
+ else:
+ raise UnsupportedAlgorithm(
+ 'Only RSA and DSA keys are currently supported.'
+ )
- return _load_ssh_rsa_public_key(key_body, backend)
-
-def _load_ssh_rsa_public_key(key_body, backend):
- data = base64.b64decode(key_body)
-
- key_type, rest = _read_next_string(data)
+def _load_ssh_rsa_public_key(decoded_data, backend):
+ key_type, rest = _read_next_string(decoded_data)
e, rest = _read_next_mpint(rest)
n, rest = _read_next_mpint(rest)
@@ -81,6 +88,26 @@
return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))
+def _load_ssh_dss_public_key(decoded_data, backend):
+ key_type, rest = _read_next_string(decoded_data)
+ p, rest = _read_next_mpint(rest)
+ q, rest = _read_next_mpint(rest)
+ g, rest = _read_next_mpint(rest)
+ y, rest = _read_next_mpint(rest)
+
+ if key_type != b'ssh-dss':
+ raise ValueError(
+ 'Key header and key body contain different key type values.')
+
+ if rest:
+ raise ValueError('Key body contains extra bytes.')
+
+ parameter_numbers = DSAParameterNumbers(p, q, g)
+ public_numbers = DSAPublicNumbers(y, parameter_numbers)
+
+ return backend.load_dsa_public_numbers(public_numbers)
+
+
def _read_next_string(data):
"""Retrieves the next RFC 4251 string value from the data."""
str_len, = struct.unpack('>I', data[:4])
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/serialization.py b/src/cryptography/hazmat/primitives/serialization.py\n--- a/src/cryptography/hazmat/primitives/serialization.py\n+++ b/src/cryptography/hazmat/primitives/serialization.py\n@@ -10,6 +10,9 @@\n \n from cryptography import utils\n from cryptography.exceptions import UnsupportedAlgorithm\n+from cryptography.hazmat.primitives.asymmetric.dsa import (\n+ DSAParameterNumbers, DSAPublicNumbers\n+)\n from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers\n \n \n@@ -55,19 +58,23 @@\n key_type = key_parts[0]\n key_body = key_parts[1]\n \n- if not key_type.startswith(b'ssh-'):\n- raise ValueError('SSH-formatted keys must begin with \\'ssh-\\'.')\n+ try:\n+ decoded_data = base64.b64decode(key_body)\n+ except TypeError:\n+ raise ValueError('Key is not in the proper format.')\n \n- if not key_type.startswith(b'ssh-rsa'):\n- raise UnsupportedAlgorithm('Only RSA keys are currently supported.')\n+ if key_type == b'ssh-rsa':\n+ return _load_ssh_rsa_public_key(decoded_data, backend)\n+ elif key_type == b'ssh-dss':\n+ return _load_ssh_dss_public_key(decoded_data, backend)\n+ else:\n+ raise UnsupportedAlgorithm(\n+ 'Only RSA and DSA keys are currently supported.'\n+ )\n \n- return _load_ssh_rsa_public_key(key_body, backend)\n \n-\n-def _load_ssh_rsa_public_key(key_body, backend):\n- data = base64.b64decode(key_body)\n-\n- key_type, rest = _read_next_string(data)\n+def _load_ssh_rsa_public_key(decoded_data, backend):\n+ key_type, rest = _read_next_string(decoded_data)\n e, rest = _read_next_mpint(rest)\n n, rest = _read_next_mpint(rest)\n \n@@ -81,6 +88,26 @@\n return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))\n \n \n+def _load_ssh_dss_public_key(decoded_data, backend):\n+ key_type, rest = _read_next_string(decoded_data)\n+ p, rest = _read_next_mpint(rest)\n+ q, rest = _read_next_mpint(rest)\n+ g, rest = _read_next_mpint(rest)\n+ y, rest = _read_next_mpint(rest)\n+\n+ if key_type != b'ssh-dss':\n+ raise ValueError(\n+ 'Key header and key body contain different key type values.')\n+\n+ if rest:\n+ raise ValueError('Key body contains extra bytes.')\n+\n+ parameter_numbers = DSAParameterNumbers(p, q, g)\n+ public_numbers = DSAPublicNumbers(y, parameter_numbers)\n+\n+ return backend.load_dsa_public_numbers(public_numbers)\n+\n+\n def _read_next_string(data):\n \"\"\"Retrieves the next RFC 4251 string value from the data.\"\"\"\n str_len, = struct.unpack('>I', data[:4])\n", "issue": "Add support for loading DSA OpenSSH public keys\nShould be straightforward to add support to the existing code.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport base64\nimport struct\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm\nfrom cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers\n\n\ndef load_pem_traditional_openssl_private_key(data, password, backend):\n warnings.warn(\n \"load_pem_traditional_openssl_private_key is deprecated and will be \"\n \"removed in a future version, use load_pem_private_key instead.\",\n utils.DeprecatedIn06,\n stacklevel=2\n )\n\n return backend.load_traditional_openssl_pem_private_key(\n data, password\n )\n\n\ndef load_pem_pkcs8_private_key(data, password, backend):\n warnings.warn(\n \"load_pem_pkcs8_private_key is deprecated and will be removed in a \"\n \"future version, use load_pem_private_key instead.\",\n utils.DeprecatedIn06,\n stacklevel=2\n )\n\n return backend.load_pkcs8_pem_private_key(data, password)\n\n\ndef load_pem_private_key(data, password, backend):\n return backend.load_pem_private_key(data, password)\n\n\ndef load_pem_public_key(data, backend):\n return backend.load_pem_public_key(data)\n\n\ndef load_ssh_public_key(data, backend):\n key_parts = data.split(b' ')\n\n if len(key_parts) != 2 and len(key_parts) != 3:\n raise ValueError(\n 'Key is not in the proper format or contains extra data.')\n\n key_type = key_parts[0]\n key_body = key_parts[1]\n\n if not key_type.startswith(b'ssh-'):\n raise ValueError('SSH-formatted keys must begin with \\'ssh-\\'.')\n\n if not key_type.startswith(b'ssh-rsa'):\n raise UnsupportedAlgorithm('Only RSA keys are currently supported.')\n\n return _load_ssh_rsa_public_key(key_body, backend)\n\n\ndef _load_ssh_rsa_public_key(key_body, backend):\n data = base64.b64decode(key_body)\n\n key_type, rest = _read_next_string(data)\n e, rest = _read_next_mpint(rest)\n n, rest = _read_next_mpint(rest)\n\n if key_type != b'ssh-rsa':\n raise ValueError(\n 'Key header and key body contain different key type values.')\n\n if rest:\n raise ValueError('Key body contains extra bytes.')\n\n return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))\n\n\ndef _read_next_string(data):\n \"\"\"Retrieves the next RFC 4251 string value from the data.\"\"\"\n str_len, = struct.unpack('>I', data[:4])\n return data[4:4 + str_len], data[4 + str_len:]\n\n\ndef _read_next_mpint(data):\n \"\"\"\n Reads the next mpint from the data.\n\n Currently, all mpints are interpreted as unsigned.\n \"\"\"\n mpint_data, rest = _read_next_string(data)\n\n return _int_from_bytes(mpint_data, byteorder='big', signed=False), rest\n\n\nif hasattr(int, \"from_bytes\"):\n _int_from_bytes = int.from_bytes\nelse:\n def _int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n data = data[4:]\n\n return result\n", "path": "src/cryptography/hazmat/primitives/serialization.py"}]} | 1,688 | 685 |
gh_patches_debug_23749 | rasdani/github-patches | git_diff | SeldonIO__MLServer-301 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tempo example fails when parallel inference is enabled
When parallel inference is enabled, the [outlier example using the Tempo runtime](https://tempo.readthedocs.io/en/latest/examples/outlier/README.html) seems to fail. In particular, it seems that either the `cifar10-service` or the `outlier` container blocks the request path and never returns a response.
</issue>
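For background on the hang described above: worker pools that start child processes with the default `fork` method can deadlock when the loaded framework is not fork-safe, and the usual mitigation is to build the pool from a `spawn` context. The sketch below only illustrates that `spawn` vs `fork` distinction for `concurrent.futures.ProcessPoolExecutor`; the `_load` and `_predict` bodies are placeholders, not MLServer or Tempo code.
```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor


def _load():
    # Heavy frameworks (e.g. TensorFlow) are generally only safe to initialise
    # in a freshly spawned interpreter, not in a forked copy of the parent.
    global _model
    _model = object()  # placeholder for the real model load


def _predict(payload):
    return payload  # placeholder for the real inference call


if __name__ == "__main__":
    # 'spawn' starts workers from a clean interpreter instead of forking the
    # parent, so no locks or threads from non-fork-safe libraries are inherited.
    ctx = mp.get_context("spawn")
    executor = ProcessPoolExecutor(max_workers=2, mp_context=ctx, initializer=_load)
    print(executor.submit(_predict, {"inputs": []}).result())
    executor.shutdown(wait=True)
```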
<code>
[start of mlserver/parallel.py]
1 import asyncio
2
3 from functools import wraps
4 from concurrent.futures import ProcessPoolExecutor
5 from typing import Any, Coroutine, Callable, Optional
6
7 from .errors import MLServerError
8 from .settings import ModelSettings
9 from .model import MLModel
10 from .types import InferenceRequest, InferenceResponse
11
12 _InferencePoolAttr = "__inference_pool__"
13
14 # NOTE: Workaround for mypy
15 _mp_model: MLModel
16
17
18 class InvalidParallelMethod(MLServerError):
19 def __init__(self, method_name: str, reason: Optional[str] = None):
20 msg = f"Method {method_name} can't be parallelised"
21 if reason:
22 msg += f": {reason}"
23
24 super().__init__(msg)
25
26
27 def _mp_load(model_settings: ModelSettings):
28 """
29 This method is meant to run internally in the multiprocessing workers.
30 The loading needs to run synchronously, since the initializer argument
31 doesn't support coroutines.
32 """
33 # NOTE: The global `_mp_model` variable is shared with the `_mp_predict`
34 # method.
35 # This global variable should only be used within the inference
36 # multiprocessing workers.
37 global _mp_model
38
39 model_class = model_settings.implementation
40 _mp_model = model_class(model_settings) # type: ignore
41 return asyncio.run(_mp_model.load())
42
43
44 def _mp_predict(payload: InferenceRequest) -> InferenceResponse:
45 """
46 This method is meant to run internally in the multiprocessing workers.
47 The prediction needs to run synchronously, since multiprocessing
48 doesn't know how to serialise coroutines.
49 """
50 # NOTE: `_mp_model` is a global variable initialised in the `_mp_load`
51 # method.
52 # This global variable is only to be used within the inference worker
53 # context.
54 global _mp_model
55
56 return asyncio.run(_mp_model.predict(payload))
57
58
59 class InferencePool:
60 """
61 The InferencePool class represents a pool of workers where we can run
62 inference on.
63
64 Under the hood, it's responsible for managing a pool of multiprocessing
65 workers, where the model is loaded.
66 This approach lets MLServer work around the GIL to make sure that inference
67 can occur in parallel across multiple models or instances of a model.
68 """
69
70 def __init__(self, model: MLModel):
71 parallel_workers = model.settings.parallel_workers
72 self._executor = ProcessPoolExecutor(
73 max_workers=parallel_workers,
74 initializer=_mp_load,
75 initargs=(model.settings,),
76 )
77
78 async def predict(self, payload: InferenceRequest) -> InferenceResponse:
79 # What if we serialise payload?
80 loop = asyncio.get_running_loop()
81 return await loop.run_in_executor(self._executor, _mp_predict, payload)
82
83 def __del__(self):
84 self._executor.shutdown(wait=True)
85
86
87 def parallel(f: Callable[[InferenceRequest], Coroutine[Any, Any, InferenceResponse]]):
88 """
89 Decorator to attach to model's methods so that they run in parallel.
90 By default, this will get attached to every model's "inference" method.
91
92 NOTE: At the moment, this method only works with `predict()`.
93 """
94 # TODO: Extend to multiple methods
95 @wraps(f)
96 async def _inner(payload: InferenceRequest) -> InferenceResponse:
97 if not hasattr(f, "__self__"):
98 raise InvalidParallelMethod(f.__name__, reason="method is not bound")
99
100 model = getattr(f, "__self__")
101 if not hasattr(model, _InferencePoolAttr):
102 raise InvalidParallelMethod(
103 f.__name__, reason="inference pool has not been loaded"
104 )
105
106 pool = getattr(model, _InferencePoolAttr)
107 return await pool.predict(payload)
108
109 return _inner
110
111
112 async def load_inference_pool(model: MLModel):
113 if model.settings.parallel_workers == 0:
114 # When parallel workers is set to 0, disable parallel inference
115 return model
116
117 pool = InferencePool(model)
118 setattr(model, _InferencePoolAttr, pool)
119
120 # Decorate predict method
121 setattr(model, "predict", parallel(model.predict))
122
123 return model
124
125
126 async def unload_inference_pool(model: MLModel):
127 pool = getattr(model, _InferencePoolAttr)
128 if not pool:
129 return
130
131 pool.__del__()
132 delattr(model, _InferencePoolAttr)
133
[end of mlserver/parallel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlserver/parallel.py b/mlserver/parallel.py
--- a/mlserver/parallel.py
+++ b/mlserver/parallel.py
@@ -1,4 +1,5 @@
import asyncio
+import multiprocessing as mp
from functools import wraps
from concurrent.futures import ProcessPoolExecutor
@@ -52,7 +53,6 @@
# This global variable is only to be used within the inference worker
# context.
global _mp_model
-
return asyncio.run(_mp_model.predict(payload))
@@ -69,8 +69,14 @@
def __init__(self, model: MLModel):
parallel_workers = model.settings.parallel_workers
+
+ # Use 'spawn' instead of 'fork' to ensure that models are loaded in a
+ # clean environment (e.g. to avoid issues like
+ # https://github.com/tensorflow/tensorflow/issues/8220)
+ ctx = mp.get_context("spawn")
self._executor = ProcessPoolExecutor(
max_workers=parallel_workers,
+ mp_context=ctx,
initializer=_mp_load,
initargs=(model.settings,),
)
| {"golden_diff": "diff --git a/mlserver/parallel.py b/mlserver/parallel.py\n--- a/mlserver/parallel.py\n+++ b/mlserver/parallel.py\n@@ -1,4 +1,5 @@\n import asyncio\n+import multiprocessing as mp\n \n from functools import wraps\n from concurrent.futures import ProcessPoolExecutor\n@@ -52,7 +53,6 @@\n # This global variable is only to be used within the inference worker\n # context.\n global _mp_model\n-\n return asyncio.run(_mp_model.predict(payload))\n \n \n@@ -69,8 +69,14 @@\n \n def __init__(self, model: MLModel):\n parallel_workers = model.settings.parallel_workers\n+\n+ # Use 'spawn' instead of 'fork' to ensure that models are loaded in a\n+ # clean environment (e.g. to avoid issues like\n+ # https://github.com/tensorflow/tensorflow/issues/8220)\n+ ctx = mp.get_context(\"spawn\")\n self._executor = ProcessPoolExecutor(\n max_workers=parallel_workers,\n+ mp_context=ctx,\n initializer=_mp_load,\n initargs=(model.settings,),\n )\n", "issue": "Tempo example fails when parallel inference is enabled\nWhen parallel inference is enabled, the [outlier example using the Tempo runtime](https://tempo.readthedocs.io/en/latest/examples/outlier/README.html)seems to fail. In particular, it seems that either the `cifar10-service` or the `outlier` containers block the request path and never return a response.\n", "before_files": [{"content": "import asyncio\n\nfrom functools import wraps\nfrom concurrent.futures import ProcessPoolExecutor\nfrom typing import Any, Coroutine, Callable, Optional\n\nfrom .errors import MLServerError\nfrom .settings import ModelSettings\nfrom .model import MLModel\nfrom .types import InferenceRequest, InferenceResponse\n\n_InferencePoolAttr = \"__inference_pool__\"\n\n# NOTE: Workaround for mypy\n_mp_model: MLModel\n\n\nclass InvalidParallelMethod(MLServerError):\n def __init__(self, method_name: str, reason: Optional[str] = None):\n msg = f\"Method {method_name} can't be parallelised\"\n if reason:\n msg += f\": {reason}\"\n\n super().__init__(msg)\n\n\ndef _mp_load(model_settings: ModelSettings):\n \"\"\"\n This method is meant to run internally in the multiprocessing workers.\n The loading needs to run synchronously, since the initializer argument\n doesn't support coroutines.\n \"\"\"\n # NOTE: The global `_mp_model` variable is shared with the `_mp_predict`\n # method.\n # This global variable should only be used within the inference\n # multiprocessing workers.\n global _mp_model\n\n model_class = model_settings.implementation\n _mp_model = model_class(model_settings) # type: ignore\n return asyncio.run(_mp_model.load())\n\n\ndef _mp_predict(payload: InferenceRequest) -> InferenceResponse:\n \"\"\"\n This method is meant to run internally in the multiprocessing workers.\n The prediction needs to run synchronously, since multiprocessing\n doesn't know how to serialise coroutines.\n \"\"\"\n # NOTE: `_mp_model` is a global variable initialised in the `_mp_load`\n # method.\n # This global variable is only to be used within the inference worker\n # context.\n global _mp_model\n\n return asyncio.run(_mp_model.predict(payload))\n\n\nclass InferencePool:\n \"\"\"\n The InferencePool class represents a pool of workers where we can run\n inference on.\n\n Under the hood, it's responsible for managing a pool of multiprocessing\n workers, where the model is loaded.\n This approach lets MLServer work around the GIL to make sure that inference\n can occur in parallel across multiple models or instances of a model.\n \"\"\"\n\n def __init__(self, model: MLModel):\n 
parallel_workers = model.settings.parallel_workers\n self._executor = ProcessPoolExecutor(\n max_workers=parallel_workers,\n initializer=_mp_load,\n initargs=(model.settings,),\n )\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n # What if we serialise payload?\n loop = asyncio.get_running_loop()\n return await loop.run_in_executor(self._executor, _mp_predict, payload)\n\n def __del__(self):\n self._executor.shutdown(wait=True)\n\n\ndef parallel(f: Callable[[InferenceRequest], Coroutine[Any, Any, InferenceResponse]]):\n \"\"\"\n Decorator to attach to model's methods so that they run in parallel.\n By default, this will get attached to every model's \"inference\" method.\n\n NOTE: At the moment, this method only works with `predict()`.\n \"\"\"\n # TODO: Extend to multiple methods\n @wraps(f)\n async def _inner(payload: InferenceRequest) -> InferenceResponse:\n if not hasattr(f, \"__self__\"):\n raise InvalidParallelMethod(f.__name__, reason=\"method is not bound\")\n\n model = getattr(f, \"__self__\")\n if not hasattr(model, _InferencePoolAttr):\n raise InvalidParallelMethod(\n f.__name__, reason=\"inference pool has not been loaded\"\n )\n\n pool = getattr(model, _InferencePoolAttr)\n return await pool.predict(payload)\n\n return _inner\n\n\nasync def load_inference_pool(model: MLModel):\n if model.settings.parallel_workers == 0:\n # When parallel workers is set to 0, disable parallel inference\n return model\n\n pool = InferencePool(model)\n setattr(model, _InferencePoolAttr, pool)\n\n # Decorate predict method\n setattr(model, \"predict\", parallel(model.predict))\n\n return model\n\n\nasync def unload_inference_pool(model: MLModel):\n pool = getattr(model, _InferencePoolAttr)\n if not pool:\n return\n\n pool.__del__()\n delattr(model, _InferencePoolAttr)\n", "path": "mlserver/parallel.py"}]} | 1,864 | 253 |
gh_patches_debug_6029 | rasdani/github-patches | git_diff | pytorch__ignite-768 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
log_message() method fails when `desc` is passed to contrib tqdm
## 🐛 Bug description
If I pass a `desc` argument when instantiating an `ignite.contrib.handlers.ProgressBar`, the calls to its `log_message()` method fail with this exception:
```
TypeError: write() got an unexpected keyword argument 'desc'
```
## Environment
- PyTorch Version: 1.3.1
- Ignite Version: 0.3.0
- OS: Linux
- How you installed Ignite (`conda`, `pip`, source): Conda
- Python version: 3.7
- Any other relevant information:
</issue>
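The reported failure can be reproduced with tqdm alone, without ignite: any constructor kwarg that `tqdm.write()` does not understand triggers the same `TypeError`. A minimal sketch follows (the `desc` value is arbitrary, and the quoted signature is only approximate).
```python
from tqdm import tqdm

# ProgressBar stores every constructor kwarg in self.tqdm_kwargs, so a 'desc'
# entry ends up forwarded to tqdm.write(), whose signature is roughly
# write(s, file=None, end="\n", nolock=False) and accepts no 'desc'.
tqdm_kwargs = {"desc": "Validation"}

try:
    tqdm.write("some log line", **tqdm_kwargs)
except TypeError as exc:
    print(exc)  # write() got an unexpected keyword argument 'desc'
```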
<code>
[start of ignite/contrib/handlers/tqdm_logger.py]
1 # -*- coding: utf-8 -*-
2 import warnings
3
4 import torch
5
6 from ignite.engine import Events
7 from ignite.engine.engine import EventWithFilter
8 from ignite.contrib.handlers.base_logger import BaseLogger, BaseOutputHandler
9
10
11 class ProgressBar(BaseLogger):
12 """
13 TQDM progress bar handler to log training progress and computed metrics.
14
15 Args:
16 persist (bool, optional): set to ``True`` to persist the progress bar after completion (default = ``False``)
17 bar_format (str, optional): Specify a custom bar string formatting. May impact performance.
18 [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]'].
19 Set to ``None`` to use ``tqdm`` default bar formatting: '{l_bar}{bar}{r_bar}', where
20 l_bar='{desc}: {percentage:3.0f}%|' and
21 r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. For more details on the
22 formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.
23 **tqdm_kwargs: kwargs passed to tqdm progress bar.
24 By default, progress bar description displays "Epoch [5/10]" where 5 is the current epoch and 10 is the
25 number of epochs. If tqdm_kwargs defines `desc`, e.g. "Predictions", than the description is
26 "Predictions [5/10]" if number of epochs is more than one otherwise it is simply "Predictions".
27
28 Examples:
29
30 Simple progress bar
31
32 .. code-block:: python
33
34 trainer = create_supervised_trainer(model, optimizer, loss)
35
36 pbar = ProgressBar()
37 pbar.attach(trainer)
38
39 # Progress bar will looks like
40 # Epoch [2/50]: [64/128] 50%|█████ [06:17<12:34]
41
42 Log output to a file instead of stderr (tqdm's default output)
43
44 .. code-block:: python
45
46 trainer = create_supervised_trainer(model, optimizer, loss)
47
48 log_file = open("output.log", "w")
49 pbar = ProgressBar(file=log_file)
50 pbar.attach(trainer)
51
52 Attach metrics that already have been computed at :attr:`~ignite.engine.Events.ITERATION_COMPLETED`
53 (such as :class:`~ignite.metrics.RunningAverage`)
54
55 .. code-block:: python
56
57 trainer = create_supervised_trainer(model, optimizer, loss)
58
59 RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')
60
61 pbar = ProgressBar()
62 pbar.attach(trainer, ['loss'])
63
64 # Progress bar will looks like
65 # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
66
67 Directly attach the engine's output
68
69 .. code-block:: python
70
71 trainer = create_supervised_trainer(model, optimizer, loss)
72
73 pbar = ProgressBar()
74 pbar.attach(trainer, output_transform=lambda x: {'loss': x})
75
76 # Progress bar will looks like
77 # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
78
79 Note:
80 When adding attaching the progress bar to an engine, it is recommend that you replace
81 every print operation in the engine's handlers triggered every iteration with
82 ``pbar.log_message`` to guarantee the correct format of the stdout.
83
84 Note:
85 When using inside jupyter notebook, `ProgressBar` automatically uses `tqdm_notebook`. For correct rendering,
86 please install `ipywidgets <https://ipywidgets.readthedocs.io/en/stable/user_install.html#installation>`_.
87 Due to `tqdm notebook bugs <https://github.com/tqdm/tqdm/issues/594>`_, bar format may be needed to be set
88 to an empty string value.
89
90 """
91
92 _events_order = [
93 Events.STARTED,
94 Events.EPOCH_STARTED,
95 Events.ITERATION_STARTED,
96 Events.ITERATION_COMPLETED,
97 Events.EPOCH_COMPLETED,
98 Events.COMPLETED
99 ]
100
101 def __init__(self, persist=False,
102 bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]',
103 **tqdm_kwargs):
104
105 try:
106 from tqdm.autonotebook import tqdm
107 except ImportError:
108 raise RuntimeError("This contrib module requires tqdm to be installed. "
109 "Please install it with command: \n pip install tqdm")
110
111 self.pbar_cls = tqdm
112 self.pbar = None
113 self.persist = persist
114 self.bar_format = bar_format
115 self.tqdm_kwargs = tqdm_kwargs
116
117 def _reset(self, pbar_total):
118 self.pbar = self.pbar_cls(
119 total=pbar_total,
120 leave=self.persist,
121 bar_format=self.bar_format,
122 initial=1,
123 **self.tqdm_kwargs
124 )
125
126 def _close(self, engine):
127 if self.pbar:
128 self.pbar.close()
129 self.pbar = None
130
131 @staticmethod
132 def _compare_lt(event1, event2):
133 if isinstance(event1, EventWithFilter):
134 event1 = event1.event
135 if isinstance(event2, EventWithFilter):
136 event2 = event2.event
137 i1 = ProgressBar._events_order.index(event1)
138 i2 = ProgressBar._events_order.index(event2)
139 return i1 < i2
140
141 def log_message(self, message):
142 """
143 Logs a message, preserving the progress bar correct output format.
144
145 Args:
146 message (str): string you wish to log.
147 """
148 from tqdm import tqdm
149 tqdm.write(message, **self.tqdm_kwargs)
150
151 def attach(self, engine, metric_names=None, output_transform=None,
152 event_name=Events.ITERATION_COMPLETED,
153 closing_event_name=Events.EPOCH_COMPLETED):
154 """
155 Attaches the progress bar to an engine object.
156
157 Args:
158 engine (Engine): engine object.
159 metric_names (list of str, optional): list of metric names to plot or a string "all" to plot all available
160 metrics.
161 output_transform (callable, optional): a function to select what you want to print from the engine's
162 output. This function may return either a dictionary with entries in the format of ``{name: value}``,
163 or a single scalar, which will be displayed with the default name `output`.
164 event_name: event's name on which the progress bar advances. Valid events are from
165 :class:`~ignite.engine.Events`.
166 closing_event_name: event's name on which the progress bar is closed. Valid events are from
167 :class:`~ignite.engine.Events`.
168
169 Note: accepted output value types are numbers, 0d and 1d torch tensors and strings
170
171 """
172 desc = self.tqdm_kwargs.get("desc", "Epoch")
173
174 if not isinstance(event_name, (Events, EventWithFilter)):
175 raise ValueError("Logging event should be only `ignite.engine.Events`")
176
177 if isinstance(closing_event_name, EventWithFilter):
178 raise ValueError("Closing event should not use any event filter")
179
180 if not self._compare_lt(event_name, closing_event_name):
181 raise ValueError("Logging event {} should be called before closing event {}"
182 .format(event_name, closing_event_name))
183
184 log_handler = _OutputHandler(desc, metric_names, output_transform,
185 closing_event_name=closing_event_name)
186 # if event_name is EventWithFilter, filter is passed here
187 super(ProgressBar, self).attach(engine, log_handler, event_name)
188 engine.add_event_handler(closing_event_name, self._close)
189
190
191 class _OutputHandler(BaseOutputHandler):
192 """Helper handler to log engine's output and/or metrics
193
194 Args:
195 description (str): progress bar description.
196 metric_names (list of str, optional): list of metric names to plot or a string "all" to plot all available
197 metrics.
198 output_transform (callable, optional): output transform function to prepare `engine.state.output` as a number.
199 For example, `output_transform = lambda output: output`
200 This function can also return a dictionary, e.g `{'loss': loss1, 'another_loss': loss2}` to label the plot
201 with corresponding keys.
202 closing_event_name: event's name on which the progress bar is closed. Valid events are from
203 :class:`~ignite.engine.Events` or any `event_name` added by
204 :meth:`~ignite.engine.Engine.register_events`.
205
206 """
207
208 def __init__(self, description, metric_names=None, output_transform=None,
209 closing_event_name=Events.EPOCH_COMPLETED):
210 if metric_names is None and output_transform is None:
211 # This helps to avoid 'Either metric_names or output_transform should be defined' of BaseOutputHandler
212 metric_names = []
213 super(_OutputHandler, self).__init__(description, metric_names, output_transform,
214 another_engine=None, global_step_transform=None)
215 self.closing_event_name = closing_event_name
216
217 @staticmethod
218 def get_max_number_events(event_name, engine):
219 if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):
220 return len(engine.state.dataloader)
221 if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):
222 return engine.state.max_epochs
223 return 1
224
225 def __call__(self, engine, logger, event_name):
226
227 pbar_total = self.get_max_number_events(event_name, engine)
228 if logger.pbar is None:
229 logger._reset(pbar_total=pbar_total)
230
231 desc = self.tag
232 max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)
233 if max_num_of_closing_events > 1:
234 global_step = engine.state.get_event_attrib_value(self.closing_event_name)
235 desc += " [{}/{}]".format(global_step, max_num_of_closing_events)
236 logger.pbar.set_description(desc)
237
238 metrics = self._setup_output_metrics(engine)
239
240 rendered_metrics = {}
241 for key, value in metrics.items():
242 if isinstance(value, torch.Tensor):
243 if value.ndimension() == 0:
244 rendered_metrics[key] = value.item()
245 elif value.ndimension() == 1:
246 for i, v in enumerate(value):
247 k = "{}_{}".format(key, i)
248 rendered_metrics[k] = v.item()
249 else:
250 warnings.warn("ProgressBar can not log "
251 "tensor with {} dimensions".format(value.ndimension()))
252 else:
253 rendered_metrics[key] = value
254
255 if rendered_metrics:
256 logger.pbar.set_postfix(**rendered_metrics)
257
258 global_step = engine.state.get_event_attrib_value(event_name)
259 global_step = (global_step - 1) % pbar_total + 1
260 logger.pbar.update(global_step - logger.pbar.n)
261
[end of ignite/contrib/handlers/tqdm_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py
--- a/ignite/contrib/handlers/tqdm_logger.py
+++ b/ignite/contrib/handlers/tqdm_logger.py
@@ -146,7 +146,8 @@
message (str): string you wish to log.
"""
from tqdm import tqdm
- tqdm.write(message, **self.tqdm_kwargs)
+
+ tqdm.write(message, file=self.tqdm_kwargs.get("file", None))
def attach(self, engine, metric_names=None, output_transform=None,
event_name=Events.ITERATION_COMPLETED,
| {"golden_diff": "diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py\n--- a/ignite/contrib/handlers/tqdm_logger.py\n+++ b/ignite/contrib/handlers/tqdm_logger.py\n@@ -146,7 +146,8 @@\n message (str): string you wish to log.\n \"\"\"\n from tqdm import tqdm\n- tqdm.write(message, **self.tqdm_kwargs)\n+\n+ tqdm.write(message, file=self.tqdm_kwargs.get(\"file\", None))\n \n def attach(self, engine, metric_names=None, output_transform=None,\n event_name=Events.ITERATION_COMPLETED,\n", "issue": "log_message() method fails `desc` is passed to contrib tqdm\n## \ud83d\udc1b Bug description\r\n\r\nIf I pass a `desc` argument when instantiating a `ignite.contrib.handlers.ProgressBar` the calls to its `log_message()` method fail with this exception:\r\n\r\n```\r\nTypeError: write() got an unexpected keyword argument 'desc'\r\n```\r\n\r\n## Environment\r\n\r\n - PyTorch Version: 1.3.1\r\n - Ignite Version: 0.3.0\r\n - OS: Linux\r\n - How you installed Ignite (`conda`, `pip`, source): Conda\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport warnings\n\nimport torch\n\nfrom ignite.engine import Events\nfrom ignite.engine.engine import EventWithFilter\nfrom ignite.contrib.handlers.base_logger import BaseLogger, BaseOutputHandler\n\n\nclass ProgressBar(BaseLogger):\n \"\"\"\n TQDM progress bar handler to log training progress and computed metrics.\n\n Args:\n persist (bool, optional): set to ``True`` to persist the progress bar after completion (default = ``False``)\n bar_format (str, optional): Specify a custom bar string formatting. May impact performance.\n [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]'].\n Set to ``None`` to use ``tqdm`` default bar formatting: '{l_bar}{bar}{r_bar}', where\n l_bar='{desc}: {percentage:3.0f}%|' and\n r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. For more details on the\n formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.\n **tqdm_kwargs: kwargs passed to tqdm progress bar.\n By default, progress bar description displays \"Epoch [5/10]\" where 5 is the current epoch and 10 is the\n number of epochs. If tqdm_kwargs defines `desc`, e.g. \"Predictions\", than the description is\n \"Predictions [5/10]\" if number of epochs is more than one otherwise it is simply \"Predictions\".\n\n Examples:\n\n Simple progress bar\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer)\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 [06:17<12:34]\n\n Log output to a file instead of stderr (tqdm's default output)\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n log_file = open(\"output.log\", \"w\")\n pbar = ProgressBar(file=log_file)\n pbar.attach(trainer)\n\n Attach metrics that already have been computed at :attr:`~ignite.engine.Events.ITERATION_COMPLETED`\n (such as :class:`~ignite.metrics.RunningAverage`)\n\n .. 
code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')\n\n pbar = ProgressBar()\n pbar.attach(trainer, ['loss'])\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Directly attach the engine's output\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer, output_transform=lambda x: {'loss': x})\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Note:\n When adding attaching the progress bar to an engine, it is recommend that you replace\n every print operation in the engine's handlers triggered every iteration with\n ``pbar.log_message`` to guarantee the correct format of the stdout.\n\n Note:\n When using inside jupyter notebook, `ProgressBar` automatically uses `tqdm_notebook`. For correct rendering,\n please install `ipywidgets <https://ipywidgets.readthedocs.io/en/stable/user_install.html#installation>`_.\n Due to `tqdm notebook bugs <https://github.com/tqdm/tqdm/issues/594>`_, bar format may be needed to be set\n to an empty string value.\n\n \"\"\"\n\n _events_order = [\n Events.STARTED,\n Events.EPOCH_STARTED,\n Events.ITERATION_STARTED,\n Events.ITERATION_COMPLETED,\n Events.EPOCH_COMPLETED,\n Events.COMPLETED\n ]\n\n def __init__(self, persist=False,\n bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]',\n **tqdm_kwargs):\n\n try:\n from tqdm.autonotebook import tqdm\n except ImportError:\n raise RuntimeError(\"This contrib module requires tqdm to be installed. \"\n \"Please install it with command: \\n pip install tqdm\")\n\n self.pbar_cls = tqdm\n self.pbar = None\n self.persist = persist\n self.bar_format = bar_format\n self.tqdm_kwargs = tqdm_kwargs\n\n def _reset(self, pbar_total):\n self.pbar = self.pbar_cls(\n total=pbar_total,\n leave=self.persist,\n bar_format=self.bar_format,\n initial=1,\n **self.tqdm_kwargs\n )\n\n def _close(self, engine):\n if self.pbar:\n self.pbar.close()\n self.pbar = None\n\n @staticmethod\n def _compare_lt(event1, event2):\n if isinstance(event1, EventWithFilter):\n event1 = event1.event\n if isinstance(event2, EventWithFilter):\n event2 = event2.event\n i1 = ProgressBar._events_order.index(event1)\n i2 = ProgressBar._events_order.index(event2)\n return i1 < i2\n\n def log_message(self, message):\n \"\"\"\n Logs a message, preserving the progress bar correct output format.\n\n Args:\n message (str): string you wish to log.\n \"\"\"\n from tqdm import tqdm\n tqdm.write(message, **self.tqdm_kwargs)\n\n def attach(self, engine, metric_names=None, output_transform=None,\n event_name=Events.ITERATION_COMPLETED,\n closing_event_name=Events.EPOCH_COMPLETED):\n \"\"\"\n Attaches the progress bar to an engine object.\n\n Args:\n engine (Engine): engine object.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): a function to select what you want to print from the engine's\n output. This function may return either a dictionary with entries in the format of ``{name: value}``,\n or a single scalar, which will be displayed with the default name `output`.\n event_name: event's name on which the progress bar advances. 
Valid events are from\n :class:`~ignite.engine.Events`.\n closing_event_name: event's name on which the progress bar is closed. Valid events are from\n :class:`~ignite.engine.Events`.\n\n Note: accepted output value types are numbers, 0d and 1d torch tensors and strings\n\n \"\"\"\n desc = self.tqdm_kwargs.get(\"desc\", \"Epoch\")\n\n if not isinstance(event_name, (Events, EventWithFilter)):\n raise ValueError(\"Logging event should be only `ignite.engine.Events`\")\n\n if isinstance(closing_event_name, EventWithFilter):\n raise ValueError(\"Closing event should not use any event filter\")\n\n if not self._compare_lt(event_name, closing_event_name):\n raise ValueError(\"Logging event {} should be called before closing event {}\"\n .format(event_name, closing_event_name))\n\n log_handler = _OutputHandler(desc, metric_names, output_transform,\n closing_event_name=closing_event_name)\n # if event_name is EventWithFilter, filter is passed here\n super(ProgressBar, self).attach(engine, log_handler, event_name)\n engine.add_event_handler(closing_event_name, self._close)\n\n\nclass _OutputHandler(BaseOutputHandler):\n \"\"\"Helper handler to log engine's output and/or metrics\n\n Args:\n description (str): progress bar description.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): output transform function to prepare `engine.state.output` as a number.\n For example, `output_transform = lambda output: output`\n This function can also return a dictionary, e.g `{'loss': loss1, 'another_loss': loss2}` to label the plot\n with corresponding keys.\n closing_event_name: event's name on which the progress bar is closed. Valid events are from\n :class:`~ignite.engine.Events` or any `event_name` added by\n :meth:`~ignite.engine.Engine.register_events`.\n\n \"\"\"\n\n def __init__(self, description, metric_names=None, output_transform=None,\n closing_event_name=Events.EPOCH_COMPLETED):\n if metric_names is None and output_transform is None:\n # This helps to avoid 'Either metric_names or output_transform should be defined' of BaseOutputHandler\n metric_names = []\n super(_OutputHandler, self).__init__(description, metric_names, output_transform,\n another_engine=None, global_step_transform=None)\n self.closing_event_name = closing_event_name\n\n @staticmethod\n def get_max_number_events(event_name, engine):\n if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):\n return len(engine.state.dataloader)\n if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):\n return engine.state.max_epochs\n return 1\n\n def __call__(self, engine, logger, event_name):\n\n pbar_total = self.get_max_number_events(event_name, engine)\n if logger.pbar is None:\n logger._reset(pbar_total=pbar_total)\n\n desc = self.tag\n max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)\n if max_num_of_closing_events > 1:\n global_step = engine.state.get_event_attrib_value(self.closing_event_name)\n desc += \" [{}/{}]\".format(global_step, max_num_of_closing_events)\n logger.pbar.set_description(desc)\n\n metrics = self._setup_output_metrics(engine)\n\n rendered_metrics = {}\n for key, value in metrics.items():\n if isinstance(value, torch.Tensor):\n if value.ndimension() == 0:\n rendered_metrics[key] = value.item()\n elif value.ndimension() == 1:\n for i, v in enumerate(value):\n k = \"{}_{}\".format(key, i)\n rendered_metrics[k] = v.item()\n else:\n 
warnings.warn(\"ProgressBar can not log \"\n \"tensor with {} dimensions\".format(value.ndimension()))\n else:\n rendered_metrics[key] = value\n\n if rendered_metrics:\n logger.pbar.set_postfix(**rendered_metrics)\n\n global_step = engine.state.get_event_attrib_value(event_name)\n global_step = (global_step - 1) % pbar_total + 1\n logger.pbar.update(global_step - logger.pbar.n)\n", "path": "ignite/contrib/handlers/tqdm_logger.py"}]} | 3,801 | 151 |
gh_patches_debug_63843 | rasdani/github-patches | git_diff | WeblateOrg__weblate-9567 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Microsoft automatic translation fails for Serbian ("sr")
### Describe the issue
For the locale Serbian ("sr"), the automatic translation with Microsoft Translator does not work. There are no "Automatic suggestions", and the "Automatic translation" tool does not get any texts.
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
1. Add Microsoft Translator to Weblate
2. Create a project and component with the language "Serbian" - "sr"
3. Go to `/translate/{project}/{component}/sr/?q=state:<translated` and see that no texts are suggested
### Expected behavior
Automatic suggestions should be shown for Serbian.
### Screenshots
_No response_
### Exception traceback
_No response_
### How do you run Weblate?
Docker container
### Weblate versions
* Weblate: 4.18.2
* Django: 4.2.2
* siphashc: 2.1
* translate-toolkit: 3.9.2
* lxml: 4.9.2
* Pillow: 9.5.0
* nh3: 0.2.13
* python-dateutil: 2.8.2
* social-auth-core: 4.4.2
* social-auth-app-django: 5.2.0
* django-crispy-forms: 2.0
* oauthlib: 3.2.2
* django-compressor: 4.4
* djangorestframework: 3.14.0
* django-filter: 23.2
* django-appconf: 1.0.5
* user-agents: 2.2.0
* filelock: 3.12.2
* rapidfuzz: 3.1.1
* openpyxl: 3.1.2
* celery: 5.3.1
* django-celery-beat: 2.5.0
* kombu: 5.3.1
* translation-finder: 2.15
* weblate-language-data: 2023.5
* html2text: 2020.1.16
* pycairo: 1.24.0
* PyGObject: 3.44.1
* diff-match-patch: 20230430
* requests: 2.31.0
* django-redis: 5.3.0
* hiredis: 2.2.3
* sentry-sdk: 1.26.0
* Cython: 0.29.35
* misaka: 2.1.1
* GitPython: 3.1.31
* borgbackup: 1.2.4
* pyparsing: 3.0.9
* pyahocorasick: 2.0.0
* python-redis-lock: 4.0.0
* charset-normalizer: 3.1.0
* Python: 3.11.4
* Git: 2.30.2
* psycopg2: 2.9.6
* phply: 1.2.6
* ruamel.yaml: 0.17.32
* tesserocr: 2.6.0
* boto3: 1.26.164
* zeep: 4.2.1
* aeidon: 1.12
* iniparse: 0.5
* mysqlclient: 2.2.0
* Mercurial: 6.4.5
* git-svn: 2.30.2
* git-review: 2.3.1
* Redis server: 6.2.12
* PostgreSQL server: 13.10
* Database backends: django.db.backends.postgresql
* Cache backends: default:RedisCache, avatar:FileBasedCache
* Email setup: django.core.mail.backends.smtp.EmailBackend: mailz.porsche.co.at
* OS encoding: filesystem=utf-8, default=utf-8
* Celery: redis://localhost:6379/1, redis://localhost:6379/1, regular
* Platform: Linux 3.10.0-1160.90.1.el7.x86_64 (x86_64)
### Weblate deploy checks
```shell
System check identified some issues:
INFOS:
?: (weblate.I021) Error collection is not set up, it is highly recommended for production use
HINT: https://docs.weblate.org/en/latest/admin/install.html#collecting-errors
System check identified 1 issue (1 silenced).
```
### Additional context
It seems that Microsoft translator treats "sr" as "sr-Latn".
For example:
```
POST https://api-eur.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=sr
Content-Type: application/json
[{"Text":"Hello World!"}]
```
gets the answer
```
[
{
"translations": [
{
"text": "Zdravo svete!",
"to": "sr-Latn"
}
]
}
]
```
I think this has to be added to the `language_map`: https://github.com/WeblateOrg/weblate/blob/5674acc39e21ea092c0d2fba89569b802315595a/weblate/machinery/microsoft.py#L26
</issue>
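Following the reporter's suggestion, the implied change is an extra `language_map` entry that rewrites bare `sr` into a script-qualified code before calling the service. The standalone sketch below is only illustrative; it does not reproduce Weblate's real `map_language_code` chain, and whether bare `sr` should default to `sr-Cyrl` or `sr-Latn` is an open project decision, not something this snippet settles.
```python
# Illustrative mapping: bare "sr" is rewritten to a script-qualified code
# before being sent to the translation API. The default script chosen here
# ("sr-Cyrl") is an assumption, not the confirmed Weblate fix.
LANGUAGE_MAP = {
    "sr": "sr-Cyrl",
    "sr-latn": "sr-Latn",
    "sr-cyrl": "sr-Cyrl",
}


def map_language_code(code: str) -> str:
    # Normalise underscores to hyphens, then look up a service-specific alias.
    code = code.replace("_", "-")
    return LANGUAGE_MAP.get(code.lower(), code)


print(map_language_code("sr"))       # sr-Cyrl
print(map_language_code("sr_Latn"))  # sr-Latn
```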
<code>
[start of weblate/machinery/microsoft.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from __future__ import annotations
6
7 from datetime import timedelta
8
9 from django.conf import settings
10 from django.utils import timezone
11
12 from .base import MachineTranslation, MachineTranslationError
13 from .forms import MicrosoftMachineryForm
14
15 TOKEN_URL = "https://{0}{1}/sts/v1.0/issueToken?Subscription-Key={2}"
16 TOKEN_EXPIRY = timedelta(minutes=9)
17
18
19 class MicrosoftCognitiveTranslation(MachineTranslation):
20 """Microsoft Cognitive Services Translator API support."""
21
22 name = "Microsoft Translator"
23 max_score = 90
24 settings_form = MicrosoftMachineryForm
25
26 language_map = {
27 "zh-hant": "zh-Hant",
28 "zh-hans": "zh-Hans",
29 "zh-tw": "zh-Hant",
30 "zh-cn": "zh-Hans",
31 "tlh": "tlh-Latn",
32 "tlh-qaak": "tlh-Piqd",
33 "nb": "no",
34 "bs-latn": "bs-Latn",
35 "sr-latn": "sr-Latn",
36 "sr-cyrl": "sr-Cyrl",
37 "mn": "mn-Mong",
38 }
39
40 def __init__(self, settings: dict[str, str]):
41 """Check configuration."""
42 super().__init__(settings)
43 self._access_token = None
44 self._token_expiry = None
45
46 # check settings for Microsoft region prefix
47 region = "" if not self.settings["region"] else f"{self.settings['region']}."
48
49 self._cognitive_token_url = TOKEN_URL.format(
50 region,
51 self.settings["endpoint_url"],
52 self.settings["key"],
53 )
54
55 @staticmethod
56 def migrate_settings():
57 return {
58 "region": settings.MT_MICROSOFT_REGION,
59 "endpoint_url": settings.MT_MICROSOFT_ENDPOINT_URL,
60 "base_url": settings.MT_MICROSOFT_BASE_URL,
61 "key": settings.MT_MICROSOFT_COGNITIVE_KEY,
62 }
63
64 def get_url(self, suffix):
65 return f"https://{self.settings['base_url']}/{suffix}"
66
67 def is_token_expired(self):
68 """Check whether token is about to expire."""
69 return self._token_expiry <= timezone.now()
70
71 def get_authentication(self):
72 """Hook for backends to allow add authentication headers to request."""
73 return {"Authorization": f"Bearer {self.access_token}"}
74
75 @property
76 def access_token(self):
77 """Obtain and caches access token."""
78 if self._access_token is None or self.is_token_expired():
79 self._access_token = self.request(
80 "post", self._cognitive_token_url, skip_auth=True
81 ).text
82 self._token_expiry = timezone.now() + TOKEN_EXPIRY
83
84 return self._access_token
85
86 def map_language_code(self, code):
87 """Convert language to service specific code."""
88 return super().map_language_code(code).replace("_", "-")
89
90 def download_languages(self):
91 """
92 Download list of supported languages from a service.
93
94 Example of the response:
95
96 ['af', 'ar', 'bs-Latn', 'bg', 'ca', 'zh-CHS', 'zh-CHT', 'yue', 'hr', 'cs', 'da',
97 'nl', 'en', 'et', 'fj', 'fil', 'fi', 'fr', 'de', 'el', 'ht', 'he', 'hi', 'mww',
98 'h', 'id', 'it', 'ja', 'sw', 'tlh', 'tlh-Qaak', 'ko', 'lv', 'lt', 'mg', 'ms',
99 'mt', 'yua', 'no', 'otq', 'fa', 'pl', 'pt', 'ro', 'r', 'sm', 'sr-Cyrl',
100 'sr-Latn', 'sk', 'sl', 'es', 'sv', 'ty', 'th', 'to', 'tr', 'uk', 'ur', 'vi',
101 'cy']
102 """
103 response = self.request(
104 "get", self.get_url("languages"), params={"api-version": "3.0"}
105 )
106 # Microsoft tends to use utf-8-sig instead of plain utf-8
107 response.encoding = response.apparent_encoding
108 payload = response.json()
109
110 # We should get an object, string usually means an error
111 if isinstance(payload, str):
112 raise MachineTranslationError(payload)
113
114 return payload["translation"].keys()
115
116 def download_translations(
117 self,
118 source,
119 language,
120 text: str,
121 unit,
122 user,
123 threshold: int = 75,
124 ):
125 """Download list of possible translations from a service."""
126 args = {
127 "api-version": "3.0",
128 "from": source,
129 "to": language,
130 "category": "general",
131 }
132 response = self.request(
133 "post", self.get_url("translate"), params=args, json=[{"Text": text[:5000]}]
134 )
135 # Microsoft tends to use utf-8-sig instead of plain utf-8
136 response.encoding = "utf-8-sig"
137 payload = response.json()
138 yield {
139 "text": payload[0]["translations"][0]["text"],
140 "quality": self.max_score,
141 "service": self.name,
142 "source": text,
143 }
144
[end of weblate/machinery/microsoft.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/weblate/machinery/microsoft.py b/weblate/machinery/microsoft.py
--- a/weblate/machinery/microsoft.py
+++ b/weblate/machinery/microsoft.py
@@ -32,6 +32,7 @@
"tlh-qaak": "tlh-Piqd",
"nb": "no",
"bs-latn": "bs-Latn",
+ "sr": "sr-Latn",
"sr-latn": "sr-Latn",
"sr-cyrl": "sr-Cyrl",
"mn": "mn-Mong",
| {"golden_diff": "diff --git a/weblate/machinery/microsoft.py b/weblate/machinery/microsoft.py\n--- a/weblate/machinery/microsoft.py\n+++ b/weblate/machinery/microsoft.py\n@@ -32,6 +32,7 @@\n \"tlh-qaak\": \"tlh-Piqd\",\n \"nb\": \"no\",\n \"bs-latn\": \"bs-Latn\",\n+ \"sr\": \"sr-Latn\",\n \"sr-latn\": \"sr-Latn\",\n \"sr-cyrl\": \"sr-Cyrl\",\n \"mn\": \"mn-Mong\",\n", "issue": "Microsoft automatic translation fails for Serbian (\"sr\")\n### Describe the issue\n\nFor the locale Serbian - \"sr\" the automatic translation with Microsoft Translator does not work. There are no \"Automatic suggestions\" and the \"Automatic translation\" tool does not get any texts.\n\n### I already tried\n\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\n- [X] I've searched for similar issues in this repository.\n\n### Steps to reproduce the behavior\n\n1. Add Microsoft Translator to Weblate\r\n2. Create a project and component with the language \"Serbian\" - \"sr\"\r\n3. Go to `/translate/{project}/{component}/sr/?q=state:<translated` and see that no texts are suggested\n\n### Expected behavior\n\nAutomatic suggestions should be shown for Serbian.\n\n### Screenshots\n\n_No response_\n\n### Exception traceback\n\n_No response_\n\n### How do you run Weblate?\n\nDocker container\n\n### Weblate versions\n\n * Weblate: 4.18.2\r\n * Django: 4.2.2\r\n * siphashc: 2.1\r\n * translate-toolkit: 3.9.2\r\n * lxml: 4.9.2\r\n * Pillow: 9.5.0\r\n * nh3: 0.2.13\r\n * python-dateutil: 2.8.2\r\n * social-auth-core: 4.4.2\r\n * social-auth-app-django: 5.2.0\r\n * django-crispy-forms: 2.0\r\n * oauthlib: 3.2.2\r\n * django-compressor: 4.4\r\n * djangorestframework: 3.14.0\r\n * django-filter: 23.2\r\n * django-appconf: 1.0.5\r\n * user-agents: 2.2.0\r\n * filelock: 3.12.2\r\n * rapidfuzz: 3.1.1\r\n * openpyxl: 3.1.2\r\n * celery: 5.3.1\r\n * django-celery-beat: 2.5.0\r\n * kombu: 5.3.1\r\n * translation-finder: 2.15\r\n * weblate-language-data: 2023.5\r\n * html2text: 2020.1.16\r\n * pycairo: 1.24.0\r\n * PyGObject: 3.44.1\r\n * diff-match-patch: 20230430\r\n * requests: 2.31.0\r\n * django-redis: 5.3.0\r\n * hiredis: 2.2.3\r\n * sentry-sdk: 1.26.0\r\n * Cython: 0.29.35\r\n * misaka: 2.1.1\r\n * GitPython: 3.1.31\r\n * borgbackup: 1.2.4\r\n * pyparsing: 3.0.9\r\n * pyahocorasick: 2.0.0\r\n * python-redis-lock: 4.0.0\r\n * charset-normalizer: 3.1.0\r\n * Python: 3.11.4\r\n * Git: 2.30.2\r\n * psycopg2: 2.9.6\r\n * phply: 1.2.6\r\n * ruamel.yaml: 0.17.32\r\n * tesserocr: 2.6.0\r\n * boto3: 1.26.164\r\n * zeep: 4.2.1\r\n * aeidon: 1.12\r\n * iniparse: 0.5\r\n * mysqlclient: 2.2.0\r\n * Mercurial: 6.4.5\r\n * git-svn: 2.30.2\r\n * git-review: 2.3.1\r\n * Redis server: 6.2.12\r\n * PostgreSQL server: 13.10\r\n * Database backends: django.db.backends.postgresql\r\n * Cache backends: default:RedisCache, avatar:FileBasedCache\r\n * Email setup: django.core.mail.backends.smtp.EmailBackend: mailz.porsche.co.at\r\n * OS encoding: filesystem=utf-8, default=utf-8\r\n * Celery: redis://localhost:6379/1, redis://localhost:6379/1, regular\r\n * Platform: Linux 3.10.0-1160.90.1.el7.x86_64 (x86_64)\n\n### Weblate deploy checks\n\n```shell\nSystem check identified some issues:\r\n\r\nINFOS:\r\n?: (weblate.I021) Error collection is not set up, it is highly recommended for production use\r\n HINT: https://docs.weblate.org/en/latest/admin/install.html#collecting-errors\r\n\r\nSystem check identified 1 issue (1 silenced).\n```\n\n\n### Additional context\n\nIt seems that Microsoft translator treats \"sr\" as 
\"sr-Latn\".\r\n\r\nFor example:\r\n``` \r\nPOST https://api-eur.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=sr\r\nContent-Type: application/json\r\n\r\n[{\"Text\":\"Hello World!\"}]\r\n```\r\n\r\ngets the answer\r\n```\r\n[\r\n {\r\n \"translations\": [\r\n {\r\n \"text\": \"Zdravo svete!\",\r\n \"to\": \"sr-Latn\"\r\n }\r\n ]\r\n }\r\n]\r\n```\r\n\r\nI think this has to be added to the `language_map`: https://github.com/WeblateOrg/weblate/blob/5674acc39e21ea092c0d2fba89569b802315595a/weblate/machinery/microsoft.py#L26\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import annotations\n\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.utils import timezone\n\nfrom .base import MachineTranslation, MachineTranslationError\nfrom .forms import MicrosoftMachineryForm\n\nTOKEN_URL = \"https://{0}{1}/sts/v1.0/issueToken?Subscription-Key={2}\"\nTOKEN_EXPIRY = timedelta(minutes=9)\n\n\nclass MicrosoftCognitiveTranslation(MachineTranslation):\n \"\"\"Microsoft Cognitive Services Translator API support.\"\"\"\n\n name = \"Microsoft Translator\"\n max_score = 90\n settings_form = MicrosoftMachineryForm\n\n language_map = {\n \"zh-hant\": \"zh-Hant\",\n \"zh-hans\": \"zh-Hans\",\n \"zh-tw\": \"zh-Hant\",\n \"zh-cn\": \"zh-Hans\",\n \"tlh\": \"tlh-Latn\",\n \"tlh-qaak\": \"tlh-Piqd\",\n \"nb\": \"no\",\n \"bs-latn\": \"bs-Latn\",\n \"sr-latn\": \"sr-Latn\",\n \"sr-cyrl\": \"sr-Cyrl\",\n \"mn\": \"mn-Mong\",\n }\n\n def __init__(self, settings: dict[str, str]):\n \"\"\"Check configuration.\"\"\"\n super().__init__(settings)\n self._access_token = None\n self._token_expiry = None\n\n # check settings for Microsoft region prefix\n region = \"\" if not self.settings[\"region\"] else f\"{self.settings['region']}.\"\n\n self._cognitive_token_url = TOKEN_URL.format(\n region,\n self.settings[\"endpoint_url\"],\n self.settings[\"key\"],\n )\n\n @staticmethod\n def migrate_settings():\n return {\n \"region\": settings.MT_MICROSOFT_REGION,\n \"endpoint_url\": settings.MT_MICROSOFT_ENDPOINT_URL,\n \"base_url\": settings.MT_MICROSOFT_BASE_URL,\n \"key\": settings.MT_MICROSOFT_COGNITIVE_KEY,\n }\n\n def get_url(self, suffix):\n return f\"https://{self.settings['base_url']}/{suffix}\"\n\n def is_token_expired(self):\n \"\"\"Check whether token is about to expire.\"\"\"\n return self._token_expiry <= timezone.now()\n\n def get_authentication(self):\n \"\"\"Hook for backends to allow add authentication headers to request.\"\"\"\n return {\"Authorization\": f\"Bearer {self.access_token}\"}\n\n @property\n def access_token(self):\n \"\"\"Obtain and caches access token.\"\"\"\n if self._access_token is None or self.is_token_expired():\n self._access_token = self.request(\n \"post\", self._cognitive_token_url, skip_auth=True\n ).text\n self._token_expiry = timezone.now() + TOKEN_EXPIRY\n\n return self._access_token\n\n def map_language_code(self, code):\n \"\"\"Convert language to service specific code.\"\"\"\n return super().map_language_code(code).replace(\"_\", \"-\")\n\n def download_languages(self):\n \"\"\"\n Download list of supported languages from a service.\n\n Example of the response:\n\n ['af', 'ar', 'bs-Latn', 'bg', 'ca', 'zh-CHS', 'zh-CHT', 'yue', 'hr', 'cs', 'da',\n 'nl', 'en', 'et', 'fj', 'fil', 'fi', 'fr', 'de', 'el', 'ht', 'he', 'hi', 'mww',\n 'h', 'id', 'it', 'ja', 'sw', 'tlh', 'tlh-Qaak', 'ko', 'lv', 'lt', 'mg', 'ms',\n 'mt', 
'yua', 'no', 'otq', 'fa', 'pl', 'pt', 'ro', 'r', 'sm', 'sr-Cyrl',\n 'sr-Latn', 'sk', 'sl', 'es', 'sv', 'ty', 'th', 'to', 'tr', 'uk', 'ur', 'vi',\n 'cy']\n \"\"\"\n response = self.request(\n \"get\", self.get_url(\"languages\"), params={\"api-version\": \"3.0\"}\n )\n # Microsoft tends to use utf-8-sig instead of plain utf-8\n response.encoding = response.apparent_encoding\n payload = response.json()\n\n # We should get an object, string usually means an error\n if isinstance(payload, str):\n raise MachineTranslationError(payload)\n\n return payload[\"translation\"].keys()\n\n def download_translations(\n self,\n source,\n language,\n text: str,\n unit,\n user,\n threshold: int = 75,\n ):\n \"\"\"Download list of possible translations from a service.\"\"\"\n args = {\n \"api-version\": \"3.0\",\n \"from\": source,\n \"to\": language,\n \"category\": \"general\",\n }\n response = self.request(\n \"post\", self.get_url(\"translate\"), params=args, json=[{\"Text\": text[:5000]}]\n )\n # Microsoft tends to use utf-8-sig instead of plain utf-8\n response.encoding = \"utf-8-sig\"\n payload = response.json()\n yield {\n \"text\": payload[0][\"translations\"][0][\"text\"],\n \"quality\": self.max_score,\n \"service\": self.name,\n \"source\": text,\n }\n", "path": "weblate/machinery/microsoft.py"}]} | 3,352 | 135 |
gh_patches_debug_27689 | rasdani/github-patches | git_diff | python-discord__site-1007 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add some more common abbreviations to the rule keyword command list
I believe it would be beneficial to add two more shorthands for invoking the 'Rules' embed: "hw" and "eng". "hw" is a common shorthand for "homework", so it should be associated with the embed for rule 8. Likewise, "eng" is a common abbreviation for "English", so it can be linked with rule 4.
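For illustration only, a sketch against the keyword tuples in `pydis_site/apps/api/views.py`, with the rule text itself left untouched, the two keyword lists would become:

```python
# Rule 4 (use English): add the "eng" abbreviation.
["english", "eng", "language"]

# Rule 8 (exams/homework): add the "hw" abbreviation.
["exam", "exams", "assignment", "assignments", "homework", "hw"]
```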
</issue>
<code>
[start of pydis_site/apps/api/views.py]
1 from rest_framework.exceptions import ParseError
2 from rest_framework.request import Request
3 from rest_framework.response import Response
4 from rest_framework.views import APIView
5
6 from . import github_utils
7
8
9 class HealthcheckView(APIView):
10 """
11 Provides a simple view to check that the website is alive and well.
12
13 ## Routes
14 ### GET /healthcheck
15 Returns a simple JSON document showcasing whether the system is working:
16
17 >>> {
18 ... 'status': 'ok'
19 ... }
20
21 Seems to be.
22
23 ## Authentication
24 Does not require any authentication nor permissions.
25 """
26
27 authentication_classes = ()
28 permission_classes = ()
29
30 def get(self, request, format=None): # noqa: D102,ANN001,ANN201
31 return Response({'status': 'ok'})
32
33
34 class RulesView(APIView):
35 """
36 Return a list of the server's rules.
37
38 ## Routes
39 ### GET /rules
40 Returns a JSON array containing the server's rules
41 and keywords relating to each rule.
42 Example response:
43
44 >>> [
45 ... ["Eat candy.", ["candy", "sweets"]],
46 ... ["Wake up at 4 AM.", ["wake_up", "early", "early_bird"]],
47 ... ["Take your medicine.", ["medicine", "health"]]
48 ... ]
49
50 Since some of the the rules require links, this view
51 gives you the option to return rules in either Markdown
52 or HTML format by specifying the `link_format` query parameter
53 as either `md` or `html`. Specifying a different value than
54 `md` or `html` will return 400.
55
56 ## Authentication
57 Does not require any authentication nor permissions.
58 """
59
60 authentication_classes = ()
61 permission_classes = ()
62
63 @staticmethod
64 def _format_link(description: str, link: str, target: str) -> str:
65 """
66 Build the markup for rendering the link.
67
68 This will render `link` with `description` as its description in the given
69 `target` language.
70
71 Arguments:
72 description (str):
73 A textual description of the string. Represents the content
74 between the `<a>` tags in HTML, or the content between the
75 array brackets in Markdown.
76
77 link (str):
78 The resulting link that a user should be redirected to
79 upon clicking the generated element.
80
81 target (str):
82 One of `{'md', 'html'}`, denoting the target format that the
83 link should be rendered in.
84
85 Returns:
86 str:
87 The link, rendered appropriately for the given `target` format
88 using `description` as its textual description.
89
90 Raises:
91 ValueError:
92 If `target` is not `'md'` or `'html'`.
93 """
94 if target == 'html':
95 return f'<a href="{link}">{description}</a>'
96 elif target == 'md': # noqa: RET505
97 return f'[{description}]({link})'
98 else:
99 raise ValueError(
100 f"Can only template links to `html` or `md`, got `{target}`"
101 )
102
103 # `format` here is the result format, we have a link format here instead.
104 def get(self, request, format=None): # noqa: ANN001, ANN201
105 """
106 Returns a list of our community rules coupled with their keywords.
107
108 Each item in the returned list is a tuple with the rule as first item
109 and a list of keywords that match that rules as second item.
110 """
111 link_format = request.query_params.get('link_format', 'md')
112 if link_format not in ('html', 'md'):
113 raise ParseError(
114 f"`format` must be `html` or `md`, got `{format}`."
115 )
116
117 discord_community_guidelines = self._format_link(
118 'Discord Community Guidelines',
119 'https://discordapp.com/guidelines',
120 link_format
121 )
122 discord_tos = self._format_link(
123 'Terms of Service',
124 'https://discordapp.com/terms',
125 link_format
126 )
127 pydis_coc = self._format_link(
128 'Python Discord Code of Conduct',
129 'https://pythondiscord.com/pages/code-of-conduct/',
130 link_format
131 )
132
133 return Response([
134 (
135 f"Follow the {pydis_coc}.",
136 ["coc", "conduct", "code"]
137 ),
138 (
139 f"Follow the {discord_community_guidelines} and {discord_tos}.",
140 ["discord", "guidelines", "discord_tos"]
141 ),
142 (
143 "Respect staff members and listen to their instructions.",
144 ["respect", "staff", "instructions"]
145 ),
146 (
147 "Use English to the best of your ability. "
148 "Be polite if someone speaks English imperfectly.",
149 ["english", "language"]
150 ),
151 (
152 "Do not provide or request help on projects that may violate terms of service, "
153 "or that may be deemed inappropriate, malicious, or illegal.",
154 ["infraction", "tos", "breach", "malicious", "inappropriate", "illegal"]
155 ),
156 (
157 "Do not post unapproved advertising.",
158 ["ad", "ads", "advert", "advertising"]
159 ),
160 (
161 "Keep discussions relevant to the channel topic. "
162 "Each channel's description tells you the topic.",
163 ["off-topic", "topic", "relevance"]
164 ),
165 (
166 "Do not help with ongoing exams. When helping with homework, "
167 "help people learn how to do the assignment without doing it for them.",
168 ["exam", "exams", "assignment", "assignments", "homework"]
169 ),
170 (
171 "Do not offer or ask for paid work of any kind.",
172 ["paid", "work", "money"]
173 ),
174 (
175 "Do not copy and paste answers from ChatGPT or similar AI tools.",
176 ["gpt", "chatgpt", "gpt3", "ai"]
177 ),
178 ])
179
180
181 class GitHubArtifactsView(APIView):
182 """
183 Provides utilities for interacting with the GitHub API and obtaining action artifacts.
184
185 ## Routes
186 ### GET /github/artifacts
187 Returns a download URL for the artifact requested.
188
189 {
190 'url': 'https://pipelines.actions.githubusercontent.com/...'
191 }
192
193 ### Exceptions
194 In case of an error, the following body will be returned:
195
196 {
197 "error_type": "<error class name>",
198 "error": "<error description>",
199 "requested_resource": "<owner>/<repo>/<sha>/<artifact_name>"
200 }
201
202 ## Authentication
203 Does not require any authentication nor permissions.
204 """
205
206 authentication_classes = ()
207 permission_classes = ()
208
209 def get(
210 self,
211 request: Request,
212 *,
213 owner: str,
214 repo: str,
215 sha: str,
216 action_name: str,
217 artifact_name: str
218 ) -> Response:
219 """Return a download URL for the requested artifact."""
220 try:
221 url = github_utils.get_artifact(owner, repo, sha, action_name, artifact_name)
222 return Response({"url": url})
223 except github_utils.ArtifactProcessingError as e:
224 return Response({
225 "error_type": e.__class__.__name__,
226 "error": str(e),
227 "requested_resource": f"{owner}/{repo}/{sha}/{action_name}/{artifact_name}"
228 }, status=e.status)
229
[end of pydis_site/apps/api/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pydis_site/apps/api/views.py b/pydis_site/apps/api/views.py
--- a/pydis_site/apps/api/views.py
+++ b/pydis_site/apps/api/views.py
@@ -146,7 +146,7 @@
(
"Use English to the best of your ability. "
"Be polite if someone speaks English imperfectly.",
- ["english", "language"]
+ ["english", "eng", "language"]
),
(
"Do not provide or request help on projects that may violate terms of service, "
@@ -165,15 +165,15 @@
(
"Do not help with ongoing exams. When helping with homework, "
"help people learn how to do the assignment without doing it for them.",
- ["exam", "exams", "assignment", "assignments", "homework"]
+ ["exam", "exams", "assignment", "assignments", "homework", "hw"]
),
(
"Do not offer or ask for paid work of any kind.",
["paid", "work", "money"]
),
(
- "Do not copy and paste answers from ChatGPT or similar AI tools.",
- ["gpt", "chatgpt", "gpt3", "ai"]
+ "Do not copy and paste answers from ChatGPT or similar AI tools.",
+ ["gpt", "chatgpt", "gpt3", "ai"]
),
])
| {"golden_diff": "diff --git a/pydis_site/apps/api/views.py b/pydis_site/apps/api/views.py\n--- a/pydis_site/apps/api/views.py\n+++ b/pydis_site/apps/api/views.py\n@@ -146,7 +146,7 @@\n (\n \"Use English to the best of your ability. \"\n \"Be polite if someone speaks English imperfectly.\",\n- [\"english\", \"language\"]\n+ [\"english\", \"eng\", \"language\"]\n ),\n (\n \"Do not provide or request help on projects that may violate terms of service, \"\n@@ -165,15 +165,15 @@\n (\n \"Do not help with ongoing exams. When helping with homework, \"\n \"help people learn how to do the assignment without doing it for them.\",\n- [\"exam\", \"exams\", \"assignment\", \"assignments\", \"homework\"]\n+ [\"exam\", \"exams\", \"assignment\", \"assignments\", \"homework\", \"hw\"]\n ),\n (\n \"Do not offer or ask for paid work of any kind.\",\n [\"paid\", \"work\", \"money\"]\n ),\n (\n- \"Do not copy and paste answers from ChatGPT or similar AI tools.\",\n- [\"gpt\", \"chatgpt\", \"gpt3\", \"ai\"]\n+ \"Do not copy and paste answers from ChatGPT or similar AI tools.\",\n+ [\"gpt\", \"chatgpt\", \"gpt3\", \"ai\"]\n ),\n ])\n", "issue": "Add some more common abbreviations to the rule keyword command list\nI believe it would be beneficial to add two more shorthands for invoking the 'Rules' embed: \"hw\" and \"eng\". \"hw\" is a common shorthand for \"homework\", so it should be associated with the embed for rule 8. Likewise, \"eng\" is a common abbreviation for \"English\", so it can be linked with rule 4.\n", "before_files": [{"content": "from rest_framework.exceptions import ParseError\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom . import github_utils\n\n\nclass HealthcheckView(APIView):\n \"\"\"\n Provides a simple view to check that the website is alive and well.\n\n ## Routes\n ### GET /healthcheck\n Returns a simple JSON document showcasing whether the system is working:\n\n >>> {\n ... 'status': 'ok'\n ... }\n\n Seems to be.\n\n ## Authentication\n Does not require any authentication nor permissions.\n \"\"\"\n\n authentication_classes = ()\n permission_classes = ()\n\n def get(self, request, format=None): # noqa: D102,ANN001,ANN201\n return Response({'status': 'ok'})\n\n\nclass RulesView(APIView):\n \"\"\"\n Return a list of the server's rules.\n\n ## Routes\n ### GET /rules\n Returns a JSON array containing the server's rules\n and keywords relating to each rule.\n Example response:\n\n >>> [\n ... [\"Eat candy.\", [\"candy\", \"sweets\"]],\n ... [\"Wake up at 4 AM.\", [\"wake_up\", \"early\", \"early_bird\"]],\n ... [\"Take your medicine.\", [\"medicine\", \"health\"]]\n ... ]\n\n Since some of the the rules require links, this view\n gives you the option to return rules in either Markdown\n or HTML format by specifying the `link_format` query parameter\n as either `md` or `html`. Specifying a different value than\n `md` or `html` will return 400.\n\n ## Authentication\n Does not require any authentication nor permissions.\n \"\"\"\n\n authentication_classes = ()\n permission_classes = ()\n\n @staticmethod\n def _format_link(description: str, link: str, target: str) -> str:\n \"\"\"\n Build the markup for rendering the link.\n\n This will render `link` with `description` as its description in the given\n `target` language.\n\n Arguments:\n description (str):\n A textual description of the string. 
Represents the content\n between the `<a>` tags in HTML, or the content between the\n array brackets in Markdown.\n\n link (str):\n The resulting link that a user should be redirected to\n upon clicking the generated element.\n\n target (str):\n One of `{'md', 'html'}`, denoting the target format that the\n link should be rendered in.\n\n Returns:\n str:\n The link, rendered appropriately for the given `target` format\n using `description` as its textual description.\n\n Raises:\n ValueError:\n If `target` is not `'md'` or `'html'`.\n \"\"\"\n if target == 'html':\n return f'<a href=\"{link}\">{description}</a>'\n elif target == 'md': # noqa: RET505\n return f'[{description}]({link})'\n else:\n raise ValueError(\n f\"Can only template links to `html` or `md`, got `{target}`\"\n )\n\n # `format` here is the result format, we have a link format here instead.\n def get(self, request, format=None): # noqa: ANN001, ANN201\n \"\"\"\n Returns a list of our community rules coupled with their keywords.\n\n Each item in the returned list is a tuple with the rule as first item\n and a list of keywords that match that rules as second item.\n \"\"\"\n link_format = request.query_params.get('link_format', 'md')\n if link_format not in ('html', 'md'):\n raise ParseError(\n f\"`format` must be `html` or `md`, got `{format}`.\"\n )\n\n discord_community_guidelines = self._format_link(\n 'Discord Community Guidelines',\n 'https://discordapp.com/guidelines',\n link_format\n )\n discord_tos = self._format_link(\n 'Terms of Service',\n 'https://discordapp.com/terms',\n link_format\n )\n pydis_coc = self._format_link(\n 'Python Discord Code of Conduct',\n 'https://pythondiscord.com/pages/code-of-conduct/',\n link_format\n )\n\n return Response([\n (\n f\"Follow the {pydis_coc}.\",\n [\"coc\", \"conduct\", \"code\"]\n ),\n (\n f\"Follow the {discord_community_guidelines} and {discord_tos}.\",\n [\"discord\", \"guidelines\", \"discord_tos\"]\n ),\n (\n \"Respect staff members and listen to their instructions.\",\n [\"respect\", \"staff\", \"instructions\"]\n ),\n (\n \"Use English to the best of your ability. \"\n \"Be polite if someone speaks English imperfectly.\",\n [\"english\", \"language\"]\n ),\n (\n \"Do not provide or request help on projects that may violate terms of service, \"\n \"or that may be deemed inappropriate, malicious, or illegal.\",\n [\"infraction\", \"tos\", \"breach\", \"malicious\", \"inappropriate\", \"illegal\"]\n ),\n (\n \"Do not post unapproved advertising.\",\n [\"ad\", \"ads\", \"advert\", \"advertising\"]\n ),\n (\n \"Keep discussions relevant to the channel topic. \"\n \"Each channel's description tells you the topic.\",\n [\"off-topic\", \"topic\", \"relevance\"]\n ),\n (\n \"Do not help with ongoing exams. 
When helping with homework, \"\n \"help people learn how to do the assignment without doing it for them.\",\n [\"exam\", \"exams\", \"assignment\", \"assignments\", \"homework\"]\n ),\n (\n \"Do not offer or ask for paid work of any kind.\",\n [\"paid\", \"work\", \"money\"]\n ),\n (\n \"Do not copy and paste answers from ChatGPT or similar AI tools.\",\n [\"gpt\", \"chatgpt\", \"gpt3\", \"ai\"]\n ),\n ])\n\n\nclass GitHubArtifactsView(APIView):\n \"\"\"\n Provides utilities for interacting with the GitHub API and obtaining action artifacts.\n\n ## Routes\n ### GET /github/artifacts\n Returns a download URL for the artifact requested.\n\n {\n 'url': 'https://pipelines.actions.githubusercontent.com/...'\n }\n\n ### Exceptions\n In case of an error, the following body will be returned:\n\n {\n \"error_type\": \"<error class name>\",\n \"error\": \"<error description>\",\n \"requested_resource\": \"<owner>/<repo>/<sha>/<artifact_name>\"\n }\n\n ## Authentication\n Does not require any authentication nor permissions.\n \"\"\"\n\n authentication_classes = ()\n permission_classes = ()\n\n def get(\n self,\n request: Request,\n *,\n owner: str,\n repo: str,\n sha: str,\n action_name: str,\n artifact_name: str\n ) -> Response:\n \"\"\"Return a download URL for the requested artifact.\"\"\"\n try:\n url = github_utils.get_artifact(owner, repo, sha, action_name, artifact_name)\n return Response({\"url\": url})\n except github_utils.ArtifactProcessingError as e:\n return Response({\n \"error_type\": e.__class__.__name__,\n \"error\": str(e),\n \"requested_resource\": f\"{owner}/{repo}/{sha}/{action_name}/{artifact_name}\"\n }, status=e.status)\n", "path": "pydis_site/apps/api/views.py"}]} | 2,814 | 320 |
gh_patches_debug_39336 | rasdani/github-patches | git_diff | chainer__chainer-4347 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
2nd order derivative of ELU should not give NaN
The 2nd order derivative of ELU gives off NaN about once out of hundreds of iterations. Then the entire network will instantly be contaminated with NaN.
I tracked down the cause to the following code in backward() of class ELUGrad in chainer/chainer/functions/activation/elu.py:
```
if 1 in indexes:
ret.append(ggxgx / gy)
```
It is natural that this division will give NaN if some element of gy is zero. Zero will occur when the single-precision floating point subtraction underflows.
For your information, I am circumventing the issue by making a minor modification to elu.py, as in the attached file, which calculates the derivative the same way as mathematical functions like F.exp.
[elu.py.txt](https://github.com/chainer/chainer/files/1683829/elu.py.txt)
How to reproduce:
I am using Chainer 3.2.0, but that part of the ELU source code is unchanged in v4.0, so I think this issue persists across versions.
```
>>> import chainer
>>> import numpy as np
>>> x = chainer.Variable(np.array([[0, 0]],dtype=np.float32))
>>> y = chainer.functions.elu(x)
>>> y
variable([[ 0., 0.]])
>>>
>>> y.grad = (np.array([[0, 1e-30]],dtype=np.float32))
>>> y.backward(enable_double_backprop=True)
>>>
>>> x.grad_var.grad = np.array([[1, 1]],dtype=np.float32)
>>> x.grad_var.backward()
/home/mogami/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/functions/math/basic_math.py:322: RuntimeWarning: invalid value encountered in true_divide
return utils.force_array(x[0] / x[1]),
>>> y.grad_var
variable([[ 0.00000000e+00, 1.00000000e-30]])
>>> y.grad_var.grad
array([[ nan, 1.]], dtype=float32)
```
The first element is nan, though it should be 1.0 in this case. This example may seem silly when considering ELU alone, but zeros in some of the elements often happen when dy is back-propagated from somewhere else and underflows.
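To make the idea behind the workaround concrete (this is only a sketch in plain NumPy, not the attached file and not an exact patch), the local derivative can be computed from `x` alone, so the second-order backward never needs to divide by `gy`:

```python
import numpy as np

def elu_grad_factor(x, alpha=1.0):
    """dELU/dx computed from x alone: 1 for x >= 0, alpha*exp(x) for x < 0."""
    f = np.ones_like(x)
    neg = x < 0
    f[neg] = alpha * np.exp(x[neg])
    return f

# First-order backward:   gx = gy * elu_grad_factor(x)
# Second-order backward:  the gradient w.r.t. gy is ggx * elu_grad_factor(x),
# which is mathematically ggx * gx / gy but is computed without the division,
# so gy == 0 no longer produces NaN.
```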
</issue>
<code>
[start of chainer/functions/activation/elu.py]
1 import numpy
2
3 from chainer.backends import cuda
4 from chainer import function_node
5 from chainer.utils import type_check
6
7
8 class ELU(function_node.FunctionNode):
9
10 """Exponential Linear Unit."""
11
12 def __init__(self, alpha=1.0):
13 self.alpha = float(alpha)
14
15 def check_type_forward(self, in_types):
16 type_check.expect(in_types.size() == 1)
17 x_type, = in_types
18
19 type_check.expect(x_type.dtype.kind == 'f')
20
21 def forward_cpu(self, x):
22 self.retain_inputs((0,))
23 y = x[0].copy()
24 neg_indices = x[0] < 0
25 y[neg_indices] = self.alpha * (numpy.exp(y[neg_indices]) - 1)
26 return y,
27
28 def forward_gpu(self, x):
29 self.retain_inputs((0,))
30 y = cuda.elementwise(
31 'T x, T alpha', 'T y',
32 'y = x >= 0 ? x : (T)(alpha * (exp(x) - 1))',
33 'elu_fwd')(
34 x[0], self.alpha)
35 return y,
36
37 def backward(self, indexes, grad_outputs):
38 x, = self.get_retained_inputs()
39 gy, = grad_outputs
40 return ELUGrad(self.alpha).apply((x, gy))
41
42
43 class ELUGrad(function_node.FunctionNode):
44
45 """Exponential Linear Unit gradient function."""
46
47 def __init__(self, alpha):
48 self.alpha = alpha
49
50 def check_type_forward(self, in_types):
51 type_check.expect(in_types.size() == 2)
52 type_check.expect(in_types[0].dtype.kind == 'f')
53 type_check.expect(in_types[1].dtype.kind == 'f')
54
55 def forward_cpu(self, inputs):
56 x, gy = inputs
57 gx = gy.copy()
58 neg_indices = x < 0
59 gx[neg_indices] *= self.alpha * numpy.exp(x[neg_indices])
60 self.retain_inputs((0, 1))
61 self.retain_outputs((0,))
62 return gx,
63
64 def forward_gpu(self, inputs):
65 x, gy = inputs
66 gx = cuda.elementwise(
67 'T x, T gy, T alpha', 'T gx',
68 'gx = x >= 0 ? gy : (T)(gy * alpha * exp(x))',
69 'elu_bwd')(
70 x, gy, self.alpha)
71 self.retain_inputs((0, 1))
72 self.retain_outputs((0,))
73 return gx,
74
75 def backward(self, indexes, grad_outputs):
76 x, gy = self.get_retained_inputs()
77 gx, = self.get_retained_outputs()
78 ggx, = grad_outputs
79 ggxgx = ggx * gx
80
81 ret = []
82 if 0 in indexes:
83 ret.append(ggxgx * (x.data < 0))
84 if 1 in indexes:
85 ret.append(ggxgx / gy)
86
87 return ret
88
89
90 def elu(x, alpha=1.0):
91 """Exponential Linear Unit function.
92
93 For a parameter :math:`\\alpha`, it is expressed as
94
95 .. math::
96 f(x) = \\left \\{ \\begin{array}{ll}
97 x & {\\rm if}~ x \\ge 0 \\\\
98 \\alpha (\\exp(x) - 1) & {\\rm if}~ x < 0,
99 \\end{array} \\right.
100
101 See: https://arxiv.org/abs/1511.07289
102
103 Args:
104 x (:class:`~chainer.Variable` or :class:`numpy.ndarray` or \
105 :class:`cupy.ndarray`):
106 Input variable. A :math:`(s_1, s_2, ..., s_N)`-shaped float array.
107 alpha (float): Parameter :math:`\\alpha`. Default is 1.0.
108
109 Returns:
110 ~chainer.Variable: Output variable. A
111 :math:`(s_1, s_2, ..., s_N)`-shaped float array.
112
113 .. admonition:: Example
114
115 >>> x = np.array([[-1, 0], [2, -3]], 'f')
116 >>> x
117 array([[-1., 0.],
118 [ 2., -3.]], dtype=float32)
119 >>> y = F.elu(x, alpha=1.)
120 >>> y.data
121 array([[-0.63212055, 0. ],
122 [ 2. , -0.95021296]], dtype=float32)
123
124 """
125 return ELU(alpha=alpha).apply((x,))[0]
126
[end of chainer/functions/activation/elu.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/activation/elu.py b/chainer/functions/activation/elu.py
--- a/chainer/functions/activation/elu.py
+++ b/chainer/functions/activation/elu.py
@@ -22,14 +22,14 @@
self.retain_inputs((0,))
y = x[0].copy()
neg_indices = x[0] < 0
- y[neg_indices] = self.alpha * (numpy.exp(y[neg_indices]) - 1)
+ y[neg_indices] = self.alpha * (numpy.expm1(y[neg_indices]))
return y,
def forward_gpu(self, x):
self.retain_inputs((0,))
y = cuda.elementwise(
'T x, T alpha', 'T y',
- 'y = x >= 0 ? x : (T)(alpha * (exp(x) - 1))',
+ 'y = x >= 0 ? x : (T)(alpha * expm1(x))',
'elu_fwd')(
x[0], self.alpha)
return y,
@@ -37,7 +37,7 @@
def backward(self, indexes, grad_outputs):
x, = self.get_retained_inputs()
gy, = grad_outputs
- return ELUGrad(self.alpha).apply((x, gy))
+ return ELUGrad(self.alpha).apply((x,))[0] * gy,
class ELUGrad(function_node.FunctionNode):
@@ -48,43 +48,34 @@
self.alpha = alpha
def check_type_forward(self, in_types):
- type_check.expect(in_types.size() == 2)
+ type_check.expect(in_types.size() == 1)
type_check.expect(in_types[0].dtype.kind == 'f')
- type_check.expect(in_types[1].dtype.kind == 'f')
def forward_cpu(self, inputs):
- x, gy = inputs
- gx = gy.copy()
+ x, = inputs
+ gx = numpy.ones_like(x)
neg_indices = x < 0
gx[neg_indices] *= self.alpha * numpy.exp(x[neg_indices])
- self.retain_inputs((0, 1))
+ self.retain_inputs((0,))
self.retain_outputs((0,))
return gx,
def forward_gpu(self, inputs):
- x, gy = inputs
+ x, = inputs
gx = cuda.elementwise(
- 'T x, T gy, T alpha', 'T gx',
- 'gx = x >= 0 ? gy : (T)(gy * alpha * exp(x))',
+ 'T x, T alpha', 'T gx',
+ 'gx = x >= 0 ? (T)1 : (T)(alpha * exp(x))',
'elu_bwd')(
- x, gy, self.alpha)
- self.retain_inputs((0, 1))
+ x, self.alpha)
+ self.retain_inputs((0,))
self.retain_outputs((0,))
return gx,
def backward(self, indexes, grad_outputs):
- x, gy = self.get_retained_inputs()
+ x, = self.get_retained_inputs()
gx, = self.get_retained_outputs()
ggx, = grad_outputs
- ggxgx = ggx * gx
-
- ret = []
- if 0 in indexes:
- ret.append(ggxgx * (x.data < 0))
- if 1 in indexes:
- ret.append(ggxgx / gy)
-
- return ret
+ return ggx * gx * (x.data < 0),
def elu(x, alpha=1.0):
| {"golden_diff": "diff --git a/chainer/functions/activation/elu.py b/chainer/functions/activation/elu.py\n--- a/chainer/functions/activation/elu.py\n+++ b/chainer/functions/activation/elu.py\n@@ -22,14 +22,14 @@\n self.retain_inputs((0,))\n y = x[0].copy()\n neg_indices = x[0] < 0\n- y[neg_indices] = self.alpha * (numpy.exp(y[neg_indices]) - 1)\n+ y[neg_indices] = self.alpha * (numpy.expm1(y[neg_indices]))\n return y,\n \n def forward_gpu(self, x):\n self.retain_inputs((0,))\n y = cuda.elementwise(\n 'T x, T alpha', 'T y',\n- 'y = x >= 0 ? x : (T)(alpha * (exp(x) - 1))',\n+ 'y = x >= 0 ? x : (T)(alpha * expm1(x))',\n 'elu_fwd')(\n x[0], self.alpha)\n return y,\n@@ -37,7 +37,7 @@\n def backward(self, indexes, grad_outputs):\n x, = self.get_retained_inputs()\n gy, = grad_outputs\n- return ELUGrad(self.alpha).apply((x, gy))\n+ return ELUGrad(self.alpha).apply((x,))[0] * gy,\n \n \n class ELUGrad(function_node.FunctionNode):\n@@ -48,43 +48,34 @@\n self.alpha = alpha\n \n def check_type_forward(self, in_types):\n- type_check.expect(in_types.size() == 2)\n+ type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].dtype.kind == 'f')\n- type_check.expect(in_types[1].dtype.kind == 'f')\n \n def forward_cpu(self, inputs):\n- x, gy = inputs\n- gx = gy.copy()\n+ x, = inputs\n+ gx = numpy.ones_like(x)\n neg_indices = x < 0\n gx[neg_indices] *= self.alpha * numpy.exp(x[neg_indices])\n- self.retain_inputs((0, 1))\n+ self.retain_inputs((0,))\n self.retain_outputs((0,))\n return gx,\n \n def forward_gpu(self, inputs):\n- x, gy = inputs\n+ x, = inputs\n gx = cuda.elementwise(\n- 'T x, T gy, T alpha', 'T gx',\n- 'gx = x >= 0 ? gy : (T)(gy * alpha * exp(x))',\n+ 'T x, T alpha', 'T gx',\n+ 'gx = x >= 0 ? (T)1 : (T)(alpha * exp(x))',\n 'elu_bwd')(\n- x, gy, self.alpha)\n- self.retain_inputs((0, 1))\n+ x, self.alpha)\n+ self.retain_inputs((0,))\n self.retain_outputs((0,))\n return gx,\n \n def backward(self, indexes, grad_outputs):\n- x, gy = self.get_retained_inputs()\n+ x, = self.get_retained_inputs()\n gx, = self.get_retained_outputs()\n ggx, = grad_outputs\n- ggxgx = ggx * gx\n-\n- ret = []\n- if 0 in indexes:\n- ret.append(ggxgx * (x.data < 0))\n- if 1 in indexes:\n- ret.append(ggxgx / gy)\n-\n- return ret\n+ return ggx * gx * (x.data < 0),\n \n \n def elu(x, alpha=1.0):\n", "issue": "2nd order derivative of ELU should not give NaN\nThe 2nd order derivative of ELU gives off NaN about once out of hundreds of iterations. Then the entire network will instantly contaminated with NaN.\r\n\r\nI tracked the cause and the following code in backward() of class ELUGrad in chainer/chainer/functions/activation/elu.py is the cause\r\n```\r\n if 1 in indexes:\r\n ret.append(ggxgx / gy)\r\n```\r\n\r\nIt is natural that this division will give NaN if some element of gy is zero. 
Zero will occur when the single-precision floating point subtraction underflows.\r\n\r\nFor your information, I am circumventing the issue by making minor modification to elu.py as in the attached file, which uses the same way to calculate derivative as the mathematical functions like F.exp.\r\n[elu.py.txt](https://github.com/chainer/chainer/files/1683829/elu.py.txt)\r\n\r\n\r\nHow to reproduce:\r\nI am using Chainer 3.2.0, but that part of the ELU source code is not different in v4.0, therefore I think this issue persists over the versions.\r\n```\r\n>>> import chainer\r\n>>> import numpy as np\r\n>>> x = chainer.Variable(np.array([[0, 0]],dtype=np.float32))\r\n>>> y = chainer.functions.elu(x)\r\n>>> y\r\nvariable([[ 0., 0.]])\r\n>>> \r\n>>> y.grad = (np.array([[0, 1e-30]],dtype=np.float32))\r\n>>> y.backward(enable_double_backprop=True)\r\n>>> \r\n>>> x.grad_var.grad = np.array([[1, 1]],dtype=np.float32)\r\n>>> x.grad_var.backward()\r\n/home/mogami/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/functions/math/basic_math.py:322: RuntimeWarning: invalid value encountered in true_divide\r\n return utils.force_array(x[0] / x[1]),\r\n>>> y.grad_var\r\nvariable([[ 0.00000000e+00, 1.00000000e-30]])\r\n>>> y.grad_var.grad\r\narray([[ nan, 1.]], dtype=float32)\r\n```\r\nThe first element is nan, though it should be 1.0 in this case. This example may seem silly when considering ELU only, but having zero for some of elements often happens when dy is back propagated from somewhere else because of underflow.\n", "before_files": [{"content": "import numpy\n\nfrom chainer.backends import cuda\nfrom chainer import function_node\nfrom chainer.utils import type_check\n\n\nclass ELU(function_node.FunctionNode):\n\n \"\"\"Exponential Linear Unit.\"\"\"\n\n def __init__(self, alpha=1.0):\n self.alpha = float(alpha)\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n x_type, = in_types\n\n type_check.expect(x_type.dtype.kind == 'f')\n\n def forward_cpu(self, x):\n self.retain_inputs((0,))\n y = x[0].copy()\n neg_indices = x[0] < 0\n y[neg_indices] = self.alpha * (numpy.exp(y[neg_indices]) - 1)\n return y,\n\n def forward_gpu(self, x):\n self.retain_inputs((0,))\n y = cuda.elementwise(\n 'T x, T alpha', 'T y',\n 'y = x >= 0 ? x : (T)(alpha * (exp(x) - 1))',\n 'elu_fwd')(\n x[0], self.alpha)\n return y,\n\n def backward(self, indexes, grad_outputs):\n x, = self.get_retained_inputs()\n gy, = grad_outputs\n return ELUGrad(self.alpha).apply((x, gy))\n\n\nclass ELUGrad(function_node.FunctionNode):\n\n \"\"\"Exponential Linear Unit gradient function.\"\"\"\n\n def __init__(self, alpha):\n self.alpha = alpha\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 2)\n type_check.expect(in_types[0].dtype.kind == 'f')\n type_check.expect(in_types[1].dtype.kind == 'f')\n\n def forward_cpu(self, inputs):\n x, gy = inputs\n gx = gy.copy()\n neg_indices = x < 0\n gx[neg_indices] *= self.alpha * numpy.exp(x[neg_indices])\n self.retain_inputs((0, 1))\n self.retain_outputs((0,))\n return gx,\n\n def forward_gpu(self, inputs):\n x, gy = inputs\n gx = cuda.elementwise(\n 'T x, T gy, T alpha', 'T gx',\n 'gx = x >= 0 ? 
gy : (T)(gy * alpha * exp(x))',\n 'elu_bwd')(\n x, gy, self.alpha)\n self.retain_inputs((0, 1))\n self.retain_outputs((0,))\n return gx,\n\n def backward(self, indexes, grad_outputs):\n x, gy = self.get_retained_inputs()\n gx, = self.get_retained_outputs()\n ggx, = grad_outputs\n ggxgx = ggx * gx\n\n ret = []\n if 0 in indexes:\n ret.append(ggxgx * (x.data < 0))\n if 1 in indexes:\n ret.append(ggxgx / gy)\n\n return ret\n\n\ndef elu(x, alpha=1.0):\n \"\"\"Exponential Linear Unit function.\n\n For a parameter :math:`\\\\alpha`, it is expressed as\n\n .. math::\n f(x) = \\\\left \\\\{ \\\\begin{array}{ll}\n x & {\\\\rm if}~ x \\\\ge 0 \\\\\\\\\n \\\\alpha (\\\\exp(x) - 1) & {\\\\rm if}~ x < 0,\n \\\\end{array} \\\\right.\n\n See: https://arxiv.org/abs/1511.07289\n\n Args:\n x (:class:`~chainer.Variable` or :class:`numpy.ndarray` or \\\n :class:`cupy.ndarray`):\n Input variable. A :math:`(s_1, s_2, ..., s_N)`-shaped float array.\n alpha (float): Parameter :math:`\\\\alpha`. Default is 1.0.\n\n Returns:\n ~chainer.Variable: Output variable. A\n :math:`(s_1, s_2, ..., s_N)`-shaped float array.\n\n .. admonition:: Example\n\n >>> x = np.array([[-1, 0], [2, -3]], 'f')\n >>> x\n array([[-1., 0.],\n [ 2., -3.]], dtype=float32)\n >>> y = F.elu(x, alpha=1.)\n >>> y.data\n array([[-0.63212055, 0. ],\n [ 2. , -0.95021296]], dtype=float32)\n\n \"\"\"\n return ELU(alpha=alpha).apply((x,))[0]\n", "path": "chainer/functions/activation/elu.py"}]} | 2,384 | 813 |
gh_patches_debug_32691 | rasdani/github-patches | git_diff | mindsdb__lightwood-619 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`Img2VecEncoder` calls `self.prepare` twice
## Your Environment
* Python version: Python 3.8.10
* Operating system: Ubuntu 20.04.3 LTS
* Lightwood version: 1.3.0
## Describe your issue
`Img2VecEncoder` crashes when `.encode(images)` is called.
Internally there seems to be some confusion about what the `.prepare` method actually does.
For some reason it is called to convert images to tensors, while in reality it should be used to initialize the model and (maybe) perform some initial training.
## Fixing the issue
Implement a method to convert images to torch tensors.
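A minimal sketch of what such a conversion could look like (hypothetical helper name; it assumes the inputs are file paths and reuses the usual 224x224 resize and ImageNet normalization already set up in the encoder's `__init__`):

```python
import torchvision.transforms as transforms
from PIL import Image

# Hypothetical helper: turn a list of image paths into a list of 3x224x224 tensors.
_img_to_tensor = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def images_to_tensors(image_paths):
    return [_img_to_tensor(Image.open(path).convert('RGB')) for path in image_paths]
```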
</issue>
<code>
[start of lightwood/encoder/image/img_2_vec.py]
1 import logging
2 import torch
3 import torchvision.transforms as transforms
4 from lightwood.encoder.image.helpers.img_to_vec import Img2Vec
5 from lightwood.encoder.base import BaseEncoder
6
7
8 class Img2VecEncoder(BaseEncoder):
9
10 def __init__(self, is_target: bool = False):
11 super().__init__(is_target)
12 self.model = None
13 # I think we should make this an enum, something like: speed, balance, accuracy
14 self.aim = aim
15 self._prepared = False
16
17 self._scaler = transforms.Scale((224, 224))
18 self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
19 self._to_tensor = transforms.ToTensor()
20
21 pil_logger = logging.getLogger('PIL')
22 pil_logger.setLevel(logging.ERROR)
23
24 def prepare(self, priming_data):
25 if self._prepared:
26 raise Exception('You can only call "prepare" once for a given encoder.')
27
28 if self.model is None:
29 self.model = Img2Vec(model='resnext-50-small')
30 self._prepared = True
31
32 def encode(self, images):
33 """
34 Encode list of images
35
36 :images : list of images, each image is a path to a file or a url
37 :return: a torch.floatTensor
38 """
39 if not self._prepared:
40 raise Exception('You need to call "prepare" before calling "encode" or "decode".')
41
42 img_tensors = self.prepare(images)
43 vec_arr = []
44 self.model.eval()
45 with torch.no_grad():
46 for img_tensor in img_tensors:
47 vec = self.model(img_tensor.unsqueeze(0), batch=False)
48 vec_arr.append(vec)
49 return torch.stack(vec_arr)
50
51 def decode(self, encoded_values_tensor):
52 raise Exception('This encoder is not bi-directional')
53
[end of lightwood/encoder/image/img_2_vec.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lightwood/encoder/image/img_2_vec.py b/lightwood/encoder/image/img_2_vec.py
--- a/lightwood/encoder/image/img_2_vec.py
+++ b/lightwood/encoder/image/img_2_vec.py
@@ -3,6 +3,7 @@
import torchvision.transforms as transforms
from lightwood.encoder.image.helpers.img_to_vec import Img2Vec
from lightwood.encoder.base import BaseEncoder
+from PIL import Image
class Img2VecEncoder(BaseEncoder):
@@ -10,13 +11,18 @@
def __init__(self, is_target: bool = False):
super().__init__(is_target)
self.model = None
- # I think we should make this an enum, something like: speed, balance, accuracy
- self.aim = aim
+ # # I think we should make this an enum, something like: speed, balance, accuracy
+ # self.aim = aim
self._prepared = False
- self._scaler = transforms.Scale((224, 224))
+ self._scaler = transforms.Resize((224, 224))
self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
self._to_tensor = transforms.ToTensor()
+ self._img_to_tensor = transforms.Compose([
+ self._scaler,
+ self._to_tensor,
+ self._normalize
+ ])
pil_logger = logging.getLogger('PIL')
pil_logger.setLevel(logging.ERROR)
@@ -39,8 +45,11 @@
if not self._prepared:
raise Exception('You need to call "prepare" before calling "encode" or "decode".')
- img_tensors = self.prepare(images)
+ img_tensors = [self._img_to_tensor(
+ Image.open(img_path)
+ ) for img_path in images]
vec_arr = []
+
self.model.eval()
with torch.no_grad():
for img_tensor in img_tensors:
| {"golden_diff": "diff --git a/lightwood/encoder/image/img_2_vec.py b/lightwood/encoder/image/img_2_vec.py\n--- a/lightwood/encoder/image/img_2_vec.py\n+++ b/lightwood/encoder/image/img_2_vec.py\n@@ -3,6 +3,7 @@\n import torchvision.transforms as transforms\n from lightwood.encoder.image.helpers.img_to_vec import Img2Vec\n from lightwood.encoder.base import BaseEncoder\n+from PIL import Image\n \n \n class Img2VecEncoder(BaseEncoder):\n@@ -10,13 +11,18 @@\n def __init__(self, is_target: bool = False):\n super().__init__(is_target)\n self.model = None\n- # I think we should make this an enum, something like: speed, balance, accuracy\n- self.aim = aim\n+ # # I think we should make this an enum, something like: speed, balance, accuracy\n+ # self.aim = aim\n self._prepared = False\n \n- self._scaler = transforms.Scale((224, 224))\n+ self._scaler = transforms.Resize((224, 224))\n self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n self._to_tensor = transforms.ToTensor()\n+ self._img_to_tensor = transforms.Compose([\n+ self._scaler,\n+ self._to_tensor,\n+ self._normalize\n+ ])\n \n pil_logger = logging.getLogger('PIL')\n pil_logger.setLevel(logging.ERROR)\n@@ -39,8 +45,11 @@\n if not self._prepared:\n raise Exception('You need to call \"prepare\" before calling \"encode\" or \"decode\".')\n \n- img_tensors = self.prepare(images)\n+ img_tensors = [self._img_to_tensor(\n+ Image.open(img_path)\n+ ) for img_path in images]\n vec_arr = []\n+\n self.model.eval()\n with torch.no_grad():\n for img_tensor in img_tensors:\n", "issue": "`Img2VecEncoder` calls `self.prepare` twice\n## Your Environment\r\n* Python version: Python 3.8.10\r\n* Operating system: Ubuntu 20.04.3 LTS\r\n* Lightwood version: 1.3.0\r\n\r\n## Describe your issue\r\n`Img2VecEncoder` crashes when `.encode(images)` is called.\r\n\r\nInternally there seems to be some confusion on what the `.prepare` method actually does.\r\nFor some reason it is called to convert images to tensors, while in reality it should be used to initialize the model and (maybe) perform some initial training.\r\n\r\n## Fixing the issue\r\n\r\nImplement a method to convert images to torch tensors.\r\n\n", "before_files": [{"content": "import logging\nimport torch\nimport torchvision.transforms as transforms\nfrom lightwood.encoder.image.helpers.img_to_vec import Img2Vec\nfrom lightwood.encoder.base import BaseEncoder\n\n\nclass Img2VecEncoder(BaseEncoder):\n\n def __init__(self, is_target: bool = False):\n super().__init__(is_target)\n self.model = None\n # I think we should make this an enum, something like: speed, balance, accuracy\n self.aim = aim\n self._prepared = False\n\n self._scaler = transforms.Scale((224, 224))\n self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n self._to_tensor = transforms.ToTensor()\n\n pil_logger = logging.getLogger('PIL')\n pil_logger.setLevel(logging.ERROR)\n\n def prepare(self, priming_data):\n if self._prepared:\n raise Exception('You can only call \"prepare\" once for a given encoder.')\n\n if self.model is None:\n self.model = Img2Vec(model='resnext-50-small')\n self._prepared = True\n\n def encode(self, images):\n \"\"\"\n Encode list of images\n\n :images : list of images, each image is a path to a file or a url\n :return: a torch.floatTensor\n \"\"\"\n if not self._prepared:\n raise Exception('You need to call \"prepare\" before calling \"encode\" or \"decode\".')\n\n img_tensors = self.prepare(images)\n vec_arr = []\n 
self.model.eval()\n with torch.no_grad():\n for img_tensor in img_tensors:\n vec = self.model(img_tensor.unsqueeze(0), batch=False)\n vec_arr.append(vec)\n return torch.stack(vec_arr)\n\n def decode(self, encoded_values_tensor):\n raise Exception('This encoder is not bi-directional')\n", "path": "lightwood/encoder/image/img_2_vec.py"}]} | 1,215 | 470 |
gh_patches_debug_33258 | rasdani/github-patches | git_diff | fidals__shopelectro-395 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
category_products.html:5: Implement pagination buttons...
The puzzle `302-bdb9bbef` from #302 has to be resolved:
https://github.com/fidals/shopelectro/blob/96e14747c3d7da9dd7db50f01bbb987147e4e2cb/templates/catalog/category_products.html#L5-L5
The puzzle was created by Artemiy on 08-Jun-18.
Estimate: 60 minutes,
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).
</issue>
<code>
[start of shopelectro/views/catalog.py]
1 from functools import partial
2
3 from django.conf import settings
4 from django.core.paginator import Paginator
5 from django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseForbidden
6 from django.shortcuts import render, get_object_or_404
7 from django.views.decorators.http import require_POST
8 from django_user_agents.utils import get_user_agent
9
10 from catalog.views import catalog
11 from images.models import Image
12 from pages import views as pages_views
13
14 from shopelectro import config
15 from shopelectro import models
16 from shopelectro.views.helpers import set_csrf_cookie
17
18 PRODUCTS_ON_PAGE_PC = 48
19 PRODUCTS_ON_PAGE_MOB = 12
20
21
22 def get_products_count(request):
23 """Calculate max products list size from request. List size depends on device type."""
24 mobile_view = get_user_agent(request).is_mobile
25 return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC
26
27
28 # CATALOG VIEWS
29 class CategoryTree(catalog.CategoryTree):
30 category_model = models.Category
31
32
33 @set_csrf_cookie
34 class ProductPage(catalog.ProductPage):
35 pk_url_kwarg = None
36 slug_url_kwarg = 'product_vendor_code'
37 slug_field = 'vendor_code'
38
39 queryset = (
40 models.Product.objects
41 .filter(category__isnull=False)
42 .prefetch_related('product_feedbacks', 'page__images')
43 .select_related('page')
44 )
45
46 def get_context_data(self, **kwargs):
47 context = super(ProductPage, self).get_context_data(**kwargs)
48
49 group_tags_pairs = (
50 models.Tag.objects
51 .filter(products=self.object)
52 .get_group_tags_pairs()
53 )
54
55 return {
56 **context,
57 'price_bounds': config.PRICE_BOUNDS,
58 'group_tags_pairs': group_tags_pairs
59 }
60
61
62 # SHOPELECTRO-SPECIFIC VIEWS
63 @set_csrf_cookie
64 class IndexPage(pages_views.CustomPageView):
65
66 def get_context_data(self, **kwargs):
67 """Extended method. Add product's images to context."""
68 context = super(IndexPage, self).get_context_data(**kwargs)
69 mobile_view = get_user_agent(self.request).is_mobile
70
71 top_products = (
72 models.Product.objects
73 .filter(id__in=settings.TOP_PRODUCTS)
74 .prefetch_related('category')
75 .select_related('page')
76 )
77
78 images = Image.objects.get_main_images_by_pages(
79 models.ProductPage.objects.filter(
80 shopelectro_product__in=top_products
81 )
82 )
83
84 categories = models.Category.objects.get_root_categories_by_products(
85 top_products)
86
87 prepared_top_products = []
88 if not mobile_view:
89 prepared_top_products = [
90 (product, images.get(product.page), categories.get(product))
91 for product in top_products
92 ]
93
94 return {
95 **context,
96 'category_tile': config.MAIN_PAGE_TILE,
97 'prepared_top_products': prepared_top_products,
98 }
99
100
101 def merge_products_and_images(products):
102 images = Image.objects.get_main_images_by_pages(
103 models.ProductPage.objects.filter(shopelectro_product__in=products)
104 )
105
106 return [
107 (product, images.get(product.page))
108 for product in products
109 ]
110
111
112 @set_csrf_cookie
113 class CategoryPage(catalog.CategoryPage):
114
115 def get_context_data(self, **kwargs):
116 """Add sorting options and view_types in context."""
117 context = super().get_context_data(**kwargs)
118 products_on_page = int(self.request.GET.get(
119 'step', get_products_count(self.request),
120 ))
121 page_number = int(self.request.GET.get('page', 1))
122 view_type = self.request.session.get('view_type', 'tile')
123 sorting = int(self.kwargs.get('sorting', 0))
124 sorting_option = config.category_sorting(sorting)
125 category = context['category']
126 if (
127 page_number < 1 or
128 products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS
129 ):
130 raise Http404('Page does not exist.')
131
132 all_products = (
133 models.Product.objects
134 .prefetch_related('page__images')
135 .select_related('page')
136 .get_by_category(category, ordering=(sorting_option, ))
137 )
138
139 group_tags_pairs = (
140 models.Tag.objects
141 .filter(products__in=all_products)
142 .get_group_tags_pairs()
143 )
144
145 tags = self.kwargs.get('tags')
146
147 tag_titles = ''
148 if tags:
149 slugs = models.Tag.parse_url_tags(tags)
150 tags = models.Tag.objects.filter(slug__in=slugs)
151
152 all_products = (
153 all_products
154 .filter(tags__in=tags)
155 # Use distinct because filtering by QuerySet tags,
156 # that related with products by many-to-many relation.
157 .distinct(sorting_option.lstrip('-'))
158 )
159
160 tag_titles = models.serialize_tags_to_title(tags)
161
162 def template_context(page, tag_titles, tags):
163 return {
164 'page': page,
165 'tag_titles': tag_titles,
166 'tags': tags,
167 }
168
169 page = context['page']
170 page.get_template_render_context = partial(
171 template_context, page, tag_titles, tags)
172
173 paginated_page = Paginator(all_products, products_on_page).page(page_number)
174 total_products = all_products.count()
175 products = paginated_page.object_list
176 if not products:
177 raise Http404('Page without products does not exist.')
178
179 return {
180 **context,
181 'product_image_pairs': merge_products_and_images(products),
182 'group_tags_pairs': group_tags_pairs,
183 'total_products': total_products,
184 'products_count': (page_number - 1) * products_on_page + products.count(),
185 'paginated_page': paginated_page,
186 'sorting_options': config.category_sorting(),
187 'limits': settings.CATEGORY_STEP_MULTIPLIERS,
188 'sort': sorting,
189 'tags': tags,
190 'view_type': view_type,
191 'skip_canonical': bool(tags),
192 }
193
194
195 def load_more(request, category_slug, offset=0, limit=0, sorting=0, tags=None):
196 """
197 Load more products of a given category.
198
199 :param sorting: preferred sorting index from CATEGORY_SORTING tuple
200 :param request: HttpRequest object
201 :param category_slug: Slug for a given category
202 :param offset: used for slicing QuerySet.
203 :return: products list in html format
204 """
205 products_on_page = limit or get_products_count(request)
206 offset = int(offset)
207 if offset < 0:
208 return HttpResponseBadRequest(
209 'The offset is wrong. An offset should be greater than or equal to 0.'
210 )
211 if products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS:
212 return HttpResponseBadRequest(
213 'The limit number is wrong. List of available numbers:'
214 f' {", ".join(map(str, settings.CATEGORY_STEP_MULTIPLIERS))}'
215 )
216 # increment page number because:
217 # 11 // 12 = 0, 0 // 12 = 0 but it should be the first page
218 # 12 // 12 = 1, 23 // 12 = 1, but it should be the second page
219 page_number = (offset // products_on_page) + 1
220 category = get_object_or_404(models.CategoryPage, slug=category_slug).model
221 sorting_option = config.category_sorting(int(sorting))
222
223 all_products = (
224 models.Product.objects
225 .prefetch_related('page__images')
226 .select_related('page')
227 .get_by_category(category, ordering=(sorting_option,))
228 )
229
230 if tags:
231 tag_entities = models.Tag.objects.filter(
232 slug__in=models.Tag.parse_url_tags(tags)
233 )
234
235 all_products = (
236 all_products
237 .filter(tags__in=tag_entities)
238 # Use distinct because filtering by QuerySet tags,
239 # that related with products by many-to-many relation.
240 .distinct(sorting_option.lstrip('-'))
241 )
242
243 paginated_page = Paginator(all_products, products_on_page).page(page_number)
244 products = paginated_page.object_list
245 view = request.session.get('view_type', 'tile')
246
247 return render(request, 'catalog/category_products.html', {
248 'product_image_pairs': merge_products_and_images(products),
249 'paginated_page': paginated_page,
250 'view_type': view,
251 'prods': products_on_page,
252 })
253
254
255 @require_POST
256 def save_feedback(request):
257 def get_keys_from_post(*args):
258 return {arg: request.POST.get(arg, '') for arg in args}
259
260 product_id = request.POST.get('id')
261 product = models.Product.objects.filter(id=product_id).first()
262 if not (product_id and product):
263 return HttpResponse(status=422)
264
265 fields = ['rating', 'name', 'dignities', 'limitations', 'general']
266 feedback_data = get_keys_from_post(*fields)
267
268 models.ProductFeedback.objects.create(product=product, **feedback_data)
269 return HttpResponse('ok')
270
271
272 @require_POST
273 def delete_feedback(request):
274 if not request.user.is_authenticated:
275 return HttpResponseForbidden('Not today, sly guy...')
276
277 feedback_id = request.POST.get('id')
278 feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()
279 if not (feedback_id and feedback):
280 return HttpResponse(status=422)
281
282 feedback.delete()
283 return HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))
284
285
286 class ProductsWithoutImages(catalog.ProductsWithoutImages):
287 model = models.Product
288
289
290 class ProductsWithoutText(catalog.ProductsWithoutText):
291 model = models.Product
292
[end of shopelectro/views/catalog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shopelectro/views/catalog.py b/shopelectro/views/catalog.py
--- a/shopelectro/views/catalog.py
+++ b/shopelectro/views/catalog.py
@@ -1,7 +1,7 @@
from functools import partial
from django.conf import settings
-from django.core.paginator import Paginator
+from django.core.paginator import Paginator, InvalidPage
from django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseForbidden
from django.shortcuts import render, get_object_or_404
from django.views.decorators.http import require_POST
@@ -25,6 +25,13 @@
return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC
+def get_paginated_page_or_404(objects, per_page, page_number):
+ try:
+ return Paginator(objects, per_page).page(page_number)
+ except InvalidPage:
+ raise Http404('Page does not exist')
+
+
# CATALOG VIEWS
class CategoryTree(catalog.CategoryTree):
category_model = models.Category
@@ -170,7 +177,7 @@
page.get_template_render_context = partial(
template_context, page, tag_titles, tags)
- paginated_page = Paginator(all_products, products_on_page).page(page_number)
+ paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)
total_products = all_products.count()
products = paginated_page.object_list
if not products:
@@ -240,7 +247,7 @@
.distinct(sorting_option.lstrip('-'))
)
- paginated_page = Paginator(all_products, products_on_page).page(page_number)
+ paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)
products = paginated_page.object_list
view = request.session.get('view_type', 'tile')
| {"golden_diff": "diff --git a/shopelectro/views/catalog.py b/shopelectro/views/catalog.py\n--- a/shopelectro/views/catalog.py\n+++ b/shopelectro/views/catalog.py\n@@ -1,7 +1,7 @@\n from functools import partial\n \n from django.conf import settings\n-from django.core.paginator import Paginator\n+from django.core.paginator import Paginator, InvalidPage\n from django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseForbidden\n from django.shortcuts import render, get_object_or_404\n from django.views.decorators.http import require_POST\n@@ -25,6 +25,13 @@\n return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC\n \n \n+def get_paginated_page_or_404(objects, per_page, page_number):\n+ try:\n+ return Paginator(objects, per_page).page(page_number)\n+ except InvalidPage:\n+ raise Http404('Page does not exist')\n+\n+\n # CATALOG VIEWS\n class CategoryTree(catalog.CategoryTree):\n category_model = models.Category\n@@ -170,7 +177,7 @@\n page.get_template_render_context = partial(\n template_context, page, tag_titles, tags)\n \n- paginated_page = Paginator(all_products, products_on_page).page(page_number)\n+ paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)\n total_products = all_products.count()\n products = paginated_page.object_list\n if not products:\n@@ -240,7 +247,7 @@\n .distinct(sorting_option.lstrip('-'))\n )\n \n- paginated_page = Paginator(all_products, products_on_page).page(page_number)\n+ paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)\n products = paginated_page.object_list\n view = request.session.get('view_type', 'tile')\n", "issue": "category_products.html:5: Implement pagination buttons...\nThe puzzle `302-bdb9bbef` from #302 has to be resolved:\n\nhttps://github.com/fidals/shopelectro/blob/96e14747c3d7da9dd7db50f01bbb987147e4e2cb/templates/catalog/category_products.html#L5-L5\n\nThe puzzle was created by Artemiy on 08-Jun-18. \n\nEstimate: 60 minutes, \n\nIf you have any technical questions, don't ask me, submit new tickets instead. The task will be \"done\" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).\n", "before_files": [{"content": "from functools import partial\n\nfrom django.conf import settings\nfrom django.core.paginator import Paginator\nfrom django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseForbidden\nfrom django.shortcuts import render, get_object_or_404\nfrom django.views.decorators.http import require_POST\nfrom django_user_agents.utils import get_user_agent\n\nfrom catalog.views import catalog\nfrom images.models import Image\nfrom pages import views as pages_views\n\nfrom shopelectro import config\nfrom shopelectro import models\nfrom shopelectro.views.helpers import set_csrf_cookie\n\nPRODUCTS_ON_PAGE_PC = 48\nPRODUCTS_ON_PAGE_MOB = 12\n\n\ndef get_products_count(request):\n \"\"\"Calculate max products list size from request. 
List size depends on device type.\"\"\"\n mobile_view = get_user_agent(request).is_mobile\n return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC\n\n\n# CATALOG VIEWS\nclass CategoryTree(catalog.CategoryTree):\n category_model = models.Category\n\n\n@set_csrf_cookie\nclass ProductPage(catalog.ProductPage):\n pk_url_kwarg = None\n slug_url_kwarg = 'product_vendor_code'\n slug_field = 'vendor_code'\n\n queryset = (\n models.Product.objects\n .filter(category__isnull=False)\n .prefetch_related('product_feedbacks', 'page__images')\n .select_related('page')\n )\n\n def get_context_data(self, **kwargs):\n context = super(ProductPage, self).get_context_data(**kwargs)\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products=self.object)\n .get_group_tags_pairs()\n )\n\n return {\n **context,\n 'price_bounds': config.PRICE_BOUNDS,\n 'group_tags_pairs': group_tags_pairs\n }\n\n\n# SHOPELECTRO-SPECIFIC VIEWS\n@set_csrf_cookie\nclass IndexPage(pages_views.CustomPageView):\n\n def get_context_data(self, **kwargs):\n \"\"\"Extended method. Add product's images to context.\"\"\"\n context = super(IndexPage, self).get_context_data(**kwargs)\n mobile_view = get_user_agent(self.request).is_mobile\n\n top_products = (\n models.Product.objects\n .filter(id__in=settings.TOP_PRODUCTS)\n .prefetch_related('category')\n .select_related('page')\n )\n\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(\n shopelectro_product__in=top_products\n )\n )\n\n categories = models.Category.objects.get_root_categories_by_products(\n top_products)\n\n prepared_top_products = []\n if not mobile_view:\n prepared_top_products = [\n (product, images.get(product.page), categories.get(product))\n for product in top_products\n ]\n\n return {\n **context,\n 'category_tile': config.MAIN_PAGE_TILE,\n 'prepared_top_products': prepared_top_products,\n }\n\n\ndef merge_products_and_images(products):\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(shopelectro_product__in=products)\n )\n\n return [\n (product, images.get(product.page))\n for product in products\n ]\n\n\n@set_csrf_cookie\nclass CategoryPage(catalog.CategoryPage):\n\n def get_context_data(self, **kwargs):\n \"\"\"Add sorting options and view_types in context.\"\"\"\n context = super().get_context_data(**kwargs)\n products_on_page = int(self.request.GET.get(\n 'step', get_products_count(self.request),\n ))\n page_number = int(self.request.GET.get('page', 1))\n view_type = self.request.session.get('view_type', 'tile')\n sorting = int(self.kwargs.get('sorting', 0))\n sorting_option = config.category_sorting(sorting)\n category = context['category']\n if (\n page_number < 1 or\n products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS\n ):\n raise Http404('Page does not exist.')\n\n all_products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option, ))\n )\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products__in=all_products)\n .get_group_tags_pairs()\n )\n\n tags = self.kwargs.get('tags')\n\n tag_titles = ''\n if tags:\n slugs = models.Tag.parse_url_tags(tags)\n tags = models.Tag.objects.filter(slug__in=slugs)\n\n all_products = (\n all_products\n .filter(tags__in=tags)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n tag_titles = models.serialize_tags_to_title(tags)\n\n 
def template_context(page, tag_titles, tags):\n return {\n 'page': page,\n 'tag_titles': tag_titles,\n 'tags': tags,\n }\n\n page = context['page']\n page.get_template_render_context = partial(\n template_context, page, tag_titles, tags)\n\n paginated_page = Paginator(all_products, products_on_page).page(page_number)\n total_products = all_products.count()\n products = paginated_page.object_list\n if not products:\n raise Http404('Page without products does not exist.')\n\n return {\n **context,\n 'product_image_pairs': merge_products_and_images(products),\n 'group_tags_pairs': group_tags_pairs,\n 'total_products': total_products,\n 'products_count': (page_number - 1) * products_on_page + products.count(),\n 'paginated_page': paginated_page,\n 'sorting_options': config.category_sorting(),\n 'limits': settings.CATEGORY_STEP_MULTIPLIERS,\n 'sort': sorting,\n 'tags': tags,\n 'view_type': view_type,\n 'skip_canonical': bool(tags),\n }\n\n\ndef load_more(request, category_slug, offset=0, limit=0, sorting=0, tags=None):\n \"\"\"\n Load more products of a given category.\n\n :param sorting: preferred sorting index from CATEGORY_SORTING tuple\n :param request: HttpRequest object\n :param category_slug: Slug for a given category\n :param offset: used for slicing QuerySet.\n :return: products list in html format\n \"\"\"\n products_on_page = limit or get_products_count(request)\n offset = int(offset)\n if offset < 0:\n return HttpResponseBadRequest(\n 'The offset is wrong. An offset should be greater than or equal to 0.'\n )\n if products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS:\n return HttpResponseBadRequest(\n 'The limit number is wrong. List of available numbers:'\n f' {\", \".join(map(str, settings.CATEGORY_STEP_MULTIPLIERS))}'\n )\n # increment page number because:\n # 11 // 12 = 0, 0 // 12 = 0 but it should be the first page\n # 12 // 12 = 1, 23 // 12 = 1, but it should be the second page\n page_number = (offset // products_on_page) + 1\n category = get_object_or_404(models.CategoryPage, slug=category_slug).model\n sorting_option = config.category_sorting(int(sorting))\n\n all_products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option,))\n )\n\n if tags:\n tag_entities = models.Tag.objects.filter(\n slug__in=models.Tag.parse_url_tags(tags)\n )\n\n all_products = (\n all_products\n .filter(tags__in=tag_entities)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n paginated_page = Paginator(all_products, products_on_page).page(page_number)\n products = paginated_page.object_list\n view = request.session.get('view_type', 'tile')\n\n return render(request, 'catalog/category_products.html', {\n 'product_image_pairs': merge_products_and_images(products),\n 'paginated_page': paginated_page,\n 'view_type': view,\n 'prods': products_on_page,\n })\n\n\n@require_POST\ndef save_feedback(request):\n def get_keys_from_post(*args):\n return {arg: request.POST.get(arg, '') for arg in args}\n\n product_id = request.POST.get('id')\n product = models.Product.objects.filter(id=product_id).first()\n if not (product_id and product):\n return HttpResponse(status=422)\n\n fields = ['rating', 'name', 'dignities', 'limitations', 'general']\n feedback_data = get_keys_from_post(*fields)\n\n models.ProductFeedback.objects.create(product=product, **feedback_data)\n return HttpResponse('ok')\n\n\n@require_POST\ndef 
delete_feedback(request):\n if not request.user.is_authenticated:\n return HttpResponseForbidden('Not today, sly guy...')\n\n feedback_id = request.POST.get('id')\n feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()\n if not (feedback_id and feedback):\n return HttpResponse(status=422)\n\n feedback.delete()\n return HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))\n\n\nclass ProductsWithoutImages(catalog.ProductsWithoutImages):\n model = models.Product\n\n\nclass ProductsWithoutText(catalog.ProductsWithoutText):\n model = models.Product\n", "path": "shopelectro/views/catalog.py"}]} | 3,616 | 419 |
gh_patches_debug_7123 | rasdani/github-patches | git_diff | optuna__optuna-4133 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`None` categorical not visible on Slice plot
### Expected behavior
I have a categorical like this: `trial.suggest_categorical("class_weight", ["balanced", None])`
The slice plot shows the "balanced" value but not the `None` value.
I could write a workaround by using `"None"` as a string and then converting it to `None`,
but I think it would be nice if the real `None` were plotted.
See screenshot:
<img width="234" alt="image" src="https://user-images.githubusercontent.com/229382/199188383-981f256d-0b66-4a1c-be40-68ecd6ae4528.png">
### Environment
- Optuna version:3.0.3
- Python version:3.9.13
- OS:Linux-5.10.0-17-amd64-x86_64-with-glibc2.31
### Error messages, stack traces, or logs
```shell
see screenshot
```
### Steps to reproduce
see description above
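A minimal, self-contained sketch that reproduces this (the objective below is a toy stand-in, not my real model; plotly is needed for the figure):

```python
import optuna


def objective(trial):
    class_weight = trial.suggest_categorical("class_weight", ["balanced", None])
    # Placeholder objective; the model itself is irrelevant to the plotting issue.
    return 0.0 if class_weight is None else 1.0


study = optuna.create_study()
study.optimize(objective, n_trials=10)

# The slice subplot for "class_weight" shows "balanced" but no tick for None.
optuna.visualization.plot_slice(study, params=["class_weight"]).show()
```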
### Additional context (optional)
_No response_
</issue>
<code>
[start of optuna/visualization/_slice.py]
1 from typing import Any
2 from typing import Callable
3 from typing import cast
4 from typing import List
5 from typing import NamedTuple
6 from typing import Optional
7
8 from optuna.logging import get_logger
9 from optuna.study import Study
10 from optuna.trial import FrozenTrial
11 from optuna.trial import TrialState
12 from optuna.visualization._plotly_imports import _imports
13 from optuna.visualization._utils import _check_plot_args
14 from optuna.visualization._utils import _filter_nonfinite
15 from optuna.visualization._utils import _is_log_scale
16 from optuna.visualization._utils import _is_numerical
17
18
19 if _imports.is_successful():
20 from optuna.visualization._plotly_imports import go
21 from optuna.visualization._plotly_imports import make_subplots
22 from optuna.visualization._plotly_imports import Scatter
23 from optuna.visualization._utils import COLOR_SCALE
24
25 _logger = get_logger(__name__)
26
27
28 class _SliceSubplotInfo(NamedTuple):
29 param_name: str
30 x: List[Any]
31 y: List[float]
32 trial_numbers: List[int]
33 is_log: bool
34 is_numerical: bool
35
36
37 class _SlicePlotInfo(NamedTuple):
38 target_name: str
39 subplots: List[_SliceSubplotInfo]
40
41
42 def _get_slice_subplot_info(
43 trials: List[FrozenTrial],
44 param: str,
45 target: Optional[Callable[[FrozenTrial], float]],
46 log_scale: bool,
47 numerical: bool,
48 ) -> _SliceSubplotInfo:
49
50 if target is None:
51
52 def _target(t: FrozenTrial) -> float:
53 return cast(float, t.value)
54
55 target = _target
56
57 return _SliceSubplotInfo(
58 param_name=param,
59 x=[t.params[param] for t in trials if param in t.params],
60 y=[target(t) for t in trials if param in t.params],
61 trial_numbers=[t.number for t in trials if param in t.params],
62 is_log=log_scale,
63 is_numerical=numerical,
64 )
65
66
67 def _get_slice_plot_info(
68 study: Study,
69 params: Optional[List[str]],
70 target: Optional[Callable[[FrozenTrial], float]],
71 target_name: str,
72 ) -> _SlicePlotInfo:
73
74 _check_plot_args(study, target, target_name)
75
76 trials = _filter_nonfinite(
77 study.get_trials(deepcopy=False, states=(TrialState.COMPLETE,)), target=target
78 )
79
80 if len(trials) == 0:
81 _logger.warning("Your study does not have any completed trials.")
82 return _SlicePlotInfo(target_name, [])
83
84 all_params = {p_name for t in trials for p_name in t.params.keys()}
85 if params is None:
86 sorted_params = sorted(all_params)
87 else:
88 for input_p_name in params:
89 if input_p_name not in all_params:
90 raise ValueError(f"Parameter {input_p_name} does not exist in your study.")
91 sorted_params = sorted(set(params))
92
93 return _SlicePlotInfo(
94 target_name=target_name,
95 subplots=[
96 _get_slice_subplot_info(
97 trials=trials,
98 param=param,
99 target=target,
100 log_scale=_is_log_scale(trials, param),
101 numerical=_is_numerical(trials, param),
102 )
103 for param in sorted_params
104 ],
105 )
106
107
108 def plot_slice(
109 study: Study,
110 params: Optional[List[str]] = None,
111 *,
112 target: Optional[Callable[[FrozenTrial], float]] = None,
113 target_name: str = "Objective Value",
114 ) -> "go.Figure":
115 """Plot the parameter relationship as slice plot in a study.
116
117 Note that, if a parameter contains missing values, a trial with missing values is not plotted.
118
119 Example:
120
121 The following code snippet shows how to plot the parameter relationship as slice plot.
122
123 .. plotly::
124
125 import optuna
126
127
128 def objective(trial):
129 x = trial.suggest_float("x", -100, 100)
130 y = trial.suggest_categorical("y", [-1, 0, 1])
131 return x ** 2 + y
132
133
134 sampler = optuna.samplers.TPESampler(seed=10)
135 study = optuna.create_study(sampler=sampler)
136 study.optimize(objective, n_trials=10)
137
138 fig = optuna.visualization.plot_slice(study, params=["x", "y"])
139 fig.show()
140
141 Args:
142 study:
143 A :class:`~optuna.study.Study` object whose trials are plotted for their target values.
144 params:
145 Parameter list to visualize. The default is all parameters.
146 target:
147 A function to specify the value to display. If it is :obj:`None` and ``study`` is being
148 used for single-objective optimization, the objective values are plotted.
149
150 .. note::
151 Specify this argument if ``study`` is being used for multi-objective optimization.
152 target_name:
153 Target's name to display on the axis label.
154
155 Returns:
156 A :class:`plotly.graph_objs.Figure` object.
157 """
158
159 _imports.check()
160 return _get_slice_plot(_get_slice_plot_info(study, params, target, target_name))
161
162
163 def _get_slice_plot(info: _SlicePlotInfo) -> "go.Figure":
164
165 layout = go.Layout(title="Slice Plot")
166
167 if len(info.subplots) == 0:
168 return go.Figure(data=[], layout=layout)
169 elif len(info.subplots) == 1:
170 figure = go.Figure(data=[_generate_slice_subplot(info.subplots[0])], layout=layout)
171 figure.update_xaxes(title_text=info.subplots[0].param_name)
172 figure.update_yaxes(title_text=info.target_name)
173 if info.subplots[0].is_log:
174 figure.update_xaxes(type="log")
175 else:
176 figure = make_subplots(rows=1, cols=len(info.subplots), shared_yaxes=True)
177 figure.update_layout(layout)
178 showscale = True # showscale option only needs to be specified once.
179 for column_index, subplot_info in enumerate(info.subplots, start=1):
180 trace = _generate_slice_subplot(subplot_info)
181 trace.update(marker={"showscale": showscale}) # showscale's default is True.
182 if showscale:
183 showscale = False
184 figure.add_trace(trace, row=1, col=column_index)
185 figure.update_xaxes(title_text=subplot_info.param_name, row=1, col=column_index)
186 if column_index == 1:
187 figure.update_yaxes(title_text=info.target_name, row=1, col=column_index)
188 if subplot_info.is_log:
189 figure.update_xaxes(type="log", row=1, col=column_index)
190 if len(info.subplots) > 3:
191             # Ensure that each subplot has a minimum width without relying on autosizing.
192 figure.update_layout(width=300 * len(info.subplots))
193
194 return figure
195
196
197 def _generate_slice_subplot(subplot_info: _SliceSubplotInfo) -> "Scatter":
198 return go.Scatter(
199 x=subplot_info.x,
200 y=subplot_info.y,
201 mode="markers",
202 marker={
203 "line": {"width": 0.5, "color": "Grey"},
204 "color": subplot_info.trial_numbers,
205 "colorscale": COLOR_SCALE,
206 "colorbar": {
207 "title": "Trial",
208 "x": 1.0, # Offset the colorbar position with a fixed width `xpad`.
209 "xpad": 40,
210 },
211 },
212 showlegend=False,
213 )
214
[end of optuna/visualization/_slice.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/optuna/visualization/_slice.py b/optuna/visualization/_slice.py
--- a/optuna/visualization/_slice.py
+++ b/optuna/visualization/_slice.py
@@ -195,9 +195,12 @@
def _generate_slice_subplot(subplot_info: _SliceSubplotInfo) -> "Scatter":
+ x = [x if x is not None else "None" for x in subplot_info.x]
+ y = [y if y is not None else "None" for y in subplot_info.y]
+
return go.Scatter(
- x=subplot_info.x,
- y=subplot_info.y,
+ x=x,
+ y=y,
mode="markers",
marker={
"line": {"width": 0.5, "color": "Grey"},
| {"golden_diff": "diff --git a/optuna/visualization/_slice.py b/optuna/visualization/_slice.py\n--- a/optuna/visualization/_slice.py\n+++ b/optuna/visualization/_slice.py\n@@ -195,9 +195,12 @@\n \n \n def _generate_slice_subplot(subplot_info: _SliceSubplotInfo) -> \"Scatter\":\n+ x = [x if x is not None else \"None\" for x in subplot_info.x]\n+ y = [y if y is not None else \"None\" for y in subplot_info.y]\n+\n return go.Scatter(\n- x=subplot_info.x,\n- y=subplot_info.y,\n+ x=x,\n+ y=y,\n mode=\"markers\",\n marker={\n \"line\": {\"width\": 0.5, \"color\": \"Grey\"},\n", "issue": "`None` categorical not visible on Slice plot\n### Expected behavior\r\n\r\nI have a categorical like this: `trial.suggest_categorical(\"class_weight\", [\"balanced\", None])`\r\n\r\nThe slice plot shows the \"balanced\" value but not the `None` value.\r\nI could write a workaround by using `\"None\"` as a string and then convert it to `None`\r\nbut I thing it could be nice if the real `None` is ploted.\r\n\r\nSee sceenshot:\r\n\r\n<img width=\"234\" alt=\"image\" src=\"https://user-images.githubusercontent.com/229382/199188383-981f256d-0b66-4a1c-be40-68ecd6ae4528.png\">\r\n\r\n\r\n### Environment\r\n\r\n- Optuna version:3.0.3\r\n- Python version:3.9.13\r\n- OS:Linux-5.10.0-17-amd64-x86_64-with-glibc2.31\r\n\r\n### Error messages, stack traces, or logs\r\n\r\n```shell\r\nsee screenshot\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\nsee description above\r\n\r\n### Additional context (optional)\r\n\r\n_No response_\n", "before_files": [{"content": "from typing import Any\nfrom typing import Callable\nfrom typing import cast\nfrom typing import List\nfrom typing import NamedTuple\nfrom typing import Optional\n\nfrom optuna.logging import get_logger\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\nfrom optuna.visualization._plotly_imports import _imports\nfrom optuna.visualization._utils import _check_plot_args\nfrom optuna.visualization._utils import _filter_nonfinite\nfrom optuna.visualization._utils import _is_log_scale\nfrom optuna.visualization._utils import _is_numerical\n\n\nif _imports.is_successful():\n from optuna.visualization._plotly_imports import go\n from optuna.visualization._plotly_imports import make_subplots\n from optuna.visualization._plotly_imports import Scatter\n from optuna.visualization._utils import COLOR_SCALE\n\n_logger = get_logger(__name__)\n\n\nclass _SliceSubplotInfo(NamedTuple):\n param_name: str\n x: List[Any]\n y: List[float]\n trial_numbers: List[int]\n is_log: bool\n is_numerical: bool\n\n\nclass _SlicePlotInfo(NamedTuple):\n target_name: str\n subplots: List[_SliceSubplotInfo]\n\n\ndef _get_slice_subplot_info(\n trials: List[FrozenTrial],\n param: str,\n target: Optional[Callable[[FrozenTrial], float]],\n log_scale: bool,\n numerical: bool,\n) -> _SliceSubplotInfo:\n\n if target is None:\n\n def _target(t: FrozenTrial) -> float:\n return cast(float, t.value)\n\n target = _target\n\n return _SliceSubplotInfo(\n param_name=param,\n x=[t.params[param] for t in trials if param in t.params],\n y=[target(t) for t in trials if param in t.params],\n trial_numbers=[t.number for t in trials if param in t.params],\n is_log=log_scale,\n is_numerical=numerical,\n )\n\n\ndef _get_slice_plot_info(\n study: Study,\n params: Optional[List[str]],\n target: Optional[Callable[[FrozenTrial], float]],\n target_name: str,\n) -> _SlicePlotInfo:\n\n _check_plot_args(study, target, target_name)\n\n trials = _filter_nonfinite(\n 
study.get_trials(deepcopy=False, states=(TrialState.COMPLETE,)), target=target\n )\n\n if len(trials) == 0:\n _logger.warning(\"Your study does not have any completed trials.\")\n return _SlicePlotInfo(target_name, [])\n\n all_params = {p_name for t in trials for p_name in t.params.keys()}\n if params is None:\n sorted_params = sorted(all_params)\n else:\n for input_p_name in params:\n if input_p_name not in all_params:\n raise ValueError(f\"Parameter {input_p_name} does not exist in your study.\")\n sorted_params = sorted(set(params))\n\n return _SlicePlotInfo(\n target_name=target_name,\n subplots=[\n _get_slice_subplot_info(\n trials=trials,\n param=param,\n target=target,\n log_scale=_is_log_scale(trials, param),\n numerical=_is_numerical(trials, param),\n )\n for param in sorted_params\n ],\n )\n\n\ndef plot_slice(\n study: Study,\n params: Optional[List[str]] = None,\n *,\n target: Optional[Callable[[FrozenTrial], float]] = None,\n target_name: str = \"Objective Value\",\n) -> \"go.Figure\":\n \"\"\"Plot the parameter relationship as slice plot in a study.\n\n Note that, if a parameter contains missing values, a trial with missing values is not plotted.\n\n Example:\n\n The following code snippet shows how to plot the parameter relationship as slice plot.\n\n .. plotly::\n\n import optuna\n\n\n def objective(trial):\n x = trial.suggest_float(\"x\", -100, 100)\n y = trial.suggest_categorical(\"y\", [-1, 0, 1])\n return x ** 2 + y\n\n\n sampler = optuna.samplers.TPESampler(seed=10)\n study = optuna.create_study(sampler=sampler)\n study.optimize(objective, n_trials=10)\n\n fig = optuna.visualization.plot_slice(study, params=[\"x\", \"y\"])\n fig.show()\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their target values.\n params:\n Parameter list to visualize. The default is all parameters.\n target:\n A function to specify the value to display. If it is :obj:`None` and ``study`` is being\n used for single-objective optimization, the objective values are plotted.\n\n .. 
note::\n Specify this argument if ``study`` is being used for multi-objective optimization.\n target_name:\n Target's name to display on the axis label.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _imports.check()\n return _get_slice_plot(_get_slice_plot_info(study, params, target, target_name))\n\n\ndef _get_slice_plot(info: _SlicePlotInfo) -> \"go.Figure\":\n\n layout = go.Layout(title=\"Slice Plot\")\n\n if len(info.subplots) == 0:\n return go.Figure(data=[], layout=layout)\n elif len(info.subplots) == 1:\n figure = go.Figure(data=[_generate_slice_subplot(info.subplots[0])], layout=layout)\n figure.update_xaxes(title_text=info.subplots[0].param_name)\n figure.update_yaxes(title_text=info.target_name)\n if info.subplots[0].is_log:\n figure.update_xaxes(type=\"log\")\n else:\n figure = make_subplots(rows=1, cols=len(info.subplots), shared_yaxes=True)\n figure.update_layout(layout)\n showscale = True # showscale option only needs to be specified once.\n for column_index, subplot_info in enumerate(info.subplots, start=1):\n trace = _generate_slice_subplot(subplot_info)\n trace.update(marker={\"showscale\": showscale}) # showscale's default is True.\n if showscale:\n showscale = False\n figure.add_trace(trace, row=1, col=column_index)\n figure.update_xaxes(title_text=subplot_info.param_name, row=1, col=column_index)\n if column_index == 1:\n figure.update_yaxes(title_text=info.target_name, row=1, col=column_index)\n if subplot_info.is_log:\n figure.update_xaxes(type=\"log\", row=1, col=column_index)\n if len(info.subplots) > 3:\n # Ensure that each subplot has a minimum width without relying on autusizing.\n figure.update_layout(width=300 * len(info.subplots))\n\n return figure\n\n\ndef _generate_slice_subplot(subplot_info: _SliceSubplotInfo) -> \"Scatter\":\n return go.Scatter(\n x=subplot_info.x,\n y=subplot_info.y,\n mode=\"markers\",\n marker={\n \"line\": {\"width\": 0.5, \"color\": \"Grey\"},\n \"color\": subplot_info.trial_numbers,\n \"colorscale\": COLOR_SCALE,\n \"colorbar\": {\n \"title\": \"Trial\",\n \"x\": 1.0, # Offset the colorbar position with a fixed width `xpad`.\n \"xpad\": 40,\n },\n },\n showlegend=False,\n )\n", "path": "optuna/visualization/_slice.py"}]} | 2,986 | 179 |
gh_patches_debug_28308 | rasdani/github-patches | git_diff | sunpy__sunpy-5451 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Order of arguments in `scale` input to `make_fitswcs_header` is wrong in AIA/EUVI reprojection example
In [the AIA/EUVI reprojection mosaic example](https://docs.sunpy.org/en/latest/generated/gallery/map_transformations/reprojection_aia_euvi_mosaic.html), the ordering of the `scale` argument to `make_fitswcs_header` is incorrect. The ordering should be Cartesian (lon, lat) [according to the `make_fitswcs_header` docstring](https://docs.sunpy.org/en/stable/api/sunpy.map.make_fitswcs_header.html#sunpy.map.make_fitswcs_header), but in this example, the order is according to the array index. This actually has no effect on the example output as the scale in both directions is the same (1 deg/pix), but is potentially confusing and conflicts with the function docstring.
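For reference, a corrected call would simply swap the two entries so the longitude step comes first. A sketch using the same 1 deg/pix grid as the example, with only the `scale` ordering changed:

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
import sunpy.map

shape_out = (180, 360)  # array shape: (rows = lat, cols = lon)
frame_out = SkyCoord(0, 0, unit=u.deg, frame="heliographic_stonyhurst",
                     obstime="2011-11-01")

# `scale` follows Cartesian (lon, lat) order per the docstring,
# so the longitude step (360 deg over shape_out[1] columns) goes first.
header = sunpy.map.make_fitswcs_header(
    shape_out,
    frame_out,
    scale=[360 / shape_out[1], 180 / shape_out[0]] * u.deg / u.pix,
    projection_code="CAR",
)
```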
</issue>
<code>
[start of examples/map_transformations/reprojection_heliographic_stonyhurst.py]
1 """
2 ===========================
3 Creating a Heliographic Map
4 ===========================
5
6 In this example we use `reproject` to generate an image in heliographic coordinates from an AIA image.
7
8 You will need `reproject <https://reproject.readthedocs.io/en/stable/>`__ v0.6 or higher installed.
9 """
10 # sphinx_gallery_thumbnail_number = 2
11
12 import matplotlib.pyplot as plt
13 from reproject import reproject_interp
14
15 import astropy.units as u
16 from astropy.coordinates import SkyCoord
17 from astropy.wcs import WCS
18
19 import sunpy.data.sample
20 import sunpy.map
21
22 ###############################################################################
23 # We will start with using sunpy's sample data for this example.
24
25 aia_map = sunpy.map.Map(sunpy.data.sample.AIA_193_IMAGE)
26
27 fig = plt.figure()
28 ax = plt.subplot(projection=aia_map)
29 aia_map.plot(ax)
30
31 ###############################################################################
32 # Reproject works by transforming an input image (with a `~astropy.wcs.WCS`) to
33 # an output image, specified by a different WCS object. Therefore we need to
34 # build a `~astropy.wcs.WCS` object describing the output we desire.
35 # To do this we use the `sunpy.map.make_fitswcs_header` which assists us in
36 # constructing this World Coordinate System (WCS) object.
37 # Here we create a WCS based on a heliographic
38 # Stonyhurst reference coordinate and with the CAR (plate carree) projection.
39
40 shape_out = [720, 1440]
41 frame_out = SkyCoord(0, 0, unit=u.deg,
42 frame="heliographic_stonyhurst",
43 obstime=aia_map.date)
44 header = sunpy.map.make_fitswcs_header(shape_out,
45 frame_out,
46 scale=[180 / shape_out[0],
47 360 / shape_out[1]] * u.deg / u.pix,
48 projection_code="CAR")
49
50 out_wcs = WCS(header)
51
52 ###############################################################################
53 # With the new header, re-project the data into the new coordinate system.
54 # Here we are using the fastest but least accurate method of reprojection,
55 # `reproject.reproject_interp`, a more accurate but slower method is
56 # `reproject.reproject_adaptive`.
57
58 array, footprint = reproject_interp(aia_map, out_wcs, shape_out=shape_out)
59 outmap = sunpy.map.Map((array, header))
60 outmap.plot_settings = aia_map.plot_settings
61
62 ###############################################################################
63 # Plot the result.
64
65 fig = plt.figure()
66 ax = plt.subplot(projection=outmap)
67 outmap.plot(ax)
68
69 ax.set_xlim(0, shape_out[1])
70 ax.set_ylim(0, shape_out[0])
71
72 plt.show()
73
[end of examples/map_transformations/reprojection_heliographic_stonyhurst.py]
[start of examples/map_transformations/reprojection_aia_euvi_mosaic.py]
1 """
2 =========================================
3 Creating a Full Sun Map with AIA and EUVI
4 =========================================
5
6 With SDO/AIA and STEREO/A and STEREO/B, it is now possible (given specific dates)
7 to combine three EUV images from these satellites
8 to produce a full latitude / longitude map of the Sun.
9
10 You will need an active internet connection as well as
11 `reproject <https://reproject.readthedocs.io/en/stable/>`__ v0.6 or higher installed.
12 """
13 # sphinx_gallery_thumbnail_number = 4
14
15 import matplotlib.pyplot as plt
16 import numpy as np
17 from reproject import reproject_interp
18 from reproject.mosaicking import reproject_and_coadd
19
20 import astropy.units as u
21 from astropy.coordinates import SkyCoord
22 from astropy.wcs import WCS
23
24 import sunpy.map
25 import sunpy.sun
26 from sunpy.coordinates import get_body_heliographic_stonyhurst
27 from sunpy.net import Fido
28 from sunpy.net import attrs as a
29
30 ######################################################################
31 # To get started, let's download the data:
32
33 stereo = (a.Instrument("EUVI") &
34 a.Time('2011-11-01', '2011-11-01T00:10:00'))
35 aia = (a.Instrument.aia &
36 a.Sample(24 * u.hour) &
37 a.Time('2011-11-01', '2011-11-02'))
38 wave = a.Wavelength(19.5 * u.nm, 19.5 * u.nm)
39 res = Fido.search(wave, aia | stereo)
40 files = Fido.fetch(res)
41
42 ######################################################################
43 # Next we create a sunpy map for each of the files.
44
45 maps = sunpy.map.Map(sorted(files))
46
47 ######################################################################
48 # To reduce memory consumption we also downsample these maps before continuing,
49 # you can disable this.
50
51 maps = [m.resample((1024, 1024)*u.pix) for m in maps]
52
53 ######################################################################
54 # When combining these images all three need to assume the same radius of
55 # the Sun for the data. The AIA images specify a slightly different value
56 # than the IAU 2015 constant. To avoid coordinate transformation issues we
57 # reset this here.
58
59 maps[0].meta['rsun_ref'] = sunpy.sun.constants.radius.to_value(u.m)
60
61 ######################################################################
62 # Next we will plot the locations of the three spacecraft with respect to
63 # the Sun so we can easily see the relative separations.
64
65 earth = get_body_heliographic_stonyhurst('earth', maps[0].date)
66
67 plt.figure(figsize=(8, 8))
68 r_unit = u.AU
69
70 ax = plt.subplot(projection='polar')
71 circle = plt.Circle((0.0, 0.0), (10*u.Rsun).to_value(r_unit),
72 transform=ax.transProjectionAffine + ax.transAxes, color="yellow",
73 alpha=1, label="Sun")
74 ax.add_artist(circle)
75 ax.text(earth.lon.to_value("rad")+0.05, earth.radius.to_value(r_unit), "Earth")
76
77 for this_satellite, this_coord in [(m.observatory, m.observer_coordinate) for m in maps]:
78 ax.plot(this_coord.lon.to('rad'), this_coord.radius.to(r_unit), 'o', label=this_satellite)
79
80 ax.set_theta_zero_location("S")
81 ax.set_rlim(0, 1.3)
82
83 ax.legend()
84
85 plt.show()
86
87 ######################################################################
88 # The next step is to calculate the output coordinate system for the combined
89 # map. We select a heliographic Stonyhurst frame, and a Plate Carree (CAR)
90 # projection, and generate a header using `sunpy.map.make_fitswcs_header` and
91 # then construct a World Coordinate System (WCS) object for that header.
92
93 shape_out = (180, 360) # This is set deliberately low to reduce memory consumption
94 header = sunpy.map.make_fitswcs_header(shape_out,
95 SkyCoord(0, 0, unit=u.deg,
96 frame="heliographic_stonyhurst",
97 obstime=maps[0].date),
98 scale=[180 / shape_out[0],
99 360 / shape_out[1]] * u.deg / u.pix,
100 wavelength=int(maps[0].meta['wavelnth']) * u.AA,
101 projection_code="CAR")
102 out_wcs = WCS(header)
103
104 ######################################################################
105 # Next we call the `reproject.mosaicking.reproject_and_coadd` function, which
106 # takes a list of maps, and the desired output WCS and array shape.
107
108 array, footprint = reproject_and_coadd(maps, out_wcs, shape_out,
109 reproject_function=reproject_interp)
110
111 ######################################################################
112 # To display the output we construct a new map using the new array and our
113 # generated header. We also borrow the plot settings from the AIA map.
114
115 outmap = sunpy.map.Map((array, header))
116 outmap.plot_settings = maps[0].plot_settings
117 outmap.plot()
118
119 plt.show()
120
121 ######################################################################
122 # Improving the Output
123 # --------------------
124 #
125 # As you can see this leaves a little to be desired. To reduce the obvious
126 # warping towards the points which are close to the limb in the input
127 # images, we can define a set of weights to use when co-adding the output
128 # arrays. To reduce this warping we want to calculate a set of weights
129 # which highly weigh points close to the centre of the disk in the input
130 # image.
131 #
132 # We can achieve this by using sunpy's coordinate framework. First we
133 # calculate all the world coordinates for all the pixels in all three
134 # input maps.
135
136 coordinates = tuple(map(sunpy.map.all_coordinates_from_map, maps))
137
138 ######################################################################
139 # To get a weighting which is high close to disk centre and low towards
140 # the limb, we can use the Z coordinate in the heliocentric frame. This
141 # coordinate is the distance of the sphere from the centre of the Sun
142 # towards the observer.
143
144 weights = [coord.transform_to("heliocentric").z.value for coord in coordinates]
145
146 ######################################################################
147 # These weights are good, but they are better if the ramp down is a little
148 # smoother, and more biased to the centre. Also we can scale them to the
149 # range 0-1, and set any off disk (NaN) regions to 0.
150
151 weights = [(w / np.nanmax(w)) ** 3 for w in weights]
152 for w in weights:
153 w[np.isnan(w)] = 0
154
155 plt.figure()
156 plt.imshow(weights[0])
157 plt.colorbar()
158
159 plt.show()
160
161 ######################################################################
162 # Now we can rerun the reprojection. This time we also set
163 # ``match_background=True`` which scales the images by a single scaling
164 # factor so they are of similar brightness. We also set
165 # ``background_reference=0`` which uses the AIA map as the reference for
166 # the background scaling.
167 #
168 # Here we are using the fastest but least accurate method of reprojection,
169 # `reproject.reproject_interp`, a more accurate but slower method is
170 # `reproject.reproject_adaptive`.
171
172 array, _ = reproject_and_coadd(maps, out_wcs, shape_out,
173 input_weights=weights,
174 reproject_function=reproject_interp,
175 match_background=True,
176 background_reference=0)
177
178 ######################################################################
179 # Once again we create a new map, and this time we customise the plot a
180 # little.
181
182 outmap = sunpy.map.Map((array, header))
183 outmap.plot_settings = maps[0].plot_settings
184 outmap.nickname = 'AIA + EUVI/A + EUVI/B'
185
186 plt.figure(figsize=(10, 5))
187 ax = plt.subplot(projection=out_wcs)
188 im = outmap.plot(vmin=400)
189
190 lon, lat = ax.coords
191 lon.set_coord_type("longitude")
192 lon.coord_wrap = 180
193 lon.set_format_unit(u.deg)
194 lat.set_coord_type("latitude")
195 lat.set_format_unit(u.deg)
196
197 lon.set_axislabel('Heliographic Longitude', minpad=0.8)
198 lat.set_axislabel('Heliographic Latitude', minpad=0.9)
199 lon.set_ticks(spacing=25*u.deg, color='k')
200 lat.set_ticks(spacing=15*u.deg, color='k')
201
202 plt.colorbar(im, ax=ax)
203
204 # Reset the view to pixel centers
205 _ = ax.axis((0, shape_out[1], 0, shape_out[0]))
206
207 plt.show()
208
[end of examples/map_transformations/reprojection_aia_euvi_mosaic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/map_transformations/reprojection_aia_euvi_mosaic.py b/examples/map_transformations/reprojection_aia_euvi_mosaic.py
--- a/examples/map_transformations/reprojection_aia_euvi_mosaic.py
+++ b/examples/map_transformations/reprojection_aia_euvi_mosaic.py
@@ -95,8 +95,8 @@
SkyCoord(0, 0, unit=u.deg,
frame="heliographic_stonyhurst",
obstime=maps[0].date),
- scale=[180 / shape_out[0],
- 360 / shape_out[1]] * u.deg / u.pix,
+ scale=[360 / shape_out[1],
+ 180 / shape_out[0]] * u.deg / u.pix,
wavelength=int(maps[0].meta['wavelnth']) * u.AA,
projection_code="CAR")
out_wcs = WCS(header)
diff --git a/examples/map_transformations/reprojection_heliographic_stonyhurst.py b/examples/map_transformations/reprojection_heliographic_stonyhurst.py
--- a/examples/map_transformations/reprojection_heliographic_stonyhurst.py
+++ b/examples/map_transformations/reprojection_heliographic_stonyhurst.py
@@ -43,8 +43,8 @@
obstime=aia_map.date)
header = sunpy.map.make_fitswcs_header(shape_out,
frame_out,
- scale=[180 / shape_out[0],
- 360 / shape_out[1]] * u.deg / u.pix,
+ scale=[360 / shape_out[1],
+ 180 / shape_out[0]] * u.deg / u.pix,
projection_code="CAR")
out_wcs = WCS(header)
| {"golden_diff": "diff --git a/examples/map_transformations/reprojection_aia_euvi_mosaic.py b/examples/map_transformations/reprojection_aia_euvi_mosaic.py\n--- a/examples/map_transformations/reprojection_aia_euvi_mosaic.py\n+++ b/examples/map_transformations/reprojection_aia_euvi_mosaic.py\n@@ -95,8 +95,8 @@\n SkyCoord(0, 0, unit=u.deg,\n frame=\"heliographic_stonyhurst\",\n obstime=maps[0].date),\n- scale=[180 / shape_out[0],\n- 360 / shape_out[1]] * u.deg / u.pix,\n+ scale=[360 / shape_out[1],\n+ 180 / shape_out[0]] * u.deg / u.pix,\n wavelength=int(maps[0].meta['wavelnth']) * u.AA,\n projection_code=\"CAR\")\n out_wcs = WCS(header)\ndiff --git a/examples/map_transformations/reprojection_heliographic_stonyhurst.py b/examples/map_transformations/reprojection_heliographic_stonyhurst.py\n--- a/examples/map_transformations/reprojection_heliographic_stonyhurst.py\n+++ b/examples/map_transformations/reprojection_heliographic_stonyhurst.py\n@@ -43,8 +43,8 @@\n obstime=aia_map.date)\n header = sunpy.map.make_fitswcs_header(shape_out,\n frame_out,\n- scale=[180 / shape_out[0],\n- 360 / shape_out[1]] * u.deg / u.pix,\n+ scale=[360 / shape_out[1],\n+ 180 / shape_out[0]] * u.deg / u.pix,\n projection_code=\"CAR\")\n \n out_wcs = WCS(header)\n", "issue": "Order of arguments in `scale` input to `make_fitswcs_header` is wrong in AIA/EUVI reprojection example\nIn [the AIA/EUVI reprojection mosaic example](https://docs.sunpy.org/en/latest/generated/gallery/map_transformations/reprojection_aia_euvi_mosaic.html), the ordering of the `scale` argument to `make_fitswcs_header` is incorrect. The ordering should be Cartesian (lon, lat) [according to the `make_fitswcs_header` docstring](https://docs.sunpy.org/en/stable/api/sunpy.map.make_fitswcs_header.html#sunpy.map.make_fitswcs_header), but in this example, the order is according to the array index. This actually has no effect on the example output as the scale in both directions is the same (1 deg/pix), but is potentially confusing and conflicts with the function docstring.\n", "before_files": [{"content": "\"\"\"\n===========================\nCreating a Heliographic Map\n===========================\n\nIn this example we use the `reproject` generate an image in heliographic coordinates from an AIA image.\n\nYou will need `reproject <https://reproject.readthedocs.io/en/stable/>`__ v0.6 or higher installed.\n\"\"\"\n# sphinx_gallery_thumbnail_number = 2\n\nimport matplotlib.pyplot as plt\nfrom reproject import reproject_interp\n\nimport astropy.units as u\nfrom astropy.coordinates import SkyCoord\nfrom astropy.wcs import WCS\n\nimport sunpy.data.sample\nimport sunpy.map\n\n###############################################################################\n# We will start with using sunpy's sample data for this example.\n\naia_map = sunpy.map.Map(sunpy.data.sample.AIA_193_IMAGE)\n\nfig = plt.figure()\nax = plt.subplot(projection=aia_map)\naia_map.plot(ax)\n\n###############################################################################\n# Reproject works by transforming an input image (with a `~astropy.wcs.WCS`) to\n# a output image, specified by a different WCS object. 
Therefore we need to\n# build a `~astropy.wcs.WCS` object describing the output we desire.\n# To do this we use the `sunpy.map.make_fitswcs_header` which assists us in\n# constructing this World Coordinate System (WCS) object.\n# Here we create a WCS based on a heliographic\n# Stonyhurst reference coordinate and with the CAR (plate carree) projection.\n\nshape_out = [720, 1440]\nframe_out = SkyCoord(0, 0, unit=u.deg,\n frame=\"heliographic_stonyhurst\",\n obstime=aia_map.date)\nheader = sunpy.map.make_fitswcs_header(shape_out,\n frame_out,\n scale=[180 / shape_out[0],\n 360 / shape_out[1]] * u.deg / u.pix,\n projection_code=\"CAR\")\n\nout_wcs = WCS(header)\n\n###############################################################################\n# With the new header, re-project the data into the new coordinate system.\n# Here we are using the fastest but least accurate method of reprojection,\n# `reproject.reproject_interp`, a more accurate but slower method is\n# `reproject.reproject_adaptive`.\n\narray, footprint = reproject_interp(aia_map, out_wcs, shape_out=shape_out)\noutmap = sunpy.map.Map((array, header))\noutmap.plot_settings = aia_map.plot_settings\n\n###############################################################################\n# Plot the result.\n\nfig = plt.figure()\nax = plt.subplot(projection=outmap)\noutmap.plot(ax)\n\nax.set_xlim(0, shape_out[1])\nax.set_ylim(0, shape_out[0])\n\nplt.show()\n", "path": "examples/map_transformations/reprojection_heliographic_stonyhurst.py"}, {"content": "\"\"\"\n=========================================\nCreating a Full Sun Map with AIA and EUVI\n=========================================\n\nWith SDO/AIA and STEREO/A and STEREO/B, it is now possible (given specific dates)\nto combine combine three EUV images from these satellites\nto produce a full latitude / longitude map of the Sun.\n\nYou will need an active internet connection as well as\n`reproject <https://reproject.readthedocs.io/en/stable/>`__ v0.6 or higher installed.\n\"\"\"\n# sphinx_gallery_thumbnail_number = 4\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom reproject import reproject_interp\nfrom reproject.mosaicking import reproject_and_coadd\n\nimport astropy.units as u\nfrom astropy.coordinates import SkyCoord\nfrom astropy.wcs import WCS\n\nimport sunpy.map\nimport sunpy.sun\nfrom sunpy.coordinates import get_body_heliographic_stonyhurst\nfrom sunpy.net import Fido\nfrom sunpy.net import attrs as a\n\n######################################################################\n# To get started, let's download the data:\n\nstereo = (a.Instrument(\"EUVI\") &\n a.Time('2011-11-01', '2011-11-01T00:10:00'))\naia = (a.Instrument.aia &\n a.Sample(24 * u.hour) &\n a.Time('2011-11-01', '2011-11-02'))\nwave = a.Wavelength(19.5 * u.nm, 19.5 * u.nm)\nres = Fido.search(wave, aia | stereo)\nfiles = Fido.fetch(res)\n\n######################################################################\n# Next we create a sunpy map for each of the files.\n\nmaps = sunpy.map.Map(sorted(files))\n\n######################################################################\n# To reduce memory consumption we also downsample these maps before continuing,\n# you can disable this.\n\nmaps = [m.resample((1024, 1024)*u.pix) for m in maps]\n\n######################################################################\n# When combining these images all three need to assume the same radius of\n# the Sun for the data. The AIA images specify a slightly different value\n# than the IAU 2015 constant. 
To avoid coordinate transformation issues we\n# reset this here.\n\nmaps[0].meta['rsun_ref'] = sunpy.sun.constants.radius.to_value(u.m)\n\n######################################################################\n# Next we will plot the locations of the three spacecraft with respect to\n# the Sun so we can easily see the relative separations.\n\nearth = get_body_heliographic_stonyhurst('earth', maps[0].date)\n\nplt.figure(figsize=(8, 8))\nr_unit = u.AU\n\nax = plt.subplot(projection='polar')\ncircle = plt.Circle((0.0, 0.0), (10*u.Rsun).to_value(r_unit),\n transform=ax.transProjectionAffine + ax.transAxes, color=\"yellow\",\n alpha=1, label=\"Sun\")\nax.add_artist(circle)\nax.text(earth.lon.to_value(\"rad\")+0.05, earth.radius.to_value(r_unit), \"Earth\")\n\nfor this_satellite, this_coord in [(m.observatory, m.observer_coordinate) for m in maps]:\n ax.plot(this_coord.lon.to('rad'), this_coord.radius.to(r_unit), 'o', label=this_satellite)\n\nax.set_theta_zero_location(\"S\")\nax.set_rlim(0, 1.3)\n\nax.legend()\n\nplt.show()\n\n######################################################################\n# The next step is to calculate the output coordinate system for the combined\n# map. We select a heliographic Stonyhurst frame, and a Plate Carree (CAR)\n# projection, and generate a header using `sunpy.map.make_fitswcs_header` and\n# then construct a World Coordinate System (WCS) object for that header.\n\nshape_out = (180, 360) # This is set deliberately low to reduce memory consumption\nheader = sunpy.map.make_fitswcs_header(shape_out,\n SkyCoord(0, 0, unit=u.deg,\n frame=\"heliographic_stonyhurst\",\n obstime=maps[0].date),\n scale=[180 / shape_out[0],\n 360 / shape_out[1]] * u.deg / u.pix,\n wavelength=int(maps[0].meta['wavelnth']) * u.AA,\n projection_code=\"CAR\")\nout_wcs = WCS(header)\n\n######################################################################\n# Next we call the `reproject.mosaicking.reproject_and_coadd` function, which\n# takes a list of maps, and the desired output WCS and array shape.\n\narray, footprint = reproject_and_coadd(maps, out_wcs, shape_out,\n reproject_function=reproject_interp)\n\n######################################################################\n# To display the output we construct a new map using the new array and our\n# generated header. We also borrow the plot settings from the AIA map.\n\noutmap = sunpy.map.Map((array, header))\noutmap.plot_settings = maps[0].plot_settings\noutmap.plot()\n\nplt.show()\n\n######################################################################\n# Improving the Output\n# --------------------\n#\n# As you can see this leaves a little to be desired. To reduce the obvious\n# warping towards the points which are close to the limb in the input\n# images, we can define a set of weights to use when co-adding the output\n# arrays. To reduce this warping we want to calculate an set of weights\n# which highly weigh points close to the centre of the disk in the input\n# image.\n#\n# We can achieve this by using sunpy's coordinate framework. First we\n# calculate all the world coordinates for all the pixels in all three\n# input maps.\n\ncoordinates = tuple(map(sunpy.map.all_coordinates_from_map, maps))\n\n######################################################################\n# To get a weighting which is high close to disk centre and low towards\n# the limb, we can use the Z coordinate in the heliocentric frame. 
This\n# coordinate is the distance of the sphere from the centre of the Sun\n# towards the observer.\n\nweights = [coord.transform_to(\"heliocentric\").z.value for coord in coordinates]\n\n######################################################################\n# These weights are good, but they are better if the ramp down is a little\n# smoother, and more biased to the centre. Also we can scale them to the\n# range 0-1, and set any off disk (NaN) regions to 0.\n\nweights = [(w / np.nanmax(w)) ** 3 for w in weights]\nfor w in weights:\n w[np.isnan(w)] = 0\n\nplt.figure()\nplt.imshow(weights[0])\nplt.colorbar()\n\nplt.show()\n\n######################################################################\n# Now we can rerun the reprojection. This time we also set\n# ``match_background=True`` which scales the images by a single scaling\n# factor so they are of similar brightness. We also set\n# ``background_reference=0`` which uses the AIA map as the reference for\n# the background scaling.\n#\n# Here we are using the fastest but least accurate method of reprojection,\n# `reproject.reproject_interp`, a more accurate but slower method is\n# `reproject.reproject_adaptive`.\n\narray, _ = reproject_and_coadd(maps, out_wcs, shape_out,\n input_weights=weights,\n reproject_function=reproject_interp,\n match_background=True,\n background_reference=0)\n\n######################################################################\n# Once again we create a new map, and this time we customise the plot a\n# little.\n\noutmap = sunpy.map.Map((array, header))\noutmap.plot_settings = maps[0].plot_settings\noutmap.nickname = 'AIA + EUVI/A + EUVI/B'\n\nplt.figure(figsize=(10, 5))\nax = plt.subplot(projection=out_wcs)\nim = outmap.plot(vmin=400)\n\nlon, lat = ax.coords\nlon.set_coord_type(\"longitude\")\nlon.coord_wrap = 180\nlon.set_format_unit(u.deg)\nlat.set_coord_type(\"latitude\")\nlat.set_format_unit(u.deg)\n\nlon.set_axislabel('Heliographic Longitude', minpad=0.8)\nlat.set_axislabel('Heliographic Latitude', minpad=0.9)\nlon.set_ticks(spacing=25*u.deg, color='k')\nlat.set_ticks(spacing=15*u.deg, color='k')\n\nplt.colorbar(im, ax=ax)\n\n# Reset the view to pixel centers\n_ = ax.axis((0, shape_out[1], 0, shape_out[0]))\n\nplt.show()\n", "path": "examples/map_transformations/reprojection_aia_euvi_mosaic.py"}]} | 3,874 | 388 |
gh_patches_debug_16989 | rasdani/github-patches | git_diff | gpodder__mygpo-493 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: Device Synchronization API - Start / Stop Sync returning HTTP status 500
During my work on PR https://github.com/gpodder/mygpo/pull/122 I was testing the Device Synchronization API - Start / Stop Sync (https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#post--api-2-sync-devices-(username).json)
I sent the following request
```json
{
"synchronize": [
[
"my-desktop", "cellphone"
]
]
}
```
and it is returning HTTP 500
```html
<html>
<head>
<title>500 Internal server error (gpodder.net)</title>
<link rel="stylesheet" type="text/css" href="/static/css/fail.css" />
</head>
<body>
<div id="c">
<div id="fail">
<h1>500 - Internal server error.</h1>
<p>
The service is currently overloaded.
Please try again later or contact us.
</p>
</div>
</div>
<img id="icon" src="/static/failpodder.png">
</body>
</html>
```
As a reference, a previous call to https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#get--api-2-sync-devices-(username).json was returning:
```json
{
"synchronized": [],
"not-synchronized": [
"cellphone",
"my-desktop"
]
}
```
I'm able to sync these devices on the web UI, though.
</issue>
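For orientation, the two calls described above can be exercised with a short script; the base URL and credentials below are placeholders (assumptions, not values from the issue):
```python
# Hypothetical reproduction sketch; adjust BASE, AUTH and the username for a real instance.
import requests

BASE = "http://localhost:8000"   # assumed local mygpo instance
AUTH = ("alice", "secret")       # assumed test credentials

# GET reports the current sync status and works as documented
r = requests.get(BASE + "/api/2/sync-devices/alice.json", auth=AUTH)
print(r.status_code, r.json())

# POST with the payload from the report currently answers with HTTP 500
payload = {"synchronize": [["my-desktop", "cellphone"]]}
r = requests.post(BASE + "/api/2/sync-devices/alice.json", auth=AUTH, json=payload)
print(r.status_code)
```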
<code>
[start of mygpo/api/advanced/sync.py]
1 from django.http import HttpResponseBadRequest, HttpResponseNotFound
2 from django.views.decorators.csrf import csrf_exempt
3 from django.views.decorators.cache import never_cache
4
5 from mygpo.decorators import allowed_methods, cors_origin
6 from mygpo.utils import parse_request_body
7 from mygpo.api.basic_auth import require_valid_user, check_username
8 from mygpo.api.httpresponse import JsonResponse
9 from mygpo.users.models import Client, UserProxy
10 from mygpo.users.tasks import sync_user
11
12
13 @csrf_exempt
14 @require_valid_user
15 @check_username
16 @never_cache
17 @allowed_methods(["GET", "POST"])
18 @cors_origin()
19 def main(request, username):
20 """ API Endpoint for Device Synchronisation """
21
22 if request.method == "GET":
23 return JsonResponse(get_sync_status(request.user))
24
25 else:
26 try:
27 actions = parse_request_body(request)
28 except ValueError as e:
29 return HttpResponseBadRequest(str(e))
30
31 synclist = actions.get("synchronize", [])
32 stopsync = actions.get("stop-synchronize", [])
33
34 try:
35 update_sync_status(request.user, synclist, stopsync)
36 except ValueError as e:
37 return HttpResponseBadRequest(str(e))
38 except Client.DoesNotExist as e:
39 return HttpResponseNotFound(str(e))
40
41 return JsonResponse(get_sync_status(user))
42
43
44 def get_sync_status(user):
45 """ Returns the current Device Sync status """
46
47 sync_groups = []
48 unsynced = []
49
50 user = UserProxy.objects.from_user(user)
51 for group in user.get_grouped_devices():
52 uids = [device.uid for device in group.devices]
53
54 if group.is_synced:
55 sync_groups.append(uids)
56
57 else:
58 unsynced = uids
59
60 return {"synchronized": sync_groups, "not-synchronized": unsynced}
61
62
63 def update_sync_status(user, synclist, stopsync):
64 """Updates the current Device Sync status
65
66 Synchronisation between devices can be set up and stopped. Devices are
67 identified by their UIDs. Unknown UIDs cause errors, no new devices are
68 created."""
69
70 for devlist in synclist:
71
72 if len(devlist) <= 1:
73 raise ValueError("at least two devices are needed to sync")
74
75 # Setup all devices to sync with the first in the list
76 uid = devlist[0]
77 dev = user.client_set.get(uid=uid)
78
79 for other_uid in devlist[1:]:
80 other = user.get_device_by_uid(other_uid)
81 dev.sync_with(other)
82
83 for uid in stopsync:
84 dev = user.get_device_by_uid(uid)
85 try:
86 dev.stop_sync()
87 except ValueError:
88 # if all devices of a sync-group are un-synced,
89 # the last one will raise a ValueError, because it is no longer
90 # being synced -- we just ignore it
91 pass
92
93 user.save()
94
95 sync_user.delay(user)
96
[end of mygpo/api/advanced/sync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mygpo/api/advanced/sync.py b/mygpo/api/advanced/sync.py
--- a/mygpo/api/advanced/sync.py
+++ b/mygpo/api/advanced/sync.py
@@ -38,7 +38,7 @@
except Client.DoesNotExist as e:
return HttpResponseNotFound(str(e))
- return JsonResponse(get_sync_status(user))
+ return JsonResponse(get_sync_status(request.user))
def get_sync_status(user):
@@ -77,11 +77,11 @@
dev = user.client_set.get(uid=uid)
for other_uid in devlist[1:]:
- other = user.get_device_by_uid(other_uid)
+ other = user.client_set.get(uid=other_uid)
dev.sync_with(other)
for uid in stopsync:
- dev = user.get_device_by_uid(uid)
+ dev = user.client_set.get(uid=uid)
try:
dev.stop_sync()
except ValueError:
| {"golden_diff": "diff --git a/mygpo/api/advanced/sync.py b/mygpo/api/advanced/sync.py\n--- a/mygpo/api/advanced/sync.py\n+++ b/mygpo/api/advanced/sync.py\n@@ -38,7 +38,7 @@\n except Client.DoesNotExist as e:\n return HttpResponseNotFound(str(e))\n \n- return JsonResponse(get_sync_status(user))\n+ return JsonResponse(get_sync_status(request.user))\n \n \n def get_sync_status(user):\n@@ -77,11 +77,11 @@\n dev = user.client_set.get(uid=uid)\n \n for other_uid in devlist[1:]:\n- other = user.get_device_by_uid(other_uid)\n+ other = user.client_set.get(uid=other_uid)\n dev.sync_with(other)\n \n for uid in stopsync:\n- dev = user.get_device_by_uid(uid)\n+ dev = user.client_set.get(uid=uid)\n try:\n dev.stop_sync()\n except ValueError:\n", "issue": "API: Device Synchronization API - Start / Stop Sync returning HTTP status 500\nDuring my work on PR https://github.com/gpodder/mygpo/pull/122 is was testing the Device Synchronization API - Start / Stop Sync (https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#post--api-2-sync-devices-(username).json)\r\n\r\nI sent the following request\r\n```json\r\n{\r\n \"synchronize\": [\r\n [\r\n \"my-desktop\", \"cellphone\"\r\n ]\r\n ]\r\n}\r\n```\r\n\r\nand it is returning HTTP 500\r\n```html\r\n<html>\r\n <head>\r\n <title>500 Internal server error (gpodder.net)</title>\r\n <link rel=\"stylesheet\" type=\"text/css\" href=\"/static/css/fail.css\" />\r\n </head>\r\n <body>\r\n <div id=\"c\">\r\n <div id=\"fail\">\r\n <h1>500 - Internal server error.</h1>\r\n <p>\r\n The service is currently overloaded.\r\n Please try again later or contact us.\r\n </p>\r\n </div>\r\n </div>\r\n <img id=\"icon\" src=\"/static/failpodder.png\">\r\n </body>\r\n</html>\r\n```\r\n\r\nas a reference, a previous call to https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#get--api-2-sync-devices-(username).json was returning:\r\n```json\r\n{\r\n \"synchronized\": [],\r\n \"not-synchronized\": [\r\n \"cellphone\",\r\n \"my-desktop\"\r\n ]\r\n}\r\n```\r\n\r\nI'm able ot sync this devices on the web ui though.\n", "before_files": [{"content": "from django.http import HttpResponseBadRequest, HttpResponseNotFound\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.cache import never_cache\n\nfrom mygpo.decorators import allowed_methods, cors_origin\nfrom mygpo.utils import parse_request_body\nfrom mygpo.api.basic_auth import require_valid_user, check_username\nfrom mygpo.api.httpresponse import JsonResponse\nfrom mygpo.users.models import Client, UserProxy\nfrom mygpo.users.tasks import sync_user\n\n\n@csrf_exempt\n@require_valid_user\n@check_username\n@never_cache\n@allowed_methods([\"GET\", \"POST\"])\n@cors_origin()\ndef main(request, username):\n \"\"\" API Endpoint for Device Synchronisation \"\"\"\n\n if request.method == \"GET\":\n return JsonResponse(get_sync_status(request.user))\n\n else:\n try:\n actions = parse_request_body(request)\n except ValueError as e:\n return HttpResponseBadRequest(str(e))\n\n synclist = actions.get(\"synchronize\", [])\n stopsync = actions.get(\"stop-synchronize\", [])\n\n try:\n update_sync_status(request.user, synclist, stopsync)\n except ValueError as e:\n return HttpResponseBadRequest(str(e))\n except Client.DoesNotExist as e:\n return HttpResponseNotFound(str(e))\n\n return JsonResponse(get_sync_status(user))\n\n\ndef get_sync_status(user):\n \"\"\" Returns the current Device Sync status \"\"\"\n\n sync_groups = []\n unsynced = []\n\n user = UserProxy.objects.from_user(user)\n for 
group in user.get_grouped_devices():\n uids = [device.uid for device in group.devices]\n\n if group.is_synced:\n sync_groups.append(uids)\n\n else:\n unsynced = uids\n\n return {\"synchronized\": sync_groups, \"not-synchronized\": unsynced}\n\n\ndef update_sync_status(user, synclist, stopsync):\n \"\"\"Updates the current Device Sync status\n\n Synchronisation between devices can be set up and stopped. Devices are\n identified by their UIDs. Unknown UIDs cause errors, no new devices are\n created.\"\"\"\n\n for devlist in synclist:\n\n if len(devlist) <= 1:\n raise ValueError(\"at least two devices are needed to sync\")\n\n # Setup all devices to sync with the first in the list\n uid = devlist[0]\n dev = user.client_set.get(uid=uid)\n\n for other_uid in devlist[1:]:\n other = user.get_device_by_uid(other_uid)\n dev.sync_with(other)\n\n for uid in stopsync:\n dev = user.get_device_by_uid(uid)\n try:\n dev.stop_sync()\n except ValueError:\n # if all devices of a sync-group are un-synced,\n # the last one will raise a ValueError, because it is no longer\n # being synced -- we just ignore it\n pass\n\n user.save()\n\n sync_user.delay(user)\n", "path": "mygpo/api/advanced/sync.py"}]} | 1,701 | 211 |
gh_patches_debug_67497 | rasdani/github-patches | git_diff | vllm-project__vllm-2887 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[v0.3.1] Release Tracker
**ETA**: Feb 14-16th
## Major changes
TBD
## PRs to be merged before the release
- [x] #2855
- [x] #2845
- [x] ~~#2514~~
- [x] Ensure memory release when `LLM` class is deleted. #2882
- [x] #2875 #2880
</issue>
<code>
[start of vllm/__init__.py]
1 """vLLM: a high-throughput and memory-efficient inference engine for LLMs"""
2
3 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
4 from vllm.engine.async_llm_engine import AsyncLLMEngine
5 from vllm.engine.llm_engine import LLMEngine
6 from vllm.engine.ray_utils import initialize_cluster
7 from vllm.entrypoints.llm import LLM
8 from vllm.outputs import CompletionOutput, RequestOutput
9 from vllm.sampling_params import SamplingParams
10
11 __version__ = "0.3.0"
12
13 __all__ = [
14 "LLM",
15 "SamplingParams",
16 "RequestOutput",
17 "CompletionOutput",
18 "LLMEngine",
19 "EngineArgs",
20 "AsyncLLMEngine",
21 "AsyncEngineArgs",
22 "initialize_cluster",
23 ]
24
[end of vllm/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vllm/__init__.py b/vllm/__init__.py
--- a/vllm/__init__.py
+++ b/vllm/__init__.py
@@ -8,7 +8,7 @@
from vllm.outputs import CompletionOutput, RequestOutput
from vllm.sampling_params import SamplingParams
-__version__ = "0.3.0"
+__version__ = "0.3.1"
__all__ = [
"LLM",
| {"golden_diff": "diff --git a/vllm/__init__.py b/vllm/__init__.py\n--- a/vllm/__init__.py\n+++ b/vllm/__init__.py\n@@ -8,7 +8,7 @@\n from vllm.outputs import CompletionOutput, RequestOutput\n from vllm.sampling_params import SamplingParams\n \n-__version__ = \"0.3.0\"\n+__version__ = \"0.3.1\"\n \n __all__ = [\n \"LLM\",\n", "issue": "[v0.3.1] Release Tracker\n**ETA**: Feb 14-16 th\r\n\r\n## Major changes\r\n\r\nTBD\r\n\r\n## PRs to be merged before the release\r\n\r\n- [x] #2855 \r\n- [x] #2845 \r\n- [x] ~~#2514~~\r\n- [x] Ensure memory release when `LLM` class is deleted. #2882 \r\n- [x] #2875 #2880\n", "before_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.3.0\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}]} | 864 | 108 |
gh_patches_debug_6598 | rasdani/github-patches | git_diff | holoviz__panel-2883 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
panel examples gives UnboundLocalError
#### ALL software version info
panel 0.12.4
#### Description of expected behavior and the observed behavior
`$ panel examples` doesn't raise an error
#### Complete, minimal, self-contained example code that reproduces the issue
I was taking a look at https://panel.holoviz.org/#id1
```
panel examples
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
<img width="846" alt="Screen Shot 2021-11-04 at 9 23 56 PM" src="https://user-images.githubusercontent.com/17162724/140442696-82e6c5c2-4cd6-40f6-821d-47c87f5e1541.png">
</issue>
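The reported UnboundLocalError is the classic "name bound in only some branches" failure; a minimal, self-contained sketch of that pattern (illustrative only, not the actual panel code) is:
```python
# Sketch: `ret` is assigned on one path but read unconditionally afterwards.
def run(command):
    if command == "serve":
        ret = 0                        # only this branch binds `ret`
    elif command == "examples":
        print("copying examples ...")  # falls through without binding `ret`
    if ret is False:                   # UnboundLocalError for "examples"
        raise SystemExit(1)

run("examples")
```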
<code>
[start of panel/command/__init__.py]
1 """
2 Commandline interface to Panel
3 """
4 import sys
5 import argparse
6
7 from bokeh.__main__ import main as bokeh_entry_point
8 from bokeh.command.subcommands.serve import Serve as BkServe
9 from bokeh.command.util import die
10 from bokeh.util.string import nice_join
11
12 from .. import __version__
13 from .serve import Serve
14 from .oauth_secret import OAuthSecret
15
16
17 def transform_cmds(argv):
18 """
19 Allows usage with anaconda-project by remapping the argv list provided
20 into arguments accepted by Bokeh 0.12.7 or later.
21 """
22 replacements = {
23 '--anaconda-project-host':'--allow-websocket-origin',
24 '--anaconda-project-port': '--port',
25 '--anaconda-project-address': '--address'
26 }
27 transformed = []
28 skip = False
29 for arg in argv:
30 if skip:
31 skip = False
32 continue
33 if arg in replacements.keys():
34 transformed.append(replacements[arg])
35 elif arg == '--anaconda-project-iframe-hosts':
36 skip = True
37 continue
38 elif arg.startswith('--anaconda-project'):
39 continue
40 else:
41 transformed.append(arg)
42 return transformed
43
44
45 def main(args=None):
46 """Merges commands offered by pyct and bokeh and provides help for both"""
47 from bokeh.command.subcommands import all as bokeh_commands
48 bokeh_commands = bokeh_commands + [OAuthSecret]
49
50 try:
51 import pyct.cmd
52 pyct_commands = ['copy-examples', 'examples']
53 except Exception:
54 pass
55
56 parser = argparse.ArgumentParser(
57 prog="panel", epilog="See '<command> --help' to read about a specific subcommand."
58 )
59
60 parser.add_argument('-v', '--version', action='version', version=__version__)
61
62 subs = parser.add_subparsers(help="Sub-commands")
63
64 for cmd in pyct_commands:
65 cmd = cmd.replace('-', '_')
66 fn = getattr(pyct.cmd, cmd)
67 subs.add_parser(cmd, help=fn.__doc__)
68
69 for cls in bokeh_commands:
70 if cls is BkServe:
71 subparser = subs.add_parser(Serve.name, help=Serve.help)
72 subcommand = Serve(parser=subparser)
73 subparser.set_defaults(invoke=subcommand.invoke)
74 else:
75 subs.add_parser(cls.name, help=cls.help)
76
77 if len(sys.argv) == 1:
78 all_commands = sorted([c.name for c in bokeh_commands] + pyct_commands)
79 die("ERROR: Must specify subcommand, one of: %s" % nice_join(all_commands))
80
81 if sys.argv[1] in ('--help', '-h'):
82 args = parser.parse_args(sys.argv[1:])
83 args.invoke(args)
84 sys.exit()
85
86 if len(sys.argv) > 1 and any(sys.argv[1] == c.name for c in bokeh_commands):
87 sys.argv = transform_cmds(sys.argv)
88 if sys.argv[1] == 'serve':
89 args = parser.parse_args(sys.argv[1:])
90 try:
91 ret = args.invoke(args)
92 except Exception as e:
93 die("ERROR: " + str(e))
94 elif sys.argv[1] == 'oauth-secret':
95 ret = OAuthSecret(parser).invoke(args)
96 else:
97 ret = bokeh_entry_point()
98 elif sys.argv[1] in pyct_commands:
99 try:
100 import pyct.cmd
101 except ImportError:
102 print("install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)")
103 sys.exit(1)
104 pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)
105 else:
106 parser.parse_args(sys.argv[1:])
107 sys.exit(1)
108
109 if ret is False:
110 sys.exit(1)
111 elif ret is not True and isinstance(ret, int) and ret != 0:
112 sys.exit(ret)
113
114
115
116 if __name__ == "__main__":
117 main()
118
[end of panel/command/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/panel/command/__init__.py b/panel/command/__init__.py
--- a/panel/command/__init__.py
+++ b/panel/command/__init__.py
@@ -102,6 +102,7 @@
print("install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)")
sys.exit(1)
pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)
+ sys.exit()
else:
parser.parse_args(sys.argv[1:])
sys.exit(1)
| {"golden_diff": "diff --git a/panel/command/__init__.py b/panel/command/__init__.py\n--- a/panel/command/__init__.py\n+++ b/panel/command/__init__.py\n@@ -102,6 +102,7 @@\n print(\"install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)\")\n sys.exit(1)\n pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)\n+ sys.exit()\n else:\n parser.parse_args(sys.argv[1:])\n sys.exit(1)\n", "issue": "panel examples gives UnboundLocalError\n#### ALL software version info\r\npanel 0.12.4\r\n\r\n#### Description of expected behavior and the observed behavior\r\n`$ panel examples` doesn't raise an error\r\n\r\n#### Complete, minimal, self-contained example code that reproduces the issue\r\n\r\nWas taking a look at https://panel.holoviz.org/#id1\r\n\r\n```\r\npanel examples\r\n```\r\n\r\n#### Stack traceback and/or browser JavaScript console output\r\n\r\n#### Screenshots or screencasts of the bug in action\r\n\r\n<img width=\"846\" alt=\"Screen Shot 2021-11-04 at 9 23 56 PM\" src=\"https://user-images.githubusercontent.com/17162724/140442696-82e6c5c2-4cd6-40f6-821d-47c87f5e1541.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCommandline interface to Panel\n\"\"\"\nimport sys\nimport argparse\n\nfrom bokeh.__main__ import main as bokeh_entry_point\nfrom bokeh.command.subcommands.serve import Serve as BkServe\nfrom bokeh.command.util import die\nfrom bokeh.util.string import nice_join\n\nfrom .. import __version__\nfrom .serve import Serve\nfrom .oauth_secret import OAuthSecret\n\n\ndef transform_cmds(argv):\n \"\"\"\n Allows usage with anaconda-project by remapping the argv list provided\n into arguments accepted by Bokeh 0.12.7 or later.\n \"\"\"\n replacements = {\n '--anaconda-project-host':'--allow-websocket-origin',\n '--anaconda-project-port': '--port',\n '--anaconda-project-address': '--address'\n }\n transformed = []\n skip = False\n for arg in argv:\n if skip:\n skip = False\n continue\n if arg in replacements.keys():\n transformed.append(replacements[arg])\n elif arg == '--anaconda-project-iframe-hosts':\n skip = True\n continue\n elif arg.startswith('--anaconda-project'):\n continue\n else:\n transformed.append(arg)\n return transformed\n\n\ndef main(args=None):\n \"\"\"Merges commands offered by pyct and bokeh and provides help for both\"\"\"\n from bokeh.command.subcommands import all as bokeh_commands\n bokeh_commands = bokeh_commands + [OAuthSecret]\n\n try:\n import pyct.cmd\n pyct_commands = ['copy-examples', 'examples']\n except Exception:\n pass\n\n parser = argparse.ArgumentParser(\n prog=\"panel\", epilog=\"See '<command> --help' to read about a specific subcommand.\"\n )\n\n parser.add_argument('-v', '--version', action='version', version=__version__)\n\n subs = parser.add_subparsers(help=\"Sub-commands\")\n\n for cmd in pyct_commands:\n cmd = cmd.replace('-', '_')\n fn = getattr(pyct.cmd, cmd)\n subs.add_parser(cmd, help=fn.__doc__)\n\n for cls in bokeh_commands:\n if cls is BkServe:\n subparser = subs.add_parser(Serve.name, help=Serve.help)\n subcommand = Serve(parser=subparser)\n subparser.set_defaults(invoke=subcommand.invoke)\n else:\n subs.add_parser(cls.name, help=cls.help)\n\n if len(sys.argv) == 1:\n all_commands = sorted([c.name for c in bokeh_commands] + pyct_commands)\n die(\"ERROR: Must specify subcommand, one of: %s\" % nice_join(all_commands))\n\n if sys.argv[1] in ('--help', '-h'):\n args = parser.parse_args(sys.argv[1:])\n args.invoke(args)\n sys.exit()\n\n if len(sys.argv) > 1 and any(sys.argv[1] == 
c.name for c in bokeh_commands):\n sys.argv = transform_cmds(sys.argv)\n if sys.argv[1] == 'serve':\n args = parser.parse_args(sys.argv[1:])\n try:\n ret = args.invoke(args)\n except Exception as e:\n die(\"ERROR: \" + str(e))\n elif sys.argv[1] == 'oauth-secret':\n ret = OAuthSecret(parser).invoke(args)\n else:\n ret = bokeh_entry_point()\n elif sys.argv[1] in pyct_commands:\n try:\n import pyct.cmd\n except ImportError:\n print(\"install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)\")\n sys.exit(1)\n pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)\n else:\n parser.parse_args(sys.argv[1:])\n sys.exit(1)\n\n if ret is False:\n sys.exit(1)\n elif ret is not True and isinstance(ret, int) and ret != 0:\n sys.exit(ret)\n\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "panel/command/__init__.py"}]} | 1,853 | 137 |
gh_patches_debug_26307 | rasdani/github-patches | git_diff | web2py__web2py-2419 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to login when using redis for storing sessions
Basically, a new session is created on each page load, so the login is never established.
Issue discussed at: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/web2py/6Ig5YVgvIsI/HpueAUELBgAJ
Confirmed with web2py versions from 2.18.5 up to 2.20.4, with Python versions 3.6 and 3.8.
</issue>
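One way to picture this class of symptom (a sketch under the assumption of a str/bytes mismatch, not the project's code): under Python 3, redis-py returns hash values as bytes, and bytes never compare equal to str, so a stored session-key check can fail on every request:
```python
# Sketch of the Python 3 bytes/str pitfall assumed above.
stored = b"a1b2c3d4"      # what a redis hgetall() typically hands back
expected = "a1b2c3d4"     # what the session layer keeps as text

print(stored == expected)                  # False -> session treated as unknown
print(stored == expected.encode("utf-8"))  # True  -> the keys actually match
```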
<code>
[start of gluon/contrib/redis_session.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Developed by [email protected]
5 License MIT/BSD/GPL
6
7 Redis-backed sessions
8 """
9
10 import logging
11 from threading import Lock
12 from gluon import current
13 from gluon.storage import Storage
14 from gluon.contrib.redis_utils import acquire_lock, release_lock
15 from gluon.contrib.redis_utils import register_release_lock
16 from gluon._compat import to_native
17 from datetime import datetime
18
19 logger = logging.getLogger("web2py.session.redis")
20
21 locker = Lock()
22
23
24 def RedisSession(redis_conn, session_expiry=False, with_lock=False, db=None):
25 """
26 Usage example: put in models::
27
28 from gluon.contrib.redis_utils import RConn
29 rconn = RConn()
30 from gluon.contrib.redis_session import RedisSession
31 sessiondb = RedisSession(redis_conn=rconn, with_lock=True, session_expiry=False)
32 session.connect(request, response, db = sessiondb)
33
34 Args:
35 redis_conn: a redis-like connection object
36 with_lock: prevent concurrent modifications to the same session
37 session_expiry: delete automatically sessions after n seconds
38 (still need to run sessions2trash.py every 1M sessions
39 or so)
40
41 Simple slip-in storage for session
42 """
43
44 locker.acquire()
45 try:
46 instance_name = 'redis_instance_' + current.request.application
47 if not hasattr(RedisSession, instance_name):
48 setattr(RedisSession, instance_name,
49 RedisClient(redis_conn, session_expiry=session_expiry, with_lock=with_lock))
50 return getattr(RedisSession, instance_name)
51 finally:
52 locker.release()
53
54
55 class RedisClient(object):
56
57 def __init__(self, redis_conn, session_expiry=False, with_lock=False):
58 self.r_server = redis_conn
59 self._release_script = register_release_lock(self.r_server)
60 self.tablename = None
61 self.session_expiry = session_expiry
62 self.with_lock = with_lock
63
64 def get(self, what, default):
65 return self.tablename
66
67 def Field(self, fieldname, type='string', length=None, default=None,
68 required=False, requires=None):
69 return fieldname, type
70
71 def define_table(self, tablename, *fields, **args):
72 if not self.tablename:
73 self.tablename = MockTable(
74 self, self.r_server, tablename, self.session_expiry,
75 with_lock=self.with_lock, fields=fields)
76 return self.tablename
77
78 def __getitem__(self, key):
79 return self.tablename
80
81 def __call__(self, where=''):
82 q = self.tablename.query
83 return q
84
85 def commit(self):
86 # this is only called by session2trash.py
87 pass
88
89 def convert_dict_string(self, dict_string):
90 fields = self.tablename.fields
91 typed_dict = dict()
92 converters = {
93 'boolean': lambda x: 1 if x.decode() == '1' else 0,
94 'blob': lambda x: x,
95 }
96 for field, ftype in fields:
97 if field not in dict_string:
98 continue
99 if ftype in converters:
100 typed_dict[field] = converters[ftype](dict_string[field])
101 else:
102 typed_dict[field] = dict_string[field].decode()
103 return typed_dict
104
105
106 class MockTable(object):
107
108 def __init__(self, db, r_server, tablename, session_expiry, with_lock=False, fields=None):
109 # here self.db is the RedisClient instance
110 self.db = db
111 self.tablename = tablename
112 # set the namespace for sessions of this app
113 self.keyprefix = 'w2p:sess:%s' % tablename.replace('web2py_session_', '')
114 # fast auto-increment id (needed for session handling)
115 self.serial = "%s:serial" % self.keyprefix
116 # index of all the session keys of this app
117 self.id_idx = "%s:id_idx" % self.keyprefix
118 # remember the session_expiry setting
119 self.session_expiry = session_expiry
120 self.with_lock = with_lock
121 self.fields = fields if fields is not None else []
122
123 def __call__(self, record_id, unique_key=None):
124 # Support DAL shortcut query: table(record_id)
125
126 # This will call the __getattr__ below
127 # returning a MockQuery
128 q = self.id
129
130 # Instructs MockQuery, to behave as db(table.id == record_id)
131 q.op = 'eq'
132 q.value = record_id
133 q.unique_key = unique_key
134
135 row = q.select()
136 return row[0] if row else Storage()
137
138 def __getattr__(self, key):
139 if key == 'id':
140 # return a fake query. We need to query it just by id for normal operations
141 self.query = MockQuery(
142 field='id', db=self.db,
143 prefix=self.keyprefix, session_expiry=self.session_expiry,
144 with_lock=self.with_lock, unique_key=self.unique_key
145 )
146 return self.query
147 elif key == '_db':
148 # needed because of the calls in sessions2trash.py and globals.py
149 return self.db
150
151 def insert(self, **kwargs):
152 # usually kwargs would be a Storage with several keys:
153 # 'locked', 'client_ip','created_datetime','modified_datetime'
154 # 'unique_key', 'session_data'
155 # retrieve a new key
156 newid = str(self.db.r_server.incr(self.serial))
157 key = self.keyprefix + ':' + newid
158 if self.with_lock:
159 key_lock = key + ':lock'
160 acquire_lock(self.db.r_server, key_lock, newid)
161 with self.db.r_server.pipeline() as pipe:
162 # add it to the index
163 pipe.sadd(self.id_idx, key)
164 # set a hash key with the Storage
165 pipe.hmset(key, kwargs)
166 if self.session_expiry:
167 pipe.expire(key, self.session_expiry)
168 pipe.execute()
169 if self.with_lock:
170 release_lock(self.db, key_lock, newid)
171 return newid
172
173
174 class MockQuery(object):
175 """a fake Query object that supports querying by id
176 and listing all keys. No other operation is supported
177 """
178 def __init__(self, field=None, db=None, prefix=None, session_expiry=False,
179 with_lock=False, unique_key=None):
180 self.field = field
181 self.value = None
182 self.db = db
183 self.keyprefix = prefix
184 self.op = None
185 self.session_expiry = session_expiry
186 self.with_lock = with_lock
187 self.unique_key = unique_key
188
189 def __eq__(self, value, op='eq'):
190 self.value = value
191 self.op = op
192
193 def __ge__(self, value, op='ge'):
194 self.value = value
195 self.op = op
196
197 def __gt__(self, value, op='gt'):
198 self.value = value
199 self.op = op
200
201 def select(self):
202 if self.op == 'eq' and self.field == 'id' and self.value:
203 # means that someone wants to retrieve the key self.value
204 key = self.keyprefix + ':' + str(self.value)
205 if self.with_lock:
206 acquire_lock(self.db.r_server, key + ':lock', self.value, 2)
207 rtn = {to_native(k): v for k, v in self.db.r_server.hgetall(key).items()}
208 if rtn:
209 if self.unique_key:
210 # make sure the id and unique_key are correct
211 if rtn['unique_key'] == to_native(self.unique_key):
212 rtn['update_record'] = self.update # update record support
213 else:
214 rtn = None
215 return [Storage(self.db.convert_dict_string(rtn))] if rtn else []
216 elif self.op in ('ge', 'gt') and self.field == 'id' and self.value == 0:
217 # means that someone wants the complete list
218 rtn = []
219 id_idx = "%s:id_idx" % self.keyprefix
220 # find all session keys of this app
221 allkeys = self.db.r_server.smembers(id_idx)
222 for sess in allkeys:
223 val = self.db.r_server.hgetall(sess)
224 if not val:
225 if self.session_expiry:
226 # clean up the idx, because the key expired
227 self.db.r_server.srem(id_idx, sess)
228 continue
229 val = Storage(self.db.convert_dict_string(val))
230 # add a delete_record method (necessary for sessions2trash.py)
231 val.delete_record = RecordDeleter(
232 self.db, sess, self.keyprefix)
233 rtn.append(val)
234 return rtn
235 else:
236 raise Exception("Operation not supported")
237
238 def update(self, **kwargs):
239 # means that the session has been found and needs an update
240 if self.op == 'eq' and self.field == 'id' and self.value:
241 key = self.keyprefix + ':' + str(self.value)
242 if not self.db.r_server.exists(key):
243 return None
244 with self.db.r_server.pipeline() as pipe:
245 pipe.hmset(key, kwargs)
246 if self.session_expiry:
247 pipe.expire(key, self.session_expiry)
248 rtn = pipe.execute()[0]
249 if self.with_lock:
250 release_lock(self.db, key + ':lock', self.value)
251 return rtn
252
253 def delete(self, **kwargs):
254 # means that we want this session to be deleted
255 if self.op == 'eq' and self.field == 'id' and self.value:
256 id_idx = "%s:id_idx" % self.keyprefix
257 key = self.keyprefix + ':' + str(self.value)
258 with self.db.r_server.pipeline() as pipe:
259 pipe.delete(key)
260 pipe.srem(id_idx, key)
261 rtn = pipe.execute()
262 return rtn[1]
263
264
265 class RecordDeleter(object):
266 """Dumb record deleter to support sessions2trash.py"""
267
268 def __init__(self, db, key, keyprefix):
269 self.db, self.key, self.keyprefix = db, key, keyprefix
270
271 def __call__(self):
272 id_idx = "%s:id_idx" % self.keyprefix
273 # remove from the index
274 self.db.r_server.srem(id_idx, self.key)
275 # remove the key itself
276 self.db.r_server.delete(self.key)
277
[end of gluon/contrib/redis_session.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gluon/contrib/redis_session.py b/gluon/contrib/redis_session.py
--- a/gluon/contrib/redis_session.py
+++ b/gluon/contrib/redis_session.py
@@ -13,7 +13,7 @@
from gluon.storage import Storage
from gluon.contrib.redis_utils import acquire_lock, release_lock
from gluon.contrib.redis_utils import register_release_lock
-from gluon._compat import to_native
+from gluon._compat import to_native, to_bytes
from datetime import datetime
logger = logging.getLogger("web2py.session.redis")
@@ -131,7 +131,6 @@
q.op = 'eq'
q.value = record_id
q.unique_key = unique_key
-
row = q.select()
return row[0] if row else Storage()
@@ -208,7 +207,7 @@
if rtn:
if self.unique_key:
# make sure the id and unique_key are correct
- if rtn['unique_key'] == to_native(self.unique_key):
+ if rtn['unique_key'] == to_bytes(self.unique_key):
rtn['update_record'] = self.update # update record support
else:
rtn = None
| {"golden_diff": "diff --git a/gluon/contrib/redis_session.py b/gluon/contrib/redis_session.py\n--- a/gluon/contrib/redis_session.py\n+++ b/gluon/contrib/redis_session.py\n@@ -13,7 +13,7 @@\n from gluon.storage import Storage\n from gluon.contrib.redis_utils import acquire_lock, release_lock\n from gluon.contrib.redis_utils import register_release_lock\n-from gluon._compat import to_native\n+from gluon._compat import to_native, to_bytes\n from datetime import datetime\n \n logger = logging.getLogger(\"web2py.session.redis\")\n@@ -131,7 +131,6 @@\n q.op = 'eq'\n q.value = record_id\n q.unique_key = unique_key\n-\n row = q.select()\n return row[0] if row else Storage()\n \n@@ -208,7 +207,7 @@\n if rtn:\n if self.unique_key:\n # make sure the id and unique_key are correct\n- if rtn['unique_key'] == to_native(self.unique_key):\n+ if rtn['unique_key'] == to_bytes(self.unique_key):\n rtn['update_record'] = self.update # update record support\n else:\n rtn = None\n", "issue": "Unable to login when using redis for storing sessions\nBasically a new session is created on each page load so the login is never established.\r\n\r\nIssue discussed at: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/web2py/6Ig5YVgvIsI/HpueAUELBgAJ\r\n\r\nConfirmed with web2py versions from 2.18.5 up to 2.20.4. Python versions 3.6 and 3.8\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nDeveloped by [email protected]\nLicense MIT/BSD/GPL\n\nRedis-backed sessions\n\"\"\"\n\nimport logging\nfrom threading import Lock\nfrom gluon import current\nfrom gluon.storage import Storage\nfrom gluon.contrib.redis_utils import acquire_lock, release_lock\nfrom gluon.contrib.redis_utils import register_release_lock\nfrom gluon._compat import to_native\nfrom datetime import datetime\n\nlogger = logging.getLogger(\"web2py.session.redis\")\n\nlocker = Lock()\n\n\ndef RedisSession(redis_conn, session_expiry=False, with_lock=False, db=None):\n \"\"\"\n Usage example: put in models::\n\n from gluon.contrib.redis_utils import RConn\n rconn = RConn()\n from gluon.contrib.redis_session import RedisSession\n sessiondb = RedisSession(redis_conn=rconn, with_lock=True, session_expiry=False)\n session.connect(request, response, db = sessiondb)\n\n Args:\n redis_conn: a redis-like connection object\n with_lock: prevent concurrent modifications to the same session\n session_expiry: delete automatically sessions after n seconds\n (still need to run sessions2trash.py every 1M sessions\n or so)\n\n Simple slip-in storage for session\n \"\"\"\n\n locker.acquire()\n try:\n instance_name = 'redis_instance_' + current.request.application\n if not hasattr(RedisSession, instance_name):\n setattr(RedisSession, instance_name,\n RedisClient(redis_conn, session_expiry=session_expiry, with_lock=with_lock))\n return getattr(RedisSession, instance_name)\n finally:\n locker.release()\n\n\nclass RedisClient(object):\n\n def __init__(self, redis_conn, session_expiry=False, with_lock=False):\n self.r_server = redis_conn\n self._release_script = register_release_lock(self.r_server)\n self.tablename = None\n self.session_expiry = session_expiry\n self.with_lock = with_lock\n\n def get(self, what, default):\n return self.tablename\n\n def Field(self, fieldname, type='string', length=None, default=None,\n required=False, requires=None):\n return fieldname, type\n\n def define_table(self, tablename, *fields, **args):\n if not self.tablename:\n self.tablename = MockTable(\n self, self.r_server, 
tablename, self.session_expiry,\n with_lock=self.with_lock, fields=fields)\n return self.tablename\n\n def __getitem__(self, key):\n return self.tablename\n\n def __call__(self, where=''):\n q = self.tablename.query\n return q\n\n def commit(self):\n # this is only called by session2trash.py\n pass\n\n def convert_dict_string(self, dict_string):\n fields = self.tablename.fields\n typed_dict = dict()\n converters = {\n 'boolean': lambda x: 1 if x.decode() == '1' else 0,\n 'blob': lambda x: x,\n }\n for field, ftype in fields:\n if field not in dict_string:\n continue\n if ftype in converters:\n typed_dict[field] = converters[ftype](dict_string[field])\n else:\n typed_dict[field] = dict_string[field].decode()\n return typed_dict\n\n\nclass MockTable(object):\n\n def __init__(self, db, r_server, tablename, session_expiry, with_lock=False, fields=None):\n # here self.db is the RedisClient instance\n self.db = db\n self.tablename = tablename\n # set the namespace for sessions of this app\n self.keyprefix = 'w2p:sess:%s' % tablename.replace('web2py_session_', '')\n # fast auto-increment id (needed for session handling)\n self.serial = \"%s:serial\" % self.keyprefix\n # index of all the session keys of this app\n self.id_idx = \"%s:id_idx\" % self.keyprefix\n # remember the session_expiry setting\n self.session_expiry = session_expiry\n self.with_lock = with_lock\n self.fields = fields if fields is not None else []\n\n def __call__(self, record_id, unique_key=None):\n # Support DAL shortcut query: table(record_id)\n\n # This will call the __getattr__ below\n # returning a MockQuery\n q = self.id\n\n # Instructs MockQuery, to behave as db(table.id == record_id)\n q.op = 'eq'\n q.value = record_id\n q.unique_key = unique_key\n\n row = q.select()\n return row[0] if row else Storage()\n\n def __getattr__(self, key):\n if key == 'id':\n # return a fake query. We need to query it just by id for normal operations\n self.query = MockQuery(\n field='id', db=self.db,\n prefix=self.keyprefix, session_expiry=self.session_expiry,\n with_lock=self.with_lock, unique_key=self.unique_key\n )\n return self.query\n elif key == '_db':\n # needed because of the calls in sessions2trash.py and globals.py\n return self.db\n\n def insert(self, **kwargs):\n # usually kwargs would be a Storage with several keys:\n # 'locked', 'client_ip','created_datetime','modified_datetime'\n # 'unique_key', 'session_data'\n # retrieve a new key\n newid = str(self.db.r_server.incr(self.serial))\n key = self.keyprefix + ':' + newid\n if self.with_lock:\n key_lock = key + ':lock'\n acquire_lock(self.db.r_server, key_lock, newid)\n with self.db.r_server.pipeline() as pipe:\n # add it to the index\n pipe.sadd(self.id_idx, key)\n # set a hash key with the Storage\n pipe.hmset(key, kwargs)\n if self.session_expiry:\n pipe.expire(key, self.session_expiry)\n pipe.execute()\n if self.with_lock:\n release_lock(self.db, key_lock, newid)\n return newid\n\n\nclass MockQuery(object):\n \"\"\"a fake Query object that supports querying by id\n and listing all keys. 
No other operation is supported\n \"\"\"\n def __init__(self, field=None, db=None, prefix=None, session_expiry=False,\n with_lock=False, unique_key=None):\n self.field = field\n self.value = None\n self.db = db\n self.keyprefix = prefix\n self.op = None\n self.session_expiry = session_expiry\n self.with_lock = with_lock\n self.unique_key = unique_key\n\n def __eq__(self, value, op='eq'):\n self.value = value\n self.op = op\n\n def __ge__(self, value, op='ge'):\n self.value = value\n self.op = op\n\n def __gt__(self, value, op='gt'):\n self.value = value\n self.op = op\n\n def select(self):\n if self.op == 'eq' and self.field == 'id' and self.value:\n # means that someone wants to retrieve the key self.value\n key = self.keyprefix + ':' + str(self.value)\n if self.with_lock:\n acquire_lock(self.db.r_server, key + ':lock', self.value, 2)\n rtn = {to_native(k): v for k, v in self.db.r_server.hgetall(key).items()}\n if rtn:\n if self.unique_key:\n # make sure the id and unique_key are correct\n if rtn['unique_key'] == to_native(self.unique_key):\n rtn['update_record'] = self.update # update record support\n else:\n rtn = None\n return [Storage(self.db.convert_dict_string(rtn))] if rtn else []\n elif self.op in ('ge', 'gt') and self.field == 'id' and self.value == 0:\n # means that someone wants the complete list\n rtn = []\n id_idx = \"%s:id_idx\" % self.keyprefix\n # find all session keys of this app\n allkeys = self.db.r_server.smembers(id_idx)\n for sess in allkeys:\n val = self.db.r_server.hgetall(sess)\n if not val:\n if self.session_expiry:\n # clean up the idx, because the key expired\n self.db.r_server.srem(id_idx, sess)\n continue\n val = Storage(self.db.convert_dict_string(val))\n # add a delete_record method (necessary for sessions2trash.py)\n val.delete_record = RecordDeleter(\n self.db, sess, self.keyprefix)\n rtn.append(val)\n return rtn\n else:\n raise Exception(\"Operation not supported\")\n\n def update(self, **kwargs):\n # means that the session has been found and needs an update\n if self.op == 'eq' and self.field == 'id' and self.value:\n key = self.keyprefix + ':' + str(self.value)\n if not self.db.r_server.exists(key):\n return None\n with self.db.r_server.pipeline() as pipe:\n pipe.hmset(key, kwargs)\n if self.session_expiry:\n pipe.expire(key, self.session_expiry)\n rtn = pipe.execute()[0]\n if self.with_lock:\n release_lock(self.db, key + ':lock', self.value)\n return rtn\n\n def delete(self, **kwargs):\n # means that we want this session to be deleted\n if self.op == 'eq' and self.field == 'id' and self.value:\n id_idx = \"%s:id_idx\" % self.keyprefix\n key = self.keyprefix + ':' + str(self.value)\n with self.db.r_server.pipeline() as pipe:\n pipe.delete(key)\n pipe.srem(id_idx, key)\n rtn = pipe.execute()\n return rtn[1]\n\n\nclass RecordDeleter(object):\n \"\"\"Dumb record deleter to support sessions2trash.py\"\"\"\n\n def __init__(self, db, key, keyprefix):\n self.db, self.key, self.keyprefix = db, key, keyprefix\n\n def __call__(self):\n id_idx = \"%s:id_idx\" % self.keyprefix\n # remove from the index\n self.db.r_server.srem(id_idx, self.key)\n # remove the key itself\n self.db.r_server.delete(self.key)\n", "path": "gluon/contrib/redis_session.py"}]} | 3,603 | 273 |
gh_patches_debug_25093 | rasdani/github-patches | git_diff | urllib3__urllib3-1609 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade packaged rfc3986
Upgrade to v1.3.2
</issue>
<code>
[start of src/urllib3/packages/rfc3986/__init__.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2014 Rackspace
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
12 # implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """
17 An implementation of semantics and validations described in RFC 3986.
18
19 See http://rfc3986.readthedocs.io/ for detailed documentation.
20
21 :copyright: (c) 2014 Rackspace
22 :license: Apache v2.0, see LICENSE for details
23 """
24
25 from .api import iri_reference
26 from .api import IRIReference
27 from .api import is_valid_uri
28 from .api import normalize_uri
29 from .api import uri_reference
30 from .api import URIReference
31 from .api import urlparse
32 from .parseresult import ParseResult
33
34 __title__ = 'rfc3986'
35 __author__ = 'Ian Stapleton Cordasco'
36 __author_email__ = '[email protected]'
37 __license__ = 'Apache v2.0'
38 __copyright__ = 'Copyright 2014 Rackspace'
39 __version__ = '1.3.1'
40
41 __all__ = (
42 'ParseResult',
43 'URIReference',
44 'IRIReference',
45 'is_valid_uri',
46 'normalize_uri',
47 'uri_reference',
48 'iri_reference',
49 'urlparse',
50 '__title__',
51 '__author__',
52 '__author_email__',
53 '__license__',
54 '__copyright__',
55 '__version__',
56 )
57
[end of src/urllib3/packages/rfc3986/__init__.py]
[start of src/urllib3/packages/rfc3986/misc.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2014 Rackspace
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
12 # implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """
16 Module containing compiled regular expressions and constants.
17
18 This module contains important constants, patterns, and compiled regular
19 expressions for parsing and validating URIs and their components.
20 """
21
22 import re
23
24 from . import abnf_regexp
25
26 # These are enumerated for the named tuple used as a superclass of
27 # URIReference
28 URI_COMPONENTS = ['scheme', 'authority', 'path', 'query', 'fragment']
29
30 important_characters = {
31 'generic_delimiters': abnf_regexp.GENERIC_DELIMITERS,
32 'sub_delimiters': abnf_regexp.SUB_DELIMITERS,
33 # We need to escape the '*' in this case
34 're_sub_delimiters': abnf_regexp.SUB_DELIMITERS_RE,
35 'unreserved_chars': abnf_regexp.UNRESERVED_CHARS,
36 # We need to escape the '-' in this case:
37 're_unreserved': abnf_regexp.UNRESERVED_RE,
38 }
39
40 # For details about delimiters and reserved characters, see:
41 # http://tools.ietf.org/html/rfc3986#section-2.2
42 GENERIC_DELIMITERS = abnf_regexp.GENERIC_DELIMITERS_SET
43 SUB_DELIMITERS = abnf_regexp.SUB_DELIMITERS_SET
44 RESERVED_CHARS = abnf_regexp.RESERVED_CHARS_SET
45 # For details about unreserved characters, see:
46 # http://tools.ietf.org/html/rfc3986#section-2.3
47 UNRESERVED_CHARS = abnf_regexp.UNRESERVED_CHARS_SET
48 NON_PCT_ENCODED = abnf_regexp.NON_PCT_ENCODED_SET
49
50 URI_MATCHER = re.compile(abnf_regexp.URL_PARSING_RE)
51
52 SUBAUTHORITY_MATCHER = re.compile((
53 '^(?:(?P<userinfo>{0})@)?' # userinfo
54 '(?P<host>{1})' # host
55 ':?(?P<port>{2})?$' # port
56 ).format(abnf_regexp.USERINFO_RE,
57 abnf_regexp.HOST_PATTERN,
58 abnf_regexp.PORT_RE))
59
60
61 HOST_MATCHER = re.compile('^' + abnf_regexp.HOST_RE + '$')
62 IPv4_MATCHER = re.compile('^' + abnf_regexp.IPv4_RE + '$')
63 IPv6_MATCHER = re.compile(r'^\[' + abnf_regexp.IPv6_ADDRZ_RFC4007_RE + r'\]$')
64
65 # Used by host validator
66 IPv6_NO_RFC4007_MATCHER = re.compile(r'^\[%s\]$' % (
67 abnf_regexp.IPv6_ADDRZ_RE
68 ))
69
70 # Matcher used to validate path components
71 PATH_MATCHER = re.compile(abnf_regexp.PATH_RE)
72
73
74 # ##################################
75 # Query and Fragment Matcher Section
76 # ##################################
77
78 QUERY_MATCHER = re.compile(abnf_regexp.QUERY_RE)
79
80 FRAGMENT_MATCHER = QUERY_MATCHER
81
82 # Scheme validation, see: http://tools.ietf.org/html/rfc3986#section-3.1
83 SCHEME_MATCHER = re.compile('^{0}$'.format(abnf_regexp.SCHEME_RE))
84
85 RELATIVE_REF_MATCHER = re.compile(r'^%s(\?%s)?(#%s)?$' % (
86 abnf_regexp.RELATIVE_PART_RE,
87 abnf_regexp.QUERY_RE,
88 abnf_regexp.FRAGMENT_RE,
89 ))
90
91 # See http://tools.ietf.org/html/rfc3986#section-4.3
92 ABSOLUTE_URI_MATCHER = re.compile(r'^%s:%s(\?%s)?$' % (
93 abnf_regexp.COMPONENT_PATTERN_DICT['scheme'],
94 abnf_regexp.HIER_PART_RE,
95 abnf_regexp.QUERY_RE[1:-1],
96 ))
97
98 # ###############
99 # IRIs / RFC 3987
100 # ###############
101
102 IRI_MATCHER = re.compile(abnf_regexp.URL_PARSING_RE, re.UNICODE)
103
104 ISUBAUTHORITY_MATCHER = re.compile((
105 u'^(?:(?P<userinfo>{0})@)?' # iuserinfo
106 u'(?P<host>{1})' # ihost
107 u':?(?P<port>{2})?$' # port
108 ).format(abnf_regexp.IUSERINFO_RE,
109 abnf_regexp.IHOST_RE,
110 abnf_regexp.PORT_RE), re.UNICODE)
111
112
113 IHOST_MATCHER = re.compile('^' + abnf_regexp.IHOST_RE + '$', re.UNICODE)
114
115 IPATH_MATCHER = re.compile(abnf_regexp.IPATH_RE, re.UNICODE)
116
117 IQUERY_MATCHER = re.compile(abnf_regexp.IQUERY_RE, re.UNICODE)
118
119 IFRAGMENT_MATCHER = re.compile(abnf_regexp.IFRAGMENT_RE, re.UNICODE)
120
121
122 RELATIVE_IRI_MATCHER = re.compile(u'^%s(?:\\?%s)?(?:%s)?$' % (
123 abnf_regexp.IRELATIVE_PART_RE,
124 abnf_regexp.IQUERY_RE,
125 abnf_regexp.IFRAGMENT_RE
126 ), re.UNICODE)
127
128 ABSOLUTE_IRI_MATCHER = re.compile(u'^%s:%s(?:\\?%s)?$' % (
129 abnf_regexp.COMPONENT_PATTERN_DICT['scheme'],
130 abnf_regexp.IHIER_PART_RE,
131 abnf_regexp.IQUERY_RE[1:-1]
132 ), re.UNICODE)
133
134
135 # Path merger as defined in http://tools.ietf.org/html/rfc3986#section-5.2.3
136 def merge_paths(base_uri, relative_path):
137 """Merge a base URI's path with a relative URI's path."""
138 if base_uri.path is None and base_uri.authority is not None:
139 return '/' + relative_path
140 else:
141 path = base_uri.path or ''
142 index = path.rfind('/')
143 return path[:index] + '/' + relative_path
144
145
146 UseExisting = object()
147
[end of src/urllib3/packages/rfc3986/misc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/urllib3/packages/rfc3986/__init__.py b/src/urllib3/packages/rfc3986/__init__.py
--- a/src/urllib3/packages/rfc3986/__init__.py
+++ b/src/urllib3/packages/rfc3986/__init__.py
@@ -36,7 +36,7 @@
__author_email__ = '[email protected]'
__license__ = 'Apache v2.0'
__copyright__ = 'Copyright 2014 Rackspace'
-__version__ = '1.3.1'
+__version__ = '1.3.2'
__all__ = (
'ParseResult',
diff --git a/src/urllib3/packages/rfc3986/misc.py b/src/urllib3/packages/rfc3986/misc.py
--- a/src/urllib3/packages/rfc3986/misc.py
+++ b/src/urllib3/packages/rfc3986/misc.py
@@ -110,28 +110,6 @@
abnf_regexp.PORT_RE), re.UNICODE)
-IHOST_MATCHER = re.compile('^' + abnf_regexp.IHOST_RE + '$', re.UNICODE)
-
-IPATH_MATCHER = re.compile(abnf_regexp.IPATH_RE, re.UNICODE)
-
-IQUERY_MATCHER = re.compile(abnf_regexp.IQUERY_RE, re.UNICODE)
-
-IFRAGMENT_MATCHER = re.compile(abnf_regexp.IFRAGMENT_RE, re.UNICODE)
-
-
-RELATIVE_IRI_MATCHER = re.compile(u'^%s(?:\\?%s)?(?:%s)?$' % (
- abnf_regexp.IRELATIVE_PART_RE,
- abnf_regexp.IQUERY_RE,
- abnf_regexp.IFRAGMENT_RE
-), re.UNICODE)
-
-ABSOLUTE_IRI_MATCHER = re.compile(u'^%s:%s(?:\\?%s)?$' % (
- abnf_regexp.COMPONENT_PATTERN_DICT['scheme'],
- abnf_regexp.IHIER_PART_RE,
- abnf_regexp.IQUERY_RE[1:-1]
-), re.UNICODE)
-
-
# Path merger as defined in http://tools.ietf.org/html/rfc3986#section-5.2.3
def merge_paths(base_uri, relative_path):
"""Merge a base URI's path with a relative URI's path."""
| {"golden_diff": "diff --git a/src/urllib3/packages/rfc3986/__init__.py b/src/urllib3/packages/rfc3986/__init__.py\n--- a/src/urllib3/packages/rfc3986/__init__.py\n+++ b/src/urllib3/packages/rfc3986/__init__.py\n@@ -36,7 +36,7 @@\n __author_email__ = '[email protected]'\n __license__ = 'Apache v2.0'\n __copyright__ = 'Copyright 2014 Rackspace'\n-__version__ = '1.3.1'\n+__version__ = '1.3.2'\n \n __all__ = (\n 'ParseResult',\ndiff --git a/src/urllib3/packages/rfc3986/misc.py b/src/urllib3/packages/rfc3986/misc.py\n--- a/src/urllib3/packages/rfc3986/misc.py\n+++ b/src/urllib3/packages/rfc3986/misc.py\n@@ -110,28 +110,6 @@\n abnf_regexp.PORT_RE), re.UNICODE)\n \n \n-IHOST_MATCHER = re.compile('^' + abnf_regexp.IHOST_RE + '$', re.UNICODE)\n-\n-IPATH_MATCHER = re.compile(abnf_regexp.IPATH_RE, re.UNICODE)\n-\n-IQUERY_MATCHER = re.compile(abnf_regexp.IQUERY_RE, re.UNICODE)\n-\n-IFRAGMENT_MATCHER = re.compile(abnf_regexp.IFRAGMENT_RE, re.UNICODE)\n-\n-\n-RELATIVE_IRI_MATCHER = re.compile(u'^%s(?:\\\\?%s)?(?:%s)?$' % (\n- abnf_regexp.IRELATIVE_PART_RE,\n- abnf_regexp.IQUERY_RE,\n- abnf_regexp.IFRAGMENT_RE\n-), re.UNICODE)\n-\n-ABSOLUTE_IRI_MATCHER = re.compile(u'^%s:%s(?:\\\\?%s)?$' % (\n- abnf_regexp.COMPONENT_PATTERN_DICT['scheme'],\n- abnf_regexp.IHIER_PART_RE,\n- abnf_regexp.IQUERY_RE[1:-1]\n-), re.UNICODE)\n-\n-\n # Path merger as defined in http://tools.ietf.org/html/rfc3986#section-5.2.3\n def merge_paths(base_uri, relative_path):\n \"\"\"Merge a base URI's path with a relative URI's path.\"\"\"\n", "issue": "Upgrade packaged rfc3986\nUpgrade to v1.3.2\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014 Rackspace\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAn implementation of semantics and validations described in RFC 3986.\n\nSee http://rfc3986.readthedocs.io/ for detailed documentation.\n\n:copyright: (c) 2014 Rackspace\n:license: Apache v2.0, see LICENSE for details\n\"\"\"\n\nfrom .api import iri_reference\nfrom .api import IRIReference\nfrom .api import is_valid_uri\nfrom .api import normalize_uri\nfrom .api import uri_reference\nfrom .api import URIReference\nfrom .api import urlparse\nfrom .parseresult import ParseResult\n\n__title__ = 'rfc3986'\n__author__ = 'Ian Stapleton Cordasco'\n__author_email__ = '[email protected]'\n__license__ = 'Apache v2.0'\n__copyright__ = 'Copyright 2014 Rackspace'\n__version__ = '1.3.1'\n\n__all__ = (\n 'ParseResult',\n 'URIReference',\n 'IRIReference',\n 'is_valid_uri',\n 'normalize_uri',\n 'uri_reference',\n 'iri_reference',\n 'urlparse',\n '__title__',\n '__author__',\n '__author_email__',\n '__license__',\n '__copyright__',\n '__version__',\n)\n", "path": "src/urllib3/packages/rfc3986/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014 Rackspace\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless 
required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nModule containing compiled regular expressions and constants.\n\nThis module contains important constants, patterns, and compiled regular\nexpressions for parsing and validating URIs and their components.\n\"\"\"\n\nimport re\n\nfrom . import abnf_regexp\n\n# These are enumerated for the named tuple used as a superclass of\n# URIReference\nURI_COMPONENTS = ['scheme', 'authority', 'path', 'query', 'fragment']\n\nimportant_characters = {\n 'generic_delimiters': abnf_regexp.GENERIC_DELIMITERS,\n 'sub_delimiters': abnf_regexp.SUB_DELIMITERS,\n # We need to escape the '*' in this case\n 're_sub_delimiters': abnf_regexp.SUB_DELIMITERS_RE,\n 'unreserved_chars': abnf_regexp.UNRESERVED_CHARS,\n # We need to escape the '-' in this case:\n 're_unreserved': abnf_regexp.UNRESERVED_RE,\n}\n\n# For details about delimiters and reserved characters, see:\n# http://tools.ietf.org/html/rfc3986#section-2.2\nGENERIC_DELIMITERS = abnf_regexp.GENERIC_DELIMITERS_SET\nSUB_DELIMITERS = abnf_regexp.SUB_DELIMITERS_SET\nRESERVED_CHARS = abnf_regexp.RESERVED_CHARS_SET\n# For details about unreserved characters, see:\n# http://tools.ietf.org/html/rfc3986#section-2.3\nUNRESERVED_CHARS = abnf_regexp.UNRESERVED_CHARS_SET\nNON_PCT_ENCODED = abnf_regexp.NON_PCT_ENCODED_SET\n\nURI_MATCHER = re.compile(abnf_regexp.URL_PARSING_RE)\n\nSUBAUTHORITY_MATCHER = re.compile((\n '^(?:(?P<userinfo>{0})@)?' # userinfo\n '(?P<host>{1})' # host\n ':?(?P<port>{2})?$' # port\n ).format(abnf_regexp.USERINFO_RE,\n abnf_regexp.HOST_PATTERN,\n abnf_regexp.PORT_RE))\n\n\nHOST_MATCHER = re.compile('^' + abnf_regexp.HOST_RE + '$')\nIPv4_MATCHER = re.compile('^' + abnf_regexp.IPv4_RE + '$')\nIPv6_MATCHER = re.compile(r'^\\[' + abnf_regexp.IPv6_ADDRZ_RFC4007_RE + r'\\]$')\n\n# Used by host validator\nIPv6_NO_RFC4007_MATCHER = re.compile(r'^\\[%s\\]$' % (\n abnf_regexp.IPv6_ADDRZ_RE\n))\n\n# Matcher used to validate path components\nPATH_MATCHER = re.compile(abnf_regexp.PATH_RE)\n\n\n# ##################################\n# Query and Fragment Matcher Section\n# ##################################\n\nQUERY_MATCHER = re.compile(abnf_regexp.QUERY_RE)\n\nFRAGMENT_MATCHER = QUERY_MATCHER\n\n# Scheme validation, see: http://tools.ietf.org/html/rfc3986#section-3.1\nSCHEME_MATCHER = re.compile('^{0}$'.format(abnf_regexp.SCHEME_RE))\n\nRELATIVE_REF_MATCHER = re.compile(r'^%s(\\?%s)?(#%s)?$' % (\n abnf_regexp.RELATIVE_PART_RE,\n abnf_regexp.QUERY_RE,\n abnf_regexp.FRAGMENT_RE,\n))\n\n# See http://tools.ietf.org/html/rfc3986#section-4.3\nABSOLUTE_URI_MATCHER = re.compile(r'^%s:%s(\\?%s)?$' % (\n abnf_regexp.COMPONENT_PATTERN_DICT['scheme'],\n abnf_regexp.HIER_PART_RE,\n abnf_regexp.QUERY_RE[1:-1],\n))\n\n# ###############\n# IRIs / RFC 3987\n# ###############\n\nIRI_MATCHER = re.compile(abnf_regexp.URL_PARSING_RE, re.UNICODE)\n\nISUBAUTHORITY_MATCHER = re.compile((\n u'^(?:(?P<userinfo>{0})@)?' 
# iuserinfo\n u'(?P<host>{1})' # ihost\n u':?(?P<port>{2})?$' # port\n ).format(abnf_regexp.IUSERINFO_RE,\n abnf_regexp.IHOST_RE,\n abnf_regexp.PORT_RE), re.UNICODE)\n\n\nIHOST_MATCHER = re.compile('^' + abnf_regexp.IHOST_RE + '$', re.UNICODE)\n\nIPATH_MATCHER = re.compile(abnf_regexp.IPATH_RE, re.UNICODE)\n\nIQUERY_MATCHER = re.compile(abnf_regexp.IQUERY_RE, re.UNICODE)\n\nIFRAGMENT_MATCHER = re.compile(abnf_regexp.IFRAGMENT_RE, re.UNICODE)\n\n\nRELATIVE_IRI_MATCHER = re.compile(u'^%s(?:\\\\?%s)?(?:%s)?$' % (\n abnf_regexp.IRELATIVE_PART_RE,\n abnf_regexp.IQUERY_RE,\n abnf_regexp.IFRAGMENT_RE\n), re.UNICODE)\n\nABSOLUTE_IRI_MATCHER = re.compile(u'^%s:%s(?:\\\\?%s)?$' % (\n abnf_regexp.COMPONENT_PATTERN_DICT['scheme'],\n abnf_regexp.IHIER_PART_RE,\n abnf_regexp.IQUERY_RE[1:-1]\n), re.UNICODE)\n\n\n# Path merger as defined in http://tools.ietf.org/html/rfc3986#section-5.2.3\ndef merge_paths(base_uri, relative_path):\n \"\"\"Merge a base URI's path with a relative URI's path.\"\"\"\n if base_uri.path is None and base_uri.authority is not None:\n return '/' + relative_path\n else:\n path = base_uri.path or ''\n index = path.rfind('/')\n return path[:index] + '/' + relative_path\n\n\nUseExisting = object()\n", "path": "src/urllib3/packages/rfc3986/misc.py"}]} | 2,884 | 532 |
gh_patches_debug_3210 | rasdani/github-patches | git_diff | ray-project__ray-10443 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[rllib] _get_torch_exploration_action doesn't support tuple action dist
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### System information
* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Mac OS 10.15.4
* **Ray installed from (source or binary)**: binary (via pip)
* **Ray version**: 0.8.6, but nothing seems to have changed on master
* **Python version**: 3.7
### What is the problem?
When using tuple action distributions (as advised in #6372) and exploration is disabled, the line:
https://github.com/ray-project/ray/blob/a462ae2747afbeb9047e443cd51e67e3fe0b49e6/rllib/utils/exploration/stochastic_sampling.py#L75
from `_get_torch_exploration_action` raises the following exception:
```
AttributeError: 'tuple' object has no attribute 'size'
```
A simple fix that supports any type of distribution would be:
```python
logp = torch.zeros_like(action_dist.sampled_action_logp())
```
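
For clarity, a minimal sketch of how the patched helper could look (assuming, as the current RLlib torch distributions do, that `sampled_action_logp()` returns a `(batch_size,)` tensor even for tuple action distributions):

```python
@staticmethod
def _get_torch_exploration_action(action_dist, explore):
    if explore:
        action = action_dist.sample()
        logp = action_dist.sampled_action_logp()
    else:
        action = action_dist.deterministic_sample()
        # Build the zero log-probs from the logp tensor rather than from
        # `action`, which may be a tuple of tensors for Tuple action spaces.
        logp = torch.zeros_like(action_dist.sampled_action_logp())
    return action, logp
```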
I can submit a PR if it helps.
### Reproduction (REQUIRED)
Exact command to reproduce: run `python rllib_cartpole.py` with the following file:
```python
import gym.envs.classic_control
from gym.spaces import Tuple, Discrete
import ray
from ray import tune
class CustomCartpole(gym.envs.classic_control.CartPoleEnv):
"""Add a dimension to the cartpole action space that is ignored."""
def __init__(self, env_config):
super().__init__()
# if override_actions is false this is just the Cartpole environment
self.override_actions = env_config['override_actions']
if self.override_actions:
# 2 is the environment's normal action space
# 4 is just a dummy number to give it an extra dimension
self.original_action_space = self.action_space
self.action_space = Tuple([Discrete(2), Discrete(4)])
self.tuple_action_space = self.action_space
def step(self, action):
# call the cartpole environment with the original action
if self.override_actions:
self.action_space = self.original_action_space
return super().step(action[0])
else:
return super().step(action)
def main():
ray.init()
tune.run(
"PPO",
stop={"episode_reward_mean": 50},
config={
"env": CustomCartpole,
"env_config": {'override_actions': True},
"num_gpus": 0,
"num_workers": 1,
"eager": False,
"evaluation_interval": 1,
"evaluation_config": {
"explore": False,
},
"framework": "torch",
},
)
if __name__ == '__main__':
main()
```
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
</issue>
<code>
[start of rllib/utils/exploration/stochastic_sampling.py]
1 import tree
2 from typing import Union
3
4 from ray.rllib.models.action_dist import ActionDistribution
5 from ray.rllib.models.modelv2 import ModelV2
6 from ray.rllib.utils.annotations import override
7 from ray.rllib.utils.exploration.exploration import Exploration
8 from ray.rllib.utils.framework import try_import_tf, try_import_torch, \
9 TensorType
10
11 tf1, tf, tfv = try_import_tf()
12 torch, _ = try_import_torch()
13
14
15 class StochasticSampling(Exploration):
16 """An exploration that simply samples from a distribution.
17
18 The sampling can be made deterministic by passing explore=False into
19 the call to `get_exploration_action`.
20 Also allows for scheduled parameters for the distributions, such as
21 lowering stddev, temperature, etc.. over time.
22 """
23
24 def __init__(self, action_space, *, framework: str, model: ModelV2,
25 **kwargs):
26 """Initializes a StochasticSampling Exploration object.
27
28 Args:
29 action_space (Space): The gym action space used by the environment.
30 framework (str): One of None, "tf", "torch".
31 """
32 assert framework is not None
33 super().__init__(
34 action_space, model=model, framework=framework, **kwargs)
35
36 @override(Exploration)
37 def get_exploration_action(self,
38 *,
39 action_distribution: ActionDistribution,
40 timestep: Union[int, TensorType],
41 explore: bool = True):
42 if self.framework == "torch":
43 return self._get_torch_exploration_action(action_distribution,
44 explore)
45 else:
46 return self._get_tf_exploration_action_op(action_distribution,
47 explore)
48
49 def _get_tf_exploration_action_op(self, action_dist, explore):
50 sample = action_dist.sample()
51 deterministic_sample = action_dist.deterministic_sample()
52 action = tf.cond(
53 tf.constant(explore) if isinstance(explore, bool) else explore,
54 true_fn=lambda: sample,
55 false_fn=lambda: deterministic_sample)
56
57 def logp_false_fn():
58 batch_size = tf.shape(tree.flatten(action)[0])[0]
59 return tf.zeros(shape=(batch_size, ), dtype=tf.float32)
60
61 logp = tf.cond(
62 tf.constant(explore) if isinstance(explore, bool) else explore,
63 true_fn=lambda: action_dist.sampled_action_logp(),
64 false_fn=logp_false_fn)
65
66 return action, logp
67
68 @staticmethod
69 def _get_torch_exploration_action(action_dist, explore):
70 if explore:
71 action = action_dist.sample()
72 logp = action_dist.sampled_action_logp()
73 else:
74 action = action_dist.deterministic_sample()
75 logp = torch.zeros((action.size()[0], ), dtype=torch.float32)
76 return action, logp
77
[end of rllib/utils/exploration/stochastic_sampling.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rllib/utils/exploration/stochastic_sampling.py b/rllib/utils/exploration/stochastic_sampling.py
--- a/rllib/utils/exploration/stochastic_sampling.py
+++ b/rllib/utils/exploration/stochastic_sampling.py
@@ -72,5 +72,5 @@
logp = action_dist.sampled_action_logp()
else:
action = action_dist.deterministic_sample()
- logp = torch.zeros((action.size()[0], ), dtype=torch.float32)
+ logp = torch.zeros_like(action_dist.sampled_action_logp())
return action, logp
| {"golden_diff": "diff --git a/rllib/utils/exploration/stochastic_sampling.py b/rllib/utils/exploration/stochastic_sampling.py\n--- a/rllib/utils/exploration/stochastic_sampling.py\n+++ b/rllib/utils/exploration/stochastic_sampling.py\n@@ -72,5 +72,5 @@\n logp = action_dist.sampled_action_logp()\n else:\n action = action_dist.deterministic_sample()\n- logp = torch.zeros((action.size()[0], ), dtype=torch.float32)\n+ logp = torch.zeros_like(action_dist.sampled_action_logp())\n return action, logp\n", "issue": "[rllib] _get_torch_exploration_action doesn't support tuple action dist\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n### System information\r\n\r\n* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Mac OS 10.15.4\r\n* **Ray installed from (source or binary)**: binary (via pip)\r\n* **Ray version**: 0.8.6., but nothing seems to have changed on master\r\n* **Python version**: 3.7\r\n\r\n### What is the problem?\r\n\r\nWhen using tuple action distributions (as advised in #6372) and exploration is disabled, the line:\r\n\r\nhttps://github.com/ray-project/ray/blob/a462ae2747afbeb9047e443cd51e67e3fe0b49e6/rllib/utils/exploration/stochastic_sampling.py#L75\r\n\r\nfrom `_get_torch_exploration_action` raises the following exception:\r\n\r\n```\r\nAttributeError: 'tuple' object has no attribute 'size'\r\n```\r\n\r\nA simple fix that supports any type of distribution would be:\r\n```python\r\nlogp = torch.zeros_like(action_dist.sampled_action_logp())\r\n```\r\n\r\nI can submit a PR if it helps. \r\n\r\n### Reproduction (REQUIRED)\r\n\r\nExact command to reproduce: python `rllib_cartpole.py` for the following file\r\n\r\n```python\r\nimport gym.envs.classic_control\r\nfrom gym.spaces import Tuple, Discrete\r\n\r\nimport ray\r\nfrom ray import tune\r\n\r\n\r\nclass CustomCartpole(gym.envs.classic_control.CartPoleEnv):\r\n \"\"\"Add a dimension to the cartpole action space that is ignored.\"\"\"\r\n\r\n def __init__(self, env_config):\r\n super().__init__()\r\n # if override_actions is false this is just the Cartpole environment\r\n self.override_actions = env_config['override_actions']\r\n if self.override_actions:\r\n # 2 is the environment's normal action space\r\n # 4 is just a dummy number to give it an extra dimension\r\n self.original_action_space = self.action_space\r\n self.action_space = Tuple([Discrete(2), Discrete(4)])\r\n self.tuple_action_space = self.action_space\r\n\r\n def step(self, action):\r\n # call the cartpole environment with the original action\r\n if self.override_actions:\r\n self.action_space = self.original_action_space\r\n return super().step(action[0])\r\n else:\r\n return super().step(action)\r\n\r\n\r\ndef main():\r\n ray.init()\r\n tune.run(\r\n \"PPO\",\r\n stop={\"episode_reward_mean\": 50},\r\n config={\r\n \"env\": CustomCartpole,\r\n \"env_config\": {'override_actions': True},\r\n \"num_gpus\": 0,\r\n \"num_workers\": 1,\r\n \"eager\": False,\r\n \"evaluation_interval\": 1,\r\n \"evaluation_config\": {\r\n \"explore\": False,\r\n },\r\n \"framework\": \"torch\",\r\n },\r\n )\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "import tree\nfrom typing import Union\n\nfrom ray.rllib.models.action_dist import ActionDistribution\nfrom 
ray.rllib.models.modelv2 import ModelV2\nfrom ray.rllib.utils.annotations import override\nfrom ray.rllib.utils.exploration.exploration import Exploration\nfrom ray.rllib.utils.framework import try_import_tf, try_import_torch, \\\n TensorType\n\ntf1, tf, tfv = try_import_tf()\ntorch, _ = try_import_torch()\n\n\nclass StochasticSampling(Exploration):\n \"\"\"An exploration that simply samples from a distribution.\n\n The sampling can be made deterministic by passing explore=False into\n the call to `get_exploration_action`.\n Also allows for scheduled parameters for the distributions, such as\n lowering stddev, temperature, etc.. over time.\n \"\"\"\n\n def __init__(self, action_space, *, framework: str, model: ModelV2,\n **kwargs):\n \"\"\"Initializes a StochasticSampling Exploration object.\n\n Args:\n action_space (Space): The gym action space used by the environment.\n framework (str): One of None, \"tf\", \"torch\".\n \"\"\"\n assert framework is not None\n super().__init__(\n action_space, model=model, framework=framework, **kwargs)\n\n @override(Exploration)\n def get_exploration_action(self,\n *,\n action_distribution: ActionDistribution,\n timestep: Union[int, TensorType],\n explore: bool = True):\n if self.framework == \"torch\":\n return self._get_torch_exploration_action(action_distribution,\n explore)\n else:\n return self._get_tf_exploration_action_op(action_distribution,\n explore)\n\n def _get_tf_exploration_action_op(self, action_dist, explore):\n sample = action_dist.sample()\n deterministic_sample = action_dist.deterministic_sample()\n action = tf.cond(\n tf.constant(explore) if isinstance(explore, bool) else explore,\n true_fn=lambda: sample,\n false_fn=lambda: deterministic_sample)\n\n def logp_false_fn():\n batch_size = tf.shape(tree.flatten(action)[0])[0]\n return tf.zeros(shape=(batch_size, ), dtype=tf.float32)\n\n logp = tf.cond(\n tf.constant(explore) if isinstance(explore, bool) else explore,\n true_fn=lambda: action_dist.sampled_action_logp(),\n false_fn=logp_false_fn)\n\n return action, logp\n\n @staticmethod\n def _get_torch_exploration_action(action_dist, explore):\n if explore:\n action = action_dist.sample()\n logp = action_dist.sampled_action_logp()\n else:\n action = action_dist.deterministic_sample()\n logp = torch.zeros((action.size()[0], ), dtype=torch.float32)\n return action, logp\n", "path": "rllib/utils/exploration/stochastic_sampling.py"}]} | 2,009 | 132 |
gh_patches_debug_1209 | rasdani/github-patches | git_diff | scrapy__scrapy-4503 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix the hoverxref configuration
> You shouldn't override hoverxref_version and hoverxref_project since they are taken automatically from Read the Docs.
>
> If you want to avoid your CI failing because of this, you can define the environment variables as Read the Docs does:
>
> READTHEDOCS_PROJECT=scrapy
> READTHEDOCS_VERSION=''
>
> With the current configuration, all the versions built on Read the Docs will point to a different version on Read the Docs and this will conflict. For example, current master version in Read the Docs defines hoverxref_version='2.0.0' but that version does not exist on Read the Docs and the tooltip does not know where to get the content from.
@humitos at https://github.com/scrapy/scrapy/pull/4480#discussion_r409026912
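
Following that guidance, the hoverxref block in `docs/conf.py` would keep only the options Read the Docs cannot infer — a sketch of the intended end state, not necessarily the exact patch:

```python
# sphinx-hoverxref picks up project and version from the READTHEDOCS_*
# environment variables, so no hoverxref_project / hoverxref_version
# overrides are needed here.
hoverxref_auto_ref = True
hoverxref_role_types = {
    "class": "tooltip",
    "confval": "tooltip",
    "hoverxref": "tooltip",
    "mod": "tooltip",
    "ref": "tooltip",
}
```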
</issue>
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Scrapy documentation build configuration file, created by
4 # sphinx-quickstart on Mon Nov 24 12:02:52 2008.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # The contents of this file are pickled, so don't put values in the namespace
9 # that aren't pickleable (module imports are okay, they're removed automatically).
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import sys
15 from datetime import datetime
16 from os import path
17
18 # If your extensions are in another directory, add it here. If the directory
19 # is relative to the documentation root, use os.path.abspath to make it
20 # absolute, like shown here.
21 sys.path.append(path.join(path.dirname(__file__), "_ext"))
22 sys.path.insert(0, path.dirname(path.dirname(__file__)))
23
24
25 # General configuration
26 # ---------------------
27
28 # Add any Sphinx extension module names here, as strings. They can be extensions
29 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
30 extensions = [
31 'hoverxref.extension',
32 'notfound.extension',
33 'scrapydocs',
34 'sphinx.ext.autodoc',
35 'sphinx.ext.coverage',
36 'sphinx.ext.intersphinx',
37 'sphinx.ext.viewcode',
38 ]
39
40 # Add any paths that contain templates here, relative to this directory.
41 templates_path = ['_templates']
42
43 # The suffix of source filenames.
44 source_suffix = '.rst'
45
46 # The encoding of source files.
47 #source_encoding = 'utf-8'
48
49 # The master toctree document.
50 master_doc = 'index'
51
52 # General information about the project.
53 project = 'Scrapy'
54 copyright = '2008–{}, Scrapy developers'.format(datetime.now().year)
55
56 # The version info for the project you're documenting, acts as replacement for
57 # |version| and |release|, also used in various other places throughout the
58 # built documents.
59 #
60 # The short X.Y version.
61 try:
62 import scrapy
63 version = '.'.join(map(str, scrapy.version_info[:2]))
64 release = scrapy.__version__
65 except ImportError:
66 version = ''
67 release = ''
68
69 # The language for content autogenerated by Sphinx. Refer to documentation
70 # for a list of supported languages.
71 language = 'en'
72
73 # There are two options for replacing |today|: either, you set today to some
74 # non-false value, then it is used:
75 #today = ''
76 # Else, today_fmt is used as the format for a strftime call.
77 #today_fmt = '%B %d, %Y'
78
79 # List of documents that shouldn't be included in the build.
80 #unused_docs = []
81
82 exclude_patterns = ['build']
83
84 # List of directories, relative to source directory, that shouldn't be searched
85 # for source files.
86 exclude_trees = ['.build']
87
88 # The reST default role (used for this markup: `text`) to use for all documents.
89 #default_role = None
90
91 # If true, '()' will be appended to :func: etc. cross-reference text.
92 #add_function_parentheses = True
93
94 # If true, the current module name will be prepended to all description
95 # unit titles (such as .. function::).
96 #add_module_names = True
97
98 # If true, sectionauthor and moduleauthor directives will be shown in the
99 # output. They are ignored by default.
100 #show_authors = False
101
102 # The name of the Pygments (syntax highlighting) style to use.
103 pygments_style = 'sphinx'
104
105
106 # Options for HTML output
107 # -----------------------
108
109 # The theme to use for HTML and HTML Help pages. See the documentation for
110 # a list of builtin themes.
111 html_theme = 'sphinx_rtd_theme'
112
113 # Theme options are theme-specific and customize the look and feel of a theme
114 # further. For a list of options available for each theme, see the
115 # documentation.
116 #html_theme_options = {}
117
118 # Add any paths that contain custom themes here, relative to this directory.
119 # Add path to the RTD explicitly to robustify builds (otherwise might
120 # fail in a clean Debian build env)
121 import sphinx_rtd_theme
122 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
123
124
125 # The style sheet to use for HTML and HTML Help pages. A file of that name
126 # must exist either in Sphinx' static/ path, or in one of the custom paths
127 # given in html_static_path.
128 # html_style = 'scrapydoc.css'
129
130 # The name for this set of Sphinx documents. If None, it defaults to
131 # "<project> v<release> documentation".
132 #html_title = None
133
134 # A shorter title for the navigation bar. Default is the same as html_title.
135 #html_short_title = None
136
137 # The name of an image file (relative to this directory) to place at the top
138 # of the sidebar.
139 #html_logo = None
140
141 # The name of an image file (within the static path) to use as favicon of the
142 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
143 # pixels large.
144 #html_favicon = None
145
146 # Add any paths that contain custom static files (such as style sheets) here,
147 # relative to this directory. They are copied after the builtin static files,
148 # so a file named "default.css" will overwrite the builtin "default.css".
149 html_static_path = ['_static']
150
151 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
152 # using the given strftime format.
153 html_last_updated_fmt = '%b %d, %Y'
154
155 # Custom sidebar templates, maps document names to template names.
156 #html_sidebars = {}
157
158 # Additional templates that should be rendered to pages, maps page names to
159 # template names.
160 #html_additional_pages = {}
161
162 # If false, no module index is generated.
163 #html_use_modindex = True
164
165 # If false, no index is generated.
166 #html_use_index = True
167
168 # If true, the index is split into individual pages for each letter.
169 #html_split_index = False
170
171 # If true, the reST sources are included in the HTML build as _sources/<name>.
172 html_copy_source = True
173
174 # If true, an OpenSearch description file will be output, and all pages will
175 # contain a <link> tag referring to it. The value of this option must be the
176 # base URL from which the finished HTML is served.
177 #html_use_opensearch = ''
178
179 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
180 #html_file_suffix = ''
181
182 # Output file base name for HTML help builder.
183 htmlhelp_basename = 'Scrapydoc'
184
185
186 # Options for LaTeX output
187 # ------------------------
188
189 # The paper size ('letter' or 'a4').
190 #latex_paper_size = 'letter'
191
192 # The font size ('10pt', '11pt' or '12pt').
193 #latex_font_size = '10pt'
194
195 # Grouping the document tree into LaTeX files. List of tuples
196 # (source start file, target name, title, author, document class [howto/manual]).
197 latex_documents = [
198 ('index', 'Scrapy.tex', 'Scrapy Documentation',
199 'Scrapy developers', 'manual'),
200 ]
201
202 # The name of an image file (relative to this directory) to place at the top of
203 # the title page.
204 #latex_logo = None
205
206 # For "manual" documents, if this is true, then toplevel headings are parts,
207 # not chapters.
208 #latex_use_parts = False
209
210 # Additional stuff for the LaTeX preamble.
211 #latex_preamble = ''
212
213 # Documents to append as an appendix to all manuals.
214 #latex_appendices = []
215
216 # If false, no module index is generated.
217 #latex_use_modindex = True
218
219
220 # Options for the linkcheck builder
221 # ---------------------------------
222
223 # A list of regular expressions that match URIs that should not be checked when
224 # doing a linkcheck build.
225 linkcheck_ignore = [
226 'http://localhost:\d+', 'http://hg.scrapy.org',
227 'http://directory.google.com/'
228 ]
229
230
231 # Options for the Coverage extension
232 # ----------------------------------
233 coverage_ignore_pyobjects = [
234 # Contract’s add_pre_hook and add_post_hook are not documented because
235 # they should be transparent to contract developers, for whom pre_hook and
236 # post_hook should be the actual concern.
237 r'\bContract\.add_(pre|post)_hook$',
238
239 # ContractsManager is an internal class, developers are not expected to
240 # interact with it directly in any way.
241 r'\bContractsManager\b$',
242
243 # For default contracts we only want to document their general purpose in
244 # their __init__ method, the methods they reimplement to achieve that purpose
245 # should be irrelevant to developers using those contracts.
246 r'\w+Contract\.(adjust_request_args|(pre|post)_process)$',
247
248 # Methods of downloader middlewares are not documented, only the classes
249 # themselves, since downloader middlewares are controlled through Scrapy
250 # settings.
251 r'^scrapy\.downloadermiddlewares\.\w*?\.(\w*?Middleware|DownloaderStats)\.',
252
253 # Base classes of downloader middlewares are implementation details that
254 # are not meant for users.
255 r'^scrapy\.downloadermiddlewares\.\w*?\.Base\w*?Middleware',
256
257 # Private exception used by the command-line interface implementation.
258 r'^scrapy\.exceptions\.UsageError',
259
260 # Methods of BaseItemExporter subclasses are only documented in
261 # BaseItemExporter.
262 r'^scrapy\.exporters\.(?!BaseItemExporter\b)\w*?\.',
263
264 # Extension behavior is only modified through settings. Methods of
265 # extension classes, as well as helper functions, are implementation
266 # details that are not documented.
267 r'^scrapy\.extensions\.[a-z]\w*?\.[A-Z]\w*?\.', # methods
268 r'^scrapy\.extensions\.[a-z]\w*?\.[a-z]', # helper functions
269
270 # Never documented before, and deprecated now.
271 r'^scrapy\.item\.DictItem$',
272 r'^scrapy\.linkextractors\.FilteringLinkExtractor$',
273
274 # Implementation detail of LxmlLinkExtractor
275 r'^scrapy\.linkextractors\.lxmlhtml\.LxmlParserLinkExtractor',
276 ]
277
278
279 # Options for the InterSphinx extension
280 # -------------------------------------
281
282 intersphinx_mapping = {
283 'coverage': ('https://coverage.readthedocs.io/en/stable', None),
284 'cssselect': ('https://cssselect.readthedocs.io/en/latest', None),
285 'pytest': ('https://docs.pytest.org/en/latest', None),
286 'python': ('https://docs.python.org/3', None),
287 'sphinx': ('https://www.sphinx-doc.org/en/master', None),
288 'tox': ('https://tox.readthedocs.io/en/latest', None),
289 'twisted': ('https://twistedmatrix.com/documents/current', None),
290 'twistedapi': ('https://twistedmatrix.com/documents/current/api', None),
291 }
292
293
294 # Options for sphinx-hoverxref options
295 # ------------------------------------
296
297 hoverxref_auto_ref = True
298 hoverxref_project = "scrapy"
299 hoverxref_version = release
300 hoverxref_role_types = {
301 "class": "tooltip",
302 "confval": "tooltip",
303 "hoverxref": "tooltip",
304 "mod": "tooltip",
305 "ref": "tooltip",
306 }
307
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -295,8 +295,6 @@
# ------------------------------------
hoverxref_auto_ref = True
-hoverxref_project = "scrapy"
-hoverxref_version = release
hoverxref_role_types = {
"class": "tooltip",
"confval": "tooltip",
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -295,8 +295,6 @@\n # ------------------------------------\n \n hoverxref_auto_ref = True\n-hoverxref_project = \"scrapy\"\n-hoverxref_version = release\n hoverxref_role_types = {\n \"class\": \"tooltip\",\n \"confval\": \"tooltip\",\n", "issue": "Fix the hoverxref configuration\n> You shouldn't override hoverxref_version and hoverxref_project since they are taken automatically from Read the Docs.\r\n>\r\n> If you want to avoid your CI failing because of this, you can define the environment variables as Read the Docs does:\r\n> \r\n> READTHEDOCS_PROJECT=scrapy\r\n> READTHEDOCS_VERSION=''\r\n> \r\n> With the current configuration, all the versions built on Read the Docs will point to a different version on Read the Docs and this will conflict. For example, current master version in Read the Docs defines hoverxref_version='2.0.0' but that version does not exist on Read the Docs and the tooltip does not known where to get the content from.\r\n\r\n@humitos at https://github.com/scrapy/scrapy/pull/4480#discussion_r409026912\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Scrapy documentation build configuration file, created by\n# sphinx-quickstart on Mon Nov 24 12:02:52 2008.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# The contents of this file are pickled, so don't put values in the namespace\n# that aren't pickleable (module imports are okay, they're removed automatically).\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nfrom datetime import datetime\nfrom os import path\n\n# If your extensions are in another directory, add it here. If the directory\n# is relative to the documentation root, use os.path.abspath to make it\n# absolute, like shown here.\nsys.path.append(path.join(path.dirname(__file__), \"_ext\"))\nsys.path.insert(0, path.dirname(path.dirname(__file__)))\n\n\n# General configuration\n# ---------------------\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'hoverxref.extension',\n 'notfound.extension',\n 'scrapydocs',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.coverage',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.viewcode',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Scrapy'\ncopyright = '2008\u2013{}, Scrapy developers'.format(datetime.now().year)\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\ntry:\n import scrapy\n version = '.'.join(map(str, scrapy.version_info[:2]))\n release = scrapy.__version__\nexcept ImportError:\n version = ''\n release = ''\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\nlanguage = 'en'\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n#unused_docs = []\n\nexclude_patterns = ['build']\n\n# List of directories, relative to source directory, that shouldn't be searched\n# for source files.\nexclude_trees = ['.build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# Options for HTML output\n# -----------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# Add path to the RTD explicitly to robustify builds (otherwise might\n# fail in a clean Debian build env)\nimport sphinx_rtd_theme\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n\n# The style sheet to use for HTML and HTML Help pages. A file of that name\n# must exist either in Sphinx' static/ path, or in one of the custom paths\n# given in html_static_path.\n# html_style = 'scrapydoc.css'\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_use_modindex = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, the reST sources are included in the HTML build as _sources/<name>.\nhtml_copy_source = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# If nonempty, this is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = ''\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Scrapydoc'\n\n\n# Options for LaTeX output\n# ------------------------\n\n# The paper size ('letter' or 'a4').\n#latex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n#latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, document class [howto/manual]).\nlatex_documents = [\n ('index', 'Scrapy.tex', 'Scrapy Documentation',\n 'Scrapy developers', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# Additional stuff for the LaTeX preamble.\n#latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_use_modindex = True\n\n\n# Options for the linkcheck builder\n# ---------------------------------\n\n# A list of regular expressions that match URIs that should not be checked when\n# doing a linkcheck build.\nlinkcheck_ignore = [\n 'http://localhost:\\d+', 'http://hg.scrapy.org',\n 'http://directory.google.com/'\n]\n\n\n# Options for the Coverage extension\n# ----------------------------------\ncoverage_ignore_pyobjects = [\n # Contract\u2019s add_pre_hook and add_post_hook are not documented because\n # they should be transparent to contract developers, for whom pre_hook and\n # post_hook should be the actual concern.\n r'\\bContract\\.add_(pre|post)_hook$',\n\n # ContractsManager is an internal class, developers are not expected to\n # interact with it directly in any way.\n r'\\bContractsManager\\b$',\n\n # For default contracts we only want to document their general purpose in\n # their __init__ method, the methods they reimplement to achieve that purpose\n # should be irrelevant to developers using those contracts.\n r'\\w+Contract\\.(adjust_request_args|(pre|post)_process)$',\n\n # Methods of downloader middlewares are not documented, only the classes\n # themselves, since downloader middlewares are controlled through Scrapy\n # settings.\n 
r'^scrapy\\.downloadermiddlewares\\.\\w*?\\.(\\w*?Middleware|DownloaderStats)\\.',\n\n # Base classes of downloader middlewares are implementation details that\n # are not meant for users.\n r'^scrapy\\.downloadermiddlewares\\.\\w*?\\.Base\\w*?Middleware',\n\n # Private exception used by the command-line interface implementation.\n r'^scrapy\\.exceptions\\.UsageError',\n\n # Methods of BaseItemExporter subclasses are only documented in\n # BaseItemExporter.\n r'^scrapy\\.exporters\\.(?!BaseItemExporter\\b)\\w*?\\.',\n\n # Extension behavior is only modified through settings. Methods of\n # extension classes, as well as helper functions, are implementation\n # details that are not documented.\n r'^scrapy\\.extensions\\.[a-z]\\w*?\\.[A-Z]\\w*?\\.', # methods\n r'^scrapy\\.extensions\\.[a-z]\\w*?\\.[a-z]', # helper functions\n\n # Never documented before, and deprecated now.\n r'^scrapy\\.item\\.DictItem$',\n r'^scrapy\\.linkextractors\\.FilteringLinkExtractor$',\n\n # Implementation detail of LxmlLinkExtractor\n r'^scrapy\\.linkextractors\\.lxmlhtml\\.LxmlParserLinkExtractor',\n]\n\n\n# Options for the InterSphinx extension\n# -------------------------------------\n\nintersphinx_mapping = {\n 'coverage': ('https://coverage.readthedocs.io/en/stable', None),\n 'cssselect': ('https://cssselect.readthedocs.io/en/latest', None),\n 'pytest': ('https://docs.pytest.org/en/latest', None),\n 'python': ('https://docs.python.org/3', None),\n 'sphinx': ('https://www.sphinx-doc.org/en/master', None),\n 'tox': ('https://tox.readthedocs.io/en/latest', None),\n 'twisted': ('https://twistedmatrix.com/documents/current', None),\n 'twistedapi': ('https://twistedmatrix.com/documents/current/api', None),\n}\n\n\n# Options for sphinx-hoverxref options\n# ------------------------------------\n\nhoverxref_auto_ref = True\nhoverxref_project = \"scrapy\"\nhoverxref_version = release\nhoverxref_role_types = {\n \"class\": \"tooltip\",\n \"confval\": \"tooltip\",\n \"hoverxref\": \"tooltip\",\n \"mod\": \"tooltip\",\n \"ref\": \"tooltip\",\n}\n", "path": "docs/conf.py"}]} | 4,072 | 88 |
gh_patches_debug_35689 | rasdani/github-patches | git_diff | microsoft__torchgeo-1898 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `ignore_index` support for Jaccard Loss
### Summary
Currently, the `SemanticSegmentationTask` recognises the `ignore_index` parameter only when cross entropy or focal loss is used. However, `smp.losses.JaccardLoss` implicitly supports this option via its `classes` parameter, which is currently set to `self.hparams["num_classes"]`. The feature request is to adapt what is passed as an argument to this parameter so that `ignore_index` can work with all currently supported losses.
### Rationale
The Jaccard index is a common semantic segmentation metric which also makes for a decent loss function. Because of the way it is defined, it is important to ignore overly dominant classes (e.g., the background when classifying building rooftops); otherwise performance can be hindered significantly.
### Implementation
Change the `classes` argument of `smp.losses.JaccardLoss` in `SemanticSegmentationTask.configure_losses` from `self.hparams["num_classes"]` to `list(set(list(range(self.hparams["num_classes"]))).difference(set([ignore_index])))`, assuming that `ignore_index` is not `None`.
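
A minimal sketch of that change inside `configure_losses` (the list comprehension is an order-preserving equivalent of the set-difference expression above):

```python
# Sketch only: assumes the surrounding configure_losses() context shown below.
ignore_index = self.hparams["ignore_index"]
classes = list(range(self.hparams["num_classes"]))
if ignore_index is not None:
    classes = [c for c in classes if c != ignore_index]
self.criterion = smp.losses.JaccardLoss(mode="multiclass", classes=classes)
```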
### Alternatives
_No response_
### Additional information
_No response_
</issue>
<code>
[start of torchgeo/trainers/segmentation.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 """Trainers for semantic segmentation."""
5
6 import os
7 import warnings
8 from typing import Any, Optional, Union
9
10 import matplotlib.pyplot as plt
11 import segmentation_models_pytorch as smp
12 import torch.nn as nn
13 from matplotlib.figure import Figure
14 from torch import Tensor
15 from torchmetrics import MetricCollection
16 from torchmetrics.classification import MulticlassAccuracy, MulticlassJaccardIndex
17 from torchvision.models._api import WeightsEnum
18
19 from ..datasets import RGBBandsMissingError, unbind_samples
20 from ..models import FCN, get_weight
21 from . import utils
22 from .base import BaseTask
23
24
25 class SemanticSegmentationTask(BaseTask):
26 """Semantic Segmentation."""
27
28 def __init__(
29 self,
30 model: str = "unet",
31 backbone: str = "resnet50",
32 weights: Optional[Union[WeightsEnum, str, bool]] = None,
33 in_channels: int = 3,
34 num_classes: int = 1000,
35 num_filters: int = 3,
36 loss: str = "ce",
37 class_weights: Optional[Tensor] = None,
38 ignore_index: Optional[int] = None,
39 lr: float = 1e-3,
40 patience: int = 10,
41 freeze_backbone: bool = False,
42 freeze_decoder: bool = False,
43 ) -> None:
44 """Inititalize a new SemanticSegmentationTask instance.
45
46 Args:
47 model: Name of the
48 `smp <https://smp.readthedocs.io/en/latest/models.html>`__ model to use.
49 backbone: Name of the `timm
50 <https://smp.readthedocs.io/en/latest/encoders_timm.html>`__ or `smp
51 <https://smp.readthedocs.io/en/latest/encoders.html>`__ backbone to use.
52 weights: Initial model weights. Either a weight enum, the string
53 representation of a weight enum, True for ImageNet weights, False or
54 None for random weights, or the path to a saved model state dict. FCN
55 model does not support pretrained weights. Pretrained ViT weight enums
56 are not supported yet.
57 in_channels: Number of input channels to model.
58 num_classes: Number of prediction classes.
59 num_filters: Number of filters. Only applicable when model='fcn'.
60 loss: Name of the loss function, currently supports
61 'ce', 'jaccard' or 'focal' loss.
62 class_weights: Optional rescaling weight given to each
63 class and used with 'ce' loss.
64 ignore_index: Optional integer class index to ignore in the loss and
65 metrics.
66 lr: Learning rate for optimizer.
67 patience: Patience for learning rate scheduler.
68 freeze_backbone: Freeze the backbone network to fine-tune the
69 decoder and segmentation head.
70 freeze_decoder: Freeze the decoder network to linear probe
71 the segmentation head.
72
73 Warns:
74 UserWarning: When loss='jaccard' and ignore_index is specified.
75
76 .. versionchanged:: 0.3
77 *ignore_zeros* was renamed to *ignore_index*.
78
79 .. versionchanged:: 0.4
80 *segmentation_model*, *encoder_name*, and *encoder_weights*
81 were renamed to *model*, *backbone*, and *weights*.
82
83 .. versionadded: 0.5
84 The *class_weights*, *freeze_backbone*, and *freeze_decoder* parameters.
85
86 .. versionchanged:: 0.5
87 The *weights* parameter now supports WeightEnums and checkpoint paths.
88 *learning_rate* and *learning_rate_schedule_patience* were renamed to
89 *lr* and *patience*.
90 """
91 if ignore_index is not None and loss == "jaccard":
92 warnings.warn(
93 "ignore_index has no effect on training when loss='jaccard'",
94 UserWarning,
95 )
96
97 self.weights = weights
98 super().__init__(ignore="weights")
99
100 def configure_losses(self) -> None:
101 """Initialize the loss criterion.
102
103 Raises:
104 ValueError: If *loss* is invalid.
105 """
106 loss: str = self.hparams["loss"]
107 ignore_index = self.hparams["ignore_index"]
108 if loss == "ce":
109 ignore_value = -1000 if ignore_index is None else ignore_index
110 self.criterion = nn.CrossEntropyLoss(
111 ignore_index=ignore_value, weight=self.hparams["class_weights"]
112 )
113 elif loss == "jaccard":
114 self.criterion = smp.losses.JaccardLoss(
115 mode="multiclass", classes=self.hparams["num_classes"]
116 )
117 elif loss == "focal":
118 self.criterion = smp.losses.FocalLoss(
119 "multiclass", ignore_index=ignore_index, normalized=True
120 )
121 else:
122 raise ValueError(
123 f"Loss type '{loss}' is not valid. "
124 "Currently, supports 'ce', 'jaccard' or 'focal' loss."
125 )
126
127 def configure_metrics(self) -> None:
128 """Initialize the performance metrics."""
129 num_classes: int = self.hparams["num_classes"]
130 ignore_index: Optional[int] = self.hparams["ignore_index"]
131 metrics = MetricCollection(
132 [
133 MulticlassAccuracy(
134 num_classes=num_classes,
135 ignore_index=ignore_index,
136 multidim_average="global",
137 average="micro",
138 ),
139 MulticlassJaccardIndex(
140 num_classes=num_classes, ignore_index=ignore_index, average="micro"
141 ),
142 ]
143 )
144 self.train_metrics = metrics.clone(prefix="train_")
145 self.val_metrics = metrics.clone(prefix="val_")
146 self.test_metrics = metrics.clone(prefix="test_")
147
148 def configure_models(self) -> None:
149 """Initialize the model.
150
151 Raises:
152 ValueError: If *model* is invalid.
153 """
154 model: str = self.hparams["model"]
155 backbone: str = self.hparams["backbone"]
156 weights = self.weights
157 in_channels: int = self.hparams["in_channels"]
158 num_classes: int = self.hparams["num_classes"]
159 num_filters: int = self.hparams["num_filters"]
160
161 if model == "unet":
162 self.model = smp.Unet(
163 encoder_name=backbone,
164 encoder_weights="imagenet" if weights is True else None,
165 in_channels=in_channels,
166 classes=num_classes,
167 )
168 elif model == "deeplabv3+":
169 self.model = smp.DeepLabV3Plus(
170 encoder_name=backbone,
171 encoder_weights="imagenet" if weights is True else None,
172 in_channels=in_channels,
173 classes=num_classes,
174 )
175 elif model == "fcn":
176 self.model = FCN(
177 in_channels=in_channels, classes=num_classes, num_filters=num_filters
178 )
179 else:
180 raise ValueError(
181 f"Model type '{model}' is not valid. "
182 "Currently, only supports 'unet', 'deeplabv3+' and 'fcn'."
183 )
184
185 if model != "fcn":
186 if weights and weights is not True:
187 if isinstance(weights, WeightsEnum):
188 state_dict = weights.get_state_dict(progress=True)
189 elif os.path.exists(weights):
190 _, state_dict = utils.extract_backbone(weights)
191 else:
192 state_dict = get_weight(weights).get_state_dict(progress=True)
193 self.model.encoder.load_state_dict(state_dict)
194
195 # Freeze backbone
196 if self.hparams["freeze_backbone"] and model in ["unet", "deeplabv3+"]:
197 for param in self.model.encoder.parameters():
198 param.requires_grad = False
199
200 # Freeze decoder
201 if self.hparams["freeze_decoder"] and model in ["unet", "deeplabv3+"]:
202 for param in self.model.decoder.parameters():
203 param.requires_grad = False
204
205 def training_step(
206 self, batch: Any, batch_idx: int, dataloader_idx: int = 0
207 ) -> Tensor:
208 """Compute the training loss and additional metrics.
209
210 Args:
211 batch: The output of your DataLoader.
212 batch_idx: Integer displaying index of this batch.
213 dataloader_idx: Index of the current dataloader.
214
215 Returns:
216 The loss tensor.
217 """
218 x = batch["image"]
219 y = batch["mask"]
220 y_hat = self(x)
221 loss: Tensor = self.criterion(y_hat, y)
222 self.log("train_loss", loss)
223 self.train_metrics(y_hat, y)
224 self.log_dict(self.train_metrics)
225 return loss
226
227 def validation_step(
228 self, batch: Any, batch_idx: int, dataloader_idx: int = 0
229 ) -> None:
230 """Compute the validation loss and additional metrics.
231
232 Args:
233 batch: The output of your DataLoader.
234 batch_idx: Integer displaying index of this batch.
235 dataloader_idx: Index of the current dataloader.
236 """
237 x = batch["image"]
238 y = batch["mask"]
239 y_hat = self(x)
240 loss = self.criterion(y_hat, y)
241 self.log("val_loss", loss)
242 self.val_metrics(y_hat, y)
243 self.log_dict(self.val_metrics)
244
245 if (
246 batch_idx < 10
247 and hasattr(self.trainer, "datamodule")
248 and hasattr(self.trainer.datamodule, "plot")
249 and self.logger
250 and hasattr(self.logger, "experiment")
251 and hasattr(self.logger.experiment, "add_figure")
252 ):
253 datamodule = self.trainer.datamodule
254 batch["prediction"] = y_hat.argmax(dim=1)
255 for key in ["image", "mask", "prediction"]:
256 batch[key] = batch[key].cpu()
257 sample = unbind_samples(batch)[0]
258
259 fig: Optional[Figure] = None
260 try:
261 fig = datamodule.plot(sample)
262 except RGBBandsMissingError:
263 pass
264
265 if fig:
266 summary_writer = self.logger.experiment
267 summary_writer.add_figure(
268 f"image/{batch_idx}", fig, global_step=self.global_step
269 )
270 plt.close()
271
272 def test_step(self, batch: Any, batch_idx: int, dataloader_idx: int = 0) -> None:
273 """Compute the test loss and additional metrics.
274
275 Args:
276 batch: The output of your DataLoader.
277 batch_idx: Integer displaying index of this batch.
278 dataloader_idx: Index of the current dataloader.
279 """
280 x = batch["image"]
281 y = batch["mask"]
282 y_hat = self(x)
283 loss = self.criterion(y_hat, y)
284 self.log("test_loss", loss)
285 self.test_metrics(y_hat, y)
286 self.log_dict(self.test_metrics)
287
288 def predict_step(
289 self, batch: Any, batch_idx: int, dataloader_idx: int = 0
290 ) -> Tensor:
291 """Compute the predicted class probabilities.
292
293 Args:
294 batch: The output of your DataLoader.
295 batch_idx: Integer displaying index of this batch.
296 dataloader_idx: Index of the current dataloader.
297
298 Returns:
299 Output predicted probabilities.
300 """
301 x = batch["image"]
302 y_hat: Tensor = self(x).softmax(dim=1)
303 return y_hat
304
[end of torchgeo/trainers/segmentation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchgeo/trainers/segmentation.py b/torchgeo/trainers/segmentation.py
--- a/torchgeo/trainers/segmentation.py
+++ b/torchgeo/trainers/segmentation.py
@@ -4,7 +4,6 @@
"""Trainers for semantic segmentation."""
import os
-import warnings
from typing import Any, Optional, Union
import matplotlib.pyplot as plt
@@ -70,9 +69,6 @@
freeze_decoder: Freeze the decoder network to linear probe
the segmentation head.
- Warns:
- UserWarning: When loss='jaccard' and ignore_index is specified.
-
.. versionchanged:: 0.3
*ignore_zeros* was renamed to *ignore_index*.
@@ -87,13 +83,10 @@
The *weights* parameter now supports WeightEnums and checkpoint paths.
*learning_rate* and *learning_rate_schedule_patience* were renamed to
*lr* and *patience*.
- """
- if ignore_index is not None and loss == "jaccard":
- warnings.warn(
- "ignore_index has no effect on training when loss='jaccard'",
- UserWarning,
- )
+ .. versionchanged:: 0.6
+ The *ignore_index* parameter now works for jaccard loss.
+ """
self.weights = weights
super().__init__(ignore="weights")
@@ -111,9 +104,13 @@
ignore_index=ignore_value, weight=self.hparams["class_weights"]
)
elif loss == "jaccard":
- self.criterion = smp.losses.JaccardLoss(
- mode="multiclass", classes=self.hparams["num_classes"]
- )
+ # JaccardLoss requires a list of classes to use instead of a class
+ # index to ignore.
+ classes = [
+ i for i in range(self.hparams["num_classes"]) if i != ignore_index
+ ]
+
+ self.criterion = smp.losses.JaccardLoss(mode="multiclass", classes=classes)
elif loss == "focal":
self.criterion = smp.losses.FocalLoss(
"multiclass", ignore_index=ignore_index, normalized=True
| {"golden_diff": "diff --git a/torchgeo/trainers/segmentation.py b/torchgeo/trainers/segmentation.py\n--- a/torchgeo/trainers/segmentation.py\n+++ b/torchgeo/trainers/segmentation.py\n@@ -4,7 +4,6 @@\n \"\"\"Trainers for semantic segmentation.\"\"\"\n \n import os\n-import warnings\n from typing import Any, Optional, Union\n \n import matplotlib.pyplot as plt\n@@ -70,9 +69,6 @@\n freeze_decoder: Freeze the decoder network to linear probe\n the segmentation head.\n \n- Warns:\n- UserWarning: When loss='jaccard' and ignore_index is specified.\n-\n .. versionchanged:: 0.3\n *ignore_zeros* was renamed to *ignore_index*.\n \n@@ -87,13 +83,10 @@\n The *weights* parameter now supports WeightEnums and checkpoint paths.\n *learning_rate* and *learning_rate_schedule_patience* were renamed to\n *lr* and *patience*.\n- \"\"\"\n- if ignore_index is not None and loss == \"jaccard\":\n- warnings.warn(\n- \"ignore_index has no effect on training when loss='jaccard'\",\n- UserWarning,\n- )\n \n+ .. versionchanged:: 0.6\n+ The *ignore_index* parameter now works for jaccard loss.\n+ \"\"\"\n self.weights = weights\n super().__init__(ignore=\"weights\")\n \n@@ -111,9 +104,13 @@\n ignore_index=ignore_value, weight=self.hparams[\"class_weights\"]\n )\n elif loss == \"jaccard\":\n- self.criterion = smp.losses.JaccardLoss(\n- mode=\"multiclass\", classes=self.hparams[\"num_classes\"]\n- )\n+ # JaccardLoss requires a list of classes to use instead of a class\n+ # index to ignore.\n+ classes = [\n+ i for i in range(self.hparams[\"num_classes\"]) if i != ignore_index\n+ ]\n+\n+ self.criterion = smp.losses.JaccardLoss(mode=\"multiclass\", classes=classes)\n elif loss == \"focal\":\n self.criterion = smp.losses.FocalLoss(\n \"multiclass\", ignore_index=ignore_index, normalized=True\n", "issue": "Add `ignore_index` support for Jaccard Loss\n### Summary\r\n\r\nCurrently, the `SemanticSegmentationTask` recognises the `ignore_index` parameter only when cross entropy and focal loss is used. However, `smp.losses.JaccardLoss` implicitly supports this option via its `classes` parameter, which is currently set to `self.hparams[\"num_classes\"]`. The FR is to adapt what is passed as argument to this parameter so that `ignore_index` can work with all currently supported losses.\r\n\r\n### Rationale\r\n\r\nThe Jaccard index is a common semantic segmentation metric which also makes for a decent loss function. Because of the way it is defined, it is important to ignore overly dominant classes (e.g., the background when classifying building rooftops); otherwise performance can be hindered significantly.\r\n\r\n### Implementation\r\n\r\nChange the `classes` argument of `smp.losses.JaccardLoss` in `SemanticSegmentationTask.configure_losses` from `self.hparams[\"num_classes\"]` to `list(set(list(range(self.hparams[\"num_classes\"]))).difference(set([ignore_index])))`, assuming that `ignore_index` is not `None`.\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional information\r\n\r\n_No response_\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"Trainers for semantic segmentation.\"\"\"\n\nimport os\nimport warnings\nfrom typing import Any, Optional, Union\n\nimport matplotlib.pyplot as plt\nimport segmentation_models_pytorch as smp\nimport torch.nn as nn\nfrom matplotlib.figure import Figure\nfrom torch import Tensor\nfrom torchmetrics import MetricCollection\nfrom torchmetrics.classification import MulticlassAccuracy, MulticlassJaccardIndex\nfrom torchvision.models._api import WeightsEnum\n\nfrom ..datasets import RGBBandsMissingError, unbind_samples\nfrom ..models import FCN, get_weight\nfrom . import utils\nfrom .base import BaseTask\n\n\nclass SemanticSegmentationTask(BaseTask):\n \"\"\"Semantic Segmentation.\"\"\"\n\n def __init__(\n self,\n model: str = \"unet\",\n backbone: str = \"resnet50\",\n weights: Optional[Union[WeightsEnum, str, bool]] = None,\n in_channels: int = 3,\n num_classes: int = 1000,\n num_filters: int = 3,\n loss: str = \"ce\",\n class_weights: Optional[Tensor] = None,\n ignore_index: Optional[int] = None,\n lr: float = 1e-3,\n patience: int = 10,\n freeze_backbone: bool = False,\n freeze_decoder: bool = False,\n ) -> None:\n \"\"\"Inititalize a new SemanticSegmentationTask instance.\n\n Args:\n model: Name of the\n `smp <https://smp.readthedocs.io/en/latest/models.html>`__ model to use.\n backbone: Name of the `timm\n <https://smp.readthedocs.io/en/latest/encoders_timm.html>`__ or `smp\n <https://smp.readthedocs.io/en/latest/encoders.html>`__ backbone to use.\n weights: Initial model weights. Either a weight enum, the string\n representation of a weight enum, True for ImageNet weights, False or\n None for random weights, or the path to a saved model state dict. FCN\n model does not support pretrained weights. Pretrained ViT weight enums\n are not supported yet.\n in_channels: Number of input channels to model.\n num_classes: Number of prediction classes.\n num_filters: Number of filters. Only applicable when model='fcn'.\n loss: Name of the loss function, currently supports\n 'ce', 'jaccard' or 'focal' loss.\n class_weights: Optional rescaling weight given to each\n class and used with 'ce' loss.\n ignore_index: Optional integer class index to ignore in the loss and\n metrics.\n lr: Learning rate for optimizer.\n patience: Patience for learning rate scheduler.\n freeze_backbone: Freeze the backbone network to fine-tune the\n decoder and segmentation head.\n freeze_decoder: Freeze the decoder network to linear probe\n the segmentation head.\n\n Warns:\n UserWarning: When loss='jaccard' and ignore_index is specified.\n\n .. versionchanged:: 0.3\n *ignore_zeros* was renamed to *ignore_index*.\n\n .. versionchanged:: 0.4\n *segmentation_model*, *encoder_name*, and *encoder_weights*\n were renamed to *model*, *backbone*, and *weights*.\n\n .. versionadded: 0.5\n The *class_weights*, *freeze_backbone*, and *freeze_decoder* parameters.\n\n .. 
versionchanged:: 0.5\n The *weights* parameter now supports WeightEnums and checkpoint paths.\n *learning_rate* and *learning_rate_schedule_patience* were renamed to\n *lr* and *patience*.\n \"\"\"\n if ignore_index is not None and loss == \"jaccard\":\n warnings.warn(\n \"ignore_index has no effect on training when loss='jaccard'\",\n UserWarning,\n )\n\n self.weights = weights\n super().__init__(ignore=\"weights\")\n\n def configure_losses(self) -> None:\n \"\"\"Initialize the loss criterion.\n\n Raises:\n ValueError: If *loss* is invalid.\n \"\"\"\n loss: str = self.hparams[\"loss\"]\n ignore_index = self.hparams[\"ignore_index\"]\n if loss == \"ce\":\n ignore_value = -1000 if ignore_index is None else ignore_index\n self.criterion = nn.CrossEntropyLoss(\n ignore_index=ignore_value, weight=self.hparams[\"class_weights\"]\n )\n elif loss == \"jaccard\":\n self.criterion = smp.losses.JaccardLoss(\n mode=\"multiclass\", classes=self.hparams[\"num_classes\"]\n )\n elif loss == \"focal\":\n self.criterion = smp.losses.FocalLoss(\n \"multiclass\", ignore_index=ignore_index, normalized=True\n )\n else:\n raise ValueError(\n f\"Loss type '{loss}' is not valid. \"\n \"Currently, supports 'ce', 'jaccard' or 'focal' loss.\"\n )\n\n def configure_metrics(self) -> None:\n \"\"\"Initialize the performance metrics.\"\"\"\n num_classes: int = self.hparams[\"num_classes\"]\n ignore_index: Optional[int] = self.hparams[\"ignore_index\"]\n metrics = MetricCollection(\n [\n MulticlassAccuracy(\n num_classes=num_classes,\n ignore_index=ignore_index,\n multidim_average=\"global\",\n average=\"micro\",\n ),\n MulticlassJaccardIndex(\n num_classes=num_classes, ignore_index=ignore_index, average=\"micro\"\n ),\n ]\n )\n self.train_metrics = metrics.clone(prefix=\"train_\")\n self.val_metrics = metrics.clone(prefix=\"val_\")\n self.test_metrics = metrics.clone(prefix=\"test_\")\n\n def configure_models(self) -> None:\n \"\"\"Initialize the model.\n\n Raises:\n ValueError: If *model* is invalid.\n \"\"\"\n model: str = self.hparams[\"model\"]\n backbone: str = self.hparams[\"backbone\"]\n weights = self.weights\n in_channels: int = self.hparams[\"in_channels\"]\n num_classes: int = self.hparams[\"num_classes\"]\n num_filters: int = self.hparams[\"num_filters\"]\n\n if model == \"unet\":\n self.model = smp.Unet(\n encoder_name=backbone,\n encoder_weights=\"imagenet\" if weights is True else None,\n in_channels=in_channels,\n classes=num_classes,\n )\n elif model == \"deeplabv3+\":\n self.model = smp.DeepLabV3Plus(\n encoder_name=backbone,\n encoder_weights=\"imagenet\" if weights is True else None,\n in_channels=in_channels,\n classes=num_classes,\n )\n elif model == \"fcn\":\n self.model = FCN(\n in_channels=in_channels, classes=num_classes, num_filters=num_filters\n )\n else:\n raise ValueError(\n f\"Model type '{model}' is not valid. 
\"\n \"Currently, only supports 'unet', 'deeplabv3+' and 'fcn'.\"\n )\n\n if model != \"fcn\":\n if weights and weights is not True:\n if isinstance(weights, WeightsEnum):\n state_dict = weights.get_state_dict(progress=True)\n elif os.path.exists(weights):\n _, state_dict = utils.extract_backbone(weights)\n else:\n state_dict = get_weight(weights).get_state_dict(progress=True)\n self.model.encoder.load_state_dict(state_dict)\n\n # Freeze backbone\n if self.hparams[\"freeze_backbone\"] and model in [\"unet\", \"deeplabv3+\"]:\n for param in self.model.encoder.parameters():\n param.requires_grad = False\n\n # Freeze decoder\n if self.hparams[\"freeze_decoder\"] and model in [\"unet\", \"deeplabv3+\"]:\n for param in self.model.decoder.parameters():\n param.requires_grad = False\n\n def training_step(\n self, batch: Any, batch_idx: int, dataloader_idx: int = 0\n ) -> Tensor:\n \"\"\"Compute the training loss and additional metrics.\n\n Args:\n batch: The output of your DataLoader.\n batch_idx: Integer displaying index of this batch.\n dataloader_idx: Index of the current dataloader.\n\n Returns:\n The loss tensor.\n \"\"\"\n x = batch[\"image\"]\n y = batch[\"mask\"]\n y_hat = self(x)\n loss: Tensor = self.criterion(y_hat, y)\n self.log(\"train_loss\", loss)\n self.train_metrics(y_hat, y)\n self.log_dict(self.train_metrics)\n return loss\n\n def validation_step(\n self, batch: Any, batch_idx: int, dataloader_idx: int = 0\n ) -> None:\n \"\"\"Compute the validation loss and additional metrics.\n\n Args:\n batch: The output of your DataLoader.\n batch_idx: Integer displaying index of this batch.\n dataloader_idx: Index of the current dataloader.\n \"\"\"\n x = batch[\"image\"]\n y = batch[\"mask\"]\n y_hat = self(x)\n loss = self.criterion(y_hat, y)\n self.log(\"val_loss\", loss)\n self.val_metrics(y_hat, y)\n self.log_dict(self.val_metrics)\n\n if (\n batch_idx < 10\n and hasattr(self.trainer, \"datamodule\")\n and hasattr(self.trainer.datamodule, \"plot\")\n and self.logger\n and hasattr(self.logger, \"experiment\")\n and hasattr(self.logger.experiment, \"add_figure\")\n ):\n datamodule = self.trainer.datamodule\n batch[\"prediction\"] = y_hat.argmax(dim=1)\n for key in [\"image\", \"mask\", \"prediction\"]:\n batch[key] = batch[key].cpu()\n sample = unbind_samples(batch)[0]\n\n fig: Optional[Figure] = None\n try:\n fig = datamodule.plot(sample)\n except RGBBandsMissingError:\n pass\n\n if fig:\n summary_writer = self.logger.experiment\n summary_writer.add_figure(\n f\"image/{batch_idx}\", fig, global_step=self.global_step\n )\n plt.close()\n\n def test_step(self, batch: Any, batch_idx: int, dataloader_idx: int = 0) -> None:\n \"\"\"Compute the test loss and additional metrics.\n\n Args:\n batch: The output of your DataLoader.\n batch_idx: Integer displaying index of this batch.\n dataloader_idx: Index of the current dataloader.\n \"\"\"\n x = batch[\"image\"]\n y = batch[\"mask\"]\n y_hat = self(x)\n loss = self.criterion(y_hat, y)\n self.log(\"test_loss\", loss)\n self.test_metrics(y_hat, y)\n self.log_dict(self.test_metrics)\n\n def predict_step(\n self, batch: Any, batch_idx: int, dataloader_idx: int = 0\n ) -> Tensor:\n \"\"\"Compute the predicted class probabilities.\n\n Args:\n batch: The output of your DataLoader.\n batch_idx: Integer displaying index of this batch.\n dataloader_idx: Index of the current dataloader.\n\n Returns:\n Output predicted probabilities.\n \"\"\"\n x = batch[\"image\"]\n y_hat: Tensor = self(x).softmax(dim=1)\n return y_hat\n", "path": 
"torchgeo/trainers/segmentation.py"}]} | 4,052 | 503 |
gh_patches_debug_43343 | rasdani/github-patches | git_diff | nonebot__nonebot2-947 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: MessageTemplate.format incorrectly joins message segments as text
**Problem description:**
`MessageTemplate.format` incorrectly joins non-text message segments into text-type segments
**How to reproduce?**
```python
>>> from nonebot.adapters.onebot.v11 import Message, MessageSegment
>>> Message.template("{}{}").format(MessageSegment.image("file:///"), "hello")
[MessageSegment(type='text', data={'text': '[CQ:image,file=file:///,cache=true,proxy=true]'}), MessageSegment(type='text', data={'text': 'hello'})]
```
**Expected result**
```python
>>> from nonebot.adapters.onebot.v11 import Message, MessageSegment
>>> Message.template("{}{}").format(MessageSegment.image("file:///"), "hello")
[MessageSegment(type='image', data={'file': 'file:///', 'type': None, 'cache': 'true', 'proxy': 'true', 'timeout': None}), MessageSegment(type='text', data={'text': 'hello'})]
```
**Environment:**
- OS: Windows 10
- Python Version: 3.9.6
- Nonebot Version: 2.0.0-beta2
**Screenshots or logs**

**Remarks**
I wrote a fix myself (stability not guaranteed)
In [`template.py`](https://github.com/nonebot/nonebot2/blob/master/nonebot/internal/adapter/template.py), replace the original
```python
formatted_text = self.format_field(obj, str(format_control))
results.append(formatted_text)
```
with
```python
from .message import MessageSegment
if isinstance(obj, MessageSegment):
results.append(obj)
else:
formatted_text = self.format_field(obj, str(format_control))
results.append(formatted_text)
```
After the fix, the result looks like this

</issue>
<code>
[start of nonebot/internal/adapter/template.py]
1 import functools
2 from string import Formatter
3 from typing import (
4 TYPE_CHECKING,
5 Any,
6 Set,
7 Dict,
8 List,
9 Type,
10 Tuple,
11 Union,
12 Generic,
13 Mapping,
14 TypeVar,
15 Callable,
16 Optional,
17 Sequence,
18 cast,
19 overload,
20 )
21
22 if TYPE_CHECKING:
23 from .message import Message, MessageSegment
24
25 TM = TypeVar("TM", bound="Message")
26 TF = TypeVar("TF", str, "Message")
27
28 FormatSpecFunc = Callable[[Any], str]
29 FormatSpecFunc_T = TypeVar("FormatSpecFunc_T", bound=FormatSpecFunc)
30
31
32 class MessageTemplate(Formatter, Generic[TF]):
33 """消息模板格式化实现类。
34
35 参数:
36 template: 模板
37 factory: 消息类型工厂,默认为 `str`
38 """
39
40 @overload
41 def __init__(
42 self: "MessageTemplate[str]", template: str, factory: Type[str] = str
43 ) -> None:
44 ...
45
46 @overload
47 def __init__(
48 self: "MessageTemplate[TM]", template: Union[str, TM], factory: Type[TM]
49 ) -> None:
50 ...
51
52 def __init__(self, template, factory=str) -> None:
53 self.template: TF = template
54 self.factory: Type[TF] = factory
55 self.format_specs: Dict[str, FormatSpecFunc] = {}
56
57 def add_format_spec(
58 self, spec: FormatSpecFunc_T, name: Optional[str] = None
59 ) -> FormatSpecFunc_T:
60 name = name or spec.__name__
61 if name in self.format_specs:
62 raise ValueError(f"Format spec {name} already exists!")
63 self.format_specs[name] = spec
64 return spec
65
66 def format(self, *args, **kwargs):
67 """根据传入参数和模板生成消息对象"""
68 return self._format(args, kwargs)
69
70 def format_map(self, mapping: Mapping[str, Any]) -> TF:
71 """根据传入字典和模板生成消息对象, 在传入字段名不是有效标识符时有用"""
72 return self._format([], mapping)
73
74 def _format(self, args: Sequence[Any], kwargs: Mapping[str, Any]) -> TF:
75 msg = self.factory()
76 if isinstance(self.template, str):
77 msg += self.vformat(self.template, args, kwargs)
78 elif isinstance(self.template, self.factory):
79 template = cast("Message[MessageSegment]", self.template)
80 for seg in template:
81 msg += self.vformat(str(seg), args, kwargs) if seg.is_text() else seg
82 else:
83 raise TypeError("template must be a string or instance of Message!")
84
85 return msg # type:ignore
86
87 def vformat(
88 self, format_string: str, args: Sequence[Any], kwargs: Mapping[str, Any]
89 ) -> TF:
90 used_args = set()
91 result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
92 self.check_unused_args(list(used_args), args, kwargs)
93 return result
94
95 def _vformat(
96 self,
97 format_string: str,
98 args: Sequence[Any],
99 kwargs: Mapping[str, Any],
100 used_args: Set[Union[int, str]],
101 recursion_depth: int,
102 auto_arg_index: int = 0,
103 ) -> Tuple[TF, int]:
104 if recursion_depth < 0:
105 raise ValueError("Max string recursion exceeded")
106
107 results: List[Any] = [self.factory()]
108
109 for (literal_text, field_name, format_spec, conversion) in self.parse(
110 format_string
111 ):
112
113 # output the literal text
114 if literal_text:
115 results.append(literal_text)
116
117 # if there's a field, output it
118 if field_name is not None:
119 # this is some markup, find the object and do
120 # the formatting
121
122 # handle arg indexing when empty field_names are given.
123 if field_name == "":
124 if auto_arg_index is False:
125 raise ValueError(
126 "cannot switch from manual field specification to "
127 "automatic field numbering"
128 )
129 field_name = str(auto_arg_index)
130 auto_arg_index += 1
131 elif field_name.isdigit():
132 if auto_arg_index:
133 raise ValueError(
134 "cannot switch from manual field specification to "
135 "automatic field numbering"
136 )
137 # disable auto arg incrementing, if it gets
138 # used later on, then an exception will be raised
139 auto_arg_index = False
140
141 # given the field_name, find the object it references
142 # and the argument it came from
143 obj, arg_used = self.get_field(field_name, args, kwargs)
144 used_args.add(arg_used)
145
146 assert format_spec is not None
147
148 # do any conversion on the resulting object
149 obj = self.convert_field(obj, conversion) if conversion else obj
150
151 # expand the format spec, if needed
152 format_control, auto_arg_index = self._vformat(
153 format_spec,
154 args,
155 kwargs,
156 used_args,
157 recursion_depth - 1,
158 auto_arg_index,
159 )
160
161 # format the object and append to the result
162 formatted_text = self.format_field(obj, str(format_control))
163 results.append(formatted_text)
164
165 return functools.reduce(self._add, results), auto_arg_index
166
167 def format_field(self, value: Any, format_spec: str) -> Any:
168 formatter: Optional[FormatSpecFunc] = self.format_specs.get(format_spec)
169 if formatter is None and not issubclass(self.factory, str):
170 segment_class: Type["MessageSegment"] = self.factory.get_segment_class()
171 method = getattr(segment_class, format_spec, None)
172 if callable(method) and not cast(str, method.__name__).startswith("_"):
173 formatter = getattr(segment_class, format_spec)
174 return (
175 super().format_field(value, format_spec)
176 if formatter is None
177 else formatter(value)
178 )
179
180 def _add(self, a: Any, b: Any) -> Any:
181 try:
182 return a + b
183 except TypeError:
184 return a + str(b)
185
[end of nonebot/internal/adapter/template.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nonebot/internal/adapter/template.py b/nonebot/internal/adapter/template.py
--- a/nonebot/internal/adapter/template.py
+++ b/nonebot/internal/adapter/template.py
@@ -49,7 +49,9 @@
) -> None:
...
- def __init__(self, template, factory=str) -> None:
+ def __init__( # type:ignore
+ self, template, factory=str
+ ) -> None: # TODO: fix type hint here
self.template: TF = template
self.factory: Type[TF] = factory
self.format_specs: Dict[str, FormatSpecFunc] = {}
@@ -72,25 +74,37 @@
return self._format([], mapping)
def _format(self, args: Sequence[Any], kwargs: Mapping[str, Any]) -> TF:
- msg = self.factory()
+ full_message = self.factory()
+ used_args, arg_index = set(), 0
+
if isinstance(self.template, str):
- msg += self.vformat(self.template, args, kwargs)
+ msg, arg_index = self._vformat(
+ self.template, args, kwargs, used_args, arg_index
+ )
+ full_message += msg
elif isinstance(self.template, self.factory):
template = cast("Message[MessageSegment]", self.template)
for seg in template:
- msg += self.vformat(str(seg), args, kwargs) if seg.is_text() else seg
+ if not seg.is_text():
+ full_message += seg
+ else:
+ msg, arg_index = self._vformat(
+ str(seg), args, kwargs, used_args, arg_index
+ )
+ full_message += msg
else:
raise TypeError("template must be a string or instance of Message!")
- return msg # type:ignore
+ self.check_unused_args(list(used_args), args, kwargs)
+ return cast(TF, full_message)
def vformat(
- self, format_string: str, args: Sequence[Any], kwargs: Mapping[str, Any]
+ self,
+ format_string: str,
+ args: Sequence[Any],
+ kwargs: Mapping[str, Any],
) -> TF:
- used_args = set()
- result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
- self.check_unused_args(list(used_args), args, kwargs)
- return result
+ raise NotImplementedError("`vformat` has merged into `_format`")
def _vformat(
self,
@@ -98,12 +112,8 @@
args: Sequence[Any],
kwargs: Mapping[str, Any],
used_args: Set[Union[int, str]],
- recursion_depth: int,
auto_arg_index: int = 0,
) -> Tuple[TF, int]:
- if recursion_depth < 0:
- raise ValueError("Max string recursion exceeded")
-
results: List[Any] = [self.factory()]
for (literal_text, field_name, format_spec, conversion) in self.parse(
@@ -143,23 +153,13 @@
obj, arg_used = self.get_field(field_name, args, kwargs)
used_args.add(arg_used)
- assert format_spec is not None
-
# do any conversion on the resulting object
obj = self.convert_field(obj, conversion) if conversion else obj
- # expand the format spec, if needed
- format_control, auto_arg_index = self._vformat(
- format_spec,
- args,
- kwargs,
- used_args,
- recursion_depth - 1,
- auto_arg_index,
- )
-
# format the object and append to the result
- formatted_text = self.format_field(obj, str(format_control))
+ formatted_text = (
+ self.format_field(obj, format_spec) if format_spec else obj
+ )
results.append(formatted_text)
return functools.reduce(self._add, results), auto_arg_index
| {"golden_diff": "diff --git a/nonebot/internal/adapter/template.py b/nonebot/internal/adapter/template.py\n--- a/nonebot/internal/adapter/template.py\n+++ b/nonebot/internal/adapter/template.py\n@@ -49,7 +49,9 @@\n ) -> None:\n ...\n \n- def __init__(self, template, factory=str) -> None:\n+ def __init__( # type:ignore\n+ self, template, factory=str\n+ ) -> None: # TODO: fix type hint here\n self.template: TF = template\n self.factory: Type[TF] = factory\n self.format_specs: Dict[str, FormatSpecFunc] = {}\n@@ -72,25 +74,37 @@\n return self._format([], mapping)\n \n def _format(self, args: Sequence[Any], kwargs: Mapping[str, Any]) -> TF:\n- msg = self.factory()\n+ full_message = self.factory()\n+ used_args, arg_index = set(), 0\n+\n if isinstance(self.template, str):\n- msg += self.vformat(self.template, args, kwargs)\n+ msg, arg_index = self._vformat(\n+ self.template, args, kwargs, used_args, arg_index\n+ )\n+ full_message += msg\n elif isinstance(self.template, self.factory):\n template = cast(\"Message[MessageSegment]\", self.template)\n for seg in template:\n- msg += self.vformat(str(seg), args, kwargs) if seg.is_text() else seg\n+ if not seg.is_text():\n+ full_message += seg\n+ else:\n+ msg, arg_index = self._vformat(\n+ str(seg), args, kwargs, used_args, arg_index\n+ )\n+ full_message += msg\n else:\n raise TypeError(\"template must be a string or instance of Message!\")\n \n- return msg # type:ignore\n+ self.check_unused_args(list(used_args), args, kwargs)\n+ return cast(TF, full_message)\n \n def vformat(\n- self, format_string: str, args: Sequence[Any], kwargs: Mapping[str, Any]\n+ self,\n+ format_string: str,\n+ args: Sequence[Any],\n+ kwargs: Mapping[str, Any],\n ) -> TF:\n- used_args = set()\n- result, _ = self._vformat(format_string, args, kwargs, used_args, 2)\n- self.check_unused_args(list(used_args), args, kwargs)\n- return result\n+ raise NotImplementedError(\"`vformat` has merged into `_format`\")\n \n def _vformat(\n self,\n@@ -98,12 +112,8 @@\n args: Sequence[Any],\n kwargs: Mapping[str, Any],\n used_args: Set[Union[int, str]],\n- recursion_depth: int,\n auto_arg_index: int = 0,\n ) -> Tuple[TF, int]:\n- if recursion_depth < 0:\n- raise ValueError(\"Max string recursion exceeded\")\n-\n results: List[Any] = [self.factory()]\n \n for (literal_text, field_name, format_spec, conversion) in self.parse(\n@@ -143,23 +153,13 @@\n obj, arg_used = self.get_field(field_name, args, kwargs)\n used_args.add(arg_used)\n \n- assert format_spec is not None\n-\n # do any conversion on the resulting object\n obj = self.convert_field(obj, conversion) if conversion else obj\n \n- # expand the format spec, if needed\n- format_control, auto_arg_index = self._vformat(\n- format_spec,\n- args,\n- kwargs,\n- used_args,\n- recursion_depth - 1,\n- auto_arg_index,\n- )\n-\n # format the object and append to the result\n- formatted_text = self.format_field(obj, str(format_control))\n+ formatted_text = (\n+ self.format_field(obj, format_spec) if format_spec else obj\n+ )\n results.append(formatted_text)\n \n return functools.reduce(self._add, results), auto_arg_index\n", "issue": "Bug: MessageTemplate.format \u5c06\u6d88\u606f\u6bb5\u9519\u8bef\u62fc\u63a5\u4e3a\u6587\u672c\n**\u63cf\u8ff0\u95ee\u9898\uff1a**\r\n\r\n`MessageTemplate.format` \u5c06\u975e\u6587\u672c\u7c7b\u578b\u6d88\u606f\u6bb5\u9519\u8bef\u62fc\u63a5\u4e3a\u6587\u672c\u7c7b\u578b\r\n\r\n**\u5982\u4f55\u590d\u73b0\uff1f**\r\n\r\n```python\r\n>>> from nonebot.adapters.onebot.v11 import Message, MessageSegment\r\n>>> 
Message.template(\"{}{}\").format(MessageSegment.image(\"file:///\"), \"hello\")\r\n[MessageSegment(type='text', data={'text': '[CQ:image,file=file:///,cache=true,proxy=true]'}), MessageSegment(type='text', data={'text': 'hello'})]\r\n```\r\n\r\n**\u671f\u671b\u7684\u7ed3\u679c**\r\n\r\n```python\r\n>>> from nonebot.adapters.onebot.v11 import Message, MessageSegment\r\n>>> Message.template(\"{}{}\").format(MessageSegment.image(\"file:///\"), \"hello\")\r\n[MessageSegment(type='image', data={'file': 'file:///', 'type': None, 'cache': 'true', 'proxy': 'true', 'timeout': None}), MessageSegment(type='text', data={'text': 'hello'})]\r\n```\r\n\r\n**\u73af\u5883\u4fe1\u606f\uff1a**\r\n\r\n - OS: Windows 10\r\n - Python Version: 3.9.6\r\n - Nonebot Version: 2.0.0-beta2\r\n\r\n**\u622a\u56fe\u6216\u65e5\u5fd7**\r\n\r\n\r\n\r\n**\u5907\u6ce8**\r\n\r\n\u6211\u81ea\u5df1\u5199\u4e86\u4e00\u6bb5\u4fee\u590d\u4ee3\u7801\uff08\u4e0d\u4fdd\u8bc1\u7a33\u5b9a\u6027\uff09\r\n\r\n[`template.py`](https://github.com/nonebot/nonebot2/blob/master/nonebot/internal/adapter/template.py)\u5c06\u539f\u6765\u7684\r\n\r\n```python\r\nformatted_text = self.format_field(obj, str(format_control))\r\nresults.append(formatted_text)\r\n```\r\n\u66ff\u6362\u4e3a\r\n```python\r\nfrom .message import MessageSegment\r\nif isinstance(obj, MessageSegment):\r\n results.append(obj)\r\nelse:\r\n formatted_text = self.format_field(obj, str(format_control))\r\n results.append(formatted_text)\r\n```\r\n\r\n\u4fee\u590d\u540e\u6548\u679c\u5982\u4e0b\r\n\r\n\nBug: MessageTemplate.format \u5c06\u6d88\u606f\u6bb5\u9519\u8bef\u62fc\u63a5\u4e3a\u6587\u672c\n**\u63cf\u8ff0\u95ee\u9898\uff1a**\r\n\r\n`MessageTemplate.format` \u5c06\u975e\u6587\u672c\u7c7b\u578b\u6d88\u606f\u6bb5\u9519\u8bef\u62fc\u63a5\u4e3a\u6587\u672c\u7c7b\u578b\r\n\r\n**\u5982\u4f55\u590d\u73b0\uff1f**\r\n\r\n```python\r\n>>> from nonebot.adapters.onebot.v11 import Message, MessageSegment\r\n>>> Message.template(\"{}{}\").format(MessageSegment.image(\"file:///\"), \"hello\")\r\n[MessageSegment(type='text', data={'text': '[CQ:image,file=file:///,cache=true,proxy=true]'}), MessageSegment(type='text', data={'text': 'hello'})]\r\n```\r\n\r\n**\u671f\u671b\u7684\u7ed3\u679c**\r\n\r\n```python\r\n>>> from nonebot.adapters.onebot.v11 import Message, MessageSegment\r\n>>> Message.template(\"{}{}\").format(MessageSegment.image(\"file:///\"), \"hello\")\r\n[MessageSegment(type='image', data={'file': 'file:///', 'type': None, 'cache': 'true', 'proxy': 'true', 'timeout': None}), MessageSegment(type='text', data={'text': 'hello'})]\r\n```\r\n\r\n**\u73af\u5883\u4fe1\u606f\uff1a**\r\n\r\n - OS: Windows 10\r\n - Python Version: 3.9.6\r\n - Nonebot Version: 2.0.0-beta2\r\n\r\n**\u622a\u56fe\u6216\u65e5\u5fd7**\r\n\r\n\r\n\r\n**\u5907\u6ce8**\r\n\r\n\u6211\u81ea\u5df1\u5199\u4e86\u4e00\u6bb5\u4fee\u590d\u4ee3\u7801\uff08\u4e0d\u4fdd\u8bc1\u7a33\u5b9a\u6027\uff09\r\n\r\n[`template.py`](https://github.com/nonebot/nonebot2/blob/master/nonebot/internal/adapter/template.py)\u5c06\u539f\u6765\u7684\r\n\r\n```python\r\nformatted_text = self.format_field(obj, str(format_control))\r\nresults.append(formatted_text)\r\n```\r\n\u66ff\u6362\u4e3a\r\n```python\r\nfrom .message import MessageSegment\r\nif isinstance(obj, MessageSegment):\r\n results.append(obj)\r\nelse:\r\n formatted_text = self.format_field(obj, str(format_control))\r\n results.append(formatted_text)\r\n```\r\n\r\n\u4fee\u590d\u540e\u6548\u679c\u5982\u4e0b\r\n\r\n\n", "before_files": [{"content": "import functools\nfrom string import 
Formatter\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Set,\n Dict,\n List,\n Type,\n Tuple,\n Union,\n Generic,\n Mapping,\n TypeVar,\n Callable,\n Optional,\n Sequence,\n cast,\n overload,\n)\n\nif TYPE_CHECKING:\n from .message import Message, MessageSegment\n\nTM = TypeVar(\"TM\", bound=\"Message\")\nTF = TypeVar(\"TF\", str, \"Message\")\n\nFormatSpecFunc = Callable[[Any], str]\nFormatSpecFunc_T = TypeVar(\"FormatSpecFunc_T\", bound=FormatSpecFunc)\n\n\nclass MessageTemplate(Formatter, Generic[TF]):\n \"\"\"\u6d88\u606f\u6a21\u677f\u683c\u5f0f\u5316\u5b9e\u73b0\u7c7b\u3002\n\n \u53c2\u6570:\n template: \u6a21\u677f\n factory: \u6d88\u606f\u7c7b\u578b\u5de5\u5382\uff0c\u9ed8\u8ba4\u4e3a `str`\n \"\"\"\n\n @overload\n def __init__(\n self: \"MessageTemplate[str]\", template: str, factory: Type[str] = str\n ) -> None:\n ...\n\n @overload\n def __init__(\n self: \"MessageTemplate[TM]\", template: Union[str, TM], factory: Type[TM]\n ) -> None:\n ...\n\n def __init__(self, template, factory=str) -> None:\n self.template: TF = template\n self.factory: Type[TF] = factory\n self.format_specs: Dict[str, FormatSpecFunc] = {}\n\n def add_format_spec(\n self, spec: FormatSpecFunc_T, name: Optional[str] = None\n ) -> FormatSpecFunc_T:\n name = name or spec.__name__\n if name in self.format_specs:\n raise ValueError(f\"Format spec {name} already exists!\")\n self.format_specs[name] = spec\n return spec\n\n def format(self, *args, **kwargs):\n \"\"\"\u6839\u636e\u4f20\u5165\u53c2\u6570\u548c\u6a21\u677f\u751f\u6210\u6d88\u606f\u5bf9\u8c61\"\"\"\n return self._format(args, kwargs)\n\n def format_map(self, mapping: Mapping[str, Any]) -> TF:\n \"\"\"\u6839\u636e\u4f20\u5165\u5b57\u5178\u548c\u6a21\u677f\u751f\u6210\u6d88\u606f\u5bf9\u8c61, \u5728\u4f20\u5165\u5b57\u6bb5\u540d\u4e0d\u662f\u6709\u6548\u6807\u8bc6\u7b26\u65f6\u6709\u7528\"\"\"\n return self._format([], mapping)\n\n def _format(self, args: Sequence[Any], kwargs: Mapping[str, Any]) -> TF:\n msg = self.factory()\n if isinstance(self.template, str):\n msg += self.vformat(self.template, args, kwargs)\n elif isinstance(self.template, self.factory):\n template = cast(\"Message[MessageSegment]\", self.template)\n for seg in template:\n msg += self.vformat(str(seg), args, kwargs) if seg.is_text() else seg\n else:\n raise TypeError(\"template must be a string or instance of Message!\")\n\n return msg # type:ignore\n\n def vformat(\n self, format_string: str, args: Sequence[Any], kwargs: Mapping[str, Any]\n ) -> TF:\n used_args = set()\n result, _ = self._vformat(format_string, args, kwargs, used_args, 2)\n self.check_unused_args(list(used_args), args, kwargs)\n return result\n\n def _vformat(\n self,\n format_string: str,\n args: Sequence[Any],\n kwargs: Mapping[str, Any],\n used_args: Set[Union[int, str]],\n recursion_depth: int,\n auto_arg_index: int = 0,\n ) -> Tuple[TF, int]:\n if recursion_depth < 0:\n raise ValueError(\"Max string recursion exceeded\")\n\n results: List[Any] = [self.factory()]\n\n for (literal_text, field_name, format_spec, conversion) in self.parse(\n format_string\n ):\n\n # output the literal text\n if literal_text:\n results.append(literal_text)\n\n # if there's a field, output it\n if field_name is not None:\n # this is some markup, find the object and do\n # the formatting\n\n # handle arg indexing when empty field_names are given.\n if field_name == \"\":\n if auto_arg_index is False:\n raise ValueError(\n \"cannot switch from manual field specification to \"\n \"automatic field numbering\"\n )\n field_name = 
str(auto_arg_index)\n auto_arg_index += 1\n elif field_name.isdigit():\n if auto_arg_index:\n raise ValueError(\n \"cannot switch from manual field specification to \"\n \"automatic field numbering\"\n )\n # disable auto arg incrementing, if it gets\n # used later on, then an exception will be raised\n auto_arg_index = False\n\n # given the field_name, find the object it references\n # and the argument it came from\n obj, arg_used = self.get_field(field_name, args, kwargs)\n used_args.add(arg_used)\n\n assert format_spec is not None\n\n # do any conversion on the resulting object\n obj = self.convert_field(obj, conversion) if conversion else obj\n\n # expand the format spec, if needed\n format_control, auto_arg_index = self._vformat(\n format_spec,\n args,\n kwargs,\n used_args,\n recursion_depth - 1,\n auto_arg_index,\n )\n\n # format the object and append to the result\n formatted_text = self.format_field(obj, str(format_control))\n results.append(formatted_text)\n\n return functools.reduce(self._add, results), auto_arg_index\n\n def format_field(self, value: Any, format_spec: str) -> Any:\n formatter: Optional[FormatSpecFunc] = self.format_specs.get(format_spec)\n if formatter is None and not issubclass(self.factory, str):\n segment_class: Type[\"MessageSegment\"] = self.factory.get_segment_class()\n method = getattr(segment_class, format_spec, None)\n if callable(method) and not cast(str, method.__name__).startswith(\"_\"):\n formatter = getattr(segment_class, format_spec)\n return (\n super().format_field(value, format_spec)\n if formatter is None\n else formatter(value)\n )\n\n def _add(self, a: Any, b: Any) -> Any:\n try:\n return a + b\n except TypeError:\n return a + str(b)\n", "path": "nonebot/internal/adapter/template.py"}]} | 3,336 | 897 |
gh_patches_debug_31980 | rasdani/github-patches | git_diff | enthought__chaco-506 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Zoom broken in some demos
**Problem Description**
This bug affects the following demos:
* `shell/contour.py`
* `shell/contourf.py`
* `basic/contour_plot.py`
Mouse-scroll doesn't zoom the plot until the user pans the plot. After panning, the plot is really zoomed out.
**Reproduction Steps:**
Mouse-scroll, then left-drag.
```python
python shell/contour.py
```
**Expected behavior:**
Plot should zoom immediately on mouse-scroll.
**OS, Python version:**
MacOSX 10.14
Python3.6
</issue>
<code>
[start of chaco/base_contour_plot.py]
1 import six
2
3 from numpy import array, isscalar, issubsctype, linspace, number
4
5 # Enthought library imports
6 from enable.api import ColorTrait
7 from traits.api import Bool, Instance, Int, List, Property, \
8 Range, Str, Trait, Tuple
9
10 # Local relative imports
11 from .base_2d_plot import Base2DPlot
12 from .color_mapper import ColorMapper
13
14
15 class BaseContourPlot(Base2DPlot):
16 """ The base class for contour plots. Mostly manages configuration and
17 change events with colormap and contour parameters.
18 """
19
20 #------------------------------------------------------------------------
21 # Data-related traits
22 #------------------------------------------------------------------------
23
24 # Defines the levels to contour.
25 # ``levels`` can be either: a list of floating point numbers that define
26 # the value of the function at the contours; a positive integer, in which
27 # case the range of the value is divided in the given number of equally
28 # spaced levels; or "auto" (default), which divides the range in 10 levels
29 levels = Trait("auto", Int, List)
30
31 # The color(s) of the lines.
32 # ``colors`` can be given as a color name, in which case all contours have
33 # the same color, as a list of colors, or as a colormap. If the list of
34 # colors is shorter than the number of levels, the values are repeated
35 # from the beginning of the list. Default is black.
36 # Colors are associated with levels of increasing value.
37 colors = Trait(None, Str, Instance(ColorMapper), List, Tuple)
38
39 # If present, the color mapper for the colorbar to look at.
40 color_mapper = Property(Instance(ColorMapper))
41
42 # A global alpha value to apply to all the contours
43 alpha = Trait(1.0, Range(0.0, 1.0))
44
45 #------------------------------------------------------------------------
46 # Private traits
47 #------------------------------------------------------------------------
48
49 # Is the cached level data valid?
50 _level_cache_valid = Bool(False)
51
52 # Is the cached color data valid?
53 _colors_cache_valid = Bool(False)
54
55 # List of levels and their associated line properties.
56 _levels = List
57
58 # List of colors
59 _colors = List
60
61 # Mapped trait used to convert user-suppied color values to AGG-acceptable
62 # ones. (Mapped traits in lists are not supported, must be converted one at
63 # a time.)
64 _color_map_trait = ColorTrait
65
66
67 def __init__(self, *args, **kwargs):
68 super(BaseContourPlot, self).__init__(*args, **kwargs)
69 if self.color_mapper:
70 self.color_mapper.on_trait_change(self._update_color_mapper, "updated")
71 return
72
73 def _update_levels(self):
74 """ Updates the levels cache. """
75 low, high = self.value.get_bounds()
76 if self.levels == "auto":
77 self._levels = list(linspace(low, high, 10))
78 elif isinstance(self.levels, int):
79 self._levels = list(linspace(low, high, self.levels))
80 else:
81 self._levels = self.levels
82 self._levels.sort()
83 self._level_cache_valid = True
84 self._colors_cache_valid = False
85
86 def _update_colors(self, numcolors=None):
87 """ Update the colors cache using our color mapper and based
88 on our number of levels. The **mode** parameter accounts for fenceposting:
89 - If **mode** is "poly", then the number of colors to generate is 1
90 less than the number of levels
91 - If **mode** is "line", then the number of colors to generate is
92 equal to the number of levels
93 """
94 if numcolors is None:
95 numcolors = len(self._levels)
96
97 colors = self.colors
98 # If we are given no colors, set a default for all levels
99 if colors is None:
100 self._color_map_trait = "black"
101 self._colors = [self._color_map_trait_] * numcolors
102
103 # If we are given a single color, apply it to all levels
104 elif isinstance(colors, six.string_types):
105 self._color_map_trait = colors
106 self._colors = [self._color_map_trait_] * numcolors
107
108 # If we are given a colormap, use it to map all the levels to colors
109 elif isinstance(colors, ColorMapper):
110 self._colors = []
111 mapped_colors = self.color_mapper.map_screen(array(self._levels))
112 for i in range(numcolors):
113 self._color_map_trait = tuple(mapped_colors[i])
114 self._colors.append(self._color_map_trait_)
115
116 # A list or tuple
117 # This could be a length 3 or 4 sequence of scalars, which indicates
118 # a color; otherwise, this is interpreted as a list of items to
119 # be converted via self._color_map_trait.
120 else:
121 if len(colors) in (3,4) and \
122 (isscalar(colors[0]) and issubsctype(type(colors[0]), number)):
123 self._color_map_trait = colors
124 self._colors = [self._color_map_trait_] * numcolors
125 else:
126 # if the list of colors is shorter than the list of levels, simply
127 # repeat colors from the beginning of the list as needed
128 self._colors = []
129 for i in range(len(self._levels)):
130 self._color_map_trait = colors[i%len(colors)]
131 self._colors.append(self._color_map_trait_)
132
133 self._colors_cache_valid = True
134 return
135
136
137 #------------------------------------------------------------------------
138 # Event handlers
139 #------------------------------------------------------------------------
140
141 def _index_data_changed_fired(self):
142 # If the index data has changed, the reset the levels cache (which
143 # also triggers all the other caches to reset).
144 self._level_cache_valid = False
145 self.invalidate_draw()
146
147 def _value_data_changed_fired(self):
148 # If the index data has changed, the reset the levels cache (which
149 # also triggers all the other caches to reset).
150 self._level_cache_valid = False
151 self.invalidate_draw()
152
153 def _index_mapper_changed_fired(self):
154 # If the index mapper has changed, then we need to redraw
155 self.invalidate_draw()
156
157 def _update_color_mapper(self):
158 # If the color mapper has changed, then we need to recompute the
159 # levels and cached data associated with that.
160 self._level_cache_valid = False
161 self.invalidate_draw()
162
163 def _levels_changed(self):
164 self._update_levels()
165 self.invalidate_draw()
166 self.request_redraw()
167
168 def _colors_changed(self):
169 if self._level_cache_valid:
170 self._update_colors()
171 self.invalidate_draw()
172
173 #------------------------------------------------------------------------
174 # Trait properties
175 #------------------------------------------------------------------------
176
177 def _get_color_mapper(self):
178 if isinstance(self.colors, ColorMapper):
179 return self.colors
180 else:
181 return None
182
183 def _set_color_mapper(self, color_mapper):
184 # Remove the dynamic event handler from the old color mapper
185 if self.colors is not None and isinstance(self.colors, ColorMapper):
186 self.colors.on_trait_change(self._update_color_mapper, "updated", remove=True)
187
188 # Check to see if we should copy over the range as well
189 if color_mapper is not None:
190 if color_mapper.range is None and self.colors.range is not None:
191 color_mapper.range = self.colors.range
192
193 # Attach the dynamic event handler to the new color mapper
194 if color_mapper is not None:
195 color_mapper.on_trait_change(self._update_color_mapper, "updated")
196
197 self.colors = color_mapper
198 self._update_color_mapper()
199
[end of chaco/base_contour_plot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chaco/base_contour_plot.py b/chaco/base_contour_plot.py
--- a/chaco/base_contour_plot.py
+++ b/chaco/base_contour_plot.py
@@ -142,33 +142,32 @@
# If the index data has changed, the reset the levels cache (which
# also triggers all the other caches to reset).
self._level_cache_valid = False
- self.invalidate_draw()
+ self.invalidate_and_redraw()
def _value_data_changed_fired(self):
# If the index data has changed, the reset the levels cache (which
# also triggers all the other caches to reset).
self._level_cache_valid = False
- self.invalidate_draw()
+ self.invalidate_and_redraw()
def _index_mapper_changed_fired(self):
# If the index mapper has changed, then we need to redraw
- self.invalidate_draw()
+ self.invalidate_and_redraw()
def _update_color_mapper(self):
# If the color mapper has changed, then we need to recompute the
# levels and cached data associated with that.
self._level_cache_valid = False
- self.invalidate_draw()
+ self.invalidate_and_redraw()
def _levels_changed(self):
self._update_levels()
- self.invalidate_draw()
- self.request_redraw()
+ self.invalidate_and_redraw()
def _colors_changed(self):
if self._level_cache_valid:
self._update_colors()
- self.invalidate_draw()
+ self.invalidate_and_redraw()
#------------------------------------------------------------------------
# Trait properties
| {"golden_diff": "diff --git a/chaco/base_contour_plot.py b/chaco/base_contour_plot.py\n--- a/chaco/base_contour_plot.py\n+++ b/chaco/base_contour_plot.py\n@@ -142,33 +142,32 @@\n # If the index data has changed, the reset the levels cache (which\n # also triggers all the other caches to reset).\n self._level_cache_valid = False\n- self.invalidate_draw()\n+ self.invalidate_and_redraw()\n \n def _value_data_changed_fired(self):\n # If the index data has changed, the reset the levels cache (which\n # also triggers all the other caches to reset).\n self._level_cache_valid = False\n- self.invalidate_draw()\n+ self.invalidate_and_redraw()\n \n def _index_mapper_changed_fired(self):\n # If the index mapper has changed, then we need to redraw\n- self.invalidate_draw()\n+ self.invalidate_and_redraw()\n \n def _update_color_mapper(self):\n # If the color mapper has changed, then we need to recompute the\n # levels and cached data associated with that.\n self._level_cache_valid = False\n- self.invalidate_draw()\n+ self.invalidate_and_redraw()\n \n def _levels_changed(self):\n self._update_levels()\n- self.invalidate_draw()\n- self.request_redraw()\n+ self.invalidate_and_redraw()\n \n def _colors_changed(self):\n if self._level_cache_valid:\n self._update_colors()\n- self.invalidate_draw()\n+ self.invalidate_and_redraw()\n \n #------------------------------------------------------------------------\n # Trait properties\n", "issue": "Zoom broken in some demos\n**Problem Description**\r\nThis bug affects the following demos:\r\n* `shell/contour.py`\r\n* `shell/contourf.py`\r\n* `basic/contour_plot.py`\r\n\r\nMouse-scroll doesn't zoom the plot until the user pans the plot. After panning, the plot is really zoomed out.\r\n\r\n**Reproduction Steps:**\r\nMouse-scroll, then left-drag.\r\n\r\n```python\r\npython shell/contour.py\r\n```\r\n\r\n**Expected behavior:**\r\nPlot should zoom immediately on mouse-scroll.\r\n\r\n**OS, Python version:**\r\nMacOSX 10.14\r\nPython3.6\n", "before_files": [{"content": "import six\n\nfrom numpy import array, isscalar, issubsctype, linspace, number\n\n# Enthought library imports\nfrom enable.api import ColorTrait\nfrom traits.api import Bool, Instance, Int, List, Property, \\\n Range, Str, Trait, Tuple\n\n# Local relative imports\nfrom .base_2d_plot import Base2DPlot\nfrom .color_mapper import ColorMapper\n\n\nclass BaseContourPlot(Base2DPlot):\n \"\"\" The base class for contour plots. Mostly manages configuration and\n change events with colormap and contour parameters.\n \"\"\"\n\n #------------------------------------------------------------------------\n # Data-related traits\n #------------------------------------------------------------------------\n\n # Defines the levels to contour.\n # ``levels`` can be either: a list of floating point numbers that define\n # the value of the function at the contours; a positive integer, in which\n # case the range of the value is divided in the given number of equally\n # spaced levels; or \"auto\" (default), which divides the range in 10 levels\n levels = Trait(\"auto\", Int, List)\n\n # The color(s) of the lines.\n # ``colors`` can be given as a color name, in which case all contours have\n # the same color, as a list of colors, or as a colormap. If the list of\n # colors is shorter than the number of levels, the values are repeated\n # from the beginning of the list. 
Default is black.\n # Colors are associated with levels of increasing value.\n colors = Trait(None, Str, Instance(ColorMapper), List, Tuple)\n\n # If present, the color mapper for the colorbar to look at.\n color_mapper = Property(Instance(ColorMapper))\n\n # A global alpha value to apply to all the contours\n alpha = Trait(1.0, Range(0.0, 1.0))\n\n #------------------------------------------------------------------------\n # Private traits\n #------------------------------------------------------------------------\n\n # Is the cached level data valid?\n _level_cache_valid = Bool(False)\n\n # Is the cached color data valid?\n _colors_cache_valid = Bool(False)\n\n # List of levels and their associated line properties.\n _levels = List\n\n # List of colors\n _colors = List\n\n # Mapped trait used to convert user-suppied color values to AGG-acceptable\n # ones. (Mapped traits in lists are not supported, must be converted one at\n # a time.)\n _color_map_trait = ColorTrait\n\n\n def __init__(self, *args, **kwargs):\n super(BaseContourPlot, self).__init__(*args, **kwargs)\n if self.color_mapper:\n self.color_mapper.on_trait_change(self._update_color_mapper, \"updated\")\n return\n\n def _update_levels(self):\n \"\"\" Updates the levels cache. \"\"\"\n low, high = self.value.get_bounds()\n if self.levels == \"auto\":\n self._levels = list(linspace(low, high, 10))\n elif isinstance(self.levels, int):\n self._levels = list(linspace(low, high, self.levels))\n else:\n self._levels = self.levels\n self._levels.sort()\n self._level_cache_valid = True\n self._colors_cache_valid = False\n\n def _update_colors(self, numcolors=None):\n \"\"\" Update the colors cache using our color mapper and based\n on our number of levels. The **mode** parameter accounts for fenceposting:\n - If **mode** is \"poly\", then the number of colors to generate is 1\n less than the number of levels\n - If **mode** is \"line\", then the number of colors to generate is\n equal to the number of levels\n \"\"\"\n if numcolors is None:\n numcolors = len(self._levels)\n\n colors = self.colors\n # If we are given no colors, set a default for all levels\n if colors is None:\n self._color_map_trait = \"black\"\n self._colors = [self._color_map_trait_] * numcolors\n\n # If we are given a single color, apply it to all levels\n elif isinstance(colors, six.string_types):\n self._color_map_trait = colors\n self._colors = [self._color_map_trait_] * numcolors\n\n # If we are given a colormap, use it to map all the levels to colors\n elif isinstance(colors, ColorMapper):\n self._colors = []\n mapped_colors = self.color_mapper.map_screen(array(self._levels))\n for i in range(numcolors):\n self._color_map_trait = tuple(mapped_colors[i])\n self._colors.append(self._color_map_trait_)\n\n # A list or tuple\n # This could be a length 3 or 4 sequence of scalars, which indicates\n # a color; otherwise, this is interpreted as a list of items to\n # be converted via self._color_map_trait.\n else:\n if len(colors) in (3,4) and \\\n (isscalar(colors[0]) and issubsctype(type(colors[0]), number)):\n self._color_map_trait = colors\n self._colors = [self._color_map_trait_] * numcolors\n else:\n # if the list of colors is shorter than the list of levels, simply\n # repeat colors from the beginning of the list as needed\n self._colors = []\n for i in range(len(self._levels)):\n self._color_map_trait = colors[i%len(colors)]\n self._colors.append(self._color_map_trait_)\n\n self._colors_cache_valid = True\n return\n\n\n 
#------------------------------------------------------------------------\n # Event handlers\n #------------------------------------------------------------------------\n\n def _index_data_changed_fired(self):\n # If the index data has changed, the reset the levels cache (which\n # also triggers all the other caches to reset).\n self._level_cache_valid = False\n self.invalidate_draw()\n\n def _value_data_changed_fired(self):\n # If the index data has changed, the reset the levels cache (which\n # also triggers all the other caches to reset).\n self._level_cache_valid = False\n self.invalidate_draw()\n\n def _index_mapper_changed_fired(self):\n # If the index mapper has changed, then we need to redraw\n self.invalidate_draw()\n\n def _update_color_mapper(self):\n # If the color mapper has changed, then we need to recompute the\n # levels and cached data associated with that.\n self._level_cache_valid = False\n self.invalidate_draw()\n\n def _levels_changed(self):\n self._update_levels()\n self.invalidate_draw()\n self.request_redraw()\n\n def _colors_changed(self):\n if self._level_cache_valid:\n self._update_colors()\n self.invalidate_draw()\n\n #------------------------------------------------------------------------\n # Trait properties\n #------------------------------------------------------------------------\n\n def _get_color_mapper(self):\n if isinstance(self.colors, ColorMapper):\n return self.colors\n else:\n return None\n\n def _set_color_mapper(self, color_mapper):\n # Remove the dynamic event handler from the old color mapper\n if self.colors is not None and isinstance(self.colors, ColorMapper):\n self.colors.on_trait_change(self._update_color_mapper, \"updated\", remove=True)\n\n # Check to see if we should copy over the range as well\n if color_mapper is not None:\n if color_mapper.range is None and self.colors.range is not None:\n color_mapper.range = self.colors.range\n\n # Attach the dynamic event handler to the new color mapper\n if color_mapper is not None:\n color_mapper.on_trait_change(self._update_color_mapper, \"updated\")\n\n self.colors = color_mapper\n self._update_color_mapper()\n", "path": "chaco/base_contour_plot.py"}]} | 2,825 | 349 |
gh_patches_debug_21142 | rasdani/github-patches | git_diff | beetbox__beets-3671 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
parentwork: In tests, mock all MusicBrainz responses
I didn't notice this when we originally merged the parentwork plugin in #3279, but its tests rely on real communication with the MusicBrainz web service, i.e., they fail if there is no network connectivity. [This Travis job](https://travis-ci.org/beetbox/beets/jobs/558936634) is an example of a spurious failure caused by network interruptions.
We need to isolate these tests by mocking the MB queries so that no network traffic is ever actually sent.
@dosoe, can you please look into this?
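
A minimal sketch of one way to do the isolation (illustrative only — the canned work IDs, titles and test wiring below are assumptions, not code from the plugin or its test suite) is to patch `musicbrainzngs.get_work_by_id` with `unittest.mock` so the plugin receives a prepared response dict instead of hitting the web service:

```
from unittest import mock

# Hypothetical canned responses keyed by work id; the ids and titles are invented for the sketch.
FAKE_WORKS = {
    "child-work-id": {"work": {
        "id": "child-work-id",
        "title": "Child Work",
        "work-relation-list": [
            {"type": "parts", "direction": "backward", "work": {"id": "parent-work-id"}},
        ],
    }},
    "parent-work-id": {"work": {
        "id": "parent-work-id",
        "title": "Parent Work",
        "artist-relation-list": [
            {"type": "composer",
             "artist": {"name": "A. Composer", "sort-name": "Composer, A."}},
        ],
    }},
}


def fake_get_work_by_id(work_id, includes=None):
    # Serve the canned dict instead of querying the MusicBrainz web service.
    return FAKE_WORKS[work_id]


# Inside a test, wrap the code under test so no HTTP request is ever made:
with mock.patch("musicbrainzngs.get_work_by_id", side_effect=fake_get_work_by_id):
    pass  # run the parentwork lookup under test here
```

With a patch like this, the tests exercise the plugin's parsing and tag-writing logic while staying fully offline.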
</issue>
<code>
[start of beetsplug/parentwork.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2017, Dorian Soergel.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """Gets parent work, its disambiguation and id, composer, composer sort name
17 and work composition date
18 """
19
20 from __future__ import division, absolute_import, print_function
21
22 from beets import ui
23 from beets.plugins import BeetsPlugin
24
25 import musicbrainzngs
26
27
28 def direct_parent_id(mb_workid, work_date=None):
29 """Given a Musicbrainz work id, find the id one of the works the work is
30 part of and the first composition date it encounters.
31 """
32 work_info = musicbrainzngs.get_work_by_id(mb_workid,
33 includes=["work-rels",
34 "artist-rels"])
35 if 'artist-relation-list' in work_info['work'] and work_date is None:
36 for artist in work_info['work']['artist-relation-list']:
37 if artist['type'] == 'composer':
38 if 'end' in artist.keys():
39 work_date = artist['end']
40
41 if 'work-relation-list' in work_info['work']:
42 for direct_parent in work_info['work']['work-relation-list']:
43 if direct_parent['type'] == 'parts' \
44 and direct_parent.get('direction') == 'backward':
45 direct_id = direct_parent['work']['id']
46 return direct_id, work_date
47 return None, work_date
48
49
50 def work_parent_id(mb_workid):
51 """Find the parent work id and composition date of a work given its id.
52 """
53 work_date = None
54 while True:
55 new_mb_workid, work_date = direct_parent_id(mb_workid, work_date)
56 if not new_mb_workid:
57 return mb_workid, work_date
58 mb_workid = new_mb_workid
59 return mb_workid, work_date
60
61
62 def find_parentwork_info(mb_workid):
63 """Get the MusicBrainz information dict about a parent work, including
64 the artist relations, and the composition date for a work's parent work.
65 """
66 parent_id, work_date = work_parent_id(mb_workid)
67 work_info = musicbrainzngs.get_work_by_id(parent_id,
68 includes=["artist-rels"])
69 return work_info, work_date
70
71
72 class ParentWorkPlugin(BeetsPlugin):
73 def __init__(self):
74 super(ParentWorkPlugin, self).__init__()
75
76 self.config.add({
77 'auto': False,
78 'force': False,
79 })
80
81 if self.config['auto']:
82 self.import_stages = [self.imported]
83
84 def commands(self):
85
86 def func(lib, opts, args):
87 self.config.set_args(opts)
88 force_parent = self.config['force'].get(bool)
89 write = ui.should_write()
90
91 for item in lib.items(ui.decargs(args)):
92 changed = self.find_work(item, force_parent)
93 if changed:
94 item.store()
95 if write:
96 item.try_write()
97 command = ui.Subcommand(
98 'parentwork',
99 help=u'fetche parent works, composers and dates')
100
101 command.parser.add_option(
102 u'-f', u'--force', dest='force',
103 action='store_true', default=None,
104 help=u're-fetch when parent work is already present')
105
106 command.func = func
107 return [command]
108
109 def imported(self, session, task):
110 """Import hook for fetching parent works automatically.
111 """
112 force_parent = self.config['force'].get(bool)
113
114 for item in task.imported_items():
115 self.find_work(item, force_parent)
116 item.store()
117
118 def get_info(self, item, work_info):
119 """Given the parent work info dict, fetch parent_composer,
120 parent_composer_sort, parentwork, parentwork_disambig, mb_workid and
121 composer_ids.
122 """
123
124 parent_composer = []
125 parent_composer_sort = []
126 parentwork_info = {}
127
128 composer_exists = False
129 if 'artist-relation-list' in work_info['work']:
130 for artist in work_info['work']['artist-relation-list']:
131 if artist['type'] == 'composer':
132 parent_composer.append(artist['artist']['name'])
133 parent_composer_sort.append(artist['artist']['sort-name'])
134 if 'end' in artist.keys():
135 parentwork_info["parentwork_date"] = artist['end']
136
137 parentwork_info['parent_composer'] = u', '.join(parent_composer)
138 parentwork_info['parent_composer_sort'] = u', '.join(
139 parent_composer_sort)
140
141 if not composer_exists:
142 self._log.debug(
143 'no composer for {}; add one at '
144 'https://musicbrainz.org/work/{}',
145 item, work_info['work']['id'],
146 )
147
148 parentwork_info['parentwork'] = work_info['work']['title']
149 parentwork_info['mb_parentworkid'] = work_info['work']['id']
150
151 if 'disambiguation' in work_info['work']:
152 parentwork_info['parentwork_disambig'] = work_info[
153 'work']['disambiguation']
154
155 else:
156 parentwork_info['parentwork_disambig'] = None
157
158 return parentwork_info
159
160 def find_work(self, item, force):
161 """Finds the parent work of a recording and populates the tags
162 accordingly.
163
164 The parent work is found recursively, by finding the direct parent
165 repeatedly until there are no more links in the chain. We return the
166 final, topmost work in the chain.
167
168 Namely, the tags parentwork, parentwork_disambig, mb_parentworkid,
169 parent_composer, parent_composer_sort and work_date are populated.
170 """
171
172 if not item.mb_workid:
173 self._log.info('No work for {}, \
174 add one at https://musicbrainz.org/recording/{}', item, item.mb_trackid)
175 return
176
177 hasparent = hasattr(item, 'parentwork')
178 work_changed = True
179 if hasattr(item, 'parentwork_workid_current'):
180 work_changed = item.parentwork_workid_current != item.mb_workid
181 if force or not hasparent or work_changed:
182 try:
183 work_info, work_date = find_parentwork_info(item.mb_workid)
184 except musicbrainzngs.musicbrainz.WebServiceError as e:
185 self._log.debug("error fetching work: {}", e)
186 return
187 parent_info = self.get_info(item, work_info)
188 parent_info['parentwork_workid_current'] = item.mb_workid
189 if 'parent_composer' in parent_info:
190 self._log.debug("Work fetched: {} - {}",
191 parent_info['parentwork'],
192 parent_info['parent_composer'])
193 else:
194 self._log.debug("Work fetched: {} - no parent composer",
195 parent_info['parentwork'])
196
197 elif hasparent:
198 self._log.debug("{}: Work present, skipping", item)
199 return
200
201 # apply all non-null values to the item
202 for key, value in parent_info.items():
203 if value:
204 item[key] = value
205
206 if work_date:
207 item['work_date'] = work_date
208 return ui.show_model_changes(
209 item, fields=['parentwork', 'parentwork_disambig',
210 'mb_parentworkid', 'parent_composer',
211 'parent_composer_sort', 'work_date',
212 'parentwork_workid_current', 'parentwork_date'])
213
[end of beetsplug/parentwork.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/parentwork.py b/beetsplug/parentwork.py
--- a/beetsplug/parentwork.py
+++ b/beetsplug/parentwork.py
@@ -96,7 +96,7 @@
item.try_write()
command = ui.Subcommand(
'parentwork',
- help=u'fetche parent works, composers and dates')
+ help=u'fetch parent works, composers and dates')
command.parser.add_option(
u'-f', u'--force', dest='force',
@@ -129,6 +129,7 @@
if 'artist-relation-list' in work_info['work']:
for artist in work_info['work']['artist-relation-list']:
if artist['type'] == 'composer':
+ composer_exists = True
parent_composer.append(artist['artist']['name'])
parent_composer_sort.append(artist['artist']['sort-name'])
if 'end' in artist.keys():
| {"golden_diff": "diff --git a/beetsplug/parentwork.py b/beetsplug/parentwork.py\n--- a/beetsplug/parentwork.py\n+++ b/beetsplug/parentwork.py\n@@ -96,7 +96,7 @@\n item.try_write()\n command = ui.Subcommand(\n 'parentwork',\n- help=u'fetche parent works, composers and dates')\n+ help=u'fetch parent works, composers and dates')\n \n command.parser.add_option(\n u'-f', u'--force', dest='force',\n@@ -129,6 +129,7 @@\n if 'artist-relation-list' in work_info['work']:\n for artist in work_info['work']['artist-relation-list']:\n if artist['type'] == 'composer':\n+ composer_exists = True\n parent_composer.append(artist['artist']['name'])\n parent_composer_sort.append(artist['artist']['sort-name'])\n if 'end' in artist.keys():\n", "issue": "parentwork: In tests, mock all MusicBrainz responses\nI didn't notice this when we originally merged the parentwork plugin in #3279, but its tests rely on real communication with the MusicBrainz web service, i.e., they fail if there is no network connectivity. [This Travis job](https://travis-ci.org/beetbox/beets/jobs/558936634) is an example of a spurious failure caused by network interruptions.\r\n\r\nWe need to isolate these tests by mocking the MB queries so that no network traffic is every actually sent.\r\n\r\n@dosoe, can you please look into this?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2017, Dorian Soergel.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Gets parent work, its disambiguation and id, composer, composer sort name\nand work composition date\n\"\"\"\n\nfrom __future__ import division, absolute_import, print_function\n\nfrom beets import ui\nfrom beets.plugins import BeetsPlugin\n\nimport musicbrainzngs\n\n\ndef direct_parent_id(mb_workid, work_date=None):\n \"\"\"Given a Musicbrainz work id, find the id one of the works the work is\n part of and the first composition date it encounters.\n \"\"\"\n work_info = musicbrainzngs.get_work_by_id(mb_workid,\n includes=[\"work-rels\",\n \"artist-rels\"])\n if 'artist-relation-list' in work_info['work'] and work_date is None:\n for artist in work_info['work']['artist-relation-list']:\n if artist['type'] == 'composer':\n if 'end' in artist.keys():\n work_date = artist['end']\n\n if 'work-relation-list' in work_info['work']:\n for direct_parent in work_info['work']['work-relation-list']:\n if direct_parent['type'] == 'parts' \\\n and direct_parent.get('direction') == 'backward':\n direct_id = direct_parent['work']['id']\n return direct_id, work_date\n return None, work_date\n\n\ndef work_parent_id(mb_workid):\n \"\"\"Find the parent work id and composition date of a work given its id.\n \"\"\"\n work_date = None\n while True:\n new_mb_workid, work_date = direct_parent_id(mb_workid, work_date)\n if not new_mb_workid:\n return mb_workid, work_date\n mb_workid = new_mb_workid\n return mb_workid, work_date\n\n\ndef find_parentwork_info(mb_workid):\n \"\"\"Get the MusicBrainz information dict about a 
parent work, including\n the artist relations, and the composition date for a work's parent work.\n \"\"\"\n parent_id, work_date = work_parent_id(mb_workid)\n work_info = musicbrainzngs.get_work_by_id(parent_id,\n includes=[\"artist-rels\"])\n return work_info, work_date\n\n\nclass ParentWorkPlugin(BeetsPlugin):\n def __init__(self):\n super(ParentWorkPlugin, self).__init__()\n\n self.config.add({\n 'auto': False,\n 'force': False,\n })\n\n if self.config['auto']:\n self.import_stages = [self.imported]\n\n def commands(self):\n\n def func(lib, opts, args):\n self.config.set_args(opts)\n force_parent = self.config['force'].get(bool)\n write = ui.should_write()\n\n for item in lib.items(ui.decargs(args)):\n changed = self.find_work(item, force_parent)\n if changed:\n item.store()\n if write:\n item.try_write()\n command = ui.Subcommand(\n 'parentwork',\n help=u'fetche parent works, composers and dates')\n\n command.parser.add_option(\n u'-f', u'--force', dest='force',\n action='store_true', default=None,\n help=u're-fetch when parent work is already present')\n\n command.func = func\n return [command]\n\n def imported(self, session, task):\n \"\"\"Import hook for fetching parent works automatically.\n \"\"\"\n force_parent = self.config['force'].get(bool)\n\n for item in task.imported_items():\n self.find_work(item, force_parent)\n item.store()\n\n def get_info(self, item, work_info):\n \"\"\"Given the parent work info dict, fetch parent_composer,\n parent_composer_sort, parentwork, parentwork_disambig, mb_workid and\n composer_ids.\n \"\"\"\n\n parent_composer = []\n parent_composer_sort = []\n parentwork_info = {}\n\n composer_exists = False\n if 'artist-relation-list' in work_info['work']:\n for artist in work_info['work']['artist-relation-list']:\n if artist['type'] == 'composer':\n parent_composer.append(artist['artist']['name'])\n parent_composer_sort.append(artist['artist']['sort-name'])\n if 'end' in artist.keys():\n parentwork_info[\"parentwork_date\"] = artist['end']\n\n parentwork_info['parent_composer'] = u', '.join(parent_composer)\n parentwork_info['parent_composer_sort'] = u', '.join(\n parent_composer_sort)\n\n if not composer_exists:\n self._log.debug(\n 'no composer for {}; add one at '\n 'https://musicbrainz.org/work/{}',\n item, work_info['work']['id'],\n )\n\n parentwork_info['parentwork'] = work_info['work']['title']\n parentwork_info['mb_parentworkid'] = work_info['work']['id']\n\n if 'disambiguation' in work_info['work']:\n parentwork_info['parentwork_disambig'] = work_info[\n 'work']['disambiguation']\n\n else:\n parentwork_info['parentwork_disambig'] = None\n\n return parentwork_info\n\n def find_work(self, item, force):\n \"\"\"Finds the parent work of a recording and populates the tags\n accordingly.\n\n The parent work is found recursively, by finding the direct parent\n repeatedly until there are no more links in the chain. 
We return the\n final, topmost work in the chain.\n\n Namely, the tags parentwork, parentwork_disambig, mb_parentworkid,\n parent_composer, parent_composer_sort and work_date are populated.\n \"\"\"\n\n if not item.mb_workid:\n self._log.info('No work for {}, \\\nadd one at https://musicbrainz.org/recording/{}', item, item.mb_trackid)\n return\n\n hasparent = hasattr(item, 'parentwork')\n work_changed = True\n if hasattr(item, 'parentwork_workid_current'):\n work_changed = item.parentwork_workid_current != item.mb_workid\n if force or not hasparent or work_changed:\n try:\n work_info, work_date = find_parentwork_info(item.mb_workid)\n except musicbrainzngs.musicbrainz.WebServiceError as e:\n self._log.debug(\"error fetching work: {}\", e)\n return\n parent_info = self.get_info(item, work_info)\n parent_info['parentwork_workid_current'] = item.mb_workid\n if 'parent_composer' in parent_info:\n self._log.debug(\"Work fetched: {} - {}\",\n parent_info['parentwork'],\n parent_info['parent_composer'])\n else:\n self._log.debug(\"Work fetched: {} - no parent composer\",\n parent_info['parentwork'])\n\n elif hasparent:\n self._log.debug(\"{}: Work present, skipping\", item)\n return\n\n # apply all non-null values to the item\n for key, value in parent_info.items():\n if value:\n item[key] = value\n\n if work_date:\n item['work_date'] = work_date\n return ui.show_model_changes(\n item, fields=['parentwork', 'parentwork_disambig',\n 'mb_parentworkid', 'parent_composer',\n 'parent_composer_sort', 'work_date',\n 'parentwork_workid_current', 'parentwork_date'])\n", "path": "beetsplug/parentwork.py"}]} | 2,943 | 210 |
gh_patches_debug_21044 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5336 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sarif report creates invalid uri for folder with spaces
**Describe the issue**
SonarQube will not import the SARIF report from Checkov correctly because of an invalid URI in the SARIF output.
1) Scan folders with spaces that have some issues
example:
Secrets/Access Tokens/Azure/main.tf
2) Output result as sarif
3) Resulting file is not valid SARIF due to invalid URI
The field Secrets/Access Tokens/Azure/main.tf corresponds to the results/locations/physicalLocation/artifactLocation/uri object in the SARIF report. There is a space character in the URI, which is not expected: the URI field shouldn't contain any spaces.
This is against the URI specification, which forbids spaces in URIs.
Because of this problem, importing issues from directories with spaces will fail in SonarQube and possibly in other tools.
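
A possible direction (just a sketch, not necessarily the project's confirmed fix) is to percent-encode the path before writing it into `artifactLocation/uri`; the path below is only an example:

```
from urllib.parse import quote

repo_file_path = "/Secrets/Access Tokens/Azure/main.tf"  # example path with spaces
uri = quote(repo_file_path.lstrip("/"))
print(uri)  # Secrets/Access%20Tokens/Azure/main.tf
```

`urllib.parse.quote` leaves `/` untouched by default, so only the spaces and other non-URI-safe characters are escaped, which keeps the URI valid for SARIF consumers such as SonarQube.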
</issue>
<code>
[start of checkov/common/output/sarif.py]
1 from __future__ import annotations
2
3 import itertools
4 import json
5 from typing import TYPE_CHECKING, Any
6
7 from checkov.common.models.enums import CheckResult
8 from checkov.common.output.cyclonedx_consts import SCA_CHECKTYPES
9 from checkov.common.util.http_utils import valid_url
10 from checkov.version import version
11
12 if TYPE_CHECKING:
13 from checkov.common.output.record import Record
14 from checkov.common.output.report import Report
15
16 SEVERITY_TO_SARIF_LEVEL = {
17 "critical": "error",
18 "high": "error",
19 "medium": "warning",
20 "low": "note",
21 "none": "none",
22 }
23
24
25 SEVERITY_TO_SCORE = {
26 "critical": "10.0",
27 "high": "8.9",
28 "medium": "6.9",
29 "low": "3.9",
30 "none": "0.0",
31 }
32
33
34 class Sarif:
35 def __init__(self, reports: list[Report], tool: str | None) -> None:
36 self.reports = reports
37 self.rule_index_map: "dict[str, int]" = {}
38 self.tool = tool if tool else "Bridgecrew"
39
40 self.json = self.create_json()
41
42 def create_json(self) -> dict[str, Any]:
43 return {
44 "$schema": "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/master/Schemata/sarif-schema-2.1.0.json",
45 "version": "2.1.0",
46 "runs": self._create_runs(),
47 }
48
49 def _create_runs(self) -> list[dict[str, Any]]:
50 information_uri = "https://docs.bridgecrew.io" if self.tool.lower() == "bridgecrew" else "https://checkov.io"
51 rules = self._create_rules() # needs to be invoked before _create_results()
52 results = self._create_results()
53
54 return [
55 {
56 "tool": {
57 "driver": {
58 "name": self.tool,
59 "version": version,
60 "informationUri": information_uri,
61 "rules": rules,
62 "organization": "bridgecrew",
63 }
64 },
65 "results": results,
66 }
67 ]
68
69 def _create_rules(self) -> list[dict[str, Any]]:
70 rule_idx = 0
71 rules: "list[dict[str, Any]]" = []
72
73 for report in self.reports:
74 if report.check_type in SCA_CHECKTYPES:
75 for record in itertools.chain(report.failed_checks, report.skipped_checks):
76 rule = None
77 if record.check_id.startswith("BC_LIC"):
78 rule = self._create_license_rule(check_type=report.check_type, record=record)
79 elif record.check_id.startswith(("BC_VUL", "CKV_CVE")):
80 rule = self._create_cve_rule(check_type=report.check_type, record=record)
81
82 if rule and rule["id"] not in self.rule_index_map:
83 self.rule_index_map[rule["id"]] = rule_idx
84 rules.append(rule)
85 rule_idx += 1
86 else:
87 for record in itertools.chain(report.failed_checks, report.skipped_checks):
88 if record.check_id not in self.rule_index_map:
89 rule = self._create_iac_rule(check_type=report.check_type, record=record)
90 self.rule_index_map[rule["id"]] = rule_idx
91 rules.append(rule)
92 rule_idx += 1
93
94 return rules
95
96 def _create_iac_rule(self, check_type: str, record: Record) -> dict[str, Any]:
97 rule = {
98 "id": self._create_rule_id(check_type=check_type, record=record),
99 "name": record.short_description or record.check_name,
100 "shortDescription": {
101 "text": record.short_description or record.check_name,
102 },
103 "fullDescription": {
104 "text": record.description or record.check_name,
105 },
106 "help": {
107 "text": f"{record.check_name}\nResource: {record.resource}",
108 },
109 "defaultConfiguration": {"level": "error"},
110 }
111
112 # Adding 'properties' dictionary only if 'record.severity' exists
113 if record.severity:
114 rule["properties"] = {
115 "security-severity": SEVERITY_TO_SCORE.get(record.severity.name.lower(), "0.0"),
116 }
117
118 help_uri = record.guideline
119 if valid_url(help_uri):
120 rule["helpUri"] = help_uri
121
122 return rule
123
124 def _create_cve_rule(self, check_type: str, record: Record) -> dict[str, Any] | None:
125 details = record.vulnerability_details
126 if not details:
127 # this shouldn't happen
128 return None
129
130 rule = {
131 "id": self._create_rule_id(check_type=check_type, record=record),
132 "name": record.short_description or record.check_name,
133 "shortDescription": {
134 "text": record.short_description or record.check_name,
135 },
136 "fullDescription": {
137 "text": record.description or record.check_name,
138 },
139 "help": {
140 "text": f"{record.check_name}\nResource: {record.resource}\nStatus: {details.get('status')}",
141 },
142 "defaultConfiguration": {"level": "error"},
143 }
144
145 # Add properties dictionary with security-severity
146 cvss = details.get("cvss")
147 if cvss:
148 # use CVSS, if exists
149 rule["properties"] = {
150 "security-severity": str(cvss),
151 }
152 elif record.severity:
153 # otherwise severity, if exists
154 rule["properties"] = {
155 "security-severity": SEVERITY_TO_SCORE.get(record.severity.name.lower(), "0.0"),
156 }
157
158 help_uri = details.get("link")
159 if valid_url(help_uri):
160 rule["helpUri"] = help_uri
161
162 return rule
163
164 def _create_license_rule(self, check_type: str, record: Record) -> dict[str, Any] | None:
165 details = record.vulnerability_details
166 if not details:
167 # this shouldn't happen
168 return None
169
170 rule = {
171 "id": self._create_rule_id(check_type=check_type, record=record),
172 "name": record.short_description or record.check_name,
173 "shortDescription": {
174 "text": record.short_description or record.check_name,
175 },
176 "fullDescription": {
177 "text": f"Package {details['package_name']}@{details['package_version']} has license {details['license']}",
178 },
179 "help": {
180 "text": f"{record.check_name}\nResource: {record.resource}",
181 },
182 "defaultConfiguration": {"level": "error"},
183 }
184
185 # Adding 'properties' dictionary only if 'record.severity' exists
186 if record.severity:
187 rule["properties"] = {
188 "security-severity": SEVERITY_TO_SCORE.get(record.severity.name.lower(), "0.0"),
189 }
190
191 help_uri = record.guideline
192 if valid_url(help_uri):
193 rule["helpUri"] = help_uri
194
195 return rule
196
197 def _create_results(self) -> list[dict[str, Any]]:
198 results: "list[dict[str, Any]]" = []
199
200 for report in self.reports:
201 for record in itertools.chain(report.failed_checks, report.skipped_checks):
202 level = "warning"
203 if record.severity:
204 level = SEVERITY_TO_SARIF_LEVEL.get(record.severity.name.lower(), "none")
205 elif record.check_result.get("result") == CheckResult.FAILED:
206 level = "error"
207
208 rule_id = self._create_rule_id(check_type=report.check_type, record=record)
209 if not rule_id or rule_id not in self.rule_index_map:
210 # can happen if data is missing
211 continue
212
213 result = {
214 "ruleId": rule_id,
215 "ruleIndex": self.rule_index_map[rule_id],
216 "level": level,
217 "attachments": [{"description": detail} for detail in record.details],
218 "message": {
219 "text": record.short_description or record.check_name,
220 },
221 "locations": [
222 {
223 "physicalLocation": {
224 "artifactLocation": {"uri": record.repo_file_path.lstrip("/")},
225 "region": {
226 "startLine": int(record.file_line_range[0]) or 1,
227 "endLine": int(record.file_line_range[1]) or 1,
228 "snippet": {"text": "".join(line for _, line in record.code_block)},
229 },
230 }
231 }
232 ],
233 }
234
235 if record.check_result.get("result") == CheckResult.SKIPPED:
236 # sca_package suppression can only be enabled via flag
237 # other runners only report in source suppression
238 kind = "external" if record.vulnerability_details else "inSource"
239 justification = record.check_result.get("suppress_comment")
240 if justification is None:
241 justification = "No comment provided"
242
243 result["suppressions"] = [
244 {
245 "kind": kind,
246 "justification": justification,
247 }
248 ]
249
250 results.append(result)
251
252 return results
253
254 def _create_rule_id(self, check_type: str, record: Record) -> str | None:
255 if check_type in SCA_CHECKTYPES:
256 details = record.vulnerability_details
257 if not details:
258 # this shouldn't happen
259 return None
260
261 if record.check_id.startswith("BC_LIC"):
262 return f"{details['license']}_{details['package_name']}@{details['package_version']}".replace(" ", "_")
263 elif record.check_id.startswith(("BC_VUL", "CKV_CVE")):
264 return f"{details['id']}_{details['package_name']}@{details['package_version']}".replace(" ", "_")
265 else:
266 return record.check_id
267
268 return None
269
270 def write_sarif_output(self) -> None:
271 try:
272 with open("results.sarif", "w") as f:
273 f.write(json.dumps(self.json))
274 print("\nWrote output in SARIF format to the file 'results.sarif'")
275 except EnvironmentError as e:
276 print("\nAn error occurred while writing SARIF results to file: results.sarif")
277 print(f"More details: \n {e}")
278
[end of checkov/common/output/sarif.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/common/output/sarif.py b/checkov/common/output/sarif.py
--- a/checkov/common/output/sarif.py
+++ b/checkov/common/output/sarif.py
@@ -3,6 +3,7 @@
import itertools
import json
from typing import TYPE_CHECKING, Any
+from urllib.parse import quote
from checkov.common.models.enums import CheckResult
from checkov.common.output.cyclonedx_consts import SCA_CHECKTYPES
@@ -221,7 +222,7 @@
"locations": [
{
"physicalLocation": {
- "artifactLocation": {"uri": record.repo_file_path.lstrip("/")},
+ "artifactLocation": {"uri": quote(record.repo_file_path.lstrip("/"))},
"region": {
"startLine": int(record.file_line_range[0]) or 1,
"endLine": int(record.file_line_range[1]) or 1,
| {"golden_diff": "diff --git a/checkov/common/output/sarif.py b/checkov/common/output/sarif.py\n--- a/checkov/common/output/sarif.py\n+++ b/checkov/common/output/sarif.py\n@@ -3,6 +3,7 @@\n import itertools\n import json\n from typing import TYPE_CHECKING, Any\n+from urllib.parse import quote\n \n from checkov.common.models.enums import CheckResult\n from checkov.common.output.cyclonedx_consts import SCA_CHECKTYPES\n@@ -221,7 +222,7 @@\n \"locations\": [\n {\n \"physicalLocation\": {\n- \"artifactLocation\": {\"uri\": record.repo_file_path.lstrip(\"/\")},\n+ \"artifactLocation\": {\"uri\": quote(record.repo_file_path.lstrip(\"/\"))},\n \"region\": {\n \"startLine\": int(record.file_line_range[0]) or 1,\n \"endLine\": int(record.file_line_range[1]) or 1,\n", "issue": "Sarif report creates invalid uri for folder with spaces\n**Describe the issue**\r\nSonarQube will not import SARIF report from Checkov correctly because of invalid URI in SARIF\r\n\r\n1) Scan folders with spaces that has some issues\r\nexample:\r\nSecrets/Access Tokens/Azure/main.tf \r\n\r\n2) Output result as sarif\r\n3) Resulting file is not valid SARIF due to invalid URI\r\n\r\nThe field Secrets/Access Tokens/Azure/main.tf corresponds to the results/locations/physicalLocation/artifactLocation/uri object in the SARIF report. There is character the space in the URI. This is not expected. The URI field shouldn\u2019t have any spaces.\r\nThis is against specification of URI, which forbids spaces in URIs.\r\n\r\n\r\nBecause of this problem , import of issues in directories with spaces will fail in SonarQube and possibly other tools\n", "before_files": [{"content": "from __future__ import annotations\n\nimport itertools\nimport json\nfrom typing import TYPE_CHECKING, Any\n\nfrom checkov.common.models.enums import CheckResult\nfrom checkov.common.output.cyclonedx_consts import SCA_CHECKTYPES\nfrom checkov.common.util.http_utils import valid_url\nfrom checkov.version import version\n\nif TYPE_CHECKING:\n from checkov.common.output.record import Record\n from checkov.common.output.report import Report\n\nSEVERITY_TO_SARIF_LEVEL = {\n \"critical\": \"error\",\n \"high\": \"error\",\n \"medium\": \"warning\",\n \"low\": \"note\",\n \"none\": \"none\",\n}\n\n\nSEVERITY_TO_SCORE = {\n \"critical\": \"10.0\",\n \"high\": \"8.9\",\n \"medium\": \"6.9\",\n \"low\": \"3.9\",\n \"none\": \"0.0\",\n}\n\n\nclass Sarif:\n def __init__(self, reports: list[Report], tool: str | None) -> None:\n self.reports = reports\n self.rule_index_map: \"dict[str, int]\" = {}\n self.tool = tool if tool else \"Bridgecrew\"\n\n self.json = self.create_json()\n\n def create_json(self) -> dict[str, Any]:\n return {\n \"$schema\": \"https://raw.githubusercontent.com/oasis-tcs/sarif-spec/master/Schemata/sarif-schema-2.1.0.json\",\n \"version\": \"2.1.0\",\n \"runs\": self._create_runs(),\n }\n\n def _create_runs(self) -> list[dict[str, Any]]:\n information_uri = \"https://docs.bridgecrew.io\" if self.tool.lower() == \"bridgecrew\" else \"https://checkov.io\"\n rules = self._create_rules() # needs to be invoked before _create_results()\n results = self._create_results()\n\n return [\n {\n \"tool\": {\n \"driver\": {\n \"name\": self.tool,\n \"version\": version,\n \"informationUri\": information_uri,\n \"rules\": rules,\n \"organization\": \"bridgecrew\",\n }\n },\n \"results\": results,\n }\n ]\n\n def _create_rules(self) -> list[dict[str, Any]]:\n rule_idx = 0\n rules: \"list[dict[str, Any]]\" = []\n\n for report in self.reports:\n if report.check_type in 
SCA_CHECKTYPES:\n for record in itertools.chain(report.failed_checks, report.skipped_checks):\n rule = None\n if record.check_id.startswith(\"BC_LIC\"):\n rule = self._create_license_rule(check_type=report.check_type, record=record)\n elif record.check_id.startswith((\"BC_VUL\", \"CKV_CVE\")):\n rule = self._create_cve_rule(check_type=report.check_type, record=record)\n\n if rule and rule[\"id\"] not in self.rule_index_map:\n self.rule_index_map[rule[\"id\"]] = rule_idx\n rules.append(rule)\n rule_idx += 1\n else:\n for record in itertools.chain(report.failed_checks, report.skipped_checks):\n if record.check_id not in self.rule_index_map:\n rule = self._create_iac_rule(check_type=report.check_type, record=record)\n self.rule_index_map[rule[\"id\"]] = rule_idx\n rules.append(rule)\n rule_idx += 1\n\n return rules\n\n def _create_iac_rule(self, check_type: str, record: Record) -> dict[str, Any]:\n rule = {\n \"id\": self._create_rule_id(check_type=check_type, record=record),\n \"name\": record.short_description or record.check_name,\n \"shortDescription\": {\n \"text\": record.short_description or record.check_name,\n },\n \"fullDescription\": {\n \"text\": record.description or record.check_name,\n },\n \"help\": {\n \"text\": f\"{record.check_name}\\nResource: {record.resource}\",\n },\n \"defaultConfiguration\": {\"level\": \"error\"},\n }\n\n # Adding 'properties' dictionary only if 'record.severity' exists\n if record.severity:\n rule[\"properties\"] = {\n \"security-severity\": SEVERITY_TO_SCORE.get(record.severity.name.lower(), \"0.0\"),\n }\n\n help_uri = record.guideline\n if valid_url(help_uri):\n rule[\"helpUri\"] = help_uri\n\n return rule\n\n def _create_cve_rule(self, check_type: str, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n # this shouldn't happen\n return None\n\n rule = {\n \"id\": self._create_rule_id(check_type=check_type, record=record),\n \"name\": record.short_description or record.check_name,\n \"shortDescription\": {\n \"text\": record.short_description or record.check_name,\n },\n \"fullDescription\": {\n \"text\": record.description or record.check_name,\n },\n \"help\": {\n \"text\": f\"{record.check_name}\\nResource: {record.resource}\\nStatus: {details.get('status')}\",\n },\n \"defaultConfiguration\": {\"level\": \"error\"},\n }\n\n # Add properties dictionary with security-severity\n cvss = details.get(\"cvss\")\n if cvss:\n # use CVSS, if exists\n rule[\"properties\"] = {\n \"security-severity\": str(cvss),\n }\n elif record.severity:\n # otherwise severity, if exists\n rule[\"properties\"] = {\n \"security-severity\": SEVERITY_TO_SCORE.get(record.severity.name.lower(), \"0.0\"),\n }\n\n help_uri = details.get(\"link\")\n if valid_url(help_uri):\n rule[\"helpUri\"] = help_uri\n\n return rule\n\n def _create_license_rule(self, check_type: str, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n # this shouldn't happen\n return None\n\n rule = {\n \"id\": self._create_rule_id(check_type=check_type, record=record),\n \"name\": record.short_description or record.check_name,\n \"shortDescription\": {\n \"text\": record.short_description or record.check_name,\n },\n \"fullDescription\": {\n \"text\": f\"Package {details['package_name']}@{details['package_version']} has license {details['license']}\",\n },\n \"help\": {\n \"text\": f\"{record.check_name}\\nResource: {record.resource}\",\n },\n \"defaultConfiguration\": {\"level\": \"error\"},\n }\n\n # 
Adding 'properties' dictionary only if 'record.severity' exists\n if record.severity:\n rule[\"properties\"] = {\n \"security-severity\": SEVERITY_TO_SCORE.get(record.severity.name.lower(), \"0.0\"),\n }\n\n help_uri = record.guideline\n if valid_url(help_uri):\n rule[\"helpUri\"] = help_uri\n\n return rule\n\n def _create_results(self) -> list[dict[str, Any]]:\n results: \"list[dict[str, Any]]\" = []\n\n for report in self.reports:\n for record in itertools.chain(report.failed_checks, report.skipped_checks):\n level = \"warning\"\n if record.severity:\n level = SEVERITY_TO_SARIF_LEVEL.get(record.severity.name.lower(), \"none\")\n elif record.check_result.get(\"result\") == CheckResult.FAILED:\n level = \"error\"\n\n rule_id = self._create_rule_id(check_type=report.check_type, record=record)\n if not rule_id or rule_id not in self.rule_index_map:\n # can happen if data is missing\n continue\n\n result = {\n \"ruleId\": rule_id,\n \"ruleIndex\": self.rule_index_map[rule_id],\n \"level\": level,\n \"attachments\": [{\"description\": detail} for detail in record.details],\n \"message\": {\n \"text\": record.short_description or record.check_name,\n },\n \"locations\": [\n {\n \"physicalLocation\": {\n \"artifactLocation\": {\"uri\": record.repo_file_path.lstrip(\"/\")},\n \"region\": {\n \"startLine\": int(record.file_line_range[0]) or 1,\n \"endLine\": int(record.file_line_range[1]) or 1,\n \"snippet\": {\"text\": \"\".join(line for _, line in record.code_block)},\n },\n }\n }\n ],\n }\n\n if record.check_result.get(\"result\") == CheckResult.SKIPPED:\n # sca_package suppression can only be enabled via flag\n # other runners only report in source suppression\n kind = \"external\" if record.vulnerability_details else \"inSource\"\n justification = record.check_result.get(\"suppress_comment\")\n if justification is None:\n justification = \"No comment provided\"\n\n result[\"suppressions\"] = [\n {\n \"kind\": kind,\n \"justification\": justification,\n }\n ]\n\n results.append(result)\n\n return results\n\n def _create_rule_id(self, check_type: str, record: Record) -> str | None:\n if check_type in SCA_CHECKTYPES:\n details = record.vulnerability_details\n if not details:\n # this shouldn't happen\n return None\n\n if record.check_id.startswith(\"BC_LIC\"):\n return f\"{details['license']}_{details['package_name']}@{details['package_version']}\".replace(\" \", \"_\")\n elif record.check_id.startswith((\"BC_VUL\", \"CKV_CVE\")):\n return f\"{details['id']}_{details['package_name']}@{details['package_version']}\".replace(\" \", \"_\")\n else:\n return record.check_id\n\n return None\n\n def write_sarif_output(self) -> None:\n try:\n with open(\"results.sarif\", \"w\") as f:\n f.write(json.dumps(self.json))\n print(\"\\nWrote output in SARIF format to the file 'results.sarif'\")\n except EnvironmentError as e:\n print(\"\\nAn error occurred while writing SARIF results to file: results.sarif\")\n print(f\"More details: \\n {e}\")\n", "path": "checkov/common/output/sarif.py"}]} | 3,680 | 204 |
gh_patches_debug_14983 | rasdani/github-patches | git_diff | saleor__saleor-5302 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The clear database command should be runnable with debug disabled
We should be able to run `cleardb` when `DEBUG=False`, but we should have a `--force` flag to actually allow that action when debug mode is turned off, as it is a dangerous command.
Definition of done:
- Prints an error to stderr when `DEBUG=False` and `--force` is not passed (flagged)
- Exits with 1 (raises `SystemExit` which allows Django to handle it and cleanup the opened connections, such as the database)
- User can clear the database when debug mode is turned off and only when `--force` was passed
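
A minimal sketch of the `add_arguments`/`handle` changes (illustrative only; the flag wiring, messages and exact error-handling style are assumptions, not the final implementation):

```
from django.conf import settings
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Removes data from the database preserving shop configuration."

    def add_arguments(self, parser):
        parser.add_argument(
            "--force",
            action="store_true",
            help="Allow clearing the database even when DEBUG=False.",
        )

    def handle(self, **options):
        if not settings.DEBUG and not options.get("force"):
            self.stderr.write(
                "Cannot clear the database with DEBUG=False; pass --force to override."
            )
            raise SystemExit(1)
        # ... proceed with deleting checkouts, orders, products, etc.
```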
</issue>
<code>
[start of saleor/core/management/commands/cleardb.py]
1 """Clear the database preserving shop's configuration.
2
3 This command clears the database from data such as orders, products or customer
4 accounts. It doesn't remove shop's configuration, such as: staff accounts, service
5 accounts, plugin configurations, site settings or navigation menus.
6 """
7
8 from django.conf import settings
9 from django.core.management.base import BaseCommand, CommandError
10 from django.db.models import Q
11
12 from ....account.models import User
13 from ....checkout.models import Checkout
14 from ....discount.models import Sale, Voucher
15 from ....giftcard.models import GiftCard
16 from ....order.models import Order
17 from ....page.models import Page
18 from ....payment.models import Payment, Transaction
19 from ....product.models import Attribute, Category, Collection, Product, ProductType
20 from ....shipping.models import ShippingMethod, ShippingZone
21 from ....warehouse.models import Warehouse
22 from ....webhook.models import Webhook
23
24
25 class Command(BaseCommand):
26 help = "Removes data from the database preserving shop configuration."
27
28 def add_arguments(self, parser):
29 parser.add_argument(
30 "--delete-staff",
31 action="store_true",
32 help="Delete staff user accounts (doesn't delete superuser accounts).",
33 )
34
35 def handle(self, **options):
36 if not settings.DEBUG:
37 raise CommandError("Cannot clear the database in DEBUG=True mode.")
38
39 Checkout.objects.all().delete()
40 self.stdout.write("Removed checkouts")
41
42 Transaction.objects.all().delete()
43 self.stdout.write("Removed transactions")
44
45 Payment.objects.all().delete()
46 self.stdout.write("Removed payments")
47
48 Order.objects.all().delete()
49 self.stdout.write("Removed orders")
50
51 Product.objects.all().delete()
52 self.stdout.write("Removed products")
53
54 ProductType.objects.all().delete()
55 self.stdout.write("Removed product types")
56
57 Attribute.objects.all().delete()
58 self.stdout.write("Removed attributes")
59
60 Category.objects.all().delete()
61 self.stdout.write("Removed categories")
62
63 Collection.objects.all().delete()
64 self.stdout.write("Removed collections")
65
66 Sale.objects.all().delete()
67 self.stdout.write("Removed sales")
68
69 ShippingMethod.objects.all().delete()
70 self.stdout.write("Removed shipping methods")
71
72 ShippingZone.objects.all().delete()
73 self.stdout.write("Removed shipping zones")
74
75 Voucher.objects.all().delete()
76 self.stdout.write("Removed vouchers")
77
78 GiftCard.objects.all().delete()
79 self.stdout.write("Removed gift cards")
80
81 self.stdout.write("Removed warehouses")
82 Warehouse.objects.all().delete()
83
84 Page.objects.all().delete()
85 self.stdout.write("Removed pages")
86
87 Webhook.objects.all().delete()
88 self.stdout.write("Removed webhooks")
89
90 # Delete all users except for staff members.
91 staff = User.objects.filter(Q(is_staff=True) | Q(is_superuser=True))
92 User.objects.exclude(pk__in=staff).delete()
93 self.stdout.write("Removed customers")
94
95 should_delete_staff = options.get("delete_staff")
96 if should_delete_staff:
97 staff = staff.exclude(is_superuser=True)
98 staff.delete()
99 self.stdout.write("Removed staff users")
100
101 # Remove addresses of staff members. Used to clear saved addresses of staff
102 # accounts used on demo for testing checkout.
103 for user in staff:
104 user.addresses.all().delete()
105 self.stdout.write("Removed staff addresses")
106
[end of saleor/core/management/commands/cleardb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/core/management/commands/cleardb.py b/saleor/core/management/commands/cleardb.py
--- a/saleor/core/management/commands/cleardb.py
+++ b/saleor/core/management/commands/cleardb.py
@@ -31,10 +31,16 @@
action="store_true",
help="Delete staff user accounts (doesn't delete superuser accounts).",
)
+ parser.add_argument(
+ "--force",
+ action="store_true",
+ help="Allows running the cleardb command in DEBUG=False mode.",
+ )
def handle(self, **options):
- if not settings.DEBUG:
- raise CommandError("Cannot clear the database in DEBUG=True mode.")
+ force = options.get("force", False)
+ if not settings.DEBUG and not force:
+ raise CommandError("Cannot clear the database in DEBUG=False mode.")
Checkout.objects.all().delete()
self.stdout.write("Removed checkouts")
| {"golden_diff": "diff --git a/saleor/core/management/commands/cleardb.py b/saleor/core/management/commands/cleardb.py\n--- a/saleor/core/management/commands/cleardb.py\n+++ b/saleor/core/management/commands/cleardb.py\n@@ -31,10 +31,16 @@\n action=\"store_true\",\n help=\"Delete staff user accounts (doesn't delete superuser accounts).\",\n )\n+ parser.add_argument(\n+ \"--force\",\n+ action=\"store_true\",\n+ help=\"Allows running the cleardb command in DEBUG=False mode.\",\n+ )\n \n def handle(self, **options):\n- if not settings.DEBUG:\n- raise CommandError(\"Cannot clear the database in DEBUG=True mode.\")\n+ force = options.get(\"force\", False)\n+ if not settings.DEBUG and not force:\n+ raise CommandError(\"Cannot clear the database in DEBUG=False mode.\")\n \n Checkout.objects.all().delete()\n self.stdout.write(\"Removed checkouts\")\n", "issue": "The clear database command should be runnable with debug disabled\nWe should be able to run `cleardb` when `DEBUG=False` but we should have a `--force` flag to actually allow that action when the debug mode is turned off as it is a dangerous command.\r\n\r\nDefinition of done:\r\n- Prints an error to stderr when `DEBUG=False` and `--force` is not passed (flagged)\r\n- Exits with 1 (raises `SystemExit` which allows Django to handle it and cleanup the opened connections, such as the database)\r\n- User can clear the database when debug mode is turned off and only when `--force` was passed\n", "before_files": [{"content": "\"\"\"Clear the database preserving shop's configuration.\n\nThis command clears the database from data such as orders, products or customer\naccounts. It doesn't remove shop's configuration, such as: staff accounts, service\naccounts, plugin configurations, site settings or navigation menus.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.db.models import Q\n\nfrom ....account.models import User\nfrom ....checkout.models import Checkout\nfrom ....discount.models import Sale, Voucher\nfrom ....giftcard.models import GiftCard\nfrom ....order.models import Order\nfrom ....page.models import Page\nfrom ....payment.models import Payment, Transaction\nfrom ....product.models import Attribute, Category, Collection, Product, ProductType\nfrom ....shipping.models import ShippingMethod, ShippingZone\nfrom ....warehouse.models import Warehouse\nfrom ....webhook.models import Webhook\n\n\nclass Command(BaseCommand):\n help = \"Removes data from the database preserving shop configuration.\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--delete-staff\",\n action=\"store_true\",\n help=\"Delete staff user accounts (doesn't delete superuser accounts).\",\n )\n\n def handle(self, **options):\n if not settings.DEBUG:\n raise CommandError(\"Cannot clear the database in DEBUG=True mode.\")\n\n Checkout.objects.all().delete()\n self.stdout.write(\"Removed checkouts\")\n\n Transaction.objects.all().delete()\n self.stdout.write(\"Removed transactions\")\n\n Payment.objects.all().delete()\n self.stdout.write(\"Removed payments\")\n\n Order.objects.all().delete()\n self.stdout.write(\"Removed orders\")\n\n Product.objects.all().delete()\n self.stdout.write(\"Removed products\")\n\n ProductType.objects.all().delete()\n self.stdout.write(\"Removed product types\")\n\n Attribute.objects.all().delete()\n self.stdout.write(\"Removed attributes\")\n\n Category.objects.all().delete()\n self.stdout.write(\"Removed categories\")\n\n 
Collection.objects.all().delete()\n self.stdout.write(\"Removed collections\")\n\n Sale.objects.all().delete()\n self.stdout.write(\"Removed sales\")\n\n ShippingMethod.objects.all().delete()\n self.stdout.write(\"Removed shipping methods\")\n\n ShippingZone.objects.all().delete()\n self.stdout.write(\"Removed shipping zones\")\n\n Voucher.objects.all().delete()\n self.stdout.write(\"Removed vouchers\")\n\n GiftCard.objects.all().delete()\n self.stdout.write(\"Removed gift cards\")\n\n self.stdout.write(\"Removed warehouses\")\n Warehouse.objects.all().delete()\n\n Page.objects.all().delete()\n self.stdout.write(\"Removed pages\")\n\n Webhook.objects.all().delete()\n self.stdout.write(\"Removed webhooks\")\n\n # Delete all users except for staff members.\n staff = User.objects.filter(Q(is_staff=True) | Q(is_superuser=True))\n User.objects.exclude(pk__in=staff).delete()\n self.stdout.write(\"Removed customers\")\n\n should_delete_staff = options.get(\"delete_staff\")\n if should_delete_staff:\n staff = staff.exclude(is_superuser=True)\n staff.delete()\n self.stdout.write(\"Removed staff users\")\n\n # Remove addresses of staff members. Used to clear saved addresses of staff\n # accounts used on demo for testing checkout.\n for user in staff:\n user.addresses.all().delete()\n self.stdout.write(\"Removed staff addresses\")\n", "path": "saleor/core/management/commands/cleardb.py"}]} | 1,590 | 223 |
gh_patches_debug_6825 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1579 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"no match response" button present in qna maker dialog when active learning is disabled
Python tracking issue for repo code-owners
See original issue for details: microsoft/botframework-sdk#6146
</issue>
<code>
[start of libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import List
5 from botbuilder.core import CardFactory
6 from botbuilder.schema import Activity, ActivityTypes, CardAction, HeroCard
7
8 from ..models import QueryResult
9
10
11 class QnACardBuilder:
12 """
13 Message activity card builder for QnAMaker dialogs.
14 """
15
16 @staticmethod
17 def get_suggestions_card(
18 suggestions: List[str], card_title: str, card_no_match: str
19 ) -> Activity:
20 """
21 Get active learning suggestions card.
22 """
23
24 if not suggestions:
25 raise TypeError("suggestions list is required")
26
27 if not card_title:
28 raise TypeError("card_title is required")
29
30 if not card_no_match:
31 raise TypeError("card_no_match is required")
32
33 # Add all suggestions
34 button_list = [
35 CardAction(value=suggestion, type="imBack", title=suggestion)
36 for suggestion in suggestions
37 ]
38
39 # Add No match text
40 button_list.append(
41 CardAction(value=card_no_match, type="imBack", title=card_no_match)
42 )
43
44 attachment = CardFactory.hero_card(HeroCard(buttons=button_list))
45
46 return Activity(
47 type=ActivityTypes.message, text=card_title, attachments=[attachment]
48 )
49
50 @staticmethod
51 def get_qna_prompts_card(result: QueryResult, card_no_match_text: str) -> Activity:
52 """
53 Get active learning suggestions card.
54 """
55
56 if not result:
57 raise TypeError("result is required")
58
59 if not card_no_match_text:
60 raise TypeError("card_no_match_text is required")
61
62 # Add all prompts
63 button_list = [
64 CardAction(
65 value=prompt.display_text, type="imBack", title=prompt.display_text,
66 )
67 for prompt in result.context.prompts
68 ]
69
70 # Add No match text
71 button_list.append(
72 CardAction(
73 value=card_no_match_text, type="imBack", title=card_no_match_text,
74 )
75 )
76
77 attachment = CardFactory.hero_card(HeroCard(buttons=button_list))
78
79 return Activity(
80 type=ActivityTypes.message, text=result.answer, attachments=[attachment]
81 )
82
[end of libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py
--- a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py
+++ b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py
@@ -67,13 +67,6 @@
for prompt in result.context.prompts
]
- # Add No match text
- button_list.append(
- CardAction(
- value=card_no_match_text, type="imBack", title=card_no_match_text,
- )
- )
-
attachment = CardFactory.hero_card(HeroCard(buttons=button_list))
return Activity(
| {"golden_diff": "diff --git a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py\n--- a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py\n+++ b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py\n@@ -67,13 +67,6 @@\n for prompt in result.context.prompts\r\n ]\r\n \r\n- # Add No match text\r\n- button_list.append(\r\n- CardAction(\r\n- value=card_no_match_text, type=\"imBack\", title=card_no_match_text,\r\n- )\r\n- )\r\n-\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n \r\n return Activity(\n", "issue": "\"no match response\" button present in qna maker dialog when active learning is disabled\nPython tracking issue for repo code-owners\r\n\r\nSee original issue for details: microsoft/botframework-sdk#6146\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nfrom typing import List\r\nfrom botbuilder.core import CardFactory\r\nfrom botbuilder.schema import Activity, ActivityTypes, CardAction, HeroCard\r\n\r\nfrom ..models import QueryResult\r\n\r\n\r\nclass QnACardBuilder:\r\n \"\"\"\r\n Message activity card builder for QnAMaker dialogs.\r\n \"\"\"\r\n\r\n @staticmethod\r\n def get_suggestions_card(\r\n suggestions: List[str], card_title: str, card_no_match: str\r\n ) -> Activity:\r\n \"\"\"\r\n Get active learning suggestions card.\r\n \"\"\"\r\n\r\n if not suggestions:\r\n raise TypeError(\"suggestions list is required\")\r\n\r\n if not card_title:\r\n raise TypeError(\"card_title is required\")\r\n\r\n if not card_no_match:\r\n raise TypeError(\"card_no_match is required\")\r\n\r\n # Add all suggestions\r\n button_list = [\r\n CardAction(value=suggestion, type=\"imBack\", title=suggestion)\r\n for suggestion in suggestions\r\n ]\r\n\r\n # Add No match text\r\n button_list.append(\r\n CardAction(value=card_no_match, type=\"imBack\", title=card_no_match)\r\n )\r\n\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n\r\n return Activity(\r\n type=ActivityTypes.message, text=card_title, attachments=[attachment]\r\n )\r\n\r\n @staticmethod\r\n def get_qna_prompts_card(result: QueryResult, card_no_match_text: str) -> Activity:\r\n \"\"\"\r\n Get active learning suggestions card.\r\n \"\"\"\r\n\r\n if not result:\r\n raise TypeError(\"result is required\")\r\n\r\n if not card_no_match_text:\r\n raise TypeError(\"card_no_match_text is required\")\r\n\r\n # Add all prompts\r\n button_list = [\r\n CardAction(\r\n value=prompt.display_text, type=\"imBack\", title=prompt.display_text,\r\n )\r\n for prompt in result.context.prompts\r\n ]\r\n\r\n # Add No match text\r\n button_list.append(\r\n CardAction(\r\n value=card_no_match_text, type=\"imBack\", title=card_no_match_text,\r\n )\r\n )\r\n\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n\r\n return Activity(\r\n type=ActivityTypes.message, text=result.answer, attachments=[attachment]\r\n )\r\n", "path": "libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py"}]} | 1,250 | 185 |
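As a quick illustration of the entry above, this is roughly what `get_qna_prompts_card` reduces to once the golden diff is applied: follow-up prompts still become buttons, but the "no match" button is no longer appended. It is written as a free function here for brevity; the imports mirror the original module:

```python
from typing import List

from botbuilder.core import CardFactory
from botbuilder.schema import Activity, ActivityTypes, CardAction, HeroCard


def get_qna_prompts_card(result, card_no_match_text: str) -> Activity:
    """Build the follow-up prompts card without the 'no match' button."""
    if not result:
        raise TypeError("result is required")
    # The parameter is still validated in the patched code, just no longer rendered.
    if not card_no_match_text:
        raise TypeError("card_no_match_text is required")

    # Only the QnA follow-up prompts become buttons now.
    button_list: List[CardAction] = [
        CardAction(value=prompt.display_text, type="imBack", title=prompt.display_text)
        for prompt in result.context.prompts
    ]

    attachment = CardFactory.hero_card(HeroCard(buttons=button_list))
    return Activity(
        type=ActivityTypes.message, text=result.answer, attachments=[attachment]
    )
```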
gh_patches_debug_4001 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1218 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`leakfind` should default to `$sp`
The first argument to `leakfind` is required, but it should just default to `$sp` like `probeleak` does.
</issue>
<code>
[start of pwndbg/commands/leakfind.py]
1 """
2 Find a chain of leaks given some starting address.
3 """
4
5 import argparse
6 import queue
7
8 import gdb
9
10 import pwndbg.color.chain as C
11 import pwndbg.color.memory as M
12 import pwndbg.color.message as message
13 import pwndbg.commands
14 import pwndbg.vmmap
15 from pwndbg.chain import config_arrow_right
16
17
18 # Used to recursively print the pointer chain.
19 # addr is a pointer. It is taken to be a child pointer.
20 # visited_map is a map of children -> (parent,parent_start)
21 def get_rec_addr_string(addr, visited_map):
22 page = pwndbg.vmmap.find(addr)
23 arrow_right = C.arrow(" %s " % config_arrow_right)
24
25 if page is not None:
26 if addr not in visited_map:
27 return ""
28
29 parent_info = visited_map[addr]
30 parent = parent_info[0]
31 parent_base_addr = parent_info[1]
32 if parent - parent_base_addr < 0:
33 curText = hex(parent_base_addr) + hex(parent - parent_base_addr)
34 else:
35 curText = hex(parent_base_addr) + "+" + hex(parent - parent_base_addr)
36 if parent_base_addr == addr:
37 return ""
38 return (
39 get_rec_addr_string(parent_base_addr, visited_map)
40 + M.get(parent_base_addr, text=curText)
41 + arrow_right
42 )
43 else:
44 return ""
45
46
47 # Useful for debugging. Prints a map of child -> (parent, parent_start)
48 def dbg_print_map(maps):
49 for child, parent_info in maps.items():
50 print("0x%x + (0x%x, 0x%x)" % (child, parent_info[0], parent_info[1]))
51
52
53 parser = argparse.ArgumentParser()
54 parser.description = """
55 Attempt to find a leak chain given a starting address.
56 Scans memory near the given address, looks for pointers, and continues that process to attempt to find leaks.
57
58 Example: leakfind $rsp --page_name=filename --max_offset=0x48 --max_depth=6. This would look for any chains of leaks \
59 that point to a section in filename which begin near $rsp, are never 0x48 bytes further from a known pointer, \
60 and are a maximum length of 6.
61 """
62 parser.formatter_class = argparse.RawDescriptionHelpFormatter
63 parser.add_argument("address", help="Starting address to find a leak chain from")
64 parser.add_argument(
65 "-p",
66 "--page_name",
67 type=str,
68 nargs="?",
69 default=None,
70 help="Substring required to be part of the name of any found pages",
71 )
72 parser.add_argument(
73 "-o",
74 "--max_offset",
75 default=0x48,
76 nargs="?",
77 help="Max offset to add to addresses when looking for leak",
78 )
79 parser.add_argument(
80 "-d", "--max_depth", default=0x4, nargs="?", help="Maximum depth to follow pointers to"
81 )
82 parser.add_argument(
83 "-s",
84 "--step",
85 nargs="?",
86 default=0x1,
87 help="Step to add between pointers so they are considered. For example, if this is 4 it would only consider pointers at an offset divisible by 4 from the starting pointer",
88 )
89 parser.add_argument(
90 "--negative_offset",
91 nargs="?",
92 default=0x0,
93 help="Max negative offset to search before an address when looking for a leak",
94 )
95
96
97 @pwndbg.commands.ArgparsedCommand(parser)
98 @pwndbg.commands.OnlyWhenRunning
99 def leakfind(
100 address=None, page_name=None, max_offset=0x40, max_depth=0x4, step=0x1, negative_offset=0x0
101 ):
102 if address is None:
103 raise argparse.ArgumentTypeError("No starting address provided.")
104 foundPages = pwndbg.vmmap.find(address)
105
106 if not foundPages:
107 raise argparse.ArgumentTypeError("Starting address is not mapped.")
108
109 if not pwndbg.gdblib.memory.peek(address):
110 raise argparse.ArgumentTypeError("Unable to read from starting address.")
111
112 max_depth = int(max_depth)
113 # Just warn the user that a large depth might be slow.
114 # Probably worth checking offset^depth < threshold. Do this when more benchmarking is established.
115 if max_depth > 8:
116 print(message.warn("leakfind may take a while to run on larger depths."))
117
118 stride = int(step)
119 address = int(address)
120 max_offset = int(max_offset)
121 negative_offset = int(negative_offset)
122
123 # The below map stores a map of child address->(parent_address,parent_start_address)
124 # In the above tuple, parent_address is the exact address with a pointer to the child address.
125 # parent_start_address is an address that a previous address pointed to.
126 # We need to store both so that we can nicely create our leak chain.
127 visited_map = {}
128 visited_set = {int(address)}
129 address_queue = queue.Queue()
130 address_queue.put(int(address))
131 depth = 0
132 time_to_depth_increase = 0
133
134 # Run a bfs
135 # TODO look into performance gain from checking if an address is mapped before calling pwndbg.gdblib.memory.pvoid()
136 # TODO also check using pwndbg.gdblib.memory.read for possible performance boosts.
137 while address_queue.qsize() > 0 and depth < max_depth:
138 if time_to_depth_increase == 0:
139 depth = depth + 1
140 time_to_depth_increase = address_queue.qsize()
141 cur_start_addr = address_queue.get()
142 time_to_depth_increase -= 1
143 for cur_addr in range(
144 cur_start_addr - negative_offset, cur_start_addr + max_offset, stride
145 ):
146 try:
147 cur_addr &= pwndbg.gdblib.arch.ptrmask
148 result = int(pwndbg.gdblib.memory.pvoid(cur_addr))
149 if result in visited_map or result in visited_set:
150 continue
151 visited_map[result] = (
152 cur_addr,
153 cur_start_addr,
154 ) # map is of form child->(parent,parent_start)
155 address_queue.put(result)
156 visited_set.add(result)
157 except gdb.error:
158 # That means the memory was unmapped. Just skip it if we can't read it.
159 break
160
161 # A map of length->list of lines. Used to let us print in a somewhat nice manner.
162 output_map = {}
163 arrow_right = C.arrow(" %s " % config_arrow_right)
164
165 for child in visited_map:
166 child_page = pwndbg.vmmap.find(child)
167 if child_page is not None:
168 if page_name is not None and page_name not in child_page.objfile:
169 continue
170 line = (
171 get_rec_addr_string(child, visited_map)
172 + M.get(child)
173 + " "
174 + M.get(child, text=child_page.objfile)
175 )
176 chain_length = line.count(arrow_right)
177 if chain_length in output_map:
178 output_map[chain_length].append(line)
179 else:
180 output_map[chain_length] = [line]
181
182 # Output sorted by length of chain
183 for chain_length in output_map:
184 for line in output_map[chain_length]:
185 print(line)
186
187 if pwndbg.gdblib.qemu.is_qemu():
188 print("\n[QEMU target detected - leakfind result might not be accurate; see `help vmmap`]")
189
[end of pwndbg/commands/leakfind.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/commands/leakfind.py b/pwndbg/commands/leakfind.py
--- a/pwndbg/commands/leakfind.py
+++ b/pwndbg/commands/leakfind.py
@@ -60,7 +60,9 @@
and are a maximum length of 6.
"""
parser.formatter_class = argparse.RawDescriptionHelpFormatter
-parser.add_argument("address", help="Starting address to find a leak chain from")
+parser.add_argument(
+ "address", nargs="?", default="$sp", help="Starting address to find a leak chain from"
+)
parser.add_argument(
"-p",
"--page_name",
| {"golden_diff": "diff --git a/pwndbg/commands/leakfind.py b/pwndbg/commands/leakfind.py\n--- a/pwndbg/commands/leakfind.py\n+++ b/pwndbg/commands/leakfind.py\n@@ -60,7 +60,9 @@\n and are a maximum length of 6.\n \"\"\"\n parser.formatter_class = argparse.RawDescriptionHelpFormatter\n-parser.add_argument(\"address\", help=\"Starting address to find a leak chain from\")\n+parser.add_argument(\n+ \"address\", nargs=\"?\", default=\"$sp\", help=\"Starting address to find a leak chain from\"\n+)\n parser.add_argument(\n \"-p\",\n \"--page_name\",\n", "issue": "`leakfind` should default to `$sp`\nThe first argument to `leakfind` is required, but it should just default to `$sp` like `probeleak` does.\n", "before_files": [{"content": "\"\"\"\nFind a chain of leaks given some starting address.\n\"\"\"\n\nimport argparse\nimport queue\n\nimport gdb\n\nimport pwndbg.color.chain as C\nimport pwndbg.color.memory as M\nimport pwndbg.color.message as message\nimport pwndbg.commands\nimport pwndbg.vmmap\nfrom pwndbg.chain import config_arrow_right\n\n\n# Used to recursively print the pointer chain.\n# addr is a pointer. It is taken to be a child pointer.\n# visited_map is a map of children -> (parent,parent_start)\ndef get_rec_addr_string(addr, visited_map):\n page = pwndbg.vmmap.find(addr)\n arrow_right = C.arrow(\" %s \" % config_arrow_right)\n\n if page is not None:\n if addr not in visited_map:\n return \"\"\n\n parent_info = visited_map[addr]\n parent = parent_info[0]\n parent_base_addr = parent_info[1]\n if parent - parent_base_addr < 0:\n curText = hex(parent_base_addr) + hex(parent - parent_base_addr)\n else:\n curText = hex(parent_base_addr) + \"+\" + hex(parent - parent_base_addr)\n if parent_base_addr == addr:\n return \"\"\n return (\n get_rec_addr_string(parent_base_addr, visited_map)\n + M.get(parent_base_addr, text=curText)\n + arrow_right\n )\n else:\n return \"\"\n\n\n# Useful for debugging. Prints a map of child -> (parent, parent_start)\ndef dbg_print_map(maps):\n for child, parent_info in maps.items():\n print(\"0x%x + (0x%x, 0x%x)\" % (child, parent_info[0], parent_info[1]))\n\n\nparser = argparse.ArgumentParser()\nparser.description = \"\"\"\nAttempt to find a leak chain given a starting address.\nScans memory near the given address, looks for pointers, and continues that process to attempt to find leaks.\n\nExample: leakfind $rsp --page_name=filename --max_offset=0x48 --max_depth=6. This would look for any chains of leaks \\\nthat point to a section in filename which begin near $rsp, are never 0x48 bytes further from a known pointer, \\\nand are a maximum length of 6.\n\"\"\"\nparser.formatter_class = argparse.RawDescriptionHelpFormatter\nparser.add_argument(\"address\", help=\"Starting address to find a leak chain from\")\nparser.add_argument(\n \"-p\",\n \"--page_name\",\n type=str,\n nargs=\"?\",\n default=None,\n help=\"Substring required to be part of the name of any found pages\",\n)\nparser.add_argument(\n \"-o\",\n \"--max_offset\",\n default=0x48,\n nargs=\"?\",\n help=\"Max offset to add to addresses when looking for leak\",\n)\nparser.add_argument(\n \"-d\", \"--max_depth\", default=0x4, nargs=\"?\", help=\"Maximum depth to follow pointers to\"\n)\nparser.add_argument(\n \"-s\",\n \"--step\",\n nargs=\"?\",\n default=0x1,\n help=\"Step to add between pointers so they are considered. 
For example, if this is 4 it would only consider pointers at an offset divisible by 4 from the starting pointer\",\n)\nparser.add_argument(\n \"--negative_offset\",\n nargs=\"?\",\n default=0x0,\n help=\"Max negative offset to search before an address when looking for a leak\",\n)\n\n\[email protected](parser)\[email protected]\ndef leakfind(\n address=None, page_name=None, max_offset=0x40, max_depth=0x4, step=0x1, negative_offset=0x0\n):\n if address is None:\n raise argparse.ArgumentTypeError(\"No starting address provided.\")\n foundPages = pwndbg.vmmap.find(address)\n\n if not foundPages:\n raise argparse.ArgumentTypeError(\"Starting address is not mapped.\")\n\n if not pwndbg.gdblib.memory.peek(address):\n raise argparse.ArgumentTypeError(\"Unable to read from starting address.\")\n\n max_depth = int(max_depth)\n # Just warn the user that a large depth might be slow.\n # Probably worth checking offset^depth < threshold. Do this when more benchmarking is established.\n if max_depth > 8:\n print(message.warn(\"leakfind may take a while to run on larger depths.\"))\n\n stride = int(step)\n address = int(address)\n max_offset = int(max_offset)\n negative_offset = int(negative_offset)\n\n # The below map stores a map of child address->(parent_address,parent_start_address)\n # In the above tuple, parent_address is the exact address with a pointer to the child address.\n # parent_start_address is an address that a previous address pointed to.\n # We need to store both so that we can nicely create our leak chain.\n visited_map = {}\n visited_set = {int(address)}\n address_queue = queue.Queue()\n address_queue.put(int(address))\n depth = 0\n time_to_depth_increase = 0\n\n # Run a bfs\n # TODO look into performance gain from checking if an address is mapped before calling pwndbg.gdblib.memory.pvoid()\n # TODO also check using pwndbg.gdblib.memory.read for possible performance boosts.\n while address_queue.qsize() > 0 and depth < max_depth:\n if time_to_depth_increase == 0:\n depth = depth + 1\n time_to_depth_increase = address_queue.qsize()\n cur_start_addr = address_queue.get()\n time_to_depth_increase -= 1\n for cur_addr in range(\n cur_start_addr - negative_offset, cur_start_addr + max_offset, stride\n ):\n try:\n cur_addr &= pwndbg.gdblib.arch.ptrmask\n result = int(pwndbg.gdblib.memory.pvoid(cur_addr))\n if result in visited_map or result in visited_set:\n continue\n visited_map[result] = (\n cur_addr,\n cur_start_addr,\n ) # map is of form child->(parent,parent_start)\n address_queue.put(result)\n visited_set.add(result)\n except gdb.error:\n # That means the memory was unmapped. Just skip it if we can't read it.\n break\n\n # A map of length->list of lines. 
Used to let us print in a somewhat nice manner.\n output_map = {}\n arrow_right = C.arrow(\" %s \" % config_arrow_right)\n\n for child in visited_map:\n child_page = pwndbg.vmmap.find(child)\n if child_page is not None:\n if page_name is not None and page_name not in child_page.objfile:\n continue\n line = (\n get_rec_addr_string(child, visited_map)\n + M.get(child)\n + \" \"\n + M.get(child, text=child_page.objfile)\n )\n chain_length = line.count(arrow_right)\n if chain_length in output_map:\n output_map[chain_length].append(line)\n else:\n output_map[chain_length] = [line]\n\n # Output sorted by length of chain\n for chain_length in output_map:\n for line in output_map[chain_length]:\n print(line)\n\n if pwndbg.gdblib.qemu.is_qemu():\n print(\"\\n[QEMU target detected - leakfind result might not be accurate; see `help vmmap`]\")\n", "path": "pwndbg/commands/leakfind.py"}]} | 2,625 | 145 |
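The essence of the patch above is a single argparse feature: `nargs="?"` turns the positional `address` into an optional argument with a default of `"$sp"`. A self-contained demonstration outside of GDB/pwndbg (here `"$sp"` is just a string; only GDB would resolve it to the stack pointer):

```python
import argparse

parser = argparse.ArgumentParser(description="Optional positional argument demo.")
# nargs="?" makes the positional optional; the default is used when it is omitted.
parser.add_argument(
    "address", nargs="?", default="$sp", help="Starting address to find a leak chain from"
)

print(parser.parse_args([]).address)               # -> $sp
print(parser.parse_args(["0xdeadbeef"]).address)   # -> 0xdeadbeef
```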
gh_patches_debug_9875 | rasdani/github-patches | git_diff | mosaicml__composer-592 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Require `split_batch_fn` only for `grad_accum > 1`
For easy out-of-the-box use with custom datatypes, we should only require `split_batch_fn` if `grad_accum > 1`
</issue>
<code>
[start of composer/core/data_spec.py]
1 # Copyright 2021 MosaicML. All Rights Reserved.
2
3 """Specifications for operating and training on data."""
4 from __future__ import annotations
5
6 import collections.abc
7 import textwrap
8 from typing import TYPE_CHECKING, Callable, List, Optional, Sequence
9
10 import torch
11
12 from composer.utils.iter_helpers import ensure_tuple
13
14 if TYPE_CHECKING:
15 from composer.core.types import Batch, DataLoader
16
17 __all__ = ["DataSpec"]
18
19
20 class DataSpec:
21 """Specifications for operating and training on data.
22
23 An example of constructing a :class:`DataSpec` object with a ``device_transforms`` callable
24 (:class:`~composer.datasets.utils.NormalizationFn`) and then using it with :class:`~.Trainer`:
25
26 >>> # In this case, we apply NormalizationFn
27 >>> # Construct DataSpec as shown below to apply this transformation
28 >>> from composer.datasets.utils import NormalizationFn
29 >>> CHANNEL_MEAN = (0.485 * 255, 0.456 * 255, 0.406 * 255)
30 >>> CHANNEL_STD = (0.229 * 255, 0.224 * 255, 0.225 * 255)
31 >>> device_transform_fn = NormalizationFn(mean=CHANNEL_MEAN, std=CHANNEL_STD)
32 >>> train_dspec = DataSpec(train_dataloader, device_transforms=device_transform_fn)
33 >>> # The same function can be used for eval dataloader as well
34 >>> eval_dspec = DataSpec(eval_dataloader, device_transforms=device_transform_fn)
35 >>> # Use this DataSpec object to construct trainer
36 >>> trainer = Trainer(
37 ... model=model,
38 ... train_dataloader=train_dspec,
39 ... eval_dataloader=eval_dspec,
40 ... optimizers=optimizer,
41 ... max_duration="1ep",
42 ... )
43
44 Args:
45 dataloader (DataLoader): The dataloader.
46
47 num_samples (int, optional): The total number of samples in an epoch, across all ranks. This field is used by
48 the :class:`~.time.Timer` (training progress tracker). If not specified, then ``len(dataloader.dataset)`` is
49 used (if this property is available). Otherwise, the dataset is assumed to be unsized.
50
51 num_tokens (int, optional): The total number of tokens in an epoch. This field is used by the
52 :class:`~.time.Timer` (training progress tracker).
53
54 device_transforms ((Batch) -> Batch, optional): Function called by the :class:`~.trainer.Trainer` to modify the
55 batch once it has been moved onto the device. For example, this function can be used for GPU-based
56 normalization. It can modify the batch in-place, and it should return the modified batch. If not specified, the
57 batch is not modified.
58
59 split_batch ((Batch, int) -> Sequence[Batch], optional): Function called by the :class:`~.trainer.Trainer` to
60 split a batch (the first parameter) into the number of microbatches specified (the second parameter). By
61 default, batches of type :attr:`~.types.BatchPair` can be split automatically. If the ``dataloader`` yields
62 batches of a different type, then this function must be specified.
63
64 get_num_samples_in_batch ((Batch) -> int, optional): Function that is called by the :class:`~.trainer.Trainer`
65 to get the number of samples in the provided batch.
66
67 By default, if the batch contains tensors that all have the same 0th dim, then the value of the 0th dim will
68 be returned. If the batch contains tensors where the 0th dim differ, then this function must be specified.
69
70 get_num_tokens_in_batch ((Batch) -> int, optional): Function that is called by the :class:`~.trainer.Trainer` to
71 get the number of tokens in the provided batch.
72
73 By default, it returns 0, meaning that number of tokens processed will not be tracked as a part of the
74 training progress tracking. This function must be specified to track the number of tokens processed during
75 training.
76 """
77
78 def __init__(
79 self,
80 dataloader: DataLoader,
81 num_samples: Optional[int] = None,
82 num_tokens: Optional[int] = None,
83 device_transforms: Optional[Callable[[Batch], Batch]] = None,
84 split_batch: Optional[Callable[[Batch, int], Sequence[Batch]]] = None,
85 get_num_samples_in_batch: Optional[Callable[[Batch], int]] = None,
86 get_num_tokens_in_batch: Optional[Callable[[Batch], int]] = None,
87 ) -> None:
88 self.dataloader = dataloader
89 self.num_tokens = num_tokens
90 self.device_transforms = self._default_device_transforms if device_transforms is None else device_transforms
91 self.split_batch = self._default_split_batch if split_batch is None else split_batch
92 self.get_num_samples_in_batch = self._default_get_num_samples_in_batch if get_num_samples_in_batch is None else get_num_samples_in_batch
93 self.get_num_tokens_in_batch = self._default_get_num_tokens_in_batch if get_num_tokens_in_batch is None else get_num_tokens_in_batch
94 if num_samples is not None:
95 self.num_samples = num_samples
96
97 else:
98 if isinstance(dataloader.dataset, collections.abc.Sized):
99 try:
100 self.num_samples = len(dataloader.dataset)
101 except (TypeError, NotImplementedError):
102 self.num_samples = None
103 else:
104 self.num_samples = None
105
106 def _default_device_transforms(self, batch: Batch):
107 return batch
108
109 def _default_split_batch(self, batch: Batch, num_microbatches: int) -> Sequence[Batch]:
110 if not isinstance(batch, Sequence):
111 raise ValueError(f'split_fn requires batch be a tuple pair of tensors, got {type(batch)}')
112 x, y = batch
113 if isinstance(x, torch.Tensor) and isinstance(y, torch.Tensor):
114 return list(zip(x.chunk(num_microbatches), y.chunk(num_microbatches)))
115 if isinstance(x, List) and isinstance(y, List):
116 return list(
117 zip(
118 [x[i::num_microbatches] for i in range(num_microbatches)],
119 [y[i::num_microbatches] for i in range(num_microbatches)],
120 ))
121 raise NotImplementedError(
122 textwrap.dedent("""\
123 The default split_fn is unable to split the output of this
124 dataloader. Please use a DataSpec and specify `split_batch`."""))
125
126 def _default_get_num_samples_in_batch(self, batch: Batch) -> int:
127 if isinstance(batch, torch.Tensor):
128 return batch.shape[0]
129
130 dim0_sizes = []
131 if isinstance(batch, (list, tuple)):
132 for tensors in batch:
133 for t in ensure_tuple(tensors):
134 dim0_sizes.append(t.shape[0])
135 elif isinstance(batch, dict):
136 dim0_sizes = [t.shape[0] for t in batch.values()]
137
138 if len(set(dim0_sizes)) == 1:
139 return dim0_sizes[0]
140 else:
141 raise NotImplementedError(
142 textwrap.dedent(f"""\
143 Cannot determine the batch size, as multiple Tensors of
144 different lengths were found in the batch: sizes in batch: {dim0_sizes}.
145 Please use a DataSpec and specify `get_num_samples_in_batch`."""))
146
147 def _default_get_num_tokens_in_batch(self, batch: Batch) -> int:
148 del batch # unused
149 return 0
150
[end of composer/core/data_spec.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/composer/core/data_spec.py b/composer/core/data_spec.py
--- a/composer/core/data_spec.py
+++ b/composer/core/data_spec.py
@@ -107,6 +107,10 @@
return batch
def _default_split_batch(self, batch: Batch, num_microbatches: int) -> Sequence[Batch]:
+ if num_microbatches < 1:
+ raise ValueError("num_microbatches must be at least 1")
+ if num_microbatches == 1:
+ return [batch]
if not isinstance(batch, Sequence):
raise ValueError(f'split_fn requires batch be a tuple pair of tensors, got {type(batch)}')
x, y = batch
| {"golden_diff": "diff --git a/composer/core/data_spec.py b/composer/core/data_spec.py\n--- a/composer/core/data_spec.py\n+++ b/composer/core/data_spec.py\n@@ -107,6 +107,10 @@\n return batch\n \n def _default_split_batch(self, batch: Batch, num_microbatches: int) -> Sequence[Batch]:\n+ if num_microbatches < 1:\n+ raise ValueError(\"num_microbatches must be at least 1\")\n+ if num_microbatches == 1:\n+ return [batch]\n if not isinstance(batch, Sequence):\n raise ValueError(f'split_fn requires batch be a tuple pair of tensors, got {type(batch)}')\n x, y = batch\n", "issue": "Require `split_batch_fn` only for `grad_accum > 1`\nFor easy out-of-the-box use with custom datatypes, we should only require `split_batch_fn` if `grad_accum > 1`\n", "before_files": [{"content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\n\"\"\"Specifications for operating and training on data.\"\"\"\nfrom __future__ import annotations\n\nimport collections.abc\nimport textwrap\nfrom typing import TYPE_CHECKING, Callable, List, Optional, Sequence\n\nimport torch\n\nfrom composer.utils.iter_helpers import ensure_tuple\n\nif TYPE_CHECKING:\n from composer.core.types import Batch, DataLoader\n\n__all__ = [\"DataSpec\"]\n\n\nclass DataSpec:\n \"\"\"Specifications for operating and training on data.\n\n An example of constructing a :class:`DataSpec` object with a ``device_transforms`` callable\n (:class:`~composer.datasets.utils.NormalizationFn`) and then using it with :class:`~.Trainer`:\n\n >>> # In this case, we apply NormalizationFn \n >>> # Construct DataSpec as shown below to apply this transformation\n >>> from composer.datasets.utils import NormalizationFn\n >>> CHANNEL_MEAN = (0.485 * 255, 0.456 * 255, 0.406 * 255)\n >>> CHANNEL_STD = (0.229 * 255, 0.224 * 255, 0.225 * 255)\n >>> device_transform_fn = NormalizationFn(mean=CHANNEL_MEAN, std=CHANNEL_STD)\n >>> train_dspec = DataSpec(train_dataloader, device_transforms=device_transform_fn)\n >>> # The same function can be used for eval dataloader as well\n >>> eval_dspec = DataSpec(eval_dataloader, device_transforms=device_transform_fn)\n >>> # Use this DataSpec object to construct trainer\n >>> trainer = Trainer(\n ... model=model,\n ... train_dataloader=train_dspec,\n ... eval_dataloader=eval_dspec,\n ... optimizers=optimizer,\n ... max_duration=\"1ep\",\n ... )\n\n Args:\n dataloader (DataLoader): The dataloader.\n\n num_samples (int, optional): The total number of samples in an epoch, across all ranks. This field is used by\n the :class:`~.time.Timer` (training progress tracker). If not specified, then ``len(dataloader.dataset)`` is\n used (if this property is available). Otherwise, the dataset is assumed to be unsized.\n\n num_tokens (int, optional): The total number of tokens in an epoch. This field is used by the\n :class:`~.time.Timer` (training progress tracker).\n\n device_transforms ((Batch) -> Batch, optional): Function called by the :class:`~.trainer.Trainer` to modify the\n batch once it has been moved onto the device. For example, this function can be used for GPU-based\n normalization. It can modify the batch in-place, and it should return the modified batch. If not specified, the\n batch is not modified.\n\n split_batch ((Batch, int) -> Sequence[Batch], optional): Function called by the :class:`~.trainer.Trainer` to\n split a batch (the first parameter) into the number of microbatches specified (the second parameter). By\n default, batches of type :attr:`~.types.BatchPair` can be split automatically. 
If the ``dataloader`` yields\n batches of a different type, then this function must be specified.\n\n get_num_samples_in_batch ((Batch) -> int, optional): Function that is called by the :class:`~.trainer.Trainer`\n to get the number of samples in the provided batch.\n\n By default, if the batch contains tensors that all have the same 0th dim, then the value of the 0th dim will\n be returned. If the batch contains tensors where the 0th dim differ, then this function must be specified.\n\n get_num_tokens_in_batch ((Batch) -> int, optional): Function that is called by the :class:`~.trainer.Trainer` to\n get the number of tokens in the provided batch.\n\n By default, it returns 0, meaning that number of tokens processed will not be tracked as a part of the\n training progress tracking. This function must be specified to track the number of tokens processed during\n training.\n \"\"\"\n\n def __init__(\n self,\n dataloader: DataLoader,\n num_samples: Optional[int] = None,\n num_tokens: Optional[int] = None,\n device_transforms: Optional[Callable[[Batch], Batch]] = None,\n split_batch: Optional[Callable[[Batch, int], Sequence[Batch]]] = None,\n get_num_samples_in_batch: Optional[Callable[[Batch], int]] = None,\n get_num_tokens_in_batch: Optional[Callable[[Batch], int]] = None,\n ) -> None:\n self.dataloader = dataloader\n self.num_tokens = num_tokens\n self.device_transforms = self._default_device_transforms if device_transforms is None else device_transforms\n self.split_batch = self._default_split_batch if split_batch is None else split_batch\n self.get_num_samples_in_batch = self._default_get_num_samples_in_batch if get_num_samples_in_batch is None else get_num_samples_in_batch\n self.get_num_tokens_in_batch = self._default_get_num_tokens_in_batch if get_num_tokens_in_batch is None else get_num_tokens_in_batch\n if num_samples is not None:\n self.num_samples = num_samples\n\n else:\n if isinstance(dataloader.dataset, collections.abc.Sized):\n try:\n self.num_samples = len(dataloader.dataset)\n except (TypeError, NotImplementedError):\n self.num_samples = None\n else:\n self.num_samples = None\n\n def _default_device_transforms(self, batch: Batch):\n return batch\n\n def _default_split_batch(self, batch: Batch, num_microbatches: int) -> Sequence[Batch]:\n if not isinstance(batch, Sequence):\n raise ValueError(f'split_fn requires batch be a tuple pair of tensors, got {type(batch)}')\n x, y = batch\n if isinstance(x, torch.Tensor) and isinstance(y, torch.Tensor):\n return list(zip(x.chunk(num_microbatches), y.chunk(num_microbatches)))\n if isinstance(x, List) and isinstance(y, List):\n return list(\n zip(\n [x[i::num_microbatches] for i in range(num_microbatches)],\n [y[i::num_microbatches] for i in range(num_microbatches)],\n ))\n raise NotImplementedError(\n textwrap.dedent(\"\"\"\\\n The default split_fn is unable to split the output of this\n dataloader. 
Please use a DataSpec and specify `split_batch`.\"\"\"))\n\n def _default_get_num_samples_in_batch(self, batch: Batch) -> int:\n if isinstance(batch, torch.Tensor):\n return batch.shape[0]\n\n dim0_sizes = []\n if isinstance(batch, (list, tuple)):\n for tensors in batch:\n for t in ensure_tuple(tensors):\n dim0_sizes.append(t.shape[0])\n elif isinstance(batch, dict):\n dim0_sizes = [t.shape[0] for t in batch.values()]\n\n if len(set(dim0_sizes)) == 1:\n return dim0_sizes[0]\n else:\n raise NotImplementedError(\n textwrap.dedent(f\"\"\"\\\n Cannot determine the batch size, as multiple Tensors of\n different lengths were found in the batch: sizes in batch: {dim0_sizes}.\n Please use a DataSpec and specify `get_num_samples_in_batch`.\"\"\"))\n\n def _default_get_num_tokens_in_batch(self, batch: Batch) -> int:\n del batch # unused\n return 0\n", "path": "composer/core/data_spec.py"}]} | 2,578 | 160 |
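A reduced, free-function version of the `_default_split_batch` change above. The point of the two new guards is that with `num_microbatches == 1` (no gradient accumulation) any batch type passes through untouched, so no custom `split_batch_fn` is needed; the function name and simplified error messages are illustrative, not Composer's exact code:

```python
from collections.abc import Sequence
from typing import Any, List

import torch


def default_split_batch(batch: Any, num_microbatches: int) -> List[Any]:
    """Split an (x, y) batch into microbatches, passing through when no split is needed."""
    if num_microbatches < 1:
        raise ValueError("num_microbatches must be at least 1")
    if num_microbatches == 1:
        # No gradient accumulation: return the batch as-is, whatever its type.
        return [batch]
    if not isinstance(batch, Sequence):
        raise ValueError(f"the default split requires an (x, y) pair, got {type(batch)}")
    x, y = batch
    if isinstance(x, torch.Tensor) and isinstance(y, torch.Tensor):
        return list(zip(x.chunk(num_microbatches), y.chunk(num_microbatches)))
    raise NotImplementedError("unsupported batch type; supply a custom split function")
```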
gh_patches_debug_11514 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-2915 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The TinyMCE pattern loads HTML as CSS and slows edit forms
## The TinyMCE pattern loads HTML as CSS and slows edit forms
### What I did:
Open an `@@edit` form with the browser's network inspector open.
### What I expect to happen:
It should load quickly and with the correct resources.
### What actually happened:
For every field using the TinyMCE pattern, there was a network request for the portal root rendered as HTML but loaded as CSS. This takes a lot of time and greatly slows the page load time.
### What version of Plone/ Addons I am using:
Products.CMFPlone==5.1.5
plone.app.theming==2.0.5
</issue>
<code>
[start of Products/CMFPlone/patterns/tinymce.py]
1 # -*- coding: utf-8 -*-
2 from lxml import html
3 from plone.app.layout.navigation.root import getNavigationRootObject
4 from plone.app.theming.utils import theming_policy
5 from plone.registry.interfaces import IRegistry
6 from Products.CMFCore.utils import getToolByName
7 from Products.CMFPlone.interfaces import IFilterSchema
8 from Products.CMFPlone.interfaces import ITinyMCESchema
9 from Products.CMFPlone.utils import get_portal
10 from zope.component import getUtility
11
12 import json
13
14
15 class TinyMCESettingsGenerator(object):
16
17 def __init__(self, context, request):
18 self.context = context
19 self.request = request
20 self.settings = getUtility(IRegistry).forInterface(
21 ITinyMCESchema,
22 prefix="plone",
23 check=False
24 )
25 self.filter_settings = getUtility(IRegistry).forInterface(
26 IFilterSchema,
27 prefix="plone",
28 check=False
29 )
30 self.nav_root = getNavigationRootObject(
31 self.context,
32 get_portal(),
33 )
34 self.nav_root_url = self.nav_root.absolute_url()
35
36 def get_theme(self):
37 return theming_policy().get_theme()
38
39 def get_content_css(self, style_css=''):
40 files = [
41 '{0}/++plone++static/plone-compiled.css'.format(self.nav_root_url)
42 ]
43 if style_css:
44 files.extend(style_css.split(','))
45 content_css = self.settings.content_css or []
46 for url in content_css:
47 if url and url.strip():
48 files.append('/'.join([self.nav_root_url, url.strip()]))
49 theme = self.get_theme()
50 tinymce_content_css = getattr(theme, 'tinymce_content_css', None)
51 if tinymce_content_css is not None:
52 for path in theme.tinymce_content_css.split(','):
53 if path.startswith('http://') or path.startswith('https://'):
54 files.append(path)
55 else:
56 files.append(self.nav_root_url + path)
57
58 return ','.join(files)
59
60 def get_style_format(self, txt, _type='format', base=None):
61 parts = txt.strip().split('|')
62 if len(parts) < 2:
63 return
64 if base is None:
65 val = {}
66 else:
67 val = base.copy()
68 val.update({
69 'title': parts[0],
70 _type: parts[1]
71 })
72 if len(parts) > 2:
73 val['icon'] = parts[2]
74 return val
75
76 def get_styles(self, styles, _type='format', base=None):
77 result = []
78 for style in styles:
79 style = self.get_style_format(style, _type, base)
80 if not style:
81 continue
82 result.append(style)
83 return result
84
85 def get_all_style_formats(self):
86 header_styles = self.settings.header_styles or []
87 block_styles = self.settings.block_styles or []
88 inline_styles = self.settings.inline_styles or []
89 alignment_styles = self.settings.alignment_styles or []
90 table_styles = self.settings.table_styles or []
91 return [{
92 'title': 'Headers',
93 'items': self.get_styles(header_styles)
94 }, {
95 'title': 'Block',
96 'items': self.get_styles(block_styles)
97 }, {
98 'title': 'Inline',
99 'items': self.get_styles(inline_styles)
100 }, {
101 'title': 'Alignment',
102 'items': self.get_styles(alignment_styles)
103 }, {
104 'title': 'Tables',
105 'items': self.get_styles(
106 table_styles, 'classes', {'selector': 'table'})
107 }]
108
109 def get_tiny_config(self):
110 settings = self.settings
111 importcss_file_filter = '%s/++plone++static/tinymce-styles.css' % (
112 self.nav_root_url
113 )
114
115 theme = self.get_theme()
116 if theme and getattr(theme, 'tinymce_styles_css', None):
117 importcss_file_filter += ',%s/%s' % (
118 self.nav_root_url,
119 theme.tinymce_styles_css.lstrip('/'))
120
121 tiny_config = {
122 'resize': 'both' if settings.resizing else False,
123 'content_css': self.get_content_css(importcss_file_filter),
124 'plugins': [
125 'plonelink',
126 'ploneimage',
127 'importcss'
128 ] + settings.plugins,
129 'external_plugins': {},
130 'toolbar': settings.toolbar,
131 'entity_encoding': settings.entity_encoding,
132 'importcss_append': True,
133 'importcss_file_filter': importcss_file_filter,
134 'browser_spellcheck': True
135 }
136 toolbar_additions = settings.custom_buttons or []
137
138 if settings.editor_height:
139 tiny_config['height'] = settings.editor_height
140 if settings.autoresize:
141 tiny_config['plugins'].append('autoresize')
142 tiny_config['autoresize_max_height'] = 1000 # hard coded?
143 if settings.editor_width:
144 tiny_config['width'] = settings.editor_width
145
146 # specific plugin options
147 if 'contextmenu' in settings.plugins:
148 tiny_config['contextmenu'] = "plonelink ploneimage inserttable |"\
149 " cell row column deletetable"
150
151 if settings.libraries_spellchecker_choice == 'AtD':
152 mtool = getToolByName(self.context, 'portal_membership')
153 member = mtool.getAuthenticatedMember()
154 member_id = member.getId()
155 if member_id:
156 if 'compat3x' not in tiny_config['plugins']:
157 tiny_config['plugins'].append('compat3x')
158 tiny_config['external_plugins']['AtD'] = (
159 '{0}/++plone++static/tinymce-AtD-plugin/'
160 'editor_plugin.js'.format(self.nav_root_url)
161 )
162 # None when Anonymous User
163 tiny_config['atd_rpc_id'] = 'plone-' + member_id
164 tiny_config['atd_rpc_url'] = self.nav_root_url
165 tiny_config['atd_show_types'] = ','.join(
166 settings.libraries_atd_show_types
167 )
168 tiny_config['atd_ignore_strings'] = ','.join(
169 settings.libraries_atd_ignore_strings
170 )
171 toolbar_additions.append('AtD')
172 elif settings.libraries_spellchecker_choice == 'AtD':
173 tiny_config['browser_spellcheck'] = True
174
175 if toolbar_additions:
176 tiny_config['toolbar'] += ' | {0}'.format(
177 ' '.join(toolbar_additions)
178 )
179
180 for plugin in settings.custom_plugins or []:
181 parts = plugin.split('|')
182 if len(parts) != 2:
183 continue
184 tiny_config['external_plugins'][parts[0]] = parts[1]
185
186 tiny_config['style_formats'] = self.get_all_style_formats()
187 if settings.formats:
188 try:
189 tiny_config['formats'] = json.loads(settings.formats)
190 except ValueError:
191 pass
192
193 if settings.menubar:
194 tiny_config['menubar'] = settings.menubar
195 if settings.menu:
196 try:
197 tiny_config['menu'] = json.loads(settings.menu)
198 except ValueError:
199 pass
200
201 if hasattr(settings, 'templates') and settings.templates:
202 try:
203 tiny_config['templates'] = json.loads(settings.templates)
204 except ValueError:
205 pass
206
207 # add safe_html settings, which are useed in backend for filtering:
208 if not self.filter_settings.disable_filtering:
209 valid_tags = self.filter_settings.valid_tags
210 nasty_tags = self.filter_settings.nasty_tags
211 custom_attributes = self.filter_settings.custom_attributes
212 safe_attributes = [attr.decode() for attr in html.defs.safe_attrs]
213 valid_attributes = safe_attributes + custom_attributes
214 # valid_elements : 'a[href|target=_blank],strong/b,div[align],br'
215 tiny_valid_elements = []
216 for tag in valid_tags:
217 tag_str = "%s[%s]" % (tag, "|".join(valid_attributes))
218 tiny_valid_elements.append(tag_str)
219 # We want to remove the nasty tag including the content in the
220 # backend, so TinyMCE should allow them here.
221 for tag in nasty_tags:
222 tag_str = "%s[%s]" % (tag, "|".join(valid_attributes))
223 tiny_valid_elements.append(tag_str)
224 tiny_config['valid_elements'] = ",".join(tiny_valid_elements)
225
226 if settings.other_settings:
227 try:
228 tiny_config.update(json.loads(settings.other_settings))
229 except ValueError:
230 pass
231
232 return tiny_config
233
[end of Products/CMFPlone/patterns/tinymce.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/patterns/tinymce.py b/Products/CMFPlone/patterns/tinymce.py
--- a/Products/CMFPlone/patterns/tinymce.py
+++ b/Products/CMFPlone/patterns/tinymce.py
@@ -48,7 +48,7 @@
files.append('/'.join([self.nav_root_url, url.strip()]))
theme = self.get_theme()
tinymce_content_css = getattr(theme, 'tinymce_content_css', None)
- if tinymce_content_css is not None:
+ if tinymce_content_css:
for path in theme.tinymce_content_css.split(','):
if path.startswith('http://') or path.startswith('https://'):
files.append(path)
| {"golden_diff": "diff --git a/Products/CMFPlone/patterns/tinymce.py b/Products/CMFPlone/patterns/tinymce.py\n--- a/Products/CMFPlone/patterns/tinymce.py\n+++ b/Products/CMFPlone/patterns/tinymce.py\n@@ -48,7 +48,7 @@\n files.append('/'.join([self.nav_root_url, url.strip()]))\n theme = self.get_theme()\n tinymce_content_css = getattr(theme, 'tinymce_content_css', None)\n- if tinymce_content_css is not None:\n+ if tinymce_content_css:\n for path in theme.tinymce_content_css.split(','):\n if path.startswith('http://') or path.startswith('https://'):\n files.append(path)\n", "issue": "The TinyMCE pattern loads HTML as CSS and slows edit forms\n## The TinyMCE pattern loads HTML as CSS and slows edit forms\r\n\r\n### What I did:\r\n\r\nOpen an `@@edit` form with the browser's network inspector open.\r\n\r\n### What I expect to happen:\r\n\r\nIt should load quickly and with the correct resources.\r\n\r\n### What actually happened:\r\n\r\nFor every field using the TinyMCE pattern, there was a network request for the portal root rendered at HTML but loaded as CSS. This takes a lot of time and greatly slows the page load time.\r\n\r\n### What version of Plone/ Addons I am using:\r\n\r\nProducts.CMFPlone==5.1.5\r\nplone.app.theming==2.0.5\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom lxml import html\nfrom plone.app.layout.navigation.root import getNavigationRootObject\nfrom plone.app.theming.utils import theming_policy\nfrom plone.registry.interfaces import IRegistry\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone.interfaces import IFilterSchema\nfrom Products.CMFPlone.interfaces import ITinyMCESchema\nfrom Products.CMFPlone.utils import get_portal\nfrom zope.component import getUtility\n\nimport json\n\n\nclass TinyMCESettingsGenerator(object):\n\n def __init__(self, context, request):\n self.context = context\n self.request = request\n self.settings = getUtility(IRegistry).forInterface(\n ITinyMCESchema,\n prefix=\"plone\",\n check=False\n )\n self.filter_settings = getUtility(IRegistry).forInterface(\n IFilterSchema,\n prefix=\"plone\",\n check=False\n )\n self.nav_root = getNavigationRootObject(\n self.context,\n get_portal(),\n )\n self.nav_root_url = self.nav_root.absolute_url()\n\n def get_theme(self):\n return theming_policy().get_theme()\n\n def get_content_css(self, style_css=''):\n files = [\n '{0}/++plone++static/plone-compiled.css'.format(self.nav_root_url)\n ]\n if style_css:\n files.extend(style_css.split(','))\n content_css = self.settings.content_css or []\n for url in content_css:\n if url and url.strip():\n files.append('/'.join([self.nav_root_url, url.strip()]))\n theme = self.get_theme()\n tinymce_content_css = getattr(theme, 'tinymce_content_css', None)\n if tinymce_content_css is not None:\n for path in theme.tinymce_content_css.split(','):\n if path.startswith('http://') or path.startswith('https://'):\n files.append(path)\n else:\n files.append(self.nav_root_url + path)\n\n return ','.join(files)\n\n def get_style_format(self, txt, _type='format', base=None):\n parts = txt.strip().split('|')\n if len(parts) < 2:\n return\n if base is None:\n val = {}\n else:\n val = base.copy()\n val.update({\n 'title': parts[0],\n _type: parts[1]\n })\n if len(parts) > 2:\n val['icon'] = parts[2]\n return val\n\n def get_styles(self, styles, _type='format', base=None):\n result = []\n for style in styles:\n style = self.get_style_format(style, _type, base)\n if not style:\n continue\n result.append(style)\n return result\n\n def 
get_all_style_formats(self):\n header_styles = self.settings.header_styles or []\n block_styles = self.settings.block_styles or []\n inline_styles = self.settings.inline_styles or []\n alignment_styles = self.settings.alignment_styles or []\n table_styles = self.settings.table_styles or []\n return [{\n 'title': 'Headers',\n 'items': self.get_styles(header_styles)\n }, {\n 'title': 'Block',\n 'items': self.get_styles(block_styles)\n }, {\n 'title': 'Inline',\n 'items': self.get_styles(inline_styles)\n }, {\n 'title': 'Alignment',\n 'items': self.get_styles(alignment_styles)\n }, {\n 'title': 'Tables',\n 'items': self.get_styles(\n table_styles, 'classes', {'selector': 'table'})\n }]\n\n def get_tiny_config(self):\n settings = self.settings\n importcss_file_filter = '%s/++plone++static/tinymce-styles.css' % (\n self.nav_root_url\n )\n\n theme = self.get_theme()\n if theme and getattr(theme, 'tinymce_styles_css', None):\n importcss_file_filter += ',%s/%s' % (\n self.nav_root_url,\n theme.tinymce_styles_css.lstrip('/'))\n\n tiny_config = {\n 'resize': 'both' if settings.resizing else False,\n 'content_css': self.get_content_css(importcss_file_filter),\n 'plugins': [\n 'plonelink',\n 'ploneimage',\n 'importcss'\n ] + settings.plugins,\n 'external_plugins': {},\n 'toolbar': settings.toolbar,\n 'entity_encoding': settings.entity_encoding,\n 'importcss_append': True,\n 'importcss_file_filter': importcss_file_filter,\n 'browser_spellcheck': True\n }\n toolbar_additions = settings.custom_buttons or []\n\n if settings.editor_height:\n tiny_config['height'] = settings.editor_height\n if settings.autoresize:\n tiny_config['plugins'].append('autoresize')\n tiny_config['autoresize_max_height'] = 1000 # hard coded?\n if settings.editor_width:\n tiny_config['width'] = settings.editor_width\n\n # specific plugin options\n if 'contextmenu' in settings.plugins:\n tiny_config['contextmenu'] = \"plonelink ploneimage inserttable |\"\\\n \" cell row column deletetable\"\n\n if settings.libraries_spellchecker_choice == 'AtD':\n mtool = getToolByName(self.context, 'portal_membership')\n member = mtool.getAuthenticatedMember()\n member_id = member.getId()\n if member_id:\n if 'compat3x' not in tiny_config['plugins']:\n tiny_config['plugins'].append('compat3x')\n tiny_config['external_plugins']['AtD'] = (\n '{0}/++plone++static/tinymce-AtD-plugin/'\n 'editor_plugin.js'.format(self.nav_root_url)\n )\n # None when Anonymous User\n tiny_config['atd_rpc_id'] = 'plone-' + member_id\n tiny_config['atd_rpc_url'] = self.nav_root_url\n tiny_config['atd_show_types'] = ','.join(\n settings.libraries_atd_show_types\n )\n tiny_config['atd_ignore_strings'] = ','.join(\n settings.libraries_atd_ignore_strings\n )\n toolbar_additions.append('AtD')\n elif settings.libraries_spellchecker_choice == 'AtD':\n tiny_config['browser_spellcheck'] = True\n\n if toolbar_additions:\n tiny_config['toolbar'] += ' | {0}'.format(\n ' '.join(toolbar_additions)\n )\n\n for plugin in settings.custom_plugins or []:\n parts = plugin.split('|')\n if len(parts) != 2:\n continue\n tiny_config['external_plugins'][parts[0]] = parts[1]\n\n tiny_config['style_formats'] = self.get_all_style_formats()\n if settings.formats:\n try:\n tiny_config['formats'] = json.loads(settings.formats)\n except ValueError:\n pass\n\n if settings.menubar:\n tiny_config['menubar'] = settings.menubar\n if settings.menu:\n try:\n tiny_config['menu'] = json.loads(settings.menu)\n except ValueError:\n pass\n\n if hasattr(settings, 'templates') and settings.templates:\n try:\n 
tiny_config['templates'] = json.loads(settings.templates)\n except ValueError:\n pass\n\n # add safe_html settings, which are useed in backend for filtering:\n if not self.filter_settings.disable_filtering:\n valid_tags = self.filter_settings.valid_tags\n nasty_tags = self.filter_settings.nasty_tags\n custom_attributes = self.filter_settings.custom_attributes\n safe_attributes = [attr.decode() for attr in html.defs.safe_attrs]\n valid_attributes = safe_attributes + custom_attributes\n # valid_elements : 'a[href|target=_blank],strong/b,div[align],br'\n tiny_valid_elements = []\n for tag in valid_tags:\n tag_str = \"%s[%s]\" % (tag, \"|\".join(valid_attributes))\n tiny_valid_elements.append(tag_str)\n # We want to remove the nasty tag including the content in the\n # backend, so TinyMCE should allow them here.\n for tag in nasty_tags:\n tag_str = \"%s[%s]\" % (tag, \"|\".join(valid_attributes))\n tiny_valid_elements.append(tag_str)\n tiny_config['valid_elements'] = \",\".join(tiny_valid_elements)\n\n if settings.other_settings:\n try:\n tiny_config.update(json.loads(settings.other_settings))\n except ValueError:\n pass\n\n return tiny_config\n", "path": "Products/CMFPlone/patterns/tinymce.py"}]} | 3,097 | 172 |
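The one-line change in the entry above turns on the difference between `is not None` and plain truthiness: a theme can define `tinymce_content_css` as an empty string, which passes the old check, and `''.split(',')` then yields `['']`, so the bare portal root URL ends up in `content_css` and the site's HTML gets fetched as a stylesheet. A small self-contained illustration (URLs are placeholders):

```python
nav_root_url = "https://example-plone-site"   # placeholder for the navigation root URL
tinymce_content_css = ""                      # theme attribute present, but empty

# Old check: '' is not None, so the loop runs and appends nav_root_url + '' (the portal root)
files = []
if tinymce_content_css is not None:
    for path in tinymce_content_css.split(","):
        files.append(nav_root_url + path)
print(files)   # ['https://example-plone-site']  -> requested as CSS, served as HTML

# New check: '' is falsy, so nothing is appended
files = []
if tinymce_content_css:
    for path in tinymce_content_css.split(","):
        files.append(nav_root_url + path)
print(files)   # []
```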
gh_patches_debug_36331 | rasdani/github-patches | git_diff | streamlink__streamlink-3142 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sportschau plugin fails with "Unable to parse manifest XML" error
## Plugin Issue
- [x ] This is a plugin issue and I have read the contribution guidelines.
### Description
streamlink fails when trying to watch a stream on sportschau.de, e.g. https://www.sportschau.de/tourdefrance/live/videostream-livestream---die--etappe-der-tour-de-france-nach-privas-100.html. It errors out with: "error: Unable to parse manifest XML: syntax error: line 1, column 0 (b'#EXTM3U\n#EXT-X-VERSION:3\n#EXT-X ...)"
### Reproduction steps / Explicit stream URLs to test
1. streamlink "https://www.sportschau.de/tourdefrance/live/videostream-livestream---die--etappe-der-tour-de-france-nach-privas-100.html"
### Log output
```
[14:25:23,464][cli][debug] OS: Linux-5.8.4-x86_64-with-glibc2.2.5
[14:25:23,464][cli][debug] Python: 3.8.5
[14:25:23,464][cli][debug] Streamlink: 1.5.0
[14:25:23,464][cli][debug] Requests(2.24.0), Socks(1.7.1), Websocket(0.57.0)
[14:25:23,465][cli][info] Found matching plugin sportschau for URL https://www.sportschau.de/tourdefrance/live/videostream-livestream---die--etappe-der-tour-de-france-nach-privas-100.html
[14:25:23,734][plugin.sportschau][info] Found player js http://deviceids-medp.wdr.de/ondemand/221/2214170.js
error: Unable to parse manifest XML: syntax error: line 1, column 0 (b'#EXTM3U\n#EXT-X-VERSION:3\n#EXT-X ...)
```
### Additional comments, screenshots, etc.
Not sure that I understand the cause of the error, especially as the problematic part seems truncated. This is what the .m3u file looks like:
```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-STREAM-INF:BANDWIDTH=5388416,AVERAGE-BANDWIDTH=4048000,CODECS="avc1.640020,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_3680.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5388416,AVERAGE-BANDWIDTH=4048000,CODECS="avc1.640020,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_3680.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2758800,AVERAGE-BANDWIDTH=2085600,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=960x540,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_1896.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2758800,AVERAGE-BANDWIDTH=2085600,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=960x540,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_1896.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1614976,AVERAGE-BANDWIDTH=1232000,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_1120.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1614976,AVERAGE-BANDWIDTH=1232000,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_1120.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=860288,AVERAGE-BANDWIDTH=668800,CODECS="avc1.77.30,mp4a.40.2",RESOLUTION=512x288,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_608.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=860288,AVERAGE-BANDWIDTH=668800,CODECS="avc1.77.30,mp4a.40.2",RESOLUTION=512x288,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_608.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=482944,AVERAGE-BANDWIDTH=387200,CODECS="avc1.66.30,mp4a.40.2",RESOLUTION=480x270,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_352.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=482944,AVERAGE-BANDWIDTH=387200,CODECS="avc1.66.30,mp4a.40.2",RESOLUTION=480x270,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_352.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=294272,AVERAGE-BANDWIDTH=246400,CODECS="avc1.42c015,mp4a.40.2",RESOLUTION=320x180,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_224.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=294272,AVERAGE-BANDWIDTH=246400,CODECS="avc1.42c015,mp4a.40.2",RESOLUTION=320x180,FRAME-RATE=50.000
https://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_224.m3u8
```
</issue>
<code>
[start of src/streamlink/plugins/sportschau.py]
1 import re
2 import json
3
4 from streamlink.plugin import Plugin
5 from streamlink.stream import HDSStream
6 from streamlink.utils import update_scheme
7
8 _url_re = re.compile(r"http(s)?://(\w+\.)?sportschau.de/")
9 _player_js = re.compile(r"https?://deviceids-medp.wdr.de/ondemand/.*\.js")
10
11
12 class sportschau(Plugin):
13 @classmethod
14 def can_handle_url(cls, url):
15 return _url_re.match(url)
16
17 def _get_streams(self):
18 res = self.session.http.get(self.url)
19 match = _player_js.search(res.text)
20 if match:
21 player_js = match.group(0)
22 self.logger.info("Found player js {0}", player_js)
23 else:
24 self.logger.info("Didn't find player js. Probably this page doesn't contain a video")
25 return
26
27 res = self.session.http.get(player_js)
28
29 jsonp_start = res.text.find('(') + 1
30 jsonp_end = res.text.rfind(')')
31
32 if jsonp_start <= 0 or jsonp_end <= 0:
33 self.logger.info("Couldn't extract json metadata from player.js: {0}", player_js)
34 return
35
36 json_s = res.text[jsonp_start:jsonp_end]
37
38 stream_metadata = json.loads(json_s)
39
40 hds_url = stream_metadata['mediaResource']['dflt']['videoURL']
41 hds_url = update_scheme(self.url, hds_url)
42
43 return HDSStream.parse_manifest(self.session, hds_url).items()
44
45
46 __plugin__ = sportschau
47
[end of src/streamlink/plugins/sportschau.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/sportschau.py b/src/streamlink/plugins/sportschau.py
--- a/src/streamlink/plugins/sportschau.py
+++ b/src/streamlink/plugins/sportschau.py
@@ -1,46 +1,52 @@
-import re
-import json
-
-from streamlink.plugin import Plugin
-from streamlink.stream import HDSStream
-from streamlink.utils import update_scheme
-
-_url_re = re.compile(r"http(s)?://(\w+\.)?sportschau.de/")
-_player_js = re.compile(r"https?://deviceids-medp.wdr.de/ondemand/.*\.js")
-
-
-class sportschau(Plugin):
- @classmethod
- def can_handle_url(cls, url):
- return _url_re.match(url)
-
- def _get_streams(self):
- res = self.session.http.get(self.url)
- match = _player_js.search(res.text)
- if match:
- player_js = match.group(0)
- self.logger.info("Found player js {0}", player_js)
- else:
- self.logger.info("Didn't find player js. Probably this page doesn't contain a video")
- return
-
- res = self.session.http.get(player_js)
-
- jsonp_start = res.text.find('(') + 1
- jsonp_end = res.text.rfind(')')
-
- if jsonp_start <= 0 or jsonp_end <= 0:
- self.logger.info("Couldn't extract json metadata from player.js: {0}", player_js)
- return
-
- json_s = res.text[jsonp_start:jsonp_end]
-
- stream_metadata = json.loads(json_s)
-
- hds_url = stream_metadata['mediaResource']['dflt']['videoURL']
- hds_url = update_scheme(self.url, hds_url)
-
- return HDSStream.parse_manifest(self.session, hds_url).items()
-
-
-__plugin__ = sportschau
+import logging
+import re
+
+from streamlink.plugin import Plugin
+from streamlink.plugin.api import validate
+from streamlink.stream import HLSStream
+from streamlink.utils import parse_json, update_scheme
+
+log = logging.getLogger(__name__)
+
+
+class Sportschau(Plugin):
+ _re_url = re.compile(r"https?://(?:\w+\.)*sportschau.de/")
+
+ _re_player = re.compile(r"https?:(//deviceids-medp.wdr.de/ondemand/\S+\.js)")
+ _re_json = re.compile(r"\$mediaObject.jsonpHelper.storeAndPlay\(({.+})\);?")
+
+ _schema_player = validate.Schema(
+ validate.transform(_re_player.search),
+ validate.any(None, validate.Schema(
+ validate.get(1),
+ validate.transform(lambda url: update_scheme("https:", url))
+ ))
+ )
+ _schema_json = validate.Schema(
+ validate.transform(_re_json.match),
+ validate.get(1),
+ validate.transform(parse_json),
+ validate.get("mediaResource"),
+ validate.get("dflt"),
+ validate.get("videoURL"),
+ validate.transform(lambda url: update_scheme("https:", url))
+ )
+
+ @classmethod
+ def can_handle_url(cls, url):
+ return cls._re_url.match(url) is not None
+
+ def _get_streams(self):
+ player_js = self.session.http.get(self.url, schema=self._schema_player)
+ if not player_js:
+ return
+
+ log.debug("Found player js {0}".format(player_js))
+
+ hls_url = self.session.http.get(player_js, schema=self._schema_json)
+
+ for stream in HLSStream.parse_variant_playlist(self.session, hls_url).items():
+ yield stream
+
+
+__plugin__ = Sportschau
| {"golden_diff": "diff --git a/src/streamlink/plugins/sportschau.py b/src/streamlink/plugins/sportschau.py\n--- a/src/streamlink/plugins/sportschau.py\n+++ b/src/streamlink/plugins/sportschau.py\n@@ -1,46 +1,52 @@\n-import re\r\n-import json\r\n-\r\n-from streamlink.plugin import Plugin\r\n-from streamlink.stream import HDSStream\r\n-from streamlink.utils import update_scheme\r\n-\r\n-_url_re = re.compile(r\"http(s)?://(\\w+\\.)?sportschau.de/\")\r\n-_player_js = re.compile(r\"https?://deviceids-medp.wdr.de/ondemand/.*\\.js\")\r\n-\r\n-\r\n-class sportschau(Plugin):\r\n- @classmethod\r\n- def can_handle_url(cls, url):\r\n- return _url_re.match(url)\r\n-\r\n- def _get_streams(self):\r\n- res = self.session.http.get(self.url)\r\n- match = _player_js.search(res.text)\r\n- if match:\r\n- player_js = match.group(0)\r\n- self.logger.info(\"Found player js {0}\", player_js)\r\n- else:\r\n- self.logger.info(\"Didn't find player js. Probably this page doesn't contain a video\")\r\n- return\r\n-\r\n- res = self.session.http.get(player_js)\r\n-\r\n- jsonp_start = res.text.find('(') + 1\r\n- jsonp_end = res.text.rfind(')')\r\n-\r\n- if jsonp_start <= 0 or jsonp_end <= 0:\r\n- self.logger.info(\"Couldn't extract json metadata from player.js: {0}\", player_js)\r\n- return\r\n-\r\n- json_s = res.text[jsonp_start:jsonp_end]\r\n-\r\n- stream_metadata = json.loads(json_s)\r\n-\r\n- hds_url = stream_metadata['mediaResource']['dflt']['videoURL']\r\n- hds_url = update_scheme(self.url, hds_url)\r\n-\r\n- return HDSStream.parse_manifest(self.session, hds_url).items()\r\n-\r\n-\r\n-__plugin__ = sportschau\r\n+import logging\n+import re\n+\n+from streamlink.plugin import Plugin\n+from streamlink.plugin.api import validate\n+from streamlink.stream import HLSStream\n+from streamlink.utils import parse_json, update_scheme\n+\n+log = logging.getLogger(__name__)\n+\n+\n+class Sportschau(Plugin):\n+ _re_url = re.compile(r\"https?://(?:\\w+\\.)*sportschau.de/\")\n+\n+ _re_player = re.compile(r\"https?:(//deviceids-medp.wdr.de/ondemand/\\S+\\.js)\")\n+ _re_json = re.compile(r\"\\$mediaObject.jsonpHelper.storeAndPlay\\(({.+})\\);?\")\n+\n+ _schema_player = validate.Schema(\n+ validate.transform(_re_player.search),\n+ validate.any(None, validate.Schema(\n+ validate.get(1),\n+ validate.transform(lambda url: update_scheme(\"https:\", url))\n+ ))\n+ )\n+ _schema_json = validate.Schema(\n+ validate.transform(_re_json.match),\n+ validate.get(1),\n+ validate.transform(parse_json),\n+ validate.get(\"mediaResource\"),\n+ validate.get(\"dflt\"),\n+ validate.get(\"videoURL\"),\n+ validate.transform(lambda url: update_scheme(\"https:\", url))\n+ )\n+\n+ @classmethod\n+ def can_handle_url(cls, url):\n+ return cls._re_url.match(url) is not None\n+\n+ def _get_streams(self):\n+ player_js = self.session.http.get(self.url, schema=self._schema_player)\n+ if not player_js:\n+ return\n+\n+ log.debug(\"Found player js {0}\".format(player_js))\n+\n+ hls_url = self.session.http.get(player_js, schema=self._schema_json)\n+\n+ for stream in HLSStream.parse_variant_playlist(self.session, hls_url).items():\n+ yield stream\n+\n+\n+__plugin__ = Sportschau\n", "issue": "sportschau plugin fails with \"Unable to parse manifest XML\" error\n\r\n## Plugin Issue\r\n\r\n\r\n- [x ] This is a plugin issue and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\n\r\nstreamlink errors out when trying to watch a stream on sportschau.de, e.g. 
https://www.sportschau.de/tourdefrance/live/videostream-livestream---die--etappe-der-tour-de-france-nach-privas-100.html. It errors out with: \"error: Unable to parse manifest XML: syntax error: line 1, column 0 (b'#EXTM3U\\n#EXT-X-VERSION:3\\n#EXT-X ...)\"\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\n1. streamlink \"https://www.sportschau.de/tourdefrance/live/videostream-livestream---die--etappe-der-tour-de-france-nach-privas-100.html\"\r\n\r\n\r\n### Log output\r\n\r\n```\r\n[14:25:23,464][cli][debug] OS: Linux-5.8.4-x86_64-with-glibc2.2.5\r\n[14:25:23,464][cli][debug] Python: 3.8.5\r\n[14:25:23,464][cli][debug] Streamlink: 1.5.0\r\n[14:25:23,464][cli][debug] Requests(2.24.0), Socks(1.7.1), Websocket(0.57.0)\r\n[14:25:23,465][cli][info] Found matching plugin sportschau for URL https://www.sportschau.de/tourdefrance/live/videostream-livestream---die--etappe-der-tour-de-france-nach-privas-100.html\r\n[14:25:23,734][plugin.sportschau][info] Found player js http://deviceids-medp.wdr.de/ondemand/221/2214170.js\r\nerror: Unable to parse manifest XML: syntax error: line 1, column 0 (b'#EXTM3U\\n#EXT-X-VERSION:3\\n#EXT-X ...)\r\n```\r\n\r\n\r\n### Additional comments, screenshots, etc.\r\n\r\nNot sure that I understand the cause of the error, especially as the problematic part seems truncated. This is what the .m3u file looks like:\r\n\r\n```\r\n#EXTM3U\r\n#EXT-X-VERSION:3\r\n#EXT-X-INDEPENDENT-SEGMENTS\r\n#EXT-X-STREAM-INF:BANDWIDTH=5388416,AVERAGE-BANDWIDTH=4048000,CODECS=\"avc1.640020,mp4a.40.2\",RESOLUTION=1280x720,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_3680.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=5388416,AVERAGE-BANDWIDTH=4048000,CODECS=\"avc1.640020,mp4a.40.2\",RESOLUTION=1280x720,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_3680.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=2758800,AVERAGE-BANDWIDTH=2085600,CODECS=\"avc1.4d401f,mp4a.40.2\",RESOLUTION=960x540,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_1896.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=2758800,AVERAGE-BANDWIDTH=2085600,CODECS=\"avc1.4d401f,mp4a.40.2\",RESOLUTION=960x540,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_1896.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=1614976,AVERAGE-BANDWIDTH=1232000,CODECS=\"avc1.4d401f,mp4a.40.2\",RESOLUTION=640x360,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_1120.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=1614976,AVERAGE-BANDWIDTH=1232000,CODECS=\"avc1.4d401f,mp4a.40.2\",RESOLUTION=640x360,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_1120.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=860288,AVERAGE-BANDWIDTH=668800,CODECS=\"avc1.77.30,mp4a.40.2\",RESOLUTION=512x288,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_608.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=860288,AVERAGE-BANDWIDTH=668800,CODECS=\"avc1.77.30,mp4a.40.2\",RESOLUTION=512x288,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_608.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=482944,AVERAGE-BANDWIDTH=387200,CODECS=\"avc1.66.30,mp4a.40.2\",RESOLUTION=480x270,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_352.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=482944,AVERAGE-BANDWIDTH=387200,CODECS=\"avc1.66.30,mp4a.40.2\",RESOLUTION=480x270,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaiz
ed.net/hls/live/681512-b/ardevent2_geo/master_352.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=294272,AVERAGE-BANDWIDTH=246400,CODECS=\"avc1.42c015,mp4a.40.2\",RESOLUTION=320x180,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512/ardevent2_geo/master_224.m3u8\r\n#EXT-X-STREAM-INF:BANDWIDTH=294272,AVERAGE-BANDWIDTH=246400,CODECS=\"avc1.42c015,mp4a.40.2\",RESOLUTION=320x180,FRAME-RATE=50.000\r\nhttps://ardevent2.akamaized.net/hls/live/681512-b/ardevent2_geo/master_224.m3u8\r\n```\n", "before_files": [{"content": "import re\r\nimport json\r\n\r\nfrom streamlink.plugin import Plugin\r\nfrom streamlink.stream import HDSStream\r\nfrom streamlink.utils import update_scheme\r\n\r\n_url_re = re.compile(r\"http(s)?://(\\w+\\.)?sportschau.de/\")\r\n_player_js = re.compile(r\"https?://deviceids-medp.wdr.de/ondemand/.*\\.js\")\r\n\r\n\r\nclass sportschau(Plugin):\r\n @classmethod\r\n def can_handle_url(cls, url):\r\n return _url_re.match(url)\r\n\r\n def _get_streams(self):\r\n res = self.session.http.get(self.url)\r\n match = _player_js.search(res.text)\r\n if match:\r\n player_js = match.group(0)\r\n self.logger.info(\"Found player js {0}\", player_js)\r\n else:\r\n self.logger.info(\"Didn't find player js. Probably this page doesn't contain a video\")\r\n return\r\n\r\n res = self.session.http.get(player_js)\r\n\r\n jsonp_start = res.text.find('(') + 1\r\n jsonp_end = res.text.rfind(')')\r\n\r\n if jsonp_start <= 0 or jsonp_end <= 0:\r\n self.logger.info(\"Couldn't extract json metadata from player.js: {0}\", player_js)\r\n return\r\n\r\n json_s = res.text[jsonp_start:jsonp_end]\r\n\r\n stream_metadata = json.loads(json_s)\r\n\r\n hds_url = stream_metadata['mediaResource']['dflt']['videoURL']\r\n hds_url = update_scheme(self.url, hds_url)\r\n\r\n return HDSStream.parse_manifest(self.session, hds_url).items()\r\n\r\n\r\n__plugin__ = sportschau\r\n", "path": "src/streamlink/plugins/sportschau.py"}]} | 2,884 | 848 |
gh_patches_debug_38316 | rasdani/github-patches | git_diff | streamlink__streamlink-5946 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.turkuvaz: no data on minikacocuk.com.tr
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
6.7.2
### Description
### Debug log
```text
Not working Python 3!!!! "Minikacocuk" channel, please help.
```
</issue>
<code>
[start of src/streamlink/plugins/turkuvaz.py]
1 """
2 $description Turkish live TV channels from Turkuvaz Media Group, including Ahaber, ATV, Minika COCUK and MinikaGO.
3 $url a2tv.com.tr
4 $url ahaber.com.tr
5 $url anews.com.tr
6 $url apara.com.tr
7 $url aspor.com.tr
8 $url atv.com.tr
9 $url atvavrupa.tv
10 $url minikacocuk.com.tr
11 $url minikago.com.tr
12 $url vavtv.com.tr
13 $type live, vod
14 $metadata id
15 $metadata title
16 $region various
17 """
18
19 import logging
20 import re
21
22 from streamlink.plugin import Plugin, pluginmatcher
23 from streamlink.plugin.api import validate
24 from streamlink.stream.hls import HLSStream
25
26
27 log = logging.getLogger(__name__)
28
29
30 @pluginmatcher(re.compile(r"""
31 https?://(?:www\.)?
32 (?:
33 atvavrupa\.tv
34 |
35 (?:a2tv|ahaber|anews|apara|aspor|atv|minikacocuk|minikago|vavtv)\.com\.tr
36 )
37 """, re.VERBOSE))
38 class Turkuvaz(Plugin):
39 def _get_streams(self):
40 _find_and_get_attrs = validate.Schema(
41 validate.xml_find(".//div[@data-videoid][@data-websiteid]"),
42 validate.union_get("data-videoid", "data-websiteid"),
43 )
44
45 id_data = self.session.http.get(
46 self.url,
47 schema=validate.Schema(
48 validate.parse_html(),
49 validate.any(
50 _find_and_get_attrs,
51 validate.all(
52 validate.xml_xpath_string(
53 ".//script[contains(text(),'data-videoid') and contains(text(),'data-websiteid')]/text()",
54 ),
55 validate.none_or_all(
56 str,
57 validate.regex(re.compile(r"""var\s+tmdPlayer\s*=\s*(?P<q>["'])(.*?)(?P=q)""")),
58 validate.get(0),
59 validate.parse_html(),
60 _find_and_get_attrs,
61 ),
62 ),
63 ),
64 ),
65 )
66
67 if not id_data:
68 return
69
70 video_id, website_id = id_data
71 log.debug(f"video_id={video_id}")
72 log.debug(f"website_id={website_id}")
73
74 self.id, self.title, hls_url = self.session.http.get(
75 f"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}",
76 schema=validate.Schema(
77 validate.parse_json(),
78 {
79 "success": True,
80 "video": {
81 "VideoId": str,
82 "Title": str,
83 "VideoSmilUrl": validate.url(),
84 },
85 },
86 validate.get("video"),
87 validate.union_get("VideoId", "Title", "VideoSmilUrl"),
88 ),
89 )
90 log.debug(f"hls_url={hls_url}")
91
92 secure_hls_url = self.session.http.get(
93 "https://securevideotoken.tmgrup.com.tr/webtv/secure",
94 params=f"url={hls_url}",
95 headers={"Referer": self.url},
96 schema=validate.Schema(
97 validate.parse_json(),
98 {
99 "Success": True,
100 "Url": validate.url(),
101 },
102 validate.get("Url"),
103 ),
104 )
105 log.debug(f"secure_hls_url={secure_hls_url}")
106
107 if secure_hls_url:
108 return HLSStream.parse_variant_playlist(self.session, secure_hls_url)
109
110
111 __plugin__ = Turkuvaz
112
[end of src/streamlink/plugins/turkuvaz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/turkuvaz.py b/src/streamlink/plugins/turkuvaz.py
--- a/src/streamlink/plugins/turkuvaz.py
+++ b/src/streamlink/plugins/turkuvaz.py
@@ -36,8 +36,19 @@
)
""", re.VERBOSE))
class Turkuvaz(Plugin):
+ _VIDEOID_LIVE = "00000000-0000-0000-0000-000000000000"
+
+ # hardcoded in https://i.tmgrup.com.tr/videojs/js/tmdplayersetup.js?v=651
+ # (via https://www.minikacocuk.com.tr/webtv/canli-yayin)
+ _MAPPING_WEBSITEID_HLSURL = {
+ "9BBE055A-4CF6-4BC3-A675-D40E89B55B91": "https://trkvz.daioncdn.net/aspor/aspor.m3u8?ce=3&app=45f847c4-04e8-419a-a561-2ebf87084765",
+ "0C1BC8FF-C3B1-45BE-A95B-F7BB9C8B03ED": "https://trkvz.daioncdn.net/a2tv/a2tv.m3u8?ce=3&app=59363a60-be96-4f73-9eff-355d0ff2c758",
+ "AAE2E325-4EAE-45B7-B017-26FD7DDB6CE4": "https://trkvz.daioncdn.net/minikago/minikago.m3u8?app=web&ce=3",
+ "01ED59F2-4067-4945-8204-45F6C6DB4045": "https://trkvz.daioncdn.net/minikago_cocuk/minikago_cocuk.m3u8?app=web&ce=3",
+ }
+
def _get_streams(self):
- _find_and_get_attrs = validate.Schema(
+ _find_and_get_attrs = validate.all(
validate.xml_find(".//div[@data-videoid][@data-websiteid]"),
validate.union_get("data-videoid", "data-websiteid"),
)
@@ -68,8 +79,8 @@
return
video_id, website_id = id_data
- log.debug(f"video_id={video_id}")
- log.debug(f"website_id={website_id}")
+ log.debug(f"{video_id=}")
+ log.debug(f"{website_id=}")
self.id, self.title, hls_url = self.session.http.get(
f"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}",
@@ -87,11 +98,14 @@
validate.union_get("VideoId", "Title", "VideoSmilUrl"),
),
)
- log.debug(f"hls_url={hls_url}")
+
+ if video_id == self._VIDEOID_LIVE:
+ hls_url = self._MAPPING_WEBSITEID_HLSURL.get(website_id.upper(), hls_url)
+ log.debug(f"{hls_url=}")
secure_hls_url = self.session.http.get(
"https://securevideotoken.tmgrup.com.tr/webtv/secure",
- params=f"url={hls_url}",
+ params={"url": hls_url},
headers={"Referer": self.url},
schema=validate.Schema(
validate.parse_json(),
@@ -102,7 +116,7 @@
validate.get("Url"),
),
)
- log.debug(f"secure_hls_url={secure_hls_url}")
+ log.debug(f"{secure_hls_url=}")
if secure_hls_url:
return HLSStream.parse_variant_playlist(self.session, secure_hls_url)
| {"golden_diff": "diff --git a/src/streamlink/plugins/turkuvaz.py b/src/streamlink/plugins/turkuvaz.py\n--- a/src/streamlink/plugins/turkuvaz.py\n+++ b/src/streamlink/plugins/turkuvaz.py\n@@ -36,8 +36,19 @@\n )\n \"\"\", re.VERBOSE))\n class Turkuvaz(Plugin):\n+ _VIDEOID_LIVE = \"00000000-0000-0000-0000-000000000000\"\n+\n+ # hardcoded in https://i.tmgrup.com.tr/videojs/js/tmdplayersetup.js?v=651\n+ # (via https://www.minikacocuk.com.tr/webtv/canli-yayin)\n+ _MAPPING_WEBSITEID_HLSURL = {\n+ \"9BBE055A-4CF6-4BC3-A675-D40E89B55B91\": \"https://trkvz.daioncdn.net/aspor/aspor.m3u8?ce=3&app=45f847c4-04e8-419a-a561-2ebf87084765\",\n+ \"0C1BC8FF-C3B1-45BE-A95B-F7BB9C8B03ED\": \"https://trkvz.daioncdn.net/a2tv/a2tv.m3u8?ce=3&app=59363a60-be96-4f73-9eff-355d0ff2c758\",\n+ \"AAE2E325-4EAE-45B7-B017-26FD7DDB6CE4\": \"https://trkvz.daioncdn.net/minikago/minikago.m3u8?app=web&ce=3\",\n+ \"01ED59F2-4067-4945-8204-45F6C6DB4045\": \"https://trkvz.daioncdn.net/minikago_cocuk/minikago_cocuk.m3u8?app=web&ce=3\",\n+ }\n+\n def _get_streams(self):\n- _find_and_get_attrs = validate.Schema(\n+ _find_and_get_attrs = validate.all(\n validate.xml_find(\".//div[@data-videoid][@data-websiteid]\"),\n validate.union_get(\"data-videoid\", \"data-websiteid\"),\n )\n@@ -68,8 +79,8 @@\n return\n \n video_id, website_id = id_data\n- log.debug(f\"video_id={video_id}\")\n- log.debug(f\"website_id={website_id}\")\n+ log.debug(f\"{video_id=}\")\n+ log.debug(f\"{website_id=}\")\n \n self.id, self.title, hls_url = self.session.http.get(\n f\"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}\",\n@@ -87,11 +98,14 @@\n validate.union_get(\"VideoId\", \"Title\", \"VideoSmilUrl\"),\n ),\n )\n- log.debug(f\"hls_url={hls_url}\")\n+\n+ if video_id == self._VIDEOID_LIVE:\n+ hls_url = self._MAPPING_WEBSITEID_HLSURL.get(website_id.upper(), hls_url)\n+ log.debug(f\"{hls_url=}\")\n \n secure_hls_url = self.session.http.get(\n \"https://securevideotoken.tmgrup.com.tr/webtv/secure\",\n- params=f\"url={hls_url}\",\n+ params={\"url\": hls_url},\n headers={\"Referer\": self.url},\n schema=validate.Schema(\n validate.parse_json(),\n@@ -102,7 +116,7 @@\n validate.get(\"Url\"),\n ),\n )\n- log.debug(f\"secure_hls_url={secure_hls_url}\")\n+ log.debug(f\"{secure_hls_url=}\")\n \n if secure_hls_url:\n return HLSStream.parse_variant_playlist(self.session, secure_hls_url)\n", "issue": "plugins.turkuvaz: no data on minikacocuk.com.tr\n### Checklist\r\n\r\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\n6.7.2\r\n\r\n### Description\r\n\r\n### Debug log\r\n\r\n```text\r\nNot working Python 3!!!! 
\"Minikacocuk\" channel ,please help.\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description Turkish live TV channels from Turkuvaz Media Group, including Ahaber, ATV, Minika COCUK and MinikaGO.\n$url a2tv.com.tr\n$url ahaber.com.tr\n$url anews.com.tr\n$url apara.com.tr\n$url aspor.com.tr\n$url atv.com.tr\n$url atvavrupa.tv\n$url minikacocuk.com.tr\n$url minikago.com.tr\n$url vavtv.com.tr\n$type live, vod\n$metadata id\n$metadata title\n$region various\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?\n (?:\n atvavrupa\\.tv\n |\n (?:a2tv|ahaber|anews|apara|aspor|atv|minikacocuk|minikago|vavtv)\\.com\\.tr\n )\n\"\"\", re.VERBOSE))\nclass Turkuvaz(Plugin):\n def _get_streams(self):\n _find_and_get_attrs = validate.Schema(\n validate.xml_find(\".//div[@data-videoid][@data-websiteid]\"),\n validate.union_get(\"data-videoid\", \"data-websiteid\"),\n )\n\n id_data = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.any(\n _find_and_get_attrs,\n validate.all(\n validate.xml_xpath_string(\n \".//script[contains(text(),'data-videoid') and contains(text(),'data-websiteid')]/text()\",\n ),\n validate.none_or_all(\n str,\n validate.regex(re.compile(r\"\"\"var\\s+tmdPlayer\\s*=\\s*(?P<q>[\"'])(.*?)(?P=q)\"\"\")),\n validate.get(0),\n validate.parse_html(),\n _find_and_get_attrs,\n ),\n ),\n ),\n ),\n )\n\n if not id_data:\n return\n\n video_id, website_id = id_data\n log.debug(f\"video_id={video_id}\")\n log.debug(f\"website_id={website_id}\")\n\n self.id, self.title, hls_url = self.session.http.get(\n f\"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}\",\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"success\": True,\n \"video\": {\n \"VideoId\": str,\n \"Title\": str,\n \"VideoSmilUrl\": validate.url(),\n },\n },\n validate.get(\"video\"),\n validate.union_get(\"VideoId\", \"Title\", \"VideoSmilUrl\"),\n ),\n )\n log.debug(f\"hls_url={hls_url}\")\n\n secure_hls_url = self.session.http.get(\n \"https://securevideotoken.tmgrup.com.tr/webtv/secure\",\n params=f\"url={hls_url}\",\n headers={\"Referer\": self.url},\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"Success\": True,\n \"Url\": validate.url(),\n },\n validate.get(\"Url\"),\n ),\n )\n log.debug(f\"secure_hls_url={secure_hls_url}\")\n\n if secure_hls_url:\n return HLSStream.parse_variant_playlist(self.session, secure_hls_url)\n\n\n__plugin__ = Turkuvaz\n", "path": "src/streamlink/plugins/turkuvaz.py"}]} | 1,755 | 935 |
gh_patches_debug_20060 | rasdani/github-patches | git_diff | fail2ban__fail2ban-1144 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fail2ban git fails to install from PKGBUILD on Arch Linux /var/run/ exists
Hi there.
I wrote (re-wrote / modified / plagiarized...) PKGBUILD a while back:
http://pastebin.com/raw.php?i=5E4cpjNq
It used to work fine and sweet... but now:
[andrzejl@andrzejl fail2ban-git]$ makepkg -s -i ./
==> WARNING: Cannot find the sudo binary. Will use su to acquire root privileges.
==> Making package: fail2ban-git 0.9.2.r132.gc37009a-1 (Thu Jul 30 18:25:25 IST 2015)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Updating fail2ban-git git repo...
Fetching origin
==> Validating source files with sha512sums...
fail2ban-git ... Skipped
==> Extracting sources...
-> Creating working copy of fail2ban git repo...
Switched to a new branch 'makepkg'
==> Starting pkgver()...
==> WARNING: A package has already been built, installing existing package...
==> Installing package fail2ban-git with pacman -U...
Password:
loading packages...
resolving dependencies...
looking for conflicting packages...
Packages (1) fail2ban-git-0.9.2.r132.gc37009a-1
Total Installed Size: 1.87 MiB
Net Upgrade Size: 0.03 MiB
:: Proceed with installation? [Y/n](1/1) checking keys in keyring [##########################################] 100%
(1/1) checking package integrity [##########################################] 100%
(1/1) loading package files [##########################################] 100%
(1/1) checking for file conflicts [##########################################] 100%
error: failed to commit transaction (conflicting files)
fail2ban-git: /var/run exists in filesystem
Errors occurred, no packages were upgraded.
==> WARNING: Failed to install built package(s).
[andrzejl@andrzejl fail2ban-git]$
The problem is that:
[root@andrzejl andrzejl]# ls --full /var/ | grep run
lrwxrwxrwx 1 root root 11 2015-02-15 21:58:46.000000000 +0000 lock -> ../run/lock
lrwxrwxrwx 1 root root 6 2015-02-15 21:58:46.000000000 +0000 run -> ../run
[root@andrzejl andrzejl]#
/var/run is a symlink pointing to /run.
Anyone knows how to bite this thing?
Cheers.
Andrzej
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/python
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 noet :
4
5 # This file is part of Fail2Ban.
6 #
7 # Fail2Ban is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # Fail2Ban is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with Fail2Ban; if not, write to the Free Software
19 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
20
21 __author__ = "Cyril Jaquier, Steven Hiscocks, Yaroslav Halchenko"
22 __copyright__ = "Copyright (c) 2004 Cyril Jaquier, 2008-2013 Fail2Ban Contributors"
23 __license__ = "GPL"
24
25 try:
26 import setuptools
27 from setuptools import setup
28 except ImportError:
29 setuptools = None
30 from distutils.core import setup
31
32 try:
33 # python 3.x
34 from distutils.command.build_py import build_py_2to3 as build_py
35 from distutils.command.build_scripts \
36 import build_scripts_2to3 as build_scripts
37 except ImportError:
38 # python 2.x
39 from distutils.command.build_py import build_py
40 from distutils.command.build_scripts import build_scripts
41 import os
42 from os.path import isfile, join, isdir
43 import sys
44 import warnings
45 from glob import glob
46
47 if setuptools and "test" in sys.argv:
48 import logging
49 logSys = logging.getLogger("fail2ban")
50 hdlr = logging.StreamHandler(sys.stdout)
51 fmt = logging.Formatter("%(asctime)-15s %(message)s")
52 hdlr.setFormatter(fmt)
53 logSys.addHandler(hdlr)
54 if set(["-q", "--quiet"]) & set(sys.argv):
55 logSys.setLevel(logging.CRITICAL)
56 warnings.simplefilter("ignore")
57 sys.warnoptions.append("ignore")
58 elif set(["-v", "--verbose"]) & set(sys.argv):
59 logSys.setLevel(logging.DEBUG)
60 else:
61 logSys.setLevel(logging.INFO)
62 elif "test" in sys.argv:
63 print("python distribute required to execute fail2ban tests")
64 print("")
65
66 longdesc = '''
67 Fail2Ban scans log files like /var/log/pwdfail or
68 /var/log/apache/error_log and bans IP that makes
69 too many password failures. It updates firewall rules
70 to reject the IP address or executes user defined
71 commands.'''
72
73 if setuptools:
74 setup_extra = {
75 'test_suite': "fail2ban.tests.utils.gatherTests",
76 'use_2to3': True,
77 }
78 else:
79 setup_extra = {}
80
81 data_files_extra = []
82 if os.path.exists('/var/run'):
83 # if we are on the system with /var/run -- we are to use it for having fail2ban/
84 # directory there for socket file etc
85 data_files_extra += [('/var/run/fail2ban', '')]
86
87 # Get version number, avoiding importing fail2ban.
88 # This is due to tests not functioning for python3 as 2to3 takes place later
89 exec(open(join("fail2ban", "version.py")).read())
90
91 setup(
92 name = "fail2ban",
93 version = version,
94 description = "Ban IPs that make too many password failures",
95 long_description = longdesc,
96 author = "Cyril Jaquier & Fail2Ban Contributors",
97 author_email = "[email protected]",
98 url = "http://www.fail2ban.org",
99 license = "GPL",
100 platforms = "Posix",
101 cmdclass = {'build_py': build_py, 'build_scripts': build_scripts},
102 scripts = [
103 'bin/fail2ban-client',
104 'bin/fail2ban-server',
105 'bin/fail2ban-regex',
106 'bin/fail2ban-testcases',
107 ],
108 packages = [
109 'fail2ban',
110 'fail2ban.client',
111 'fail2ban.server',
112 'fail2ban.tests',
113 'fail2ban.tests.action_d',
114 ],
115 package_data = {
116 'fail2ban.tests':
117 [ join(w[0], f).replace("fail2ban/tests/", "", 1)
118 for w in os.walk('fail2ban/tests/files')
119 for f in w[2]] +
120 [ join(w[0], f).replace("fail2ban/tests/", "", 1)
121 for w in os.walk('fail2ban/tests/config')
122 for f in w[2]] +
123 [ join(w[0], f).replace("fail2ban/tests/", "", 1)
124 for w in os.walk('fail2ban/tests/action_d')
125 for f in w[2]]
126 },
127 data_files = [
128 ('/etc/fail2ban',
129 glob("config/*.conf")
130 ),
131 ('/etc/fail2ban/filter.d',
132 glob("config/filter.d/*.conf")
133 ),
134 ('/etc/fail2ban/filter.d/ignorecommands',
135 glob("config/filter.d/ignorecommands/*")
136 ),
137 ('/etc/fail2ban/action.d',
138 glob("config/action.d/*.conf") +
139 glob("config/action.d/*.py")
140 ),
141 ('/etc/fail2ban/fail2ban.d',
142 ''
143 ),
144 ('/etc/fail2ban/jail.d',
145 ''
146 ),
147 ('/var/lib/fail2ban',
148 ''
149 ),
150 ('/usr/share/doc/fail2ban',
151 ['README.md', 'README.Solaris', 'DEVELOP', 'FILTERS',
152 'doc/run-rootless.txt']
153 )
154 ] + data_files_extra,
155 **setup_extra
156 )
157
158 # Do some checks after installation
159 # Search for obsolete files.
160 obsoleteFiles = []
161 elements = {
162 "/etc/":
163 [
164 "fail2ban.conf"
165 ],
166 "/usr/bin/":
167 [
168 "fail2ban.py"
169 ],
170 "/usr/lib/fail2ban/":
171 [
172 "version.py",
173 "protocol.py"
174 ]
175 }
176
177 for directory in elements:
178 for f in elements[directory]:
179 path = join(directory, f)
180 if isfile(path):
181 obsoleteFiles.append(path)
182
183 if obsoleteFiles:
184 print("")
185 print("Obsolete files from previous Fail2Ban versions were found on "
186 "your system.")
187 print("Please delete them:")
188 print("")
189 for f in obsoleteFiles:
190 print("\t" + f)
191 print("")
192
193 if isdir("/usr/lib/fail2ban"):
194 print("")
195 print("Fail2ban is not installed under /usr/lib anymore. The new "
196 "location is under /usr/share. Please remove the directory "
197 "/usr/lib/fail2ban and everything under this directory.")
198 print("")
199
200 # Update config file
201 if sys.argv[1] == "install":
202 print("")
203 print("Please do not forget to update your configuration files.")
204 print("They are in /etc/fail2ban/.")
205 print("")
206
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -39,7 +39,7 @@
from distutils.command.build_py import build_py
from distutils.command.build_scripts import build_scripts
import os
-from os.path import isfile, join, isdir
+from os.path import isfile, join, isdir, realpath
import sys
import warnings
from glob import glob
@@ -81,8 +81,9 @@
data_files_extra = []
if os.path.exists('/var/run'):
# if we are on the system with /var/run -- we are to use it for having fail2ban/
- # directory there for socket file etc
- data_files_extra += [('/var/run/fail2ban', '')]
+ # directory there for socket file etc.
+ # realpath is used to possibly resolve /var/run -> /run symlink
+ data_files_extra += [(realpath('/var/run/fail2ban'), '')]
# Get version number, avoiding importing fail2ban.
# This is due to tests not functioning for python3 as 2to3 takes place later
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -39,7 +39,7 @@\n \tfrom distutils.command.build_py import build_py\n \tfrom distutils.command.build_scripts import build_scripts\n import os\n-from os.path import isfile, join, isdir\n+from os.path import isfile, join, isdir, realpath\n import sys\n import warnings\n from glob import glob\n@@ -81,8 +81,9 @@\n data_files_extra = []\n if os.path.exists('/var/run'):\n \t# if we are on the system with /var/run -- we are to use it for having fail2ban/\n-\t# directory there for socket file etc\n-\tdata_files_extra += [('/var/run/fail2ban', '')]\n+\t# directory there for socket file etc.\n+\t# realpath is used to possibly resolve /var/run -> /run symlink\n+\tdata_files_extra += [(realpath('/var/run/fail2ban'), '')]\n \n # Get version number, avoiding importing fail2ban.\n # This is due to tests not functioning for python3 as 2to3 takes place later\n", "issue": "Fail2ban git fails to install from PKGBUILD on Arch Linux /var/run/ exists\nHi there.\n\nI wrote (re-wrote / modified / plagiarized...) PKGBUILD a while back:\n\nhttp://pastebin.com/raw.php?i=5E4cpjNq\n\nIt used to work fine and sweet... but now:\n\n[andrzejl@andrzejl fail2ban-git]$ makepkg -s -i ./\n==> WARNING: Cannot find the sudo binary. Will use su to acquire root privileges.\n==> Making package: fail2ban-git 0.9.2.r132.gc37009a-1 (Thu Jul 30 18:25:25 IST 2015)\n==> Checking runtime dependencies...\n==> Checking buildtime dependencies...\n==> Retrieving sources...\n -> Updating fail2ban-git git repo...\nFetching origin\n==> Validating source files with sha512sums...\n fail2ban-git ... Skipped\n==> Extracting sources...\n -> Creating working copy of fail2ban git repo...\nSwitched to a new branch 'makepkg'\n==> Starting pkgver()...\n==> WARNING: A package has already been built, installing existing package...\n==> Installing package fail2ban-git with pacman -U...\nPassword:\nloading packages...\nresolving dependencies...\nlooking for conflicting packages...\n\nPackages (1) fail2ban-git-0.9.2.r132.gc37009a-1\n\nTotal Installed Size: 1.87 MiB\nNet Upgrade Size: 0.03 MiB\n\n:: Proceed with installation? 
[Y/n](1/1) checking keys in keyring [##########################################] 100%\n(1/1) checking package integrity [##########################################] 100%\n(1/1) loading package files [##########################################] 100%\n(1/1) checking for file conflicts [##########################################] 100%\nerror: failed to commit transaction (conflicting files)\nfail2ban-git: /var/run exists in filesystem\nErrors occurred, no packages were upgraded.\n==> WARNING: Failed to install built package(s).\n[andrzejl@andrzejl fail2ban-git]$ \n\nThe problem is that:\n\n[root@andrzejl andrzejl]# ls --full /var/ | grep run\nlrwxrwxrwx 1 root root 11 2015-02-15 21:58:46.000000000 +0000 lock -> ../run/lock\nlrwxrwxrwx 1 root root 6 2015-02-15 21:58:46.000000000 +0000 run -> ../run\n[root@andrzejl andrzejl]# \n\n/var/run is a symlink pointing to /run.\n\nAnyone knows how to bite this thing?\n\nCheers.\n\nAndrzej\n\n", "before_files": [{"content": "#!/usr/bin/python\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-\n# vi: set ft=python sts=4 ts=4 sw=4 noet :\n\n# This file is part of Fail2Ban.\n#\n# Fail2Ban is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# Fail2Ban is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Fail2Ban; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n__author__ = \"Cyril Jaquier, Steven Hiscocks, Yaroslav Halchenko\"\n__copyright__ = \"Copyright (c) 2004 Cyril Jaquier, 2008-2013 Fail2Ban Contributors\"\n__license__ = \"GPL\"\n\ntry:\n\timport setuptools\n\tfrom setuptools import setup\nexcept ImportError:\n\tsetuptools = None\n\tfrom distutils.core import setup\n\ntry:\n\t# python 3.x\n\tfrom distutils.command.build_py import build_py_2to3 as build_py\n\tfrom distutils.command.build_scripts \\\n\t\timport build_scripts_2to3 as build_scripts\nexcept ImportError:\n\t# python 2.x\n\tfrom distutils.command.build_py import build_py\n\tfrom distutils.command.build_scripts import build_scripts\nimport os\nfrom os.path import isfile, join, isdir\nimport sys\nimport warnings\nfrom glob import glob\n\nif setuptools and \"test\" in sys.argv:\n\timport logging\n\tlogSys = logging.getLogger(\"fail2ban\")\n\thdlr = logging.StreamHandler(sys.stdout)\n\tfmt = logging.Formatter(\"%(asctime)-15s %(message)s\")\n\thdlr.setFormatter(fmt)\n\tlogSys.addHandler(hdlr)\n\tif set([\"-q\", \"--quiet\"]) & set(sys.argv):\n\t\tlogSys.setLevel(logging.CRITICAL)\n\t\twarnings.simplefilter(\"ignore\")\n\t\tsys.warnoptions.append(\"ignore\")\n\telif set([\"-v\", \"--verbose\"]) & set(sys.argv):\n\t\tlogSys.setLevel(logging.DEBUG)\n\telse:\n\t\tlogSys.setLevel(logging.INFO)\nelif \"test\" in sys.argv:\n\tprint(\"python distribute required to execute fail2ban tests\")\n\tprint(\"\")\n\nlongdesc = '''\nFail2Ban scans log files like /var/log/pwdfail or\n/var/log/apache/error_log and bans IP that makes\ntoo many password failures. 
It updates firewall rules\nto reject the IP address or executes user defined\ncommands.'''\n\nif setuptools:\n\tsetup_extra = {\n\t\t'test_suite': \"fail2ban.tests.utils.gatherTests\",\n\t\t'use_2to3': True,\n\t}\nelse:\n\tsetup_extra = {}\n\ndata_files_extra = []\nif os.path.exists('/var/run'):\n\t# if we are on the system with /var/run -- we are to use it for having fail2ban/\n\t# directory there for socket file etc\n\tdata_files_extra += [('/var/run/fail2ban', '')]\n\n# Get version number, avoiding importing fail2ban.\n# This is due to tests not functioning for python3 as 2to3 takes place later\nexec(open(join(\"fail2ban\", \"version.py\")).read())\n\nsetup(\n\tname = \"fail2ban\",\n\tversion = version,\n\tdescription = \"Ban IPs that make too many password failures\",\n\tlong_description = longdesc,\n\tauthor = \"Cyril Jaquier & Fail2Ban Contributors\",\n\tauthor_email = \"[email protected]\",\n\turl = \"http://www.fail2ban.org\",\n\tlicense = \"GPL\",\n\tplatforms = \"Posix\",\n\tcmdclass = {'build_py': build_py, 'build_scripts': build_scripts},\n\tscripts = [\n\t\t'bin/fail2ban-client',\n\t\t'bin/fail2ban-server',\n\t\t'bin/fail2ban-regex',\n\t\t'bin/fail2ban-testcases',\n\t],\n\tpackages = [\n\t\t'fail2ban',\n\t\t'fail2ban.client',\n\t\t'fail2ban.server',\n\t\t'fail2ban.tests',\n\t\t'fail2ban.tests.action_d',\n\t],\n\tpackage_data = {\n\t\t'fail2ban.tests':\n\t\t\t[ join(w[0], f).replace(\"fail2ban/tests/\", \"\", 1)\n\t\t\t\tfor w in os.walk('fail2ban/tests/files')\n\t\t\t\tfor f in w[2]] +\n\t\t\t[ join(w[0], f).replace(\"fail2ban/tests/\", \"\", 1)\n\t\t\t\tfor w in os.walk('fail2ban/tests/config')\n\t\t\t\tfor f in w[2]] +\n\t\t\t[ join(w[0], f).replace(\"fail2ban/tests/\", \"\", 1)\n\t\t\t\tfor w in os.walk('fail2ban/tests/action_d')\n\t\t\t\tfor f in w[2]]\n\t},\n\tdata_files = [\n\t\t('/etc/fail2ban',\n\t\t\tglob(\"config/*.conf\")\n\t\t),\n\t\t('/etc/fail2ban/filter.d',\n\t\t\tglob(\"config/filter.d/*.conf\")\n\t\t),\n\t\t('/etc/fail2ban/filter.d/ignorecommands',\n\t\t\tglob(\"config/filter.d/ignorecommands/*\")\n\t\t),\n\t\t('/etc/fail2ban/action.d',\n\t\t\tglob(\"config/action.d/*.conf\") +\n\t\t\tglob(\"config/action.d/*.py\")\n\t\t),\n\t\t('/etc/fail2ban/fail2ban.d',\n\t\t\t''\n\t\t),\n\t\t('/etc/fail2ban/jail.d',\n\t\t\t''\n\t\t),\n\t\t('/var/lib/fail2ban',\n\t\t\t''\n\t\t),\n\t\t('/usr/share/doc/fail2ban',\n\t\t\t['README.md', 'README.Solaris', 'DEVELOP', 'FILTERS',\n\t\t\t 'doc/run-rootless.txt']\n\t\t)\n\t] + data_files_extra,\n\t**setup_extra\n)\n\n# Do some checks after installation\n# Search for obsolete files.\nobsoleteFiles = []\nelements = {\n\t\"/etc/\":\n\t\t[\n\t\t\t\"fail2ban.conf\"\n\t\t],\n\t\"/usr/bin/\":\n\t\t[\n\t\t\t\"fail2ban.py\"\n\t\t],\n\t\"/usr/lib/fail2ban/\":\n\t\t[\n\t\t\t\"version.py\",\n\t\t\t\"protocol.py\"\n\t\t]\n}\n\nfor directory in elements:\n\tfor f in elements[directory]:\n\t\tpath = join(directory, f)\n\t\tif isfile(path):\n\t\t\tobsoleteFiles.append(path)\n\nif obsoleteFiles:\n\tprint(\"\")\n\tprint(\"Obsolete files from previous Fail2Ban versions were found on \"\n\t\t \"your system.\")\n\tprint(\"Please delete them:\")\n\tprint(\"\")\n\tfor f in obsoleteFiles:\n\t\tprint(\"\\t\" + f)\n\tprint(\"\")\n\nif isdir(\"/usr/lib/fail2ban\"):\n\tprint(\"\")\n\tprint(\"Fail2ban is not installed under /usr/lib anymore. The new \"\n\t\t \"location is under /usr/share. 
Please remove the directory \"\n\t\t \"/usr/lib/fail2ban and everything under this directory.\")\n\tprint(\"\")\n\n# Update config file\nif sys.argv[1] == \"install\":\n\tprint(\"\")\n\tprint(\"Please do not forget to update your configuration files.\")\n\tprint(\"They are in /etc/fail2ban/.\")\n\tprint(\"\")\n", "path": "setup.py"}]} | 3,378 | 248 |
gh_patches_debug_34583 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1526 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error installing skimage-
Hi!
I've installed skimage on macOS 10.10, as described in the documentation:
pip install -U scikit-image
and it said that it needs scipy; I had to install scipy myself to make it work.
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2
3 descr = """Image Processing SciKit
4
5 Image processing algorithms for SciPy, including IO, morphology, filtering,
6 warping, color manipulation, object detection, etc.
7
8 Please refer to the online documentation at
9 http://scikit-image.org/
10 """
11
12 DISTNAME = 'scikit-image'
13 DESCRIPTION = 'Image processing routines for SciPy'
14 LONG_DESCRIPTION = descr
15 MAINTAINER = 'Stefan van der Walt'
16 MAINTAINER_EMAIL = '[email protected]'
17 URL = 'http://scikit-image.org'
18 LICENSE = 'Modified BSD'
19 DOWNLOAD_URL = 'http://github.com/scikit-image/scikit-image'
20
21 import os
22 import sys
23
24 import setuptools
25 from distutils.command.build_py import build_py
26
27
28 with open('skimage/__init__.py') as fid:
29 for line in fid:
30 if line.startswith('__version__'):
31 VERSION = line.strip().split()[-1][1:-1]
32 break
33
34 with open('requirements.txt') as fid:
35 INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]
36
37 # development versions do not have the cythonized files
38 if VERSION.endswith('dev'):
39 SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]
40 else:
41 INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
42 if not r.startswith('cython')]
43 SETUP_REQUIRES = []
44
45
46 # list requirements for PyPI
47 REQUIRES = [r.replace('>=', ' (>= ') + ')'
48 for r in INSTALL_REQUIRES + SETUP_REQUIRES]
49 REQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]
50
51
52 # do not attempt to install numpy and scipy until they have eggs available
53 INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
54 if not r.startswith(('scipy', 'numpy'))]
55
56
57 def configuration(parent_package='', top_path=None):
58 if os.path.exists('MANIFEST'): os.remove('MANIFEST')
59
60 from numpy.distutils.misc_util import Configuration
61 config = Configuration(None, parent_package, top_path)
62
63 config.set_options(
64 ignore_setup_xxx_py=True,
65 assume_default_configuration=True,
66 delegate_options_to_subpackages=True,
67 quiet=True)
68
69 config.add_subpackage('skimage')
70 config.add_data_dir('skimage/data')
71
72 return config
73
74
75 if __name__ == "__main__":
76 # purposely fail loudly if numpy or scipy are not available
77 from numpy.distutils.core import setup
78 import scipy
79
80 setup(
81 name=DISTNAME,
82 description=DESCRIPTION,
83 long_description=LONG_DESCRIPTION,
84 maintainer=MAINTAINER,
85 maintainer_email=MAINTAINER_EMAIL,
86 url=URL,
87 license=LICENSE,
88 download_url=DOWNLOAD_URL,
89 version=VERSION,
90
91 classifiers=[
92 'Development Status :: 4 - Beta',
93 'Environment :: Console',
94 'Intended Audience :: Developers',
95 'Intended Audience :: Science/Research',
96 'License :: OSI Approved :: BSD License',
97 'Programming Language :: C',
98 'Programming Language :: Python',
99 'Programming Language :: Python :: 3',
100 'Topic :: Scientific/Engineering',
101 'Operating System :: Microsoft :: Windows',
102 'Operating System :: POSIX',
103 'Operating System :: Unix',
104 'Operating System :: MacOS',
105 ],
106
107 configuration=configuration,
108 setup_requires=SETUP_REQUIRES,
109 install_requires=INSTALL_REQUIRES,
110 requires=REQUIRES,
111 packages=setuptools.find_packages(exclude=['doc']),
112 include_package_data=True,
113 zip_safe=False, # the package can run out of an .egg file
114
115 entry_points={
116 'console_scripts': ['skivi = skimage.scripts.skivi:main'],
117 },
118
119 cmdclass={'build_py': build_py},
120 )
121
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,24 +34,10 @@
with open('requirements.txt') as fid:
INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]
-# development versions do not have the cythonized files
-if VERSION.endswith('dev'):
- SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]
-else:
- INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
- if not r.startswith('cython')]
- SETUP_REQUIRES = []
-
-
-# list requirements for PyPI
-REQUIRES = [r.replace('>=', ' (>= ') + ')'
- for r in INSTALL_REQUIRES + SETUP_REQUIRES]
+# requirements for those browsing PyPI
+REQUIRES = [r.replace('>=', ' (>= ') + ')' for r in INSTALL_REQUIRES]
REQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]
-
-
-# do not attempt to install numpy and scipy until they have eggs available
-INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
- if not r.startswith(('scipy', 'numpy'))]
+REQUIRES = [r.replace('[array]', '') for r in REQUIRES]
def configuration(parent_package='', top_path=None):
@@ -73,9 +59,17 @@
if __name__ == "__main__":
- # purposely fail loudly if numpy or scipy are not available
- from numpy.distutils.core import setup
- import scipy
+ # purposely fail if numpy is not available
+ # other dependecies will be resolved by pip (install_requires)
+ try:
+ from numpy.distutils.core import setup
+ except ImportError:
+ print('To install scikit-image from source, you will need numpy.\n' +
+ 'Install numpy with pip:\n' +
+ 'pip install numpy\n'
+ 'Or use your operating system package manager. For more\n' +
+ 'details, see http://scikit-image.org/docs/stable/install.html')
+ sys.exit(1)
setup(
name=DISTNAME,
@@ -105,8 +99,9 @@
],
configuration=configuration,
- setup_requires=SETUP_REQUIRES,
install_requires=INSTALL_REQUIRES,
+ # install cython when running setup.py (source install)
+ setup_requires=['cython>=0.21'],
requires=REQUIRES,
packages=setuptools.find_packages(exclude=['doc']),
include_package_data=True,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,24 +34,10 @@\n with open('requirements.txt') as fid:\n INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]\n \n-# development versions do not have the cythonized files\n-if VERSION.endswith('dev'):\n- SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]\n-else:\n- INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n- if not r.startswith('cython')]\n- SETUP_REQUIRES = []\n-\n-\n-# list requirements for PyPI\n-REQUIRES = [r.replace('>=', ' (>= ') + ')'\n- for r in INSTALL_REQUIRES + SETUP_REQUIRES]\n+# requirements for those browsing PyPI\n+REQUIRES = [r.replace('>=', ' (>= ') + ')' for r in INSTALL_REQUIRES]\n REQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]\n-\n-\n-# do not attempt to install numpy and scipy until they have eggs available\n-INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n- if not r.startswith(('scipy', 'numpy'))]\n+REQUIRES = [r.replace('[array]', '') for r in REQUIRES]\n \n \n def configuration(parent_package='', top_path=None):\n@@ -73,9 +59,17 @@\n \n \n if __name__ == \"__main__\":\n- # purposely fail loudly if numpy or scipy are not available\n- from numpy.distutils.core import setup\n- import scipy\n+ # purposely fail if numpy is not available\n+ # other dependecies will be resolved by pip (install_requires)\n+ try:\n+ from numpy.distutils.core import setup\n+ except ImportError:\n+ print('To install scikit-image from source, you will need numpy.\\n' +\n+ 'Install numpy with pip:\\n' +\n+ 'pip install numpy\\n'\n+ 'Or use your operating system package manager. For more\\n' +\n+ 'details, see http://scikit-image.org/docs/stable/install.html')\n+ sys.exit(1)\n \n setup(\n name=DISTNAME,\n@@ -105,8 +99,9 @@\n ],\n \n configuration=configuration,\n- setup_requires=SETUP_REQUIRES,\n install_requires=INSTALL_REQUIRES,\n+ # install cython when running setup.py (source install)\n+ setup_requires=['cython>=0.21'],\n requires=REQUIRES,\n packages=setuptools.find_packages(exclude=['doc']),\n include_package_data=True,\n", "issue": "Error installing skimage-\nHi!\nI've install skimage on a MacOs 10.10, like it's said in the documentation:\npip install -U scikit-image\nand it say that need scipy to work, I had to install it to work.\n\n", "before_files": [{"content": "#! 
/usr/bin/env python\n\ndescr = \"\"\"Image Processing SciKit\n\nImage processing algorithms for SciPy, including IO, morphology, filtering,\nwarping, color manipulation, object detection, etc.\n\nPlease refer to the online documentation at\nhttp://scikit-image.org/\n\"\"\"\n\nDISTNAME = 'scikit-image'\nDESCRIPTION = 'Image processing routines for SciPy'\nLONG_DESCRIPTION = descr\nMAINTAINER = 'Stefan van der Walt'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://scikit-image.org'\nLICENSE = 'Modified BSD'\nDOWNLOAD_URL = 'http://github.com/scikit-image/scikit-image'\n\nimport os\nimport sys\n\nimport setuptools\nfrom distutils.command.build_py import build_py\n\n\nwith open('skimage/__init__.py') as fid:\n for line in fid:\n if line.startswith('__version__'):\n VERSION = line.strip().split()[-1][1:-1]\n break\n\nwith open('requirements.txt') as fid:\n INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]\n\n# development versions do not have the cythonized files\nif VERSION.endswith('dev'):\n SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]\nelse:\n INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n if not r.startswith('cython')]\n SETUP_REQUIRES = []\n\n\n# list requirements for PyPI\nREQUIRES = [r.replace('>=', ' (>= ') + ')'\n for r in INSTALL_REQUIRES + SETUP_REQUIRES]\nREQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]\n\n\n# do not attempt to install numpy and scipy until they have eggs available\nINSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n if not r.startswith(('scipy', 'numpy'))]\n\n\ndef configuration(parent_package='', top_path=None):\n if os.path.exists('MANIFEST'): os.remove('MANIFEST')\n\n from numpy.distutils.misc_util import Configuration\n config = Configuration(None, parent_package, top_path)\n\n config.set_options(\n ignore_setup_xxx_py=True,\n assume_default_configuration=True,\n delegate_options_to_subpackages=True,\n quiet=True)\n\n config.add_subpackage('skimage')\n config.add_data_dir('skimage/data')\n\n return config\n\n\nif __name__ == \"__main__\":\n # purposely fail loudly if numpy or scipy are not available\n from numpy.distutils.core import setup\n import scipy\n\n setup(\n name=DISTNAME,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n url=URL,\n license=LICENSE,\n download_url=DOWNLOAD_URL,\n version=VERSION,\n\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: C',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n ],\n\n configuration=configuration,\n setup_requires=SETUP_REQUIRES,\n install_requires=INSTALL_REQUIRES,\n requires=REQUIRES,\n packages=setuptools.find_packages(exclude=['doc']),\n include_package_data=True,\n zip_safe=False, # the package can run out of an .egg file\n\n entry_points={\n 'console_scripts': ['skivi = skimage.scripts.skivi:main'],\n },\n\n cmdclass={'build_py': build_py},\n )\n", "path": "setup.py"}]} | 1,669 | 576 |
gh_patches_debug_1542 | rasdani/github-patches | git_diff | pyodide__pyodide-987 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add PYODIDE_MINIMAL build option
From the added documentation,
> Minimal pyodide build can be enabled by setting the `PYODIDE_MINIMAL`
environment variable. For instance,
> ```
> PYODIDE_MINIMAL=true PYODIDE_PACKAGES="micropip" make
> ```
>
> This will,
> - not include freetype and libpng libraries (it won't be possible to build matplotlib)
> - not include the jedi library, disabling auto-completion in iodide
>
> As a result, the size of the core pyodide binaries will be ~15% smaller.
Addresses two points from https://github.com/iodide-project/pyodide/issues/646
Before (master),
```
6,6M pyodide.asm.data
310K pyodide.asm.data.js
2,8M pyodide.asm.js
11M pyodide.asm.wasm
16K pyodide.js
16K pyodide_dev.js
Total: 20.7 MB
```
after (this PR with PYODIDE_MINIMAL=true)
```
5,1M build/pyodide.asm.data
124K build/pyodide.asm.data.js
2,6M build/pyodide.asm.js
9,9M build/pyodide.asm.wasm
16K build/pyodide.js
16K build/pyodide_dev.js
Total: 17.7 MB
```
so it's not that different (about 14% less), but it's a start.
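As a quick sanity check of that figure (totals taken from the two tables above; illustrative only):
```python
before_mb, after_mb = 20.7, 17.7
print(f"core size reduction: {(before_mb - after_mb) / before_mb:.1%}")  # ~14.5%
```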
Draft PR for now, as I think I need to go into a bit more detail on the tests that are run in the minimal-build CI job.
</issue>
<code>
[start of src/pyodide-py/pyodide/console.py]
1 from typing import List, Optional
2
3
4 def get_completions(
5 code: str, cursor: Optional[int] = None, namespaces: Optional[List] = None
6 ) -> List[str]:
7 """
8 Get code autocompletion candidates
9
10 Note that this function requires to have the jedi module loaded.
11
12 Parameters
13 ----------
14 code
15 the Python code to complete.
16 cursor
17 optional position in the code at which to autocomplete
18 namespaces
19 a list of namespaces
20
21 Returns
22 -------
23 a list of autocompleted modules
24 """
25 import jedi
26 import __main__
27
28 if namespaces is None:
29 namespaces = [__main__.__dict__]
30
31 if cursor is None:
32 cursor = len(code)
33 code = code[:cursor]
34 interp = jedi.Interpreter(code, namespaces)
35 completions = interp.completions()
36
37 return [x.name for x in completions]
38
[end of src/pyodide-py/pyodide/console.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pyodide-py/pyodide/console.py b/src/pyodide-py/pyodide/console.py
--- a/src/pyodide-py/pyodide/console.py
+++ b/src/pyodide-py/pyodide/console.py
@@ -32,6 +32,6 @@
cursor = len(code)
code = code[:cursor]
interp = jedi.Interpreter(code, namespaces)
- completions = interp.completions()
+ completions = interp.complete()
return [x.name for x in completions]
| {"golden_diff": "diff --git a/src/pyodide-py/pyodide/console.py b/src/pyodide-py/pyodide/console.py\n--- a/src/pyodide-py/pyodide/console.py\n+++ b/src/pyodide-py/pyodide/console.py\n@@ -32,6 +32,6 @@\n cursor = len(code)\n code = code[:cursor]\n interp = jedi.Interpreter(code, namespaces)\n- completions = interp.completions()\n+ completions = interp.complete()\n \n return [x.name for x in completions]\n", "issue": "Add PYODIDE_MINIMAL build option\nFrom the added documentation,\r\n\r\n> Minimal pyodide build can be enabled by setting the `PYODIDE_MINIMAL`\r\nenvironment variable. For instance,\r\n> ``` \r\n> PYODIDE_MINIMAL=true PYODIDE_PACKAGES=\"micropip\" make\r\n> ``` \r\n> \r\n> This will,\r\n> - not include freetype and libpng libraries (it won't be possible to build matplotlib)\r\n> - not include the jedi library, disabling auto-completion in iodide\r\n> \r\n> As as a result the size will of the core pyodide binaries will be ~15% smaller.\r\n\r\nAddresses two points from https://github.com/iodide-project/pyodide/issues/646\r\n\r\nBefore (master),\r\n```\r\n6,6M pyodide.asm.data\r\n310K pyodide.asm.data.js\r\n2,8M pyodide.asm.js\r\n 11M pyodide.asm.wasm\r\n 16K pyodide.js\r\n 16K pyodide_dev.js\r\n\r\nTotal: 20.7 MB\r\n```\r\nafter (this PR with PYODIDE_MINIMAL=true)\r\n```\r\n5,1M build/pyodide.asm.data\r\n124K build/pyodide.asm.data.js\r\n2,6M build/pyodide.asm.js\r\n9,9M build/pyodide.asm.wasm\r\n 16K build/pyodide.js\r\n 16K build/pyodide_dev.js\r\n\r\nTotal: 17.7 MB\r\n```\r\n\r\nso it's not that different (14% less), but it's start. \r\n\r\nDraft PR for now, as I think I need to go in a bit more details through tests that are run in the minimal build CI job.\n", "before_files": [{"content": "from typing import List, Optional\n\n\ndef get_completions(\n code: str, cursor: Optional[int] = None, namespaces: Optional[List] = None\n) -> List[str]:\n \"\"\"\n Get code autocompletion candidates\n\n Note that this function requires to have the jedi module loaded.\n\n Parameters\n ----------\n code\n the Python code to complete.\n cursor\n optional position in the code at which to autocomplete\n namespaces\n a list of namespaces\n\n Returns\n -------\n a list of autocompleted modules\n \"\"\"\n import jedi\n import __main__\n\n if namespaces is None:\n namespaces = [__main__.__dict__]\n\n if cursor is None:\n cursor = len(code)\n code = code[:cursor]\n interp = jedi.Interpreter(code, namespaces)\n completions = interp.completions()\n\n return [x.name for x in completions]\n", "path": "src/pyodide-py/pyodide/console.py"}]} | 1,200 | 122 |
gh_patches_debug_24541 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-2114 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inconsistent docs and code output for `PerceptualEvaluationSpeechQuality`
## 🐛 Bug
[related to #1092]
The [docs](https://torchmetrics.readthedocs.io/en/stable/audio/perceptual_evaluation_speech_quality.html) for the PESQ metric class state that
> As output of _forward_ and _compute_ the metric returns the following output `pesq` (Tensor): float tensor with shape (...,) of PESQ value per sample.
However, I always get one value, no matter the `batch_size`, as if the metric was being averaged.
### To Reproduce
```python
import torch
from torchmetrics.audio import PerceptualEvaluationSpeechQuality as PESQ
pesq = PESQ(fs=16000, mode="wb")
# create data
batch_size = 4
audio_size = 12345
preds = torch.FloatTensor(batch_size, audio_size).uniform_(-1, 1)
target = torch.FloatTensor(batch_size, audio_size).uniform_(-1, 1)
print("preds.shape:", preds.shape)
#>preds.shape: torch.Size([4, 12345])
# compute metric
pesq_score = pesq.forward(preds, target)
print("pesq_score:", pesq_score)
#>pesq_score: tensor(1.5049)
```
I expected the output to be of shape `torch.Size([4])`.
Same behaviour for `pesq.update(preds, target)` followed by `pesq.compute()`.
The functional counterpart returns a result [as documented](https://torchmetrics.readthedocs.io/en/stable/audio/perceptual_evaluation_speech_quality.html#torchmetrics.functional.audio.pesq.perceptual_evaluation_speech_quality):
```python
import torch
from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality as F_PESQ
n = 4
preds = torch.FloatTensor(n, 12345).uniform_(-1, 1)
target = torch.FloatTensor(n, 12345).uniform_(-1, 1)
pesq_score = F_PESQ(preds=preds, target=target, fs=16000, mode="wb")
print("pesq_score:", pesq_score)
#> pesq_score: tensor([1.5882, 1.5080, 1.5149, 1.5997])
```
### Environment
```
Linux
python==3.10.12
torch==2.0.1
torchmetrics== 1.1.2
```
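For reference, the single value comes from how the class accumulates state (see `update`/`compute` in the code below): per-sample scores are summed and divided by their count, so `forward`/`compute` return a batch-reduced scalar. A rough equivalent, reusing `preds`/`target` from the snippets above:
```python
from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality as F_PESQ

scores = F_PESQ(preds=preds, target=target, fs=16000, mode="wb")  # per-sample values
batch_value = scores.sum() / scores.numel()  # roughly what the class's compute() returns
```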
</issue>
<code>
[start of src/torchmetrics/audio/pesq.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Optional, Sequence, Union
15
16 from torch import Tensor, tensor
17
18 from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality
19 from torchmetrics.metric import Metric
20 from torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE, _PESQ_AVAILABLE
21 from torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE
22
23 __doctest_requires__ = {"PerceptualEvaluationSpeechQuality": ["pesq"]}
24
25 if not _MATPLOTLIB_AVAILABLE:
26 __doctest_skip__ = ["PerceptualEvaluationSpeechQuality.plot"]
27
28
29 class PerceptualEvaluationSpeechQuality(Metric):
30 """Calculate `Perceptual Evaluation of Speech Quality`_ (PESQ).
31
32 It's a recognized industry standard for audio quality that takes into considerations characteristics such as:
33 audio sharpness, call volume, background noise, clipping, audio interference etc. PESQ returns a score between
34 -0.5 and 4.5 with the higher scores indicating a better quality.
35
36 This metric is a wrapper for the `pesq package`_. Note that input will be moved to ``cpu`` to perform the metric
37 calculation.
38
39 As input to ``forward`` and ``update`` the metric accepts the following input
40
41 - ``preds`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``
42 - ``target`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``
43
44 As output of `forward` and `compute` the metric returns the following output
45
46 - ``pesq`` (:class:`~torch.Tensor`): float tensor with shape ``(...,)`` of PESQ value per sample
47
48 .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
49 torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently
50 installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
51 most likely have to reinstall ``pesq``.
52
53 Args:
54 fs: sampling frequency, should be 16000 or 8000 (Hz)
55 mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)
56 keep_same_device: whether to move the pesq value to the device of preds
57 n_processes: integer specifying the number of processes to run in parallel for the metric calculation.
58 Only applies to batches of data and if ``multiprocessing`` package is installed.
59 kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
60
61 Raises:
62 ModuleNotFoundError:
63 If ``pesq`` package is not installed
64 ValueError:
65 If ``fs`` is not either ``8000`` or ``16000``
66 ValueError:
67 If ``mode`` is not either ``"wb"`` or ``"nb"``
68
69 Example:
70 >>> import torch
71 >>> from torchmetrics.audio import PerceptualEvaluationSpeechQuality
72 >>> g = torch.manual_seed(1)
73 >>> preds = torch.randn(8000)
74 >>> target = torch.randn(8000)
75 >>> pesq = PerceptualEvaluationSpeechQuality(8000, 'nb')
76 >>> pesq(preds, target)
77 tensor(2.2076)
78 >>> wb_pesq = PerceptualEvaluationSpeechQuality(16000, 'wb')
79 >>> wb_pesq(preds, target)
80 tensor(1.7359)
81
82 """
83
84 sum_pesq: Tensor
85 total: Tensor
86 full_state_update: bool = False
87 is_differentiable: bool = False
88 higher_is_better: bool = True
89 plot_lower_bound: float = -0.5
90 plot_upper_bound: float = 4.5
91
92 def __init__(
93 self,
94 fs: int,
95 mode: str,
96 n_processes: int = 1,
97 **kwargs: Any,
98 ) -> None:
99 super().__init__(**kwargs)
100 if not _PESQ_AVAILABLE:
101 raise ModuleNotFoundError(
102 "PerceptualEvaluationSpeechQuality metric requires that `pesq` is installed."
103 " Either install as `pip install torchmetrics[audio]` or `pip install pesq`."
104 )
105 if fs not in (8000, 16000):
106 raise ValueError(f"Expected argument `fs` to either be 8000 or 16000 but got {fs}")
107 self.fs = fs
108 if mode not in ("wb", "nb"):
109 raise ValueError(f"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}")
110 self.mode = mode
111 if not isinstance(n_processes, int) and n_processes <= 0:
112 raise ValueError(f"Expected argument `n_processes` to be an int larger than 0 but got {n_processes}")
113 self.n_processes = n_processes
114
115 self.add_state("sum_pesq", default=tensor(0.0), dist_reduce_fx="sum")
116 self.add_state("total", default=tensor(0), dist_reduce_fx="sum")
117
118 def update(self, preds: Tensor, target: Tensor) -> None:
119 """Update state with predictions and targets."""
120 pesq_batch = perceptual_evaluation_speech_quality(
121 preds, target, self.fs, self.mode, False, self.n_processes
122 ).to(self.sum_pesq.device)
123
124 self.sum_pesq += pesq_batch.sum()
125 self.total += pesq_batch.numel()
126
127 def compute(self) -> Tensor:
128 """Compute metric."""
129 return self.sum_pesq / self.total
130
131 def plot(self, val: Union[Tensor, Sequence[Tensor], None] = None, ax: Optional[_AX_TYPE] = None) -> _PLOT_OUT_TYPE:
132 """Plot a single or multiple values from the metric.
133
134 Args:
135 val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
136 If no value is provided, will automatically call `metric.compute` and plot that result.
137 ax: An matplotlib axis object. If provided will add plot to that axis
138
139 Returns:
140 Figure and Axes object
141
142 Raises:
143 ModuleNotFoundError:
144 If `matplotlib` is not installed
145
146 .. plot::
147 :scale: 75
148
149 >>> # Example plotting a single value
150 >>> import torch
151 >>> from torchmetrics.audio import PerceptualEvaluationSpeechQuality
152 >>> metric = PerceptualEvaluationSpeechQuality(8000, 'nb')
153 >>> metric.update(torch.rand(8000), torch.rand(8000))
154 >>> fig_, ax_ = metric.plot()
155
156 .. plot::
157 :scale: 75
158
159 >>> # Example plotting multiple values
160 >>> import torch
161 >>> from torchmetrics.audio import PerceptualEvaluationSpeechQuality
162 >>> metric = PerceptualEvaluationSpeechQuality(8000, 'nb')
163 >>> values = [ ]
164 >>> for _ in range(10):
165 ... values.append(metric(torch.rand(8000), torch.rand(8000)))
166 >>> fig_, ax_ = metric.plot(values)
167
168 """
169 return self._plot(val, ax)
170
[end of src/torchmetrics/audio/pesq.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/audio/pesq.py b/src/torchmetrics/audio/pesq.py
--- a/src/torchmetrics/audio/pesq.py
+++ b/src/torchmetrics/audio/pesq.py
@@ -43,13 +43,17 @@
As output of `forward` and `compute` the metric returns the following output
- - ``pesq`` (:class:`~torch.Tensor`): float tensor with shape ``(...,)`` of PESQ value per sample
+ - ``pesq`` (:class:`~torch.Tensor`): float tensor of PESQ value reduced across the batch
.. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently
installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
most likely have to reinstall ``pesq``.
+ .. note:: the ``forward`` and ``compute`` methods in this class return a single (reduced) PESQ value
+ for a batch. To obtain a PESQ value for each sample, you may use the functional counterpart in
+ :func:`~torchmetrics.functional.audio.pesq.perceptual_evaluation_speech_quality`.
+
Args:
fs: sampling frequency, should be 16000 or 8000 (Hz)
mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)
| {"golden_diff": "diff --git a/src/torchmetrics/audio/pesq.py b/src/torchmetrics/audio/pesq.py\n--- a/src/torchmetrics/audio/pesq.py\n+++ b/src/torchmetrics/audio/pesq.py\n@@ -43,13 +43,17 @@\n \n As output of `forward` and `compute` the metric returns the following output\n \n- - ``pesq`` (:class:`~torch.Tensor`): float tensor with shape ``(...,)`` of PESQ value per sample\n+ - ``pesq`` (:class:`~torch.Tensor`): float tensor of PESQ value reduced across the batch\n \n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently\n installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n most likely have to reinstall ``pesq``.\n \n+ .. note:: the ``forward`` and ``compute`` methods in this class return a single (reduced) PESQ value\n+ for a batch. To obtain a PESQ value for each sample, you may use the functional counterpart in\n+ :func:`~torchmetrics.functional.audio.pesq.perceptual_evaluation_speech_quality`.\n+\n Args:\n fs: sampling frequency, should be 16000 or 8000 (Hz)\n mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)\n", "issue": "Inconsistent docs and code output for `PerceptualEvaluationSpeechQuality `\n## \ud83d\udc1b Bug\r\n[related to #1092]\r\n\r\nThe [docs](https://torchmetrics.readthedocs.io/en/stable/audio/perceptual_evaluation_speech_quality.html) for PESQ metric class state that \r\n> As output of _forward_ and _compute_ the metric returns the following output `pesq` (Tensor): float tensor with shape (...,) of PESQ value per sample.\r\n\r\nHowever, I always get one value, no matter the `batch_size`, as if the metric was being averaged.\r\n\r\n### To Reproduce\r\n\r\n```python\r\nimport torch \r\nfrom torchmetrics.audio import PerceptualEvaluationSpeechQuality as PESQ\r\n\r\npesq = PESQ(fs=16000, mode=\"wb\")\r\n\r\n# create data\r\nbatch_size = 4\r\naudio_size = 12345\r\npreds = torch.FloatTensor(batch_size, audio_size).uniform_(-1, 1)\r\ntarget = torch.FloatTensor(batch_size, audio_size).uniform_(-1, 1)\r\nprint(\"preds.shape:\", preds.shape)\r\n#>preds.shape: torch.Size([4, 12345])\r\n\r\n# compute metric\r\npesq_score = pesq.forward(preds, target)\r\nprint(\"pesq_score:\", pesq_score)\r\n#>pesq_score: tensor(1.5049)\r\n```\r\nI expected the output to be of shape `torch.Size([4])`.\r\nSame behaviour for `pesq.update(preds, target)` followed by `pesq.compute()`.\r\n\r\nThe functional counterpart returns a result [as documented](https://torchmetrics.readthedocs.io/en/stable/audio/perceptual_evaluation_speech_quality.html#torchmetrics.functional.audio.pesq.perceptual_evaluation_speech_quality):\r\n```python\r\nimport torch \r\nfrom torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality as F_PESQ\r\n\r\nn = 4\r\npreds = torch.FloatTensor(n, 12345).uniform_(-1, 1)\r\ntarget = torch.FloatTensor(n, 12345).uniform_(-1, 1)\r\n\r\npesq_score = F_PESQ(preds=preds, target=target, fs=16000, mode=\"wb\")\r\nprint(\"pesq_score:\", pesq_score)\r\n#> pesq_score: tensor([1.5882, 1.5080, 1.5149, 1.5997])\r\n```\r\n\r\n### Environment\r\n\r\n```\r\nLinux\r\npython==3.10.12\r\ntorch==2.0.1\r\ntorchmetrics== 1.1.2\r\n```\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# 
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Optional, Sequence, Union\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE, _PESQ_AVAILABLE\nfrom torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE\n\n__doctest_requires__ = {\"PerceptualEvaluationSpeechQuality\": [\"pesq\"]}\n\nif not _MATPLOTLIB_AVAILABLE:\n __doctest_skip__ = [\"PerceptualEvaluationSpeechQuality.plot\"]\n\n\nclass PerceptualEvaluationSpeechQuality(Metric):\n \"\"\"Calculate `Perceptual Evaluation of Speech Quality`_ (PESQ).\n\n It's a recognized industry standard for audio quality that takes into considerations characteristics such as:\n audio sharpness, call volume, background noise, clipping, audio interference etc. PESQ returns a score between\n -0.5 and 4.5 with the higher scores indicating a better quality.\n\n This metric is a wrapper for the `pesq package`_. Note that input will be moved to ``cpu`` to perform the metric\n calculation.\n\n As input to ``forward`` and ``update`` the metric accepts the following input\n\n - ``preds`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``\n - ``target`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``\n\n As output of `forward` and `compute` the metric returns the following output\n\n - ``pesq`` (:class:`~torch.Tensor`): float tensor with shape ``(...,)`` of PESQ value per sample\n\n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``. 
``pesq`` will compile with your currently\n installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n most likely have to reinstall ``pesq``.\n\n Args:\n fs: sampling frequency, should be 16000 or 8000 (Hz)\n mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)\n keep_same_device: whether to move the pesq value to the device of preds\n n_processes: integer specifying the number of processes to run in parallel for the metric calculation.\n Only applies to batches of data and if ``multiprocessing`` package is installed.\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Raises:\n ModuleNotFoundError:\n If ``pesq`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n\n Example:\n >>> import torch\n >>> from torchmetrics.audio import PerceptualEvaluationSpeechQuality\n >>> g = torch.manual_seed(1)\n >>> preds = torch.randn(8000)\n >>> target = torch.randn(8000)\n >>> pesq = PerceptualEvaluationSpeechQuality(8000, 'nb')\n >>> pesq(preds, target)\n tensor(2.2076)\n >>> wb_pesq = PerceptualEvaluationSpeechQuality(16000, 'wb')\n >>> wb_pesq(preds, target)\n tensor(1.7359)\n\n \"\"\"\n\n sum_pesq: Tensor\n total: Tensor\n full_state_update: bool = False\n is_differentiable: bool = False\n higher_is_better: bool = True\n plot_lower_bound: float = -0.5\n plot_upper_bound: float = 4.5\n\n def __init__(\n self,\n fs: int,\n mode: str,\n n_processes: int = 1,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n if not _PESQ_AVAILABLE:\n raise ModuleNotFoundError(\n \"PerceptualEvaluationSpeechQuality metric requires that `pesq` is installed.\"\n \" Either install as `pip install torchmetrics[audio]` or `pip install pesq`.\"\n )\n if fs not in (8000, 16000):\n raise ValueError(f\"Expected argument `fs` to either be 8000 or 16000 but got {fs}\")\n self.fs = fs\n if mode not in (\"wb\", \"nb\"):\n raise ValueError(f\"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}\")\n self.mode = mode\n if not isinstance(n_processes, int) and n_processes <= 0:\n raise ValueError(f\"Expected argument `n_processes` to be an int larger than 0 but got {n_processes}\")\n self.n_processes = n_processes\n\n self.add_state(\"sum_pesq\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None:\n \"\"\"Update state with predictions and targets.\"\"\"\n pesq_batch = perceptual_evaluation_speech_quality(\n preds, target, self.fs, self.mode, False, self.n_processes\n ).to(self.sum_pesq.device)\n\n self.sum_pesq += pesq_batch.sum()\n self.total += pesq_batch.numel()\n\n def compute(self) -> Tensor:\n \"\"\"Compute metric.\"\"\"\n return self.sum_pesq / self.total\n\n def plot(self, val: Union[Tensor, Sequence[Tensor], None] = None, ax: Optional[_AX_TYPE] = None) -> _PLOT_OUT_TYPE:\n \"\"\"Plot a single or multiple values from the metric.\n\n Args:\n val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.\n If no value is provided, will automatically call `metric.compute` and plot that result.\n ax: An matplotlib axis object. If provided will add plot to that axis\n\n Returns:\n Figure and Axes object\n\n Raises:\n ModuleNotFoundError:\n If `matplotlib` is not installed\n\n .. 
plot::\n :scale: 75\n\n >>> # Example plotting a single value\n >>> import torch\n >>> from torchmetrics.audio import PerceptualEvaluationSpeechQuality\n >>> metric = PerceptualEvaluationSpeechQuality(8000, 'nb')\n >>> metric.update(torch.rand(8000), torch.rand(8000))\n >>> fig_, ax_ = metric.plot()\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting multiple values\n >>> import torch\n >>> from torchmetrics.audio import PerceptualEvaluationSpeechQuality\n >>> metric = PerceptualEvaluationSpeechQuality(8000, 'nb')\n >>> values = [ ]\n >>> for _ in range(10):\n ... values.append(metric(torch.rand(8000), torch.rand(8000)))\n >>> fig_, ax_ = metric.plot(values)\n\n \"\"\"\n return self._plot(val, ax)\n", "path": "src/torchmetrics/audio/pesq.py"}]} | 3,240 | 343 |
gh_patches_debug_19070 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-2339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Difference in training results between single GPU and multi-GPU
When training with a single GPU versus multiple GPUs, the single-card result ends up about 4% higher than the multi-card result. What is the cause?
Here is my configuration:
<img width="150" alt="image" src="https://user-images.githubusercontent.com/109410944/226503405-f70fa165-2fe6-4c23-b1bd-e4d4e7610c32.png">
<img width="207" alt="image" src="https://user-images.githubusercontent.com/109410944/226503453-a0e1ba34-47ff-482e-86ad-13602c338ef1.png">
</issue>
<code>
[start of onmt/utils/distributed.py]
1 """ Pytorch Distributed utils
2 This piece of code was heavily inspired by the equivalent of Fairseq-py
3 https://github.com/pytorch/fairseq
4 """
5 import os
6 import signal
7 import math
8 import pickle
9 import torch.distributed
10 from onmt.utils.logging import logger
11
12
13 def is_master(opt, device_id):
14 return opt.gpu_ranks[device_id] == 0
15
16
17 def multi_init(opt, device_id):
18 dist_init_method = 'tcp://{master_ip}:{master_port}'.format(
19 master_ip=opt.master_ip,
20 master_port=opt.master_port)
21 dist_world_size = opt.world_size
22 torch.distributed.init_process_group(
23 backend=opt.gpu_backend, init_method=dist_init_method,
24 world_size=dist_world_size, rank=opt.gpu_ranks[device_id])
25 gpu_rank = torch.distributed.get_rank()
26 if not is_master(opt, device_id):
27 logger.disabled = True
28
29 return gpu_rank
30
31
32 def all_reduce_and_rescale_tensors(tensors, rescale_denom,
33 buffer_size=104857600):
34 """All-reduce and rescale tensors in chunks of the specified size.
35
36 Args:
37 tensors: list of Tensors to all-reduce
38 rescale_denom: denominator for rescaling summed Tensors
39 buffer_size: all-reduce chunk size in bytes
40 """
41 # buffer size in bytes, determine equiv. # of elements based on data type
42 buffer_t = tensors[0].new(
43 math.ceil(buffer_size / tensors[0].element_size())).zero_()
44 buffer = []
45
46 def all_reduce_buffer():
47 # copy tensors into buffer_t
48 offset = 0
49 for t in buffer:
50 numel = t.numel()
51 buffer_t[offset:offset+numel].copy_(t.view(-1))
52 offset += numel
53
54 # all-reduce and rescale
55 torch.distributed.all_reduce(buffer_t[:offset], async_op=True)
56 buffer_t.div_(rescale_denom)
57
58 # copy all-reduced buffer back into tensors
59 offset = 0
60 for t in buffer:
61 numel = t.numel()
62 t.view(-1).copy_(buffer_t[offset:offset+numel])
63 offset += numel
64
65 filled = 0
66 for t in tensors:
67 sz = t.numel() * t.element_size()
68 # print(filled, sz)
69 if sz > buffer_size:
70 # tensor is bigger than buffer, all-reduce and rescale directly
71 torch.distributed.all_reduce(t, async_op=True)
72 t.div_(rescale_denom)
73 elif filled + sz > buffer_size:
74 # buffer is full, all-reduce and replace buffer with grad
75 all_reduce_buffer()
76 buffer = [t]
77 filled = sz
78 else:
79 # add tensor to buffer
80 buffer.append(t)
81 filled += sz
82
83 if len(buffer) > 0:
84 all_reduce_buffer()
85
86
87 def all_gather_list(data, max_size=4096):
88 """Gathers arbitrary data from all nodes into a list."""
89 world_size = torch.distributed.get_world_size()
90 if not hasattr(all_gather_list, '_in_buffer') or \
91 max_size != all_gather_list._in_buffer.size():
92 all_gather_list._in_buffer = torch.cuda.ByteTensor(max_size)
93 all_gather_list._out_buffers = [
94 torch.cuda.ByteTensor(max_size)
95 for i in range(world_size)
96 ]
97 in_buffer = all_gather_list._in_buffer
98 out_buffers = all_gather_list._out_buffers
99
100 enc = pickle.dumps(data)
101 enc_size = len(enc)
102 if enc_size + 2 > max_size:
103 raise ValueError(
104 'encoded data exceeds max_size: {}'.format(enc_size + 2))
105 assert max_size < 255*256
106 in_buffer[0] = enc_size // 255 # this encoding works for max_size < 65k
107 in_buffer[1] = enc_size % 255
108 in_buffer[2:enc_size+2] = torch.ByteTensor(list(enc))
109
110 torch.distributed.all_gather(out_buffers, in_buffer.cuda())
111
112 results = []
113 for i in range(world_size):
114 out_buffer = out_buffers[i]
115 size = (255 * out_buffer[0].item()) + out_buffer[1].item()
116
117 bytes_list = bytes(out_buffer[2:size+2].tolist())
118 result = pickle.loads(bytes_list)
119 results.append(result)
120 return results
121
122
123 class ErrorHandler(object):
124 """A class that listens for exceptions in children processes and propagates
125 the tracebacks to the parent process."""
126
127 def __init__(self, error_queue):
128 """ init error handler """
129 import signal
130 import threading
131 self.error_queue = error_queue
132 self.children_pids = []
133 self.error_thread = threading.Thread(
134 target=self.error_listener, daemon=True)
135 self.error_thread.start()
136 signal.signal(signal.SIGUSR1, self.signal_handler)
137
138 def add_child(self, pid):
139 """ error handler """
140 self.children_pids.append(pid)
141
142 def error_listener(self):
143 """ error listener """
144 (rank, original_trace) = self.error_queue.get()
145 self.error_queue.put((rank, original_trace))
146 os.kill(os.getpid(), signal.SIGUSR1)
147
148 def signal_handler(self, signalnum, stackframe):
149 """ signal handler """
150 for pid in self.children_pids:
151 os.kill(pid, signal.SIGINT) # kill children processes
152 (rank, original_trace) = self.error_queue.get()
153 msg = """\n\n-- Tracebacks above this line can probably
154 be ignored --\n\n"""
155 msg += original_trace
156 raise Exception(msg)
157
158
159 def consumer(process_fn, opt, device_id, error_queue): # noqa: E501
160 """Run `process_fn` on `device_id` with data from `batch_queue`."""
161 try:
162 gpu_rank = multi_init(opt, device_id)
163 if gpu_rank != opt.gpu_ranks[device_id]:
164 raise AssertionError("An error occurred in \
165 Distributed initialization")
166 process_fn(opt, device_id=device_id)
167 except KeyboardInterrupt:
168 pass # killed by parent, do nothing
169 except Exception:
170 # propagate exception to parent process, keeping original traceback
171 import traceback
172 error_queue.put((opt.gpu_ranks[device_id], traceback.format_exc()))
173
[end of onmt/utils/distributed.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/onmt/utils/distributed.py b/onmt/utils/distributed.py
--- a/onmt/utils/distributed.py
+++ b/onmt/utils/distributed.py
@@ -52,7 +52,7 @@
offset += numel
# all-reduce and rescale
- torch.distributed.all_reduce(buffer_t[:offset], async_op=True)
+ torch.distributed.all_reduce(buffer_t[:offset], async_op=False)
buffer_t.div_(rescale_denom)
# copy all-reduced buffer back into tensors
@@ -68,7 +68,7 @@
# print(filled, sz)
if sz > buffer_size:
# tensor is bigger than buffer, all-reduce and rescale directly
- torch.distributed.all_reduce(t, async_op=True)
+ torch.distributed.all_reduce(t, async_op=False)
t.div_(rescale_denom)
elif filled + sz > buffer_size:
# buffer is full, all-reduce and replace buffer with grad
| {"golden_diff": "diff --git a/onmt/utils/distributed.py b/onmt/utils/distributed.py\n--- a/onmt/utils/distributed.py\n+++ b/onmt/utils/distributed.py\n@@ -52,7 +52,7 @@\n offset += numel\n \n # all-reduce and rescale\n- torch.distributed.all_reduce(buffer_t[:offset], async_op=True)\n+ torch.distributed.all_reduce(buffer_t[:offset], async_op=False)\n buffer_t.div_(rescale_denom)\n \n # copy all-reduced buffer back into tensors\n@@ -68,7 +68,7 @@\n # print(filled, sz)\n if sz > buffer_size:\n # tensor is bigger than buffer, all-reduce and rescale directly\n- torch.distributed.all_reduce(t, async_op=True)\n+ torch.distributed.all_reduce(t, async_op=False)\n t.div_(rescale_denom)\n elif filled + sz > buffer_size:\n # buffer is full, all-reduce and replace buffer with grad\n", "issue": "Difference in training results between single GPU and multi-GPU\nFor single GPU and multi-GPU training, the single card is 4% higher than the multi-card training. What is the cause?\r\nHere is my configuration\uff1a\r\n<img width=\"150\" alt=\"image\" src=\"https://user-images.githubusercontent.com/109410944/226503405-f70fa165-2fe6-4c23-b1bd-e4d4e7610c32.png\">\r\n<img width=\"207\" alt=\"image\" src=\"https://user-images.githubusercontent.com/109410944/226503453-a0e1ba34-47ff-482e-86ad-13602c338ef1.png\">\r\n\n", "before_files": [{"content": "\"\"\" Pytorch Distributed utils\n This piece of code was heavily inspired by the equivalent of Fairseq-py\n https://github.com/pytorch/fairseq\n\"\"\"\nimport os\nimport signal\nimport math\nimport pickle\nimport torch.distributed\nfrom onmt.utils.logging import logger\n\n\ndef is_master(opt, device_id):\n return opt.gpu_ranks[device_id] == 0\n\n\ndef multi_init(opt, device_id):\n dist_init_method = 'tcp://{master_ip}:{master_port}'.format(\n master_ip=opt.master_ip,\n master_port=opt.master_port)\n dist_world_size = opt.world_size\n torch.distributed.init_process_group(\n backend=opt.gpu_backend, init_method=dist_init_method,\n world_size=dist_world_size, rank=opt.gpu_ranks[device_id])\n gpu_rank = torch.distributed.get_rank()\n if not is_master(opt, device_id):\n logger.disabled = True\n\n return gpu_rank\n\n\ndef all_reduce_and_rescale_tensors(tensors, rescale_denom,\n buffer_size=104857600):\n \"\"\"All-reduce and rescale tensors in chunks of the specified size.\n\n Args:\n tensors: list of Tensors to all-reduce\n rescale_denom: denominator for rescaling summed Tensors\n buffer_size: all-reduce chunk size in bytes\n \"\"\"\n # buffer size in bytes, determine equiv. 
# of elements based on data type\n buffer_t = tensors[0].new(\n math.ceil(buffer_size / tensors[0].element_size())).zero_()\n buffer = []\n\n def all_reduce_buffer():\n # copy tensors into buffer_t\n offset = 0\n for t in buffer:\n numel = t.numel()\n buffer_t[offset:offset+numel].copy_(t.view(-1))\n offset += numel\n\n # all-reduce and rescale\n torch.distributed.all_reduce(buffer_t[:offset], async_op=True)\n buffer_t.div_(rescale_denom)\n\n # copy all-reduced buffer back into tensors\n offset = 0\n for t in buffer:\n numel = t.numel()\n t.view(-1).copy_(buffer_t[offset:offset+numel])\n offset += numel\n\n filled = 0\n for t in tensors:\n sz = t.numel() * t.element_size()\n # print(filled, sz)\n if sz > buffer_size:\n # tensor is bigger than buffer, all-reduce and rescale directly\n torch.distributed.all_reduce(t, async_op=True)\n t.div_(rescale_denom)\n elif filled + sz > buffer_size:\n # buffer is full, all-reduce and replace buffer with grad\n all_reduce_buffer()\n buffer = [t]\n filled = sz\n else:\n # add tensor to buffer\n buffer.append(t)\n filled += sz\n\n if len(buffer) > 0:\n all_reduce_buffer()\n\n\ndef all_gather_list(data, max_size=4096):\n \"\"\"Gathers arbitrary data from all nodes into a list.\"\"\"\n world_size = torch.distributed.get_world_size()\n if not hasattr(all_gather_list, '_in_buffer') or \\\n max_size != all_gather_list._in_buffer.size():\n all_gather_list._in_buffer = torch.cuda.ByteTensor(max_size)\n all_gather_list._out_buffers = [\n torch.cuda.ByteTensor(max_size)\n for i in range(world_size)\n ]\n in_buffer = all_gather_list._in_buffer\n out_buffers = all_gather_list._out_buffers\n\n enc = pickle.dumps(data)\n enc_size = len(enc)\n if enc_size + 2 > max_size:\n raise ValueError(\n 'encoded data exceeds max_size: {}'.format(enc_size + 2))\n assert max_size < 255*256\n in_buffer[0] = enc_size // 255 # this encoding works for max_size < 65k\n in_buffer[1] = enc_size % 255\n in_buffer[2:enc_size+2] = torch.ByteTensor(list(enc))\n\n torch.distributed.all_gather(out_buffers, in_buffer.cuda())\n\n results = []\n for i in range(world_size):\n out_buffer = out_buffers[i]\n size = (255 * out_buffer[0].item()) + out_buffer[1].item()\n\n bytes_list = bytes(out_buffer[2:size+2].tolist())\n result = pickle.loads(bytes_list)\n results.append(result)\n return results\n\n\nclass ErrorHandler(object):\n \"\"\"A class that listens for exceptions in children processes and propagates\n the tracebacks to the parent process.\"\"\"\n\n def __init__(self, error_queue):\n \"\"\" init error handler \"\"\"\n import signal\n import threading\n self.error_queue = error_queue\n self.children_pids = []\n self.error_thread = threading.Thread(\n target=self.error_listener, daemon=True)\n self.error_thread.start()\n signal.signal(signal.SIGUSR1, self.signal_handler)\n\n def add_child(self, pid):\n \"\"\" error handler \"\"\"\n self.children_pids.append(pid)\n\n def error_listener(self):\n \"\"\" error listener \"\"\"\n (rank, original_trace) = self.error_queue.get()\n self.error_queue.put((rank, original_trace))\n os.kill(os.getpid(), signal.SIGUSR1)\n\n def signal_handler(self, signalnum, stackframe):\n \"\"\" signal handler \"\"\"\n for pid in self.children_pids:\n os.kill(pid, signal.SIGINT) # kill children processes\n (rank, original_trace) = self.error_queue.get()\n msg = \"\"\"\\n\\n-- Tracebacks above this line can probably\n be ignored --\\n\\n\"\"\"\n msg += original_trace\n raise Exception(msg)\n\n\ndef consumer(process_fn, opt, device_id, error_queue): # noqa: E501\n \"\"\"Run 
`process_fn` on `device_id` with data from `batch_queue`.\"\"\"\n try:\n gpu_rank = multi_init(opt, device_id)\n if gpu_rank != opt.gpu_ranks[device_id]:\n raise AssertionError(\"An error occurred in \\\n Distributed initialization\")\n process_fn(opt, device_id=device_id)\n except KeyboardInterrupt:\n pass # killed by parent, do nothing\n except Exception:\n # propagate exception to parent process, keeping original traceback\n import traceback\n error_queue.put((opt.gpu_ranks[device_id], traceback.format_exc()))\n", "path": "onmt/utils/distributed.py"}]} | 2,543 | 218 |
gh_patches_debug_14695 | rasdani/github-patches | git_diff | Netflix__lemur-142 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SubCA autogenerated descriptions for their certs are incorrect
If you create a root CA and look up the certificate for that CA, its description is: 
This is the ROOT certificate for the $CN certificate authority.
If you create a subCA off of that rootCA and look up the certificate for that subCA, its description is: 
This is the ROOT certificate for the $CN certificate authority
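The wording suggests the description is set unconditionally when an authority's certificate is created; a minimal sketch of the conditional description one would expect instead (hypothetical helper, mirroring the fix shown later in this entry):
```python
def authority_cert_description(ca_name, ca_type, ca_parent=None):
    # Hypothetical helper: the wording should depend on whether this is a root or sub CA.
    if ca_type == "subca":
        return ("This is the ROOT certificate for the {0} sub certificate authority; "
                "the parent authority is {1}.".format(ca_name, ca_parent))
    return "This is the ROOT certificate for the {0} certificate authority.".format(ca_name)
```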
</issue>
<code>
[start of lemur/authorities/service.py]
1 """
2 .. module: lemur.authorities.service
3 :platform: Unix
4 :synopsis: This module contains all of the services level functions used to
5 administer authorities in Lemur
6 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
7 :license: Apache, see LICENSE for more details.
8 .. moduleauthor:: Kevin Glisson <[email protected]>
9
10 """
11 from flask import g
12 from flask import current_app
13
14 from lemur import database
15 from lemur.authorities.models import Authority
16 from lemur.roles import service as role_service
17 from lemur.notifications import service as notification_service
18
19 from lemur.roles.models import Role
20 from lemur.certificates.models import Certificate
21
22 from lemur.plugins.base import plugins
23
24
25 def update(authority_id, description=None, owner=None, active=None, roles=None):
26 """
27 Update a an authority with new values.
28
29 :param authority_id:
30 :param roles: roles that are allowed to use this authority
31 :rtype : Authority
32 :return:
33 """
34 authority = get(authority_id)
35 if roles:
36 authority = database.update_list(authority, 'roles', Role, roles)
37
38 if active:
39 authority.active = active
40
41 authority.description = description
42 authority.owner = owner
43 return database.update(authority)
44
45
46 def create(kwargs):
47 """
48 Create a new authority.
49
50 :rtype : Authority
51 :return:
52 """
53
54 issuer = plugins.get(kwargs.get('pluginName'))
55
56 kwargs['creator'] = g.current_user.email
57 cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)
58
59 cert = Certificate(cert_body, chain=intermediate)
60 cert.owner = kwargs['ownerEmail']
61 cert.description = "This is the ROOT certificate for the {0} certificate authority".format(kwargs.get('caName'))
62 cert.user = g.current_user
63
64 cert.notifications = notification_service.create_default_expiration_notifications(
65 'DEFAULT_SECURITY',
66 current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')
67 )
68
69 # we create and attach any roles that the issuer gives us
70 role_objs = []
71 for r in issuer_roles:
72
73 role = role_service.create(
74 r['name'],
75 password=r['password'],
76 description="{0} auto generated role".format(kwargs.get('pluginName')),
77 username=r['username'])
78
79 # the user creating the authority should be able to administer it
80 if role.username == 'admin':
81 g.current_user.roles.append(role)
82
83 role_objs.append(role)
84
85 authority = Authority(
86 kwargs.get('caName'),
87 kwargs['ownerEmail'],
88 kwargs['pluginName'],
89 cert_body,
90 description=kwargs['caDescription'],
91 chain=intermediate,
92 roles=role_objs
93 )
94
95 database.update(cert)
96 authority = database.create(authority)
97
98 g.current_user.authorities.append(authority)
99
100 return authority
101
102
103 def get_all():
104 """
105 Get all authorities that are currently in Lemur.
106
107 :rtype : List
108 :return:
109 """
110 query = database.session_query(Authority)
111 return database.find_all(query, Authority, {}).all()
112
113
114 def get(authority_id):
115 """
116 Retrieves an authority given it's ID
117
118 :rtype : Authority
119 :param authority_id:
120 :return:
121 """
122 return database.get(Authority, authority_id)
123
124
125 def get_by_name(authority_name):
126 """
127 Retrieves an authority given it's name.
128
129 :param authority_name:
130 :rtype : Authority
131 :return:
132 """
133 return database.get(Authority, authority_name, field='name')
134
135
136 def get_authority_role(ca_name):
137 """
138 Attempts to get the authority role for a given ca uses current_user
139 as a basis for accomplishing that.
140
141 :param ca_name:
142 """
143 if g.current_user.is_admin:
144 authority = get_by_name(ca_name)
145 # TODO we should pick admin ca roles for admin
146 return authority.roles[0]
147 else:
148 for role in g.current_user.roles:
149 if role.authority:
150 if role.authority.name == ca_name:
151 return role
152
153
154 def render(args):
155 """
156 Helper that helps us render the REST Api responses.
157 :param args:
158 :return:
159 """
160 query = database.session_query(Authority)
161 sort_by = args.pop('sort_by')
162 sort_dir = args.pop('sort_dir')
163 page = args.pop('page')
164 count = args.pop('count')
165 filt = args.pop('filter')
166
167 if filt:
168 terms = filt.split(';')
169 if 'active' in filt: # this is really weird but strcmp seems to not work here??
170 query = query.filter(Authority.active == terms[1])
171 else:
172 query = database.filter(query, Authority, terms)
173
174 # we make sure that a user can only use an authority they either own are are a member of - admins can see all
175 if not g.current_user.is_admin:
176 authority_ids = []
177 for role in g.current_user.roles:
178 if role.authority:
179 authority_ids.append(role.authority.id)
180 query = query.filter(Authority.id.in_(authority_ids))
181
182 query = database.find_all(query, Authority, args)
183
184 if sort_by and sort_dir:
185 query = database.sort(query, Authority, sort_by, sort_dir)
186
187 return database.paginate(query, page, count)
188
[end of lemur/authorities/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py
--- a/lemur/authorities/service.py
+++ b/lemur/authorities/service.py
@@ -58,7 +58,15 @@
cert = Certificate(cert_body, chain=intermediate)
cert.owner = kwargs['ownerEmail']
- cert.description = "This is the ROOT certificate for the {0} certificate authority".format(kwargs.get('caName'))
+
+ if kwargs['caType'] == 'subca':
+ cert.description = "This is the ROOT certificate for the {0} sub certificate authority the parent \
+ authority is {1}.".format(kwargs.get('caName'), kwargs.get('caParent'))
+ else:
+ cert.description = "This is the ROOT certificate for the {0} certificate authority.".format(
+ kwargs.get('caName')
+ )
+
cert.user = g.current_user
cert.notifications = notification_service.create_default_expiration_notifications(
| {"golden_diff": "diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py\n--- a/lemur/authorities/service.py\n+++ b/lemur/authorities/service.py\n@@ -58,7 +58,15 @@\n \n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n- cert.description = \"This is the ROOT certificate for the {0} certificate authority\".format(kwargs.get('caName'))\n+\n+ if kwargs['caType'] == 'subca':\n+ cert.description = \"This is the ROOT certificate for the {0} sub certificate authority the parent \\\n+ authority is {1}.\".format(kwargs.get('caName'), kwargs.get('caParent'))\n+ else:\n+ cert.description = \"This is the ROOT certificate for the {0} certificate authority.\".format(\n+ kwargs.get('caName')\n+ )\n+\n cert.user = g.current_user\n \n cert.notifications = notification_service.create_default_expiration_notifications(\n", "issue": "SubCA autogenerated descriptions for their certs are incorrect\nIf you create a root CA, and look up the certificate for that CA its description is: \nThis is the ROOT certificate for the $CN certificate authority.\nIf you create a subCA off of that rootCA, and look up the certificate for that SubCA its description is: \nThis is the ROOT certificate for the $CN certificate authority\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.authorities.service\n :platform: Unix\n :synopsis: This module contains all of the services level functions used to\n administer authorities in Lemur\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\n\"\"\"\nfrom flask import g\nfrom flask import current_app\n\nfrom lemur import database\nfrom lemur.authorities.models import Authority\nfrom lemur.roles import service as role_service\nfrom lemur.notifications import service as notification_service\n\nfrom lemur.roles.models import Role\nfrom lemur.certificates.models import Certificate\n\nfrom lemur.plugins.base import plugins\n\n\ndef update(authority_id, description=None, owner=None, active=None, roles=None):\n \"\"\"\n Update a an authority with new values.\n\n :param authority_id:\n :param roles: roles that are allowed to use this authority\n :rtype : Authority\n :return:\n \"\"\"\n authority = get(authority_id)\n if roles:\n authority = database.update_list(authority, 'roles', Role, roles)\n\n if active:\n authority.active = active\n\n authority.description = description\n authority.owner = owner\n return database.update(authority)\n\n\ndef create(kwargs):\n \"\"\"\n Create a new authority.\n\n :rtype : Authority\n :return:\n \"\"\"\n\n issuer = plugins.get(kwargs.get('pluginName'))\n\n kwargs['creator'] = g.current_user.email\n cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)\n\n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n cert.description = \"This is the ROOT certificate for the {0} certificate authority\".format(kwargs.get('caName'))\n cert.user = g.current_user\n\n cert.notifications = notification_service.create_default_expiration_notifications(\n 'DEFAULT_SECURITY',\n current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')\n )\n\n # we create and attach any roles that the issuer gives us\n role_objs = []\n for r in issuer_roles:\n\n role = role_service.create(\n r['name'],\n password=r['password'],\n description=\"{0} auto generated role\".format(kwargs.get('pluginName')),\n username=r['username'])\n\n # the user creating the authority should be able 
to administer it\n if role.username == 'admin':\n g.current_user.roles.append(role)\n\n role_objs.append(role)\n\n authority = Authority(\n kwargs.get('caName'),\n kwargs['ownerEmail'],\n kwargs['pluginName'],\n cert_body,\n description=kwargs['caDescription'],\n chain=intermediate,\n roles=role_objs\n )\n\n database.update(cert)\n authority = database.create(authority)\n\n g.current_user.authorities.append(authority)\n\n return authority\n\n\ndef get_all():\n \"\"\"\n Get all authorities that are currently in Lemur.\n\n :rtype : List\n :return:\n \"\"\"\n query = database.session_query(Authority)\n return database.find_all(query, Authority, {}).all()\n\n\ndef get(authority_id):\n \"\"\"\n Retrieves an authority given it's ID\n\n :rtype : Authority\n :param authority_id:\n :return:\n \"\"\"\n return database.get(Authority, authority_id)\n\n\ndef get_by_name(authority_name):\n \"\"\"\n Retrieves an authority given it's name.\n\n :param authority_name:\n :rtype : Authority\n :return:\n \"\"\"\n return database.get(Authority, authority_name, field='name')\n\n\ndef get_authority_role(ca_name):\n \"\"\"\n Attempts to get the authority role for a given ca uses current_user\n as a basis for accomplishing that.\n\n :param ca_name:\n \"\"\"\n if g.current_user.is_admin:\n authority = get_by_name(ca_name)\n # TODO we should pick admin ca roles for admin\n return authority.roles[0]\n else:\n for role in g.current_user.roles:\n if role.authority:\n if role.authority.name == ca_name:\n return role\n\n\ndef render(args):\n \"\"\"\n Helper that helps us render the REST Api responses.\n :param args:\n :return:\n \"\"\"\n query = database.session_query(Authority)\n sort_by = args.pop('sort_by')\n sort_dir = args.pop('sort_dir')\n page = args.pop('page')\n count = args.pop('count')\n filt = args.pop('filter')\n\n if filt:\n terms = filt.split(';')\n if 'active' in filt: # this is really weird but strcmp seems to not work here??\n query = query.filter(Authority.active == terms[1])\n else:\n query = database.filter(query, Authority, terms)\n\n # we make sure that a user can only use an authority they either own are are a member of - admins can see all\n if not g.current_user.is_admin:\n authority_ids = []\n for role in g.current_user.roles:\n if role.authority:\n authority_ids.append(role.authority.id)\n query = query.filter(Authority.id.in_(authority_ids))\n\n query = database.find_all(query, Authority, args)\n\n if sort_by and sort_dir:\n query = database.sort(query, Authority, sort_by, sort_dir)\n\n return database.paginate(query, page, count)\n", "path": "lemur/authorities/service.py"}]} | 2,250 | 219 |
gh_patches_debug_26002 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2493 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OTLP HTTP Exporter should append `v1/{signal}` to the endpoint URL when non-per-signal env var is used
Per the spec re: [Endpoint URLs for OTLP/HTTP](https://github.com/open-telemetry/opentelemetry-specification/blob/b7473b5de0f55f921f896948442ebb274f58b584/specification/protocol/exporter.md#per-signal-urls), when the non-per-signal endpoint env var (`OTEL_EXPORTER_OTLP_ENDPOINT`) is set, the exporter *must* construct per-signal URLs (either `v1/traces` or `v1/metrics`).
Currently, the [exporter does not do this](https://github.com/open-telemetry/opentelemetry-python/blob/80f5a20ba8f3a71450fe3020fecf362fedb76bff/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py#L68); `v1/traces` must be manually added to the end point when setting only `OTEL_EXPORTER_OTLP_ENDPOINT`. Not doing so produces a 404 error when attempting to export spans.
</issue>
<code>
[start of exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import gzip
16 import logging
17 import zlib
18 from io import BytesIO
19 from os import environ
20 from typing import Dict, Optional
21 from time import sleep
22
23 import requests
24 from backoff import expo
25
26 from opentelemetry.sdk.environment_variables import (
27 OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
28 OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
29 OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
30 OTEL_EXPORTER_OTLP_TRACES_HEADERS,
31 OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
32 OTEL_EXPORTER_OTLP_CERTIFICATE,
33 OTEL_EXPORTER_OTLP_COMPRESSION,
34 OTEL_EXPORTER_OTLP_ENDPOINT,
35 OTEL_EXPORTER_OTLP_HEADERS,
36 OTEL_EXPORTER_OTLP_TIMEOUT,
37 )
38 from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
39 from opentelemetry.exporter.otlp.proto.http import Compression
40 from opentelemetry.exporter.otlp.proto.http.trace_exporter.encoder import (
41 _ProtobufEncoder,
42 )
43 from opentelemetry.util.re import parse_headers
44
45
46 _logger = logging.getLogger(__name__)
47
48
49 DEFAULT_COMPRESSION = Compression.NoCompression
50 DEFAULT_ENDPOINT = "http://localhost:4318/v1/traces"
51 DEFAULT_TIMEOUT = 10 # in seconds
52
53
54 class OTLPSpanExporter(SpanExporter):
55
56 _MAX_RETRY_TIMEOUT = 64
57
58 def __init__(
59 self,
60 endpoint: Optional[str] = None,
61 certificate_file: Optional[str] = None,
62 headers: Optional[Dict[str, str]] = None,
63 timeout: Optional[int] = None,
64 compression: Optional[Compression] = None,
65 ):
66 self._endpoint = endpoint or environ.get(
67 OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
68 environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT),
69 )
70 self._certificate_file = certificate_file or environ.get(
71 OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
72 environ.get(OTEL_EXPORTER_OTLP_CERTIFICATE, True),
73 )
74 headers_string = environ.get(
75 OTEL_EXPORTER_OTLP_TRACES_HEADERS,
76 environ.get(OTEL_EXPORTER_OTLP_HEADERS, ""),
77 )
78 self._headers = headers or parse_headers(headers_string)
79 self._timeout = timeout or int(
80 environ.get(
81 OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
82 environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, DEFAULT_TIMEOUT),
83 )
84 )
85 self._compression = compression or _compression_from_env()
86 self._session = requests.Session()
87 self._session.headers.update(self._headers)
88 self._session.headers.update(
89 {"Content-Type": _ProtobufEncoder._CONTENT_TYPE}
90 )
91 if self._compression is not Compression.NoCompression:
92 self._session.headers.update(
93 {"Content-Encoding": self._compression.value}
94 )
95 self._shutdown = False
96
97 def _export(self, serialized_data: str):
98 data = serialized_data
99 if self._compression == Compression.Gzip:
100 gzip_data = BytesIO()
101 with gzip.GzipFile(fileobj=gzip_data, mode="w") as gzip_stream:
102 gzip_stream.write(serialized_data)
103 data = gzip_data.getvalue()
104 elif self._compression == Compression.Deflate:
105 data = zlib.compress(bytes(serialized_data))
106
107 return self._session.post(
108 url=self._endpoint,
109 data=data,
110 verify=self._certificate_file,
111 timeout=self._timeout,
112 )
113
114 @staticmethod
115 def _retryable(resp: requests.Response) -> bool:
116 if resp.status_code == 408:
117 return True
118 if resp.status_code >= 500 and resp.status_code <= 599:
119 return True
120 return False
121
122 def export(self, spans) -> SpanExportResult:
123 # After the call to Shutdown subsequent calls to Export are
124 # not allowed and should return a Failure result.
125 if self._shutdown:
126 _logger.warning("Exporter already shutdown, ignoring batch")
127 return SpanExportResult.FAILURE
128
129 serialized_data = _ProtobufEncoder.serialize(spans)
130
131 for delay in expo(max_value=self._MAX_RETRY_TIMEOUT):
132
133 if delay == self._MAX_RETRY_TIMEOUT:
134 return SpanExportResult.FAILURE
135
136 resp = self._export(serialized_data)
137 # pylint: disable=no-else-return
138 if resp.status_code in (200, 202):
139 return SpanExportResult.SUCCESS
140 elif self._retryable(resp):
141 _logger.warning(
142 "Transient error %s encountered while exporting span batch, retrying in %ss.",
143 resp.reason,
144 delay,
145 )
146 sleep(delay)
147 continue
148 else:
149 _logger.error(
150 "Failed to export batch code: %s, reason: %s",
151 resp.status_code,
152 resp.text,
153 )
154 return SpanExportResult.FAILURE
155 return SpanExportResult.FAILURE
156
157 def shutdown(self):
158 if self._shutdown:
159 _logger.warning("Exporter already shutdown, ignoring call")
160 return
161 self._session.close()
162 self._shutdown = True
163
164
165 def _compression_from_env() -> Compression:
166 compression = (
167 environ.get(
168 OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
169 environ.get(OTEL_EXPORTER_OTLP_COMPRESSION, "none"),
170 )
171 .lower()
172 .strip()
173 )
174 return Compression(compression)
175
[end of exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py
--- a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py
@@ -47,7 +47,8 @@
DEFAULT_COMPRESSION = Compression.NoCompression
-DEFAULT_ENDPOINT = "http://localhost:4318/v1/traces"
+DEFAULT_ENDPOINT = "http://localhost:4318/"
+DEFAULT_TRACES_EXPORT_PATH = "v1/traces"
DEFAULT_TIMEOUT = 10 # in seconds
@@ -65,7 +66,9 @@
):
self._endpoint = endpoint or environ.get(
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
- environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT),
+ _append_trace_path(
+ environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT)
+ ),
)
self._certificate_file = certificate_file or environ.get(
OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
@@ -172,3 +175,9 @@
.strip()
)
return Compression(compression)
+
+
+def _append_trace_path(endpoint: str) -> str:
+ if endpoint.endswith("/"):
+ return endpoint + DEFAULT_TRACES_EXPORT_PATH
+ return endpoint + f"/{DEFAULT_TRACES_EXPORT_PATH}"
| {"golden_diff": "diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py\n--- a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py\n+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py\n@@ -47,7 +47,8 @@\n \n \n DEFAULT_COMPRESSION = Compression.NoCompression\n-DEFAULT_ENDPOINT = \"http://localhost:4318/v1/traces\"\n+DEFAULT_ENDPOINT = \"http://localhost:4318/\"\n+DEFAULT_TRACES_EXPORT_PATH = \"v1/traces\"\n DEFAULT_TIMEOUT = 10 # in seconds\n \n \n@@ -65,7 +66,9 @@\n ):\n self._endpoint = endpoint or environ.get(\n OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,\n- environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT),\n+ _append_trace_path(\n+ environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT)\n+ ),\n )\n self._certificate_file = certificate_file or environ.get(\n OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,\n@@ -172,3 +175,9 @@\n .strip()\n )\n return Compression(compression)\n+\n+\n+def _append_trace_path(endpoint: str) -> str:\n+ if endpoint.endswith(\"/\"):\n+ return endpoint + DEFAULT_TRACES_EXPORT_PATH\n+ return endpoint + f\"/{DEFAULT_TRACES_EXPORT_PATH}\"\n", "issue": "OTLP HTTP Exporter should append `v1/{signal}` to the endpoint URL when non-per-signal env var is used\nPer the spec re: [Endpoint URLs for OTLP/HTTP](https://github.com/open-telemetry/opentelemetry-specification/blob/b7473b5de0f55f921f896948442ebb274f58b584/specification/protocol/exporter.md#per-signal-urls), when the non-per-signal endpoint env var (`OTEL_EXPORTER_OTLP_ENDPOINT`) is set, the exporter *must* construct per-signal URLs (either `v1/traces` or `v1/metrics`). \r\n\r\nCurrently, the [exporter does not do this](https://github.com/open-telemetry/opentelemetry-python/blob/80f5a20ba8f3a71450fe3020fecf362fedb76bff/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py#L68); `v1/traces` must be manually added to the end point when setting only `OTEL_EXPORTER_OTLP_ENDPOINT`. Not doing so produces a 404 error when attempting to export spans. 
\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport gzip\nimport logging\nimport zlib\nfrom io import BytesIO\nfrom os import environ\nfrom typing import Dict, Optional\nfrom time import sleep\n\nimport requests\nfrom backoff import expo\n\nfrom opentelemetry.sdk.environment_variables import (\n OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,\n OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,\n OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,\n OTEL_EXPORTER_OTLP_TRACES_HEADERS,\n OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,\n OTEL_EXPORTER_OTLP_CERTIFICATE,\n OTEL_EXPORTER_OTLP_COMPRESSION,\n OTEL_EXPORTER_OTLP_ENDPOINT,\n OTEL_EXPORTER_OTLP_HEADERS,\n OTEL_EXPORTER_OTLP_TIMEOUT,\n)\nfrom opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult\nfrom opentelemetry.exporter.otlp.proto.http import Compression\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter.encoder import (\n _ProtobufEncoder,\n)\nfrom opentelemetry.util.re import parse_headers\n\n\n_logger = logging.getLogger(__name__)\n\n\nDEFAULT_COMPRESSION = Compression.NoCompression\nDEFAULT_ENDPOINT = \"http://localhost:4318/v1/traces\"\nDEFAULT_TIMEOUT = 10 # in seconds\n\n\nclass OTLPSpanExporter(SpanExporter):\n\n _MAX_RETRY_TIMEOUT = 64\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n certificate_file: Optional[str] = None,\n headers: Optional[Dict[str, str]] = None,\n timeout: Optional[int] = None,\n compression: Optional[Compression] = None,\n ):\n self._endpoint = endpoint or environ.get(\n OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,\n environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT),\n )\n self._certificate_file = certificate_file or environ.get(\n OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,\n environ.get(OTEL_EXPORTER_OTLP_CERTIFICATE, True),\n )\n headers_string = environ.get(\n OTEL_EXPORTER_OTLP_TRACES_HEADERS,\n environ.get(OTEL_EXPORTER_OTLP_HEADERS, \"\"),\n )\n self._headers = headers or parse_headers(headers_string)\n self._timeout = timeout or int(\n environ.get(\n OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,\n environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, DEFAULT_TIMEOUT),\n )\n )\n self._compression = compression or _compression_from_env()\n self._session = requests.Session()\n self._session.headers.update(self._headers)\n self._session.headers.update(\n {\"Content-Type\": _ProtobufEncoder._CONTENT_TYPE}\n )\n if self._compression is not Compression.NoCompression:\n self._session.headers.update(\n {\"Content-Encoding\": self._compression.value}\n )\n self._shutdown = False\n\n def _export(self, serialized_data: str):\n data = serialized_data\n if self._compression == Compression.Gzip:\n gzip_data = BytesIO()\n with gzip.GzipFile(fileobj=gzip_data, mode=\"w\") as gzip_stream:\n gzip_stream.write(serialized_data)\n data = gzip_data.getvalue()\n elif self._compression == Compression.Deflate:\n data = zlib.compress(bytes(serialized_data))\n\n return self._session.post(\n url=self._endpoint,\n data=data,\n verify=self._certificate_file,\n 
timeout=self._timeout,\n )\n\n @staticmethod\n def _retryable(resp: requests.Response) -> bool:\n if resp.status_code == 408:\n return True\n if resp.status_code >= 500 and resp.status_code <= 599:\n return True\n return False\n\n def export(self, spans) -> SpanExportResult:\n # After the call to Shutdown subsequent calls to Export are\n # not allowed and should return a Failure result.\n if self._shutdown:\n _logger.warning(\"Exporter already shutdown, ignoring batch\")\n return SpanExportResult.FAILURE\n\n serialized_data = _ProtobufEncoder.serialize(spans)\n\n for delay in expo(max_value=self._MAX_RETRY_TIMEOUT):\n\n if delay == self._MAX_RETRY_TIMEOUT:\n return SpanExportResult.FAILURE\n\n resp = self._export(serialized_data)\n # pylint: disable=no-else-return\n if resp.status_code in (200, 202):\n return SpanExportResult.SUCCESS\n elif self._retryable(resp):\n _logger.warning(\n \"Transient error %s encountered while exporting span batch, retrying in %ss.\",\n resp.reason,\n delay,\n )\n sleep(delay)\n continue\n else:\n _logger.error(\n \"Failed to export batch code: %s, reason: %s\",\n resp.status_code,\n resp.text,\n )\n return SpanExportResult.FAILURE\n return SpanExportResult.FAILURE\n\n def shutdown(self):\n if self._shutdown:\n _logger.warning(\"Exporter already shutdown, ignoring call\")\n return\n self._session.close()\n self._shutdown = True\n\n\ndef _compression_from_env() -> Compression:\n compression = (\n environ.get(\n OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,\n environ.get(OTEL_EXPORTER_OTLP_COMPRESSION, \"none\"),\n )\n .lower()\n .strip()\n )\n return Compression(compression)\n", "path": "exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py"}]} | 2,570 | 394 |
gh_patches_debug_66174 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide documentation about user_groups and permission changes
### Issue description
If we edit permissions for User Groups, such as to make different items viewable on django admin, this requires a very specific set of instructions that is only documented inside user group migration files. We should reference this documentation somewhere else in our developer readme so it's clear what needs to be done if you ever change permissions in the future.
 We had a case where permissions were changed and it wasn't clear what needed to be done (make a manual migration). The developers lost time trying to figure out why permissions didn't show and why running makemigrations changed nothing. I suggest adding an inline code comment around where permissions are set in user_groups too that points to documentation in the developer readme. This may save future developers' time.
### Acceptance criteria
- [ ] update developer documentation (inline and md) about user_group/ permission changes
### Additional context
the migration files that have documentation about this are all ones that say _create_groups_v (followed by a number), such as 0075_create_groups_v08.py. See those for the current documentation and note that this was hard for developers to find.
[Slack thread](https://cisa-corp.slack.com/archives/C05BGB4L5NF/p1709939481415349)
### Links to other issues
_No response_
</issue>
<code>
[start of src/registrar/models/user_group.py]
1 from django.contrib.auth.models import Group
2 import logging
3
4 logger = logging.getLogger(__name__)
5
6
7 class UserGroup(Group):
8 class Meta:
9 verbose_name = "User group"
10 verbose_name_plural = "User groups"
11
12 def create_cisa_analyst_group(apps, schema_editor):
13 """This method gets run from a data migration."""
14
15 # Hard to pass self to these methods as the calls from migrations
16 # are only expecting apps and schema_editor, so we'll just define
17 # apps, schema_editor in the local scope instead
18 CISA_ANALYST_GROUP_PERMISSIONS = [
19 {
20 "app_label": "auditlog",
21 "model": "logentry",
22 "permissions": ["view_logentry"],
23 },
24 {
25 "app_label": "registrar",
26 "model": "contact",
27 "permissions": ["change_contact"],
28 },
29 {
30 "app_label": "registrar",
31 "model": "domainrequest",
32 "permissions": ["change_domainrequest"],
33 },
34 {
35 "app_label": "registrar",
36 "model": "domain",
37 "permissions": ["view_domain"],
38 },
39 {
40 "app_label": "registrar",
41 "model": "draftdomain",
42 "permissions": ["change_draftdomain"],
43 },
44 {
45 "app_label": "registrar",
46 "model": "user",
47 "permissions": ["analyst_access_permission", "change_user"],
48 },
49 {
50 "app_label": "registrar",
51 "model": "domaininvitation",
52 "permissions": ["add_domaininvitation", "view_domaininvitation"],
53 },
54 {
55 "app_label": "registrar",
56 "model": "website",
57 "permissions": ["change_website"],
58 },
59 {
60 "app_label": "registrar",
61 "model": "userdomainrole",
62 "permissions": ["view_userdomainrole", "delete_userdomainrole"],
63 },
64 {
65 "app_label": "registrar",
66 "model": "verifiedbystaff",
67 "permissions": ["add_verifiedbystaff", "change_verifiedbystaff", "delete_verifiedbystaff"],
68 },
69 {
70 "app_label": "registrar",
71 "model": "federalagency",
72 "permissions": ["add_federalagency", "change_federalagency", "delete_federalagency"],
73 },
74 ]
75
76 # Avoid error: You can't execute queries until the end
77 # of the 'atomic' block.
78 # From django docs:
79 # https://docs.djangoproject.com/en/4.2/topics/migrations/#data-migrations
80 # We can’t import the Person model directly as it may be a newer
81 # version than this migration expects. We use the historical version.
82 ContentType = apps.get_model("contenttypes", "ContentType")
83 Permission = apps.get_model("auth", "Permission")
84 UserGroup = apps.get_model("registrar", "UserGroup")
85
86 logger.info("Going to create the Analyst Group")
87 try:
88 cisa_analysts_group, _ = UserGroup.objects.get_or_create(
89 name="cisa_analysts_group",
90 )
91
92 cisa_analysts_group.permissions.clear()
93
94 for permission in CISA_ANALYST_GROUP_PERMISSIONS:
95 app_label = permission["app_label"]
96 model_name = permission["model"]
97 permissions = permission["permissions"]
98
99 # Retrieve the content type for the app and model
100 content_type = ContentType.objects.get(app_label=app_label, model=model_name)
101
102 # Retrieve the permissions based on their codenames
103 permissions = Permission.objects.filter(content_type=content_type, codename__in=permissions)
104
105 # Assign the permissions to the group
106 cisa_analysts_group.permissions.add(*permissions)
107
108 # Convert the permissions QuerySet to a list of codenames
109 permission_list = list(permissions.values_list("codename", flat=True))
110
111 logger.debug(
112 app_label
113 + " | "
114 + model_name
115 + " | "
116 + ", ".join(permission_list)
117 + " added to group "
118 + cisa_analysts_group.name
119 )
120
121 cisa_analysts_group.save()
122 logger.debug("CISA Analyst permissions added to group " + cisa_analysts_group.name)
123 except Exception as e:
124 logger.error(f"Error creating analyst permissions group: {e}")
125
126 def create_full_access_group(apps, schema_editor):
127 """This method gets run from a data migration."""
128
129 Permission = apps.get_model("auth", "Permission")
130 UserGroup = apps.get_model("registrar", "UserGroup")
131
132 logger.info("Going to create the Full Access Group")
133 try:
134 full_access_group, _ = UserGroup.objects.get_or_create(
135 name="full_access_group",
136 )
137 # Get all available permissions
138 all_permissions = Permission.objects.all()
139
140 # Assign all permissions to the group
141 full_access_group.permissions.add(*all_permissions)
142
143 full_access_group.save()
144 logger.debug("All permissions added to group " + full_access_group.name)
145 except Exception as e:
146 logger.error(f"Error creating full access group: {e}")
147
[end of src/registrar/models/user_group.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/registrar/models/user_group.py b/src/registrar/models/user_group.py
--- a/src/registrar/models/user_group.py
+++ b/src/registrar/models/user_group.py
@@ -5,6 +5,11 @@
class UserGroup(Group):
+ """
+ UserGroup sets read and write permissions for superusers (who have full access)
+ and analysts. For more details, see the dev docs for user-permissions.
+ """
+
class Meta:
verbose_name = "User group"
verbose_name_plural = "User groups"
| {"golden_diff": "diff --git a/src/registrar/models/user_group.py b/src/registrar/models/user_group.py\n--- a/src/registrar/models/user_group.py\n+++ b/src/registrar/models/user_group.py\n@@ -5,6 +5,11 @@\n \n \n class UserGroup(Group):\n+ \"\"\"\n+ UserGroup sets read and write permissions for superusers (who have full access)\n+ and analysts. For more details, see the dev docs for user-permissions.\n+ \"\"\"\n+\n class Meta:\n verbose_name = \"User group\"\n verbose_name_plural = \"User groups\"\n", "issue": "Provide documentation about user_groups and permission changes\n### Issue description\r\n\r\nIf we edit permissions for User Groups, such as to make different items viewable on django admin, this requires a very specific set of instructions that is only documentated inside user group migration files. We should reference this documentation somewhere else in our developer readme so it's clear what is needed to be done if you ever change permissions in the future.\r\n\r\n We had a case where permissions were changed and it wasn't clear what needed to be done (make a manual migration). The developers lost time trying to figure out why permissions didn't show and why running makemigrations changed nothing. I suggest adding an inline code comment around where permissions are set in user_groups too that points to documentation in developer readme. This may save future developer's time.\r\n\r\n### Acceptance criteria\r\n\r\n- [ ] update developer documentation (inline and md) about user_group/ permission changes\r\n\r\n### Additional context\r\nthe migration files that have documentation about this are all ones that say _create_groups_v (followed by a number), such as 0075_create_groups_v08.py. See those for the current documentation and note that this was hard for developers to find.\r\n[Slack thread](https://cisa-corp.slack.com/archives/C05BGB4L5NF/p1709939481415349)\r\n\r\n### Links to other issues\r\n\r\n_No response_\n", "before_files": [{"content": "from django.contrib.auth.models import Group\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass UserGroup(Group):\n class Meta:\n verbose_name = \"User group\"\n verbose_name_plural = \"User groups\"\n\n def create_cisa_analyst_group(apps, schema_editor):\n \"\"\"This method gets run from a data migration.\"\"\"\n\n # Hard to pass self to these methods as the calls from migrations\n # are only expecting apps and schema_editor, so we'll just define\n # apps, schema_editor in the local scope instead\n CISA_ANALYST_GROUP_PERMISSIONS = [\n {\n \"app_label\": \"auditlog\",\n \"model\": \"logentry\",\n \"permissions\": [\"view_logentry\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"contact\",\n \"permissions\": [\"change_contact\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domainrequest\",\n \"permissions\": [\"change_domainrequest\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domain\",\n \"permissions\": [\"view_domain\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"draftdomain\",\n \"permissions\": [\"change_draftdomain\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"user\",\n \"permissions\": [\"analyst_access_permission\", \"change_user\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domaininvitation\",\n \"permissions\": [\"add_domaininvitation\", \"view_domaininvitation\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"website\",\n \"permissions\": [\"change_website\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": 
\"userdomainrole\",\n \"permissions\": [\"view_userdomainrole\", \"delete_userdomainrole\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"verifiedbystaff\",\n \"permissions\": [\"add_verifiedbystaff\", \"change_verifiedbystaff\", \"delete_verifiedbystaff\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"federalagency\",\n \"permissions\": [\"add_federalagency\", \"change_federalagency\", \"delete_federalagency\"],\n },\n ]\n\n # Avoid error: You can't execute queries until the end\n # of the 'atomic' block.\n # From django docs:\n # https://docs.djangoproject.com/en/4.2/topics/migrations/#data-migrations\n # We can\u2019t import the Person model directly as it may be a newer\n # version than this migration expects. We use the historical version.\n ContentType = apps.get_model(\"contenttypes\", \"ContentType\")\n Permission = apps.get_model(\"auth\", \"Permission\")\n UserGroup = apps.get_model(\"registrar\", \"UserGroup\")\n\n logger.info(\"Going to create the Analyst Group\")\n try:\n cisa_analysts_group, _ = UserGroup.objects.get_or_create(\n name=\"cisa_analysts_group\",\n )\n\n cisa_analysts_group.permissions.clear()\n\n for permission in CISA_ANALYST_GROUP_PERMISSIONS:\n app_label = permission[\"app_label\"]\n model_name = permission[\"model\"]\n permissions = permission[\"permissions\"]\n\n # Retrieve the content type for the app and model\n content_type = ContentType.objects.get(app_label=app_label, model=model_name)\n\n # Retrieve the permissions based on their codenames\n permissions = Permission.objects.filter(content_type=content_type, codename__in=permissions)\n\n # Assign the permissions to the group\n cisa_analysts_group.permissions.add(*permissions)\n\n # Convert the permissions QuerySet to a list of codenames\n permission_list = list(permissions.values_list(\"codename\", flat=True))\n\n logger.debug(\n app_label\n + \" | \"\n + model_name\n + \" | \"\n + \", \".join(permission_list)\n + \" added to group \"\n + cisa_analysts_group.name\n )\n\n cisa_analysts_group.save()\n logger.debug(\"CISA Analyst permissions added to group \" + cisa_analysts_group.name)\n except Exception as e:\n logger.error(f\"Error creating analyst permissions group: {e}\")\n\n def create_full_access_group(apps, schema_editor):\n \"\"\"This method gets run from a data migration.\"\"\"\n\n Permission = apps.get_model(\"auth\", \"Permission\")\n UserGroup = apps.get_model(\"registrar\", \"UserGroup\")\n\n logger.info(\"Going to create the Full Access Group\")\n try:\n full_access_group, _ = UserGroup.objects.get_or_create(\n name=\"full_access_group\",\n )\n # Get all available permissions\n all_permissions = Permission.objects.all()\n\n # Assign all permissions to the group\n full_access_group.permissions.add(*all_permissions)\n\n full_access_group.save()\n logger.debug(\"All permissions added to group \" + full_access_group.name)\n except Exception as e:\n logger.error(f\"Error creating full access group: {e}\")\n", "path": "src/registrar/models/user_group.py"}]} | 2,267 | 121 |
gh_patches_debug_19297 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1442 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DeepFM unit test for new PS
</issue>
<code>
[start of elasticdl/python/common/tensor.py]
1 import numpy as np
2 import tensorflow as tf
3
4 from elasticdl.proto import elasticdl_pb2
5 from elasticdl.python.common.dtypes import (
6 dtype_numpy_to_tensor,
7 dtype_tensor_to_numpy,
8 )
9 from elasticdl.python.common.log_utils import default_logger as logger
10
11
12 class Tensor(object):
13 """Data structure for tensors in ElasticDL.
14
15 `Tensor` can save dense tensors and sparse tensors. For sparse tensors,
16 this structure saves them in the same way as `TensorFlow.IndexedSlices`.
17 """
18
19 def __init__(self, values=None, indices=None, name=None):
20 """
21 `Tensor` can save dense tensors and sparse tensors.
22 To pass in a dense tensor, `values` should be `numpy.ndarray` and
23 `indices` should be None.
24 There are two ways to pass in a sparse tensor:
25 * `values` is a `numpy.ndarray` and `indices` is a `numpy.ndarray`.
26 * `values` is a `TensorFlow.IndexedSlices` and `indices` is None.
27
28 Args:
29 values: A `numpy.ndarray` or `TensorFlow.IndexedSlices`.
30 If `values` is a `TensorFlow.IndexedSlices`, `indices` should
31 be None.
32 indices: A `numpy.ndarray` or None.
33 name: A python string.
34 """
35 self.set(values, indices, name)
36
37 @classmethod
38 def from_tensor_pb(cls, tensor_pb):
39 """Create an ElasticDL Tensor object from tensor protocol buffer.
40
41 Return the created Tensor object.
42 """
43 tensor = cls()
44 deserialize_tensor_pb(tensor_pb, tensor)
45 return tensor
46
47 def set(self, values=None, indices=None, name=None):
48 self.name = name
49 if isinstance(values, tf.IndexedSlices):
50 if indices is not None:
51 raise ValueError(
52 "When creating a Tensor object with values of type "
53 "tf.IndexedSlices, indices must be None."
54 )
55 if values.dense_shape is not None:
56 # TODO(yunjian.lmh): Support dense shape, or do not print
57 # warning message, or there will be too much warning
58 # messages.
59 logger.warning(
60 "ElasticDL Tensor ignores dense_shape in "
61 "TensorFlow.IndexedSlices."
62 )
63
64 self.values = values.values.numpy()
65 self.indices = values.indices.numpy()
66 else:
67 self.values = (
68 values.numpy() if isinstance(values, tf.Tensor) else values
69 )
70 self.indices = (
71 indices.numpy() if isinstance(indices, tf.Tensor) else indices
72 )
73
74 def is_indexed_slices(self):
75 return self.indices is not None
76
77 def to_tensor_pb(self):
78 tensor_pb = elasticdl_pb2.Tensor()
79 serialize_tensor(self, tensor_pb)
80 return tensor_pb
81
82 def to_tf_tensor(self):
83 if self.is_indexed_slices():
84 return tf.IndexedSlices(self.values, self.indices)
85 else:
86 return tf.constant(self.values)
87
88 def to_ndarray(self):
89 if self.is_indexed_slices():
90 # Currently Tensor does not have a field representing dense shape,
91 # thus can not convert it to numpy.ndarray.
92 raise NotImplementedError(
93 "Converting an ElasticDL Tensor object, which contains a "
94 "sparse tensor, to a numpy.ndarray is not supported."
95 )
96 return self.values
97
98 def __add__(self, other):
99 if self.is_indexed_slices() and other.is_indexed_slices():
100 self.values = np.concatenate((self.values, other.values), axis=0)
101 self.indices = np.concatenate(
102 (self.indices, other.indices), axis=0
103 )
104 elif not self.is_indexed_slices() and not other.is_indexed_slices():
105 self.values = self.values + other.values
106 else:
107 raise NotImplementedError(
108 "Only Tensor with the same type could be added"
109 )
110 return self
111
112 def __radd__(self, other):
113 return self + other
114
115
116 def serialize_tensor(tensor, tensor_pb):
117 """Serialize ElasticDL Tensor to tensor protocol buffer."""
118 dtype = dtype_numpy_to_tensor(tensor.values.dtype)
119 if not dtype:
120 raise ValueError(
121 "Dtype of ndarray %s is not supported", tensor.values.dtype
122 )
123 tensor_pb.dtype = dtype
124 tensor_pb.dim.extend(tensor.values.shape)
125 tensor_pb.content = tensor.values.tobytes()
126 if tensor.is_indexed_slices():
127 tensor_pb.indices.extend(tuple(tensor.indices))
128 if tensor.name:
129 tensor_pb.name = tensor.name
130
131
132 def deserialize_tensor_pb(tensor_pb, tensor):
133 """Deserialize tensor protocol buffer to ElasticDL Tensor.
134
135 Note that the input tensor protocol buffer is reset and underlying buffer
136 is passed to the returned ndarray.
137 """
138 if not tensor_pb.dim:
139 raise ValueError("Tensor PB has no dim defined")
140
141 dtype = dtype_tensor_to_numpy(tensor_pb.dtype)
142 # Check that the buffer size agrees with dimensions.
143 size = dtype.itemsize
144 for d in tensor_pb.dim:
145 size *= d
146 if size != len(tensor_pb.content):
147 raise ValueError(
148 "Tensor PB size mismatch, dim: %s, len(content): %d",
149 tensor_pb.dim,
150 len(tensor_pb.content),
151 )
152 tensor.set(
153 values=np.ndarray(
154 shape=tensor_pb.dim, dtype=dtype, buffer=tensor_pb.content
155 ),
156 indices=np.array(tensor_pb.indices) if tensor_pb.indices else None,
157 name=tensor_pb.name,
158 )
159 tensor_pb.Clear()
160
161
162 def tensor_pb_to_ndarray(tensor_pb):
163 """Deserialize tensor protocol buffer and return a numpy ndarray."""
164 return Tensor.from_tensor_pb(tensor_pb).to_ndarray()
165
166
167 def tensor_pb_to_tf_tensor(tensor_pb):
168 """Deserialize tensor protocol buffer and return a TensorFlow tensor."""
169 return Tensor.from_tensor_pb(tensor_pb).to_tf_tensor()
170
171
172 def emplace_tensor_pb_from_ndarray(
173 tensor_pb_list, values, indices=None, name=None
174 ):
175 """Generate a tensor procotol buffer and append it to tensor_pb_list.
176
177 Note:
178 This function does not use list append function as following code
179 snippet. It is slow because append function will copy the input
180 protocol buffer.
181
182 ```
183 pb = elasticdl_pb2.Tensor()
184 pb.dim.extend([3])
185 pb.name = "test"
186 pb.dtype = DT_INT64
187 pb.content = np.array([1, 2, 3]).tobytes()
188 tensor_pb_list.append(tensor_pb) # slow, because append copies pb
189 ```
190 """
191 tensor_pb = tensor_pb_list.add()
192 tensor = Tensor(values, indices, name)
193 serialize_tensor(tensor, tensor_pb)
194
[end of elasticdl/python/common/tensor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/python/common/tensor.py b/elasticdl/python/common/tensor.py
--- a/elasticdl/python/common/tensor.py
+++ b/elasticdl/python/common/tensor.py
@@ -6,7 +6,6 @@
dtype_numpy_to_tensor,
dtype_tensor_to_numpy,
)
-from elasticdl.python.common.log_utils import default_logger as logger
class Tensor(object):
@@ -53,13 +52,8 @@
"tf.IndexedSlices, indices must be None."
)
if values.dense_shape is not None:
- # TODO(yunjian.lmh): Support dense shape, or do not print
- # warning message, or there will be too much warning
- # messages.
- logger.warning(
- "ElasticDL Tensor ignores dense_shape in "
- "TensorFlow.IndexedSlices."
- )
+ # TODO(yunjian.lmh): Support dense shape
+ pass
self.values = values.values.numpy()
self.indices = values.indices.numpy()
| {"golden_diff": "diff --git a/elasticdl/python/common/tensor.py b/elasticdl/python/common/tensor.py\n--- a/elasticdl/python/common/tensor.py\n+++ b/elasticdl/python/common/tensor.py\n@@ -6,7 +6,6 @@\n dtype_numpy_to_tensor,\n dtype_tensor_to_numpy,\n )\n-from elasticdl.python.common.log_utils import default_logger as logger\n \n \n class Tensor(object):\n@@ -53,13 +52,8 @@\n \"tf.IndexedSlices, indices must be None.\"\n )\n if values.dense_shape is not None:\n- # TODO(yunjian.lmh): Support dense shape, or do not print\n- # warning message, or there will be too much warning\n- # messages.\n- logger.warning(\n- \"ElasticDL Tensor ignores dense_shape in \"\n- \"TensorFlow.IndexedSlices.\"\n- )\n+ # TODO(yunjian.lmh): Support dense shape\n+ pass\n \n self.values = values.values.numpy()\n self.indices = values.indices.numpy()\n", "issue": "DeepFM unit test for new PS\n\n", "before_files": [{"content": "import numpy as np\nimport tensorflow as tf\n\nfrom elasticdl.proto import elasticdl_pb2\nfrom elasticdl.python.common.dtypes import (\n dtype_numpy_to_tensor,\n dtype_tensor_to_numpy,\n)\nfrom elasticdl.python.common.log_utils import default_logger as logger\n\n\nclass Tensor(object):\n \"\"\"Data structure for tensors in ElasticDL.\n\n `Tensor` can save dense tensors and sparse tensors. For sparse tensors,\n this structure saves them in the same way as `TensorFlow.IndexedSlices`.\n \"\"\"\n\n def __init__(self, values=None, indices=None, name=None):\n \"\"\"\n `Tensor` can save dense tensors and sparse tensors.\n To pass in a dense tensor, `values` should be `numpy.ndarray` and\n `indices` should be None.\n There are two ways to pass in a sparse tensor:\n * `values` is a `numpy.ndarray` and `indices` is a `numpy.ndarray`.\n * `values` is a `TensorFlow.IndexedSlices` and `indices` is None.\n\n Args:\n values: A `numpy.ndarray` or `TensorFlow.IndexedSlices`.\n If `values` is a `TensorFlow.IndexedSlices`, `indices` should\n be None.\n indices: A `numpy.ndarray` or None.\n name: A python string.\n \"\"\"\n self.set(values, indices, name)\n\n @classmethod\n def from_tensor_pb(cls, tensor_pb):\n \"\"\"Create an ElasticDL Tensor object from tensor protocol buffer.\n\n Return the created Tensor object.\n \"\"\"\n tensor = cls()\n deserialize_tensor_pb(tensor_pb, tensor)\n return tensor\n\n def set(self, values=None, indices=None, name=None):\n self.name = name\n if isinstance(values, tf.IndexedSlices):\n if indices is not None:\n raise ValueError(\n \"When creating a Tensor object with values of type \"\n \"tf.IndexedSlices, indices must be None.\"\n )\n if values.dense_shape is not None:\n # TODO(yunjian.lmh): Support dense shape, or do not print\n # warning message, or there will be too much warning\n # messages.\n logger.warning(\n \"ElasticDL Tensor ignores dense_shape in \"\n \"TensorFlow.IndexedSlices.\"\n )\n\n self.values = values.values.numpy()\n self.indices = values.indices.numpy()\n else:\n self.values = (\n values.numpy() if isinstance(values, tf.Tensor) else values\n )\n self.indices = (\n indices.numpy() if isinstance(indices, tf.Tensor) else indices\n )\n\n def is_indexed_slices(self):\n return self.indices is not None\n\n def to_tensor_pb(self):\n tensor_pb = elasticdl_pb2.Tensor()\n serialize_tensor(self, tensor_pb)\n return tensor_pb\n\n def to_tf_tensor(self):\n if self.is_indexed_slices():\n return tf.IndexedSlices(self.values, self.indices)\n else:\n return tf.constant(self.values)\n\n def to_ndarray(self):\n if self.is_indexed_slices():\n # Currently Tensor does not have a 
field representing dense shape,\n # thus can not convert it to numpy.ndarray.\n raise NotImplementedError(\n \"Converting an ElasticDL Tensor object, which contains a \"\n \"sparse tensor, to a numpy.ndarray is not supported.\"\n )\n return self.values\n\n def __add__(self, other):\n if self.is_indexed_slices() and other.is_indexed_slices():\n self.values = np.concatenate((self.values, other.values), axis=0)\n self.indices = np.concatenate(\n (self.indices, other.indices), axis=0\n )\n elif not self.is_indexed_slices() and not other.is_indexed_slices():\n self.values = self.values + other.values\n else:\n raise NotImplementedError(\n \"Only Tensor with the same type could be added\"\n )\n return self\n\n def __radd__(self, other):\n return self + other\n\n\ndef serialize_tensor(tensor, tensor_pb):\n \"\"\"Serialize ElasticDL Tensor to tensor protocol buffer.\"\"\"\n dtype = dtype_numpy_to_tensor(tensor.values.dtype)\n if not dtype:\n raise ValueError(\n \"Dtype of ndarray %s is not supported\", tensor.values.dtype\n )\n tensor_pb.dtype = dtype\n tensor_pb.dim.extend(tensor.values.shape)\n tensor_pb.content = tensor.values.tobytes()\n if tensor.is_indexed_slices():\n tensor_pb.indices.extend(tuple(tensor.indices))\n if tensor.name:\n tensor_pb.name = tensor.name\n\n\ndef deserialize_tensor_pb(tensor_pb, tensor):\n \"\"\"Deserialize tensor protocol buffer to ElasticDL Tensor.\n\n Note that the input tensor protocol buffer is reset and underlying buffer\n is passed to the returned ndarray.\n \"\"\"\n if not tensor_pb.dim:\n raise ValueError(\"Tensor PB has no dim defined\")\n\n dtype = dtype_tensor_to_numpy(tensor_pb.dtype)\n # Check that the buffer size agrees with dimensions.\n size = dtype.itemsize\n for d in tensor_pb.dim:\n size *= d\n if size != len(tensor_pb.content):\n raise ValueError(\n \"Tensor PB size mismatch, dim: %s, len(content): %d\",\n tensor_pb.dim,\n len(tensor_pb.content),\n )\n tensor.set(\n values=np.ndarray(\n shape=tensor_pb.dim, dtype=dtype, buffer=tensor_pb.content\n ),\n indices=np.array(tensor_pb.indices) if tensor_pb.indices else None,\n name=tensor_pb.name,\n )\n tensor_pb.Clear()\n\n\ndef tensor_pb_to_ndarray(tensor_pb):\n \"\"\"Deserialize tensor protocol buffer and return a numpy ndarray.\"\"\"\n return Tensor.from_tensor_pb(tensor_pb).to_ndarray()\n\n\ndef tensor_pb_to_tf_tensor(tensor_pb):\n \"\"\"Deserialize tensor protocol buffer and return a TensorFlow tensor.\"\"\"\n return Tensor.from_tensor_pb(tensor_pb).to_tf_tensor()\n\n\ndef emplace_tensor_pb_from_ndarray(\n tensor_pb_list, values, indices=None, name=None\n):\n \"\"\"Generate a tensor procotol buffer and append it to tensor_pb_list.\n\n Note:\n This function does not use list append function as following code\n snippet. It is slow because append function will copy the input\n protocol buffer.\n\n ```\n pb = elasticdl_pb2.Tensor()\n pb.dim.extend([3])\n pb.name = \"test\"\n pb.dtype = DT_INT64\n pb.content = np.array([1, 2, 3]).tobytes()\n tensor_pb_list.append(tensor_pb) # slow, because append copies pb\n ```\n \"\"\"\n tensor_pb = tensor_pb_list.add()\n tensor = Tensor(values, indices, name)\n serialize_tensor(tensor, tensor_pb)\n", "path": "elasticdl/python/common/tensor.py"}]} | 2,449 | 226 |
gh_patches_debug_19548 | rasdani/github-patches | git_diff | liqd__a4-opin-347 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Name of Template visible while creating the project
Hi, when creating a project with a template it would be helpful to see the name of the project type. Maybe in the black button?
The author/creator currently has no idea which template s/he chose, and in case s/he saves it and returns to it, it may be helpful to know which one s/he chose. Thanks & Best

</issue>
<code>
[start of euth/dashboard/views.py]
1 from allauth.account import views as account_views
2 from allauth.socialaccount import views as socialaccount_views
3 from django.contrib.messages.views import SuccessMessageMixin
4 from django.core.urlresolvers import reverse
5 from django.shortcuts import get_object_or_404, redirect
6 from django.utils import functional
7 from django.utils.translation import ugettext as _
8 from django.views import generic
9 from rules.compat import access_mixins as mixins
10 from rules.contrib import views as rules_views
11
12 from euth.memberships import models as member_models
13 from euth.organisations import models as org_models
14 from euth.phases import models as phase_models
15 from euth.projects import models as project_models
16 from euth.users import models as user_models
17
18 from . import blueprints, forms
19
20
21 def dashboard(request):
22 return redirect('dashboard-profile')
23
24
25 class DashboardBaseMixin(mixins.LoginRequiredMixin,
26 generic.base.ContextMixin,):
27
28 @functional.cached_property
29 def user_has_organisation(self):
30 return bool(self.request.user.organisation_set.all())
31
32 @functional.cached_property
33 def organisation(self):
34 if 'organisation_slug' in self.kwargs:
35 slug = self.kwargs['organisation_slug']
36 return get_object_or_404(org_models.Organisation, slug=slug)
37 else:
38 return self.request.user.organisation_set.first()
39
40 @functional.cached_property
41 def other_organisations_of_user(self):
42 user = self.request.user
43 return user.organisation_set.exclude(pk=self.organisation.pk)
44
45
46 class DashboardEmailView(DashboardBaseMixin, account_views.EmailView):
47 pass
48
49
50 class DashboardAccountView(DashboardBaseMixin,
51 socialaccount_views.ConnectionsView):
52 pass
53
54
55 class DashboardProfileView(DashboardBaseMixin,
56 SuccessMessageMixin,
57 generic.UpdateView):
58
59 model = user_models.User
60 template_name = "euth_dashboard/profile_detail.html"
61 form_class = forms.ProfileForm
62 success_message = _("Your profile was successfully updated.")
63
64 def get_object(self):
65 return get_object_or_404(user_models.User, pk=self.request.user.id)
66
67 def get_success_url(self):
68 return self.request.path
69
70
71 class DashboardOrganisationUpdateView(DashboardBaseMixin,
72 rules_views.PermissionRequiredMixin,
73 SuccessMessageMixin,
74 generic.UpdateView):
75 model = org_models.Organisation
76 form_class = forms.OrganisationForm
77 slug_url_kwarg = 'organisation_slug'
78 template_name = 'euth_dashboard/organisation_form.html'
79 success_message = _('Organisation successfully updated.')
80 permission_required = 'euth_organisations.modify_organisation'
81
82 def get_success_url(self):
83 return self.request.path
84
85
86 class DashboardProjectListView(DashboardBaseMixin,
87 rules_views.PermissionRequiredMixin,
88 generic.ListView):
89 model = project_models.Project
90 template_name = 'euth_dashboard/project_list.html'
91 permission_required = 'euth_organisations.modify_organisation'
92
93 def get_queryset(self):
94 return self.model.objects.filter(
95 organisation=self.organisation
96 )
97
98 def get_permission_object(self):
99 return self.organisation
100
101 @property
102 def raise_exception(self):
103 return self.request.user.is_authenticated()
104
105 def get_success_url(self):
106 return reverse('dashboard-project-list')
107
108
109 class DashboardBlueprintListView(DashboardBaseMixin,
110 rules_views.PermissionRequiredMixin,
111 generic.TemplateView):
112 template_name = 'euth_dashboard/blueprint_list.html'
113 blueprints = blueprints.blueprints
114 permission_required = 'euth_organisations.initiate_project'
115
116
117 class DashboardProjectCreateView(DashboardBaseMixin,
118 rules_views.PermissionRequiredMixin,
119 SuccessMessageMixin,
120 blueprints.BlueprintMixin,
121 generic.CreateView):
122 model = project_models.Project
123 form_class = forms.ProjectCreateForm
124 template_name = 'euth_dashboard/project_form.html'
125 success_message = _('Project succesfully created.')
126 permission_required = 'euth_organisations.initiate_project'
127
128 def get_permission_object(self):
129 return self.organisation
130
131 @property
132 def raise_exception(self):
133 return self.request.user.is_authenticated()
134
135 def get_form_kwargs(self):
136 kwargs = super().get_form_kwargs()
137 kwargs['blueprint'] = self.blueprint
138 kwargs['organisation'] = self.organisation
139 return kwargs
140
141 def get_success_url(self):
142 return reverse('dashboard-project-list',
143 kwargs={
144 'organisation_slug': self.organisation.slug,
145 })
146
147
148 class DashboardProjectUpdateView(DashboardBaseMixin,
149 rules_views.PermissionRequiredMixin,
150 SuccessMessageMixin,
151 generic.UpdateView):
152 model = project_models.Project
153 form_class = forms.ProjectCompleteForm
154 template_name = 'euth_dashboard/project_form.html'
155 success_message = _('Project successfully updated.')
156 permission_required = 'euth_organisations.initiate_project'
157
158 def get_permission_object(self):
159 return self.organisation
160
161 @property
162 def raise_exception(self):
163 return self.request.user.is_authenticated()
164
165 def get_success_url(self):
166 return reverse('dashboard-project-edit',
167 kwargs={
168 'organisation_slug': self.organisation.slug,
169 'slug': self.get_object().slug
170 })
171
172 def get_form_kwargs(self):
173 kwargs = super().get_form_kwargs()
174 qs = phase_models.Phase.objects.filter(module__project=self.object)
175 kwargs['phases__queryset'] = qs
176 return kwargs
177
178
179 class DashboardProjectInviteView(DashboardBaseMixin,
180 rules_views.PermissionRequiredMixin,
181 SuccessMessageMixin,
182 generic.FormView):
183 form_class = forms.ProjectInviteForm
184 template_name = 'euth_dashboard/project_invites.html'
185 success_message = _("Invitations successfully sent.")
186 permission_required = 'euth_organisations.initiate_project'
187
188 def get_permission_object(self):
189 return self.organisation
190
191 @property
192 def raise_exception(self):
193 return self.request.user.is_authenticated()
194
195 @functional.cached_property
196 def project(self):
197 return project_models.Project.objects.get(
198 slug=self.kwargs['slug']
199 )
200
201 def get_form_kwargs(self):
202 kwargs = super().get_form_kwargs()
203 kwargs['project'] = self.project
204 return kwargs
205
206 def form_valid(self, form):
207 emails = form.cleaned_data['emails']
208 user = self.request.user
209 project = self.project
210 for (name, address) in emails:
211 member_models.Invite.objects.invite(user, project, address)
212 return super().form_valid(form)
213
214 def get_success_url(self):
215 return reverse('dashboard-project-users',
216 kwargs={
217 'organisation_slug': self.organisation.slug,
218 'slug': self.project.slug
219 })
220
221
222 class DashboardProjectUserView(DashboardBaseMixin,
223 rules_views.PermissionRequiredMixin,
224 SuccessMessageMixin,
225 generic.FormView):
226
227 form_class = forms.ProjectUserForm
228 template_name = 'euth_dashboard/project_users.html'
229 success_message = _("User request successfully updated.")
230 permission_required = 'euth_organisations.initiate_project'
231
232 def get_permission_object(self):
233 return self.organisation
234
235 @property
236 def raise_exception(self):
237 return self.request.user.is_authenticated()
238
239 def get_form_kwargs(self):
240 kwargs = super().get_form_kwargs()
241 qs = member_models.Request.objects.order_by('created').filter(
242 project__slug=self.kwargs['slug']
243 )
244 kwargs['requests__queryset'] = qs
245 qs = member_models.Invite.objects.order_by('created').filter(
246 project__slug=self.kwargs['slug']
247 )
248 kwargs['invites__queryset'] = qs
249 qs = user_models.User.objects.order_by('email').filter(
250 project_participant__slug=self.kwargs['slug']
251 )
252 kwargs['users__queryset'] = qs
253 kwargs['project'] = self.project
254 return kwargs
255
256 @functional.cached_property
257 def project(self):
258 return project_models.Project.objects.get(
259 slug=self.kwargs['slug']
260 )
261
262 def get_context_data(self, **kwargs):
263 context = super().get_context_data(**kwargs)
264 context['project'] = self.project
265 return context
266
267 def get_success_url(self):
268 return self.request.path
269
270 def form_valid(self, form):
271 form.save()
272 return super().form_valid(form)
273
[end of euth/dashboard/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/dashboard/views.py b/euth/dashboard/views.py
--- a/euth/dashboard/views.py
+++ b/euth/dashboard/views.py
@@ -125,6 +125,11 @@
     success_message = _('Project succesfully created.')
     permission_required = 'euth_organisations.initiate_project'
 
+    def get_context_data(self, **kwargs):
+        context = super().get_context_data(**kwargs)
+        context['heading'] = _("New project based on")
+        return context
+
     def get_permission_object(self):
         return self.organisation
 
@@ -155,6 +160,11 @@
     success_message = _('Project successfully updated.')
     permission_required = 'euth_organisations.initiate_project'
 
+    def get_context_data(self, **kwargs):
+        context = super().get_context_data(**kwargs)
+        context['heading'] = _("Update project: " + self.object.name)
+        return context
+
     def get_permission_object(self):
         return self.organisation
| {"golden_diff": "diff --git a/euth/dashboard/views.py b/euth/dashboard/views.py\n--- a/euth/dashboard/views.py\n+++ b/euth/dashboard/views.py\n@@ -125,6 +125,11 @@\n success_message = _('Project succesfully created.')\n permission_required = 'euth_organisations.initiate_project'\n \n+ def get_context_data(self, **kwargs):\n+ context = super().get_context_data(**kwargs)\n+ context['heading'] = _(\"New project based on\")\n+ return context\n+\n def get_permission_object(self):\n return self.organisation\n \n@@ -155,6 +160,11 @@\n success_message = _('Project successfully updated.')\n permission_required = 'euth_organisations.initiate_project'\n \n+ def get_context_data(self, **kwargs):\n+ context = super().get_context_data(**kwargs)\n+ context['heading'] = _(\"Update project: \" + self.object.name)\n+ return context\n+\n def get_permission_object(self):\n return self.organisation\n", "issue": "Name of Template visible while creating the project\nHi, when creating a project with a template it would be helpful to see the name of the project type. Maybe in the black button? \nThe author/creator currently has no idea which template s/he chose and in case s/he saves it and returns to it, it may appear helpful to know which one s/he choose. Thanks & Best\n\n\n\n", "before_files": [{"content": "from allauth.account import views as account_views\nfrom allauth.socialaccount import views as socialaccount_views\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.urlresolvers import reverse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import functional\nfrom django.utils.translation import ugettext as _\nfrom django.views import generic\nfrom rules.compat import access_mixins as mixins\nfrom rules.contrib import views as rules_views\n\nfrom euth.memberships import models as member_models\nfrom euth.organisations import models as org_models\nfrom euth.phases import models as phase_models\nfrom euth.projects import models as project_models\nfrom euth.users import models as user_models\n\nfrom . 
import blueprints, forms\n\n\ndef dashboard(request):\n return redirect('dashboard-profile')\n\n\nclass DashboardBaseMixin(mixins.LoginRequiredMixin,\n generic.base.ContextMixin,):\n\n @functional.cached_property\n def user_has_organisation(self):\n return bool(self.request.user.organisation_set.all())\n\n @functional.cached_property\n def organisation(self):\n if 'organisation_slug' in self.kwargs:\n slug = self.kwargs['organisation_slug']\n return get_object_or_404(org_models.Organisation, slug=slug)\n else:\n return self.request.user.organisation_set.first()\n\n @functional.cached_property\n def other_organisations_of_user(self):\n user = self.request.user\n return user.organisation_set.exclude(pk=self.organisation.pk)\n\n\nclass DashboardEmailView(DashboardBaseMixin, account_views.EmailView):\n pass\n\n\nclass DashboardAccountView(DashboardBaseMixin,\n socialaccount_views.ConnectionsView):\n pass\n\n\nclass DashboardProfileView(DashboardBaseMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n\n model = user_models.User\n template_name = \"euth_dashboard/profile_detail.html\"\n form_class = forms.ProfileForm\n success_message = _(\"Your profile was successfully updated.\")\n\n def get_object(self):\n return get_object_or_404(user_models.User, pk=self.request.user.id)\n\n def get_success_url(self):\n return self.request.path\n\n\nclass DashboardOrganisationUpdateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n model = org_models.Organisation\n form_class = forms.OrganisationForm\n slug_url_kwarg = 'organisation_slug'\n template_name = 'euth_dashboard/organisation_form.html'\n success_message = _('Organisation successfully updated.')\n permission_required = 'euth_organisations.modify_organisation'\n\n def get_success_url(self):\n return self.request.path\n\n\nclass DashboardProjectListView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.ListView):\n model = project_models.Project\n template_name = 'euth_dashboard/project_list.html'\n permission_required = 'euth_organisations.modify_organisation'\n\n def get_queryset(self):\n return self.model.objects.filter(\n organisation=self.organisation\n )\n\n def get_permission_object(self):\n return self.organisation\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def get_success_url(self):\n return reverse('dashboard-project-list')\n\n\nclass DashboardBlueprintListView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.TemplateView):\n template_name = 'euth_dashboard/blueprint_list.html'\n blueprints = blueprints.blueprints\n permission_required = 'euth_organisations.initiate_project'\n\n\nclass DashboardProjectCreateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n blueprints.BlueprintMixin,\n generic.CreateView):\n model = project_models.Project\n form_class = forms.ProjectCreateForm\n template_name = 'euth_dashboard/project_form.html'\n success_message = _('Project succesfully created.')\n permission_required = 'euth_organisations.initiate_project'\n\n def get_permission_object(self):\n return self.organisation\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['blueprint'] = self.blueprint\n kwargs['organisation'] = self.organisation\n return kwargs\n\n def get_success_url(self):\n return reverse('dashboard-project-list',\n kwargs={\n 
'organisation_slug': self.organisation.slug,\n })\n\n\nclass DashboardProjectUpdateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n model = project_models.Project\n form_class = forms.ProjectCompleteForm\n template_name = 'euth_dashboard/project_form.html'\n success_message = _('Project successfully updated.')\n permission_required = 'euth_organisations.initiate_project'\n\n def get_permission_object(self):\n return self.organisation\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def get_success_url(self):\n return reverse('dashboard-project-edit',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n 'slug': self.get_object().slug\n })\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n qs = phase_models.Phase.objects.filter(module__project=self.object)\n kwargs['phases__queryset'] = qs\n return kwargs\n\n\nclass DashboardProjectInviteView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.FormView):\n form_class = forms.ProjectInviteForm\n template_name = 'euth_dashboard/project_invites.html'\n success_message = _(\"Invitations successfully sent.\")\n permission_required = 'euth_organisations.initiate_project'\n\n def get_permission_object(self):\n return self.organisation\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n @functional.cached_property\n def project(self):\n return project_models.Project.objects.get(\n slug=self.kwargs['slug']\n )\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['project'] = self.project\n return kwargs\n\n def form_valid(self, form):\n emails = form.cleaned_data['emails']\n user = self.request.user\n project = self.project\n for (name, address) in emails:\n member_models.Invite.objects.invite(user, project, address)\n return super().form_valid(form)\n\n def get_success_url(self):\n return reverse('dashboard-project-users',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n 'slug': self.project.slug\n })\n\n\nclass DashboardProjectUserView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.FormView):\n\n form_class = forms.ProjectUserForm\n template_name = 'euth_dashboard/project_users.html'\n success_message = _(\"User request successfully updated.\")\n permission_required = 'euth_organisations.initiate_project'\n\n def get_permission_object(self):\n return self.organisation\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n qs = member_models.Request.objects.order_by('created').filter(\n project__slug=self.kwargs['slug']\n )\n kwargs['requests__queryset'] = qs\n qs = member_models.Invite.objects.order_by('created').filter(\n project__slug=self.kwargs['slug']\n )\n kwargs['invites__queryset'] = qs\n qs = user_models.User.objects.order_by('email').filter(\n project_participant__slug=self.kwargs['slug']\n )\n kwargs['users__queryset'] = qs\n kwargs['project'] = self.project\n return kwargs\n\n @functional.cached_property\n def project(self):\n return project_models.Project.objects.get(\n slug=self.kwargs['slug']\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['project'] = self.project\n return context\n\n def get_success_url(self):\n return self.request.path\n\n def form_valid(self, form):\n form.save()\n 
return super().form_valid(form)\n", "path": "euth/dashboard/views.py"}]} | 3,164 | 224 |
gh_patches_debug_14881 | rasdani/github-patches | git_diff | kivy__kivy-4045 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WM touch/pen warning messages after 1.9.1
```
[WARNING ] [Input ] WM_Touch/WM_Pen not supported by your version of Windows
[WARNING ] [Base ] Unknown <wm_touch> provider
[WARNING ] [Base ] Unknown <wm_pen> provider
```
</issue>
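The accepted fix for this record (see the golden diff further down) stops importing `kivy.core.window` at module import time and resolves `Window` lazily inside `start()`, so loading the wm_touch/wm_pen providers no longer depends on the window core at import time. A condensed, illustrative sketch of that pattern — not the exact patch — looks like this:
```python
# Illustrative sketch only; the real change is in the golden diff below.
Window = None  # resolved lazily instead of a module-level `from kivy.core.window import Window`


class WM_MotionEventProvider(MotionEventProvider):
    def start(self):
        global Window
        if not Window:
            # Deferred until the provider actually starts, after the window exists.
            from kivy.core.window import Window
        ...
```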
<code>
[start of kivy/input/providers/wm_touch.py]
1 '''
2 Support for WM_TOUCH messages (Windows platform)
3 ================================================
4 '''
5
6 __all__ = ('WM_MotionEventProvider', 'WM_MotionEvent')
7
8 import os
9 from kivy.input.providers.wm_common import (
10 WM_TABLET_QUERYSYSTEMGESTURE,
11 GWL_WNDPROC, QUERYSYSTEMGESTURE_WNDPROC, WM_TOUCH, WM_MOUSEMOVE,
12 WM_MOUSELAST, PEN_OR_TOUCH_MASK, PEN_OR_TOUCH_SIGNATURE,
13 PEN_EVENT_TOUCH_MASK, TOUCHEVENTF_UP, TOUCHEVENTF_DOWN,
14 TOUCHEVENTF_MOVE, SM_CYCAPTION)
15 from kivy.input.motionevent import MotionEvent
16 from kivy.input.shape import ShapeRect
17 from kivy.core.window import Window
18
19
20 class WM_MotionEvent(MotionEvent):
21 '''MotionEvent representing the WM_MotionEvent event.
22 Supports pos, shape and size profiles.
23 '''
24 __attrs__ = ('size', )
25
26 def depack(self, args):
27 self.is_touch = True
28 self.shape = ShapeRect()
29 self.sx, self.sy = args[0], args[1]
30 self.shape.width = args[2][0]
31 self.shape.height = args[2][1]
32 self.size = self.shape.width * self.shape.height
33 self.profile = ('pos', 'shape', 'size')
34
35 super(WM_MotionEvent, self).depack(args)
36
37 def __str__(self):
38 args = (self.id, self.uid, str(self.spos), self.device)
39 return '<WMMotionEvent id:%d uid:%d pos:%s device:%s>' % args
40
41 if 'KIVY_DOC' in os.environ:
42 # documentation hack
43 WM_MotionEventProvider = None
44
45 else:
46 from ctypes.wintypes import (ULONG, HANDLE, DWORD, LONG, UINT,
47 WPARAM, LPARAM, BOOL)
48 from ctypes import (windll, WINFUNCTYPE, POINTER,
49 c_int, Structure, sizeof, byref)
50 from collections import deque
51 from kivy.input.provider import MotionEventProvider
52 from kivy.input.factory import MotionEventFactory
53
54 # check availability of RegisterTouchWindow
55 if not hasattr(windll.user32, 'RegisterTouchWindow'):
56 raise Exception('Unsupported Window version')
57
58 LRESULT = LPARAM
59 WNDPROC = WINFUNCTYPE(LRESULT, HANDLE, UINT, WPARAM, LPARAM)
60
61 class TOUCHINPUT(Structure):
62 _fields_ = [
63 ('x', LONG),
64 ('y', LONG),
65 ('pSource', HANDLE),
66 ('id', DWORD),
67 ('flags', DWORD),
68 ('mask', DWORD),
69 ('time', DWORD),
70 ('extraInfo', POINTER(ULONG)),
71 ('size_x', DWORD),
72 ('size_y', DWORD)]
73
74 def size(self):
75 return (self.size_x, self.size_y)
76
77 def screen_x(self):
78 return self.x / 100.0
79
80 def screen_y(self):
81 return self.y / 100.0
82
83 def _event_type(self):
84 if self.flags & TOUCHEVENTF_MOVE:
85 return 'update'
86 if self.flags & TOUCHEVENTF_DOWN:
87 return 'begin'
88 if self.flags & TOUCHEVENTF_UP:
89 return 'end'
90 event_type = property(_event_type)
91
92 class RECT(Structure):
93 _fields_ = [
94 ('left', LONG),
95 ('top', LONG),
96 ('right', LONG),
97 ('bottom', LONG)]
98
99 x = property(lambda self: self.left)
100 y = property(lambda self: self.top)
101 w = property(lambda self: self.right - self.left)
102 h = property(lambda self: self.bottom - self.top)
103
104 try:
105 windll.user32.SetWindowLongPtrW.restype = WNDPROC
106 windll.user32.SetWindowLongPtrW.argtypes = [HANDLE, c_int, WNDPROC]
107 SetWindowLong_wrapper = windll.user32.SetWindowLongPtrW
108 except AttributeError:
109 windll.user32.SetWindowLongW.restype = WNDPROC
110 windll.user32.SetWindowLongW.argtypes = [HANDLE, c_int, WNDPROC]
111 SetWindowLong_wrapper = windll.user32.SetWindowLongW
112
113 windll.user32.GetMessageExtraInfo.restype = LPARAM
114 windll.user32.GetMessageExtraInfo.argtypes = []
115 windll.user32.GetClientRect.restype = BOOL
116 windll.user32.GetClientRect.argtypes = [HANDLE, POINTER(RECT)]
117 windll.user32.GetWindowRect.restype = BOOL
118 windll.user32.GetWindowRect.argtypes = [HANDLE, POINTER(RECT)]
119 windll.user32.CallWindowProcW.restype = LRESULT
120 windll.user32.CallWindowProcW.argtypes = [WNDPROC, HANDLE, UINT, WPARAM,
121 LPARAM]
122 windll.user32.GetActiveWindow.restype = HANDLE
123 windll.user32.GetActiveWindow.argtypes = []
124 windll.user32.RegisterTouchWindow.restype = BOOL
125 windll.user32.RegisterTouchWindow.argtypes = [HANDLE, ULONG]
126 windll.user32.UnregisterTouchWindow.restype = BOOL
127 windll.user32.UnregisterTouchWindow.argtypes = [HANDLE]
128 windll.user32.GetTouchInputInfo.restype = BOOL
129 windll.user32.GetTouchInputInfo.argtypes = [HANDLE, UINT,
130 POINTER(TOUCHINPUT), c_int]
131 windll.user32.GetSystemMetrics.restype = c_int
132 windll.user32.GetSystemMetrics.argtypes = [c_int]
133
134 class WM_MotionEventProvider(MotionEventProvider):
135
136 def start(self):
137 self.touch_events = deque()
138 self.touches = {}
139 self.uid = 0
140
141 # get window handle, and register to recive WM_TOUCH messages
142 self.hwnd = windll.user32.GetActiveWindow()
143 windll.user32.RegisterTouchWindow(self.hwnd, 1)
144
145 # inject our own wndProc to handle messages
146 # before window manager does
147 self.new_windProc = WNDPROC(self._touch_wndProc)
148 self.old_windProc = SetWindowLong_wrapper(
149 self.hwnd, GWL_WNDPROC, self.new_windProc)
150
151 if Window.borderless or Window.fullscreen:
152 self.caption_size = 0
153 else:
154 self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)
155
156 def update(self, dispatch_fn):
157 win_rect = RECT()
158 windll.user32.GetWindowRect(self.hwnd, byref(win_rect))
159 caption = self.caption_size
160
161 while True:
162 try:
163 t = self.touch_events.pop()
164 except:
165 break
166
167 # adjust x,y to window coordinates (0.0 to 1.0)
168 x = (t.screen_x() - win_rect.x) / float(win_rect.w)
169 y = 1.0 - (t.screen_y() - win_rect.y - caption
170 ) / float(win_rect.h)
171
172 # actually dispatch input
173 if t.event_type == 'begin':
174 self.uid += 1
175 self.touches[t.id] = WM_MotionEvent(
176 self.device, self.uid, [x, y, t.size()])
177 dispatch_fn('begin', self.touches[t.id])
178
179 if t.event_type == 'update' and t.id in self.touches:
180 self.touches[t.id].move([x, y, t.size()])
181 dispatch_fn('update', self.touches[t.id])
182
183 if t.event_type == 'end' and t.id in self.touches:
184 touch = self.touches[t.id]
185 touch.move([x, y, t.size()])
186 touch.update_time_end()
187 dispatch_fn('end', touch)
188 del self.touches[t.id]
189
190 def stop(self):
191 windll.user32.UnregisterTouchWindow(self.hwnd)
192 self.new_windProc = SetWindowLong_wrapper(
193 self.hwnd, GWL_WNDPROC, self.old_windProc)
194
195 # we inject this wndProc into our main window, to process
196 # WM_TOUCH and mouse messages before the window manager does
197 def _touch_wndProc(self, hwnd, msg, wParam, lParam):
198 done = False
199 if msg == WM_TABLET_QUERYSYSTEMGESTURE:
200 return QUERYSYSTEMGESTURE_WNDPROC
201
202 if msg == WM_TOUCH:
203 done = self._touch_handler(msg, wParam, lParam)
204
205 if msg >= WM_MOUSEMOVE and msg <= WM_MOUSELAST:
206 done = self._mouse_handler(msg, wParam, lParam)
207
208 if not done:
209 return windll.user32.CallWindowProcW(self.old_windProc,
210 hwnd, msg, wParam,
211 lParam)
212 return 1
213
214 # this on pushes WM_TOUCH messages onto our event stack
215 def _touch_handler(self, msg, wParam, lParam):
216 touches = (TOUCHINPUT * wParam)()
217 windll.user32.GetTouchInputInfo(HANDLE(lParam),
218 wParam,
219 touches,
220 sizeof(TOUCHINPUT))
221 for i in range(wParam):
222 self.touch_events.appendleft(touches[i])
223 windll.user32.CloseTouchInputHandle(HANDLE(lParam))
224 return True
225
226 # filter fake mouse events, because touch and stylus
227 # also make mouse events
228 def _mouse_handler(self, msg, wparam, lParam):
229 info = windll.user32.GetMessageExtraInfo()
230 # its a touch or a pen
231 if (info & PEN_OR_TOUCH_MASK) == PEN_OR_TOUCH_SIGNATURE:
232 if info & PEN_EVENT_TOUCH_MASK:
233 return True
234
235 MotionEventFactory.register('wm_touch', WM_MotionEventProvider)
236
[end of kivy/input/providers/wm_touch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kivy/input/providers/wm_touch.py b/kivy/input/providers/wm_touch.py
--- a/kivy/input/providers/wm_touch.py
+++ b/kivy/input/providers/wm_touch.py
@@ -14,7 +14,8 @@
     TOUCHEVENTF_MOVE, SM_CYCAPTION)
 from kivy.input.motionevent import MotionEvent
 from kivy.input.shape import ShapeRect
-from kivy.core.window import Window
+
+Window = None
 
 
 class WM_MotionEvent(MotionEvent):
@@ -134,6 +135,10 @@
     class WM_MotionEventProvider(MotionEventProvider):
 
         def start(self):
+            global Window
+            if not Window:
+                from kivy.core.window import Window
+
             self.touch_events = deque()
             self.touches = {}
             self.uid = 0
| {"golden_diff": "diff --git a/kivy/input/providers/wm_touch.py b/kivy/input/providers/wm_touch.py\n--- a/kivy/input/providers/wm_touch.py\n+++ b/kivy/input/providers/wm_touch.py\n@@ -14,7 +14,8 @@\n TOUCHEVENTF_MOVE, SM_CYCAPTION)\n from kivy.input.motionevent import MotionEvent\n from kivy.input.shape import ShapeRect\n-from kivy.core.window import Window\n+\n+Window = None\n \n \n class WM_MotionEvent(MotionEvent):\n@@ -134,6 +135,10 @@\n class WM_MotionEventProvider(MotionEventProvider):\n \n def start(self):\n+ global Window\n+ if not Window:\n+ from kivy.core.window import Window\n+\n self.touch_events = deque()\n self.touches = {}\n self.uid = 0\n", "issue": "WM touch/pen warning messages after 1.9.1\n```\n[WARNING ] [Input ] WM_Touch/WM_Pen not supported by your version of Windows\n[WARNING ] [Base ] Unknown <wm_touch> provider\n[WARNING ] [Base ] Unknown <wm_pen> provider\n```\n\n", "before_files": [{"content": "'''\nSupport for WM_TOUCH messages (Windows platform)\n================================================\n'''\n\n__all__ = ('WM_MotionEventProvider', 'WM_MotionEvent')\n\nimport os\nfrom kivy.input.providers.wm_common import (\n WM_TABLET_QUERYSYSTEMGESTURE,\n GWL_WNDPROC, QUERYSYSTEMGESTURE_WNDPROC, WM_TOUCH, WM_MOUSEMOVE,\n WM_MOUSELAST, PEN_OR_TOUCH_MASK, PEN_OR_TOUCH_SIGNATURE,\n PEN_EVENT_TOUCH_MASK, TOUCHEVENTF_UP, TOUCHEVENTF_DOWN,\n TOUCHEVENTF_MOVE, SM_CYCAPTION)\nfrom kivy.input.motionevent import MotionEvent\nfrom kivy.input.shape import ShapeRect\nfrom kivy.core.window import Window\n\n\nclass WM_MotionEvent(MotionEvent):\n '''MotionEvent representing the WM_MotionEvent event.\n Supports pos, shape and size profiles.\n '''\n __attrs__ = ('size', )\n\n def depack(self, args):\n self.is_touch = True\n self.shape = ShapeRect()\n self.sx, self.sy = args[0], args[1]\n self.shape.width = args[2][0]\n self.shape.height = args[2][1]\n self.size = self.shape.width * self.shape.height\n self.profile = ('pos', 'shape', 'size')\n\n super(WM_MotionEvent, self).depack(args)\n\n def __str__(self):\n args = (self.id, self.uid, str(self.spos), self.device)\n return '<WMMotionEvent id:%d uid:%d pos:%s device:%s>' % args\n\nif 'KIVY_DOC' in os.environ:\n # documentation hack\n WM_MotionEventProvider = None\n\nelse:\n from ctypes.wintypes import (ULONG, HANDLE, DWORD, LONG, UINT,\n WPARAM, LPARAM, BOOL)\n from ctypes import (windll, WINFUNCTYPE, POINTER,\n c_int, Structure, sizeof, byref)\n from collections import deque\n from kivy.input.provider import MotionEventProvider\n from kivy.input.factory import MotionEventFactory\n\n # check availability of RegisterTouchWindow\n if not hasattr(windll.user32, 'RegisterTouchWindow'):\n raise Exception('Unsupported Window version')\n\n LRESULT = LPARAM\n WNDPROC = WINFUNCTYPE(LRESULT, HANDLE, UINT, WPARAM, LPARAM)\n\n class TOUCHINPUT(Structure):\n _fields_ = [\n ('x', LONG),\n ('y', LONG),\n ('pSource', HANDLE),\n ('id', DWORD),\n ('flags', DWORD),\n ('mask', DWORD),\n ('time', DWORD),\n ('extraInfo', POINTER(ULONG)),\n ('size_x', DWORD),\n ('size_y', DWORD)]\n\n def size(self):\n return (self.size_x, self.size_y)\n\n def screen_x(self):\n return self.x / 100.0\n\n def screen_y(self):\n return self.y / 100.0\n\n def _event_type(self):\n if self.flags & TOUCHEVENTF_MOVE:\n return 'update'\n if self.flags & TOUCHEVENTF_DOWN:\n return 'begin'\n if self.flags & TOUCHEVENTF_UP:\n return 'end'\n event_type = property(_event_type)\n\n class RECT(Structure):\n _fields_ = [\n ('left', LONG),\n ('top', LONG),\n ('right', LONG),\n ('bottom', 
LONG)]\n\n x = property(lambda self: self.left)\n y = property(lambda self: self.top)\n w = property(lambda self: self.right - self.left)\n h = property(lambda self: self.bottom - self.top)\n\n try:\n windll.user32.SetWindowLongPtrW.restype = WNDPROC\n windll.user32.SetWindowLongPtrW.argtypes = [HANDLE, c_int, WNDPROC]\n SetWindowLong_wrapper = windll.user32.SetWindowLongPtrW\n except AttributeError:\n windll.user32.SetWindowLongW.restype = WNDPROC\n windll.user32.SetWindowLongW.argtypes = [HANDLE, c_int, WNDPROC]\n SetWindowLong_wrapper = windll.user32.SetWindowLongW\n\n windll.user32.GetMessageExtraInfo.restype = LPARAM\n windll.user32.GetMessageExtraInfo.argtypes = []\n windll.user32.GetClientRect.restype = BOOL\n windll.user32.GetClientRect.argtypes = [HANDLE, POINTER(RECT)]\n windll.user32.GetWindowRect.restype = BOOL\n windll.user32.GetWindowRect.argtypes = [HANDLE, POINTER(RECT)]\n windll.user32.CallWindowProcW.restype = LRESULT\n windll.user32.CallWindowProcW.argtypes = [WNDPROC, HANDLE, UINT, WPARAM,\n LPARAM]\n windll.user32.GetActiveWindow.restype = HANDLE\n windll.user32.GetActiveWindow.argtypes = []\n windll.user32.RegisterTouchWindow.restype = BOOL\n windll.user32.RegisterTouchWindow.argtypes = [HANDLE, ULONG]\n windll.user32.UnregisterTouchWindow.restype = BOOL\n windll.user32.UnregisterTouchWindow.argtypes = [HANDLE]\n windll.user32.GetTouchInputInfo.restype = BOOL\n windll.user32.GetTouchInputInfo.argtypes = [HANDLE, UINT,\n POINTER(TOUCHINPUT), c_int]\n windll.user32.GetSystemMetrics.restype = c_int\n windll.user32.GetSystemMetrics.argtypes = [c_int]\n\n class WM_MotionEventProvider(MotionEventProvider):\n\n def start(self):\n self.touch_events = deque()\n self.touches = {}\n self.uid = 0\n\n # get window handle, and register to recive WM_TOUCH messages\n self.hwnd = windll.user32.GetActiveWindow()\n windll.user32.RegisterTouchWindow(self.hwnd, 1)\n\n # inject our own wndProc to handle messages\n # before window manager does\n self.new_windProc = WNDPROC(self._touch_wndProc)\n self.old_windProc = SetWindowLong_wrapper(\n self.hwnd, GWL_WNDPROC, self.new_windProc)\n\n if Window.borderless or Window.fullscreen:\n self.caption_size = 0\n else:\n self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)\n\n def update(self, dispatch_fn):\n win_rect = RECT()\n windll.user32.GetWindowRect(self.hwnd, byref(win_rect))\n caption = self.caption_size\n\n while True:\n try:\n t = self.touch_events.pop()\n except:\n break\n\n # adjust x,y to window coordinates (0.0 to 1.0)\n x = (t.screen_x() - win_rect.x) / float(win_rect.w)\n y = 1.0 - (t.screen_y() - win_rect.y - caption\n ) / float(win_rect.h)\n\n # actually dispatch input\n if t.event_type == 'begin':\n self.uid += 1\n self.touches[t.id] = WM_MotionEvent(\n self.device, self.uid, [x, y, t.size()])\n dispatch_fn('begin', self.touches[t.id])\n\n if t.event_type == 'update' and t.id in self.touches:\n self.touches[t.id].move([x, y, t.size()])\n dispatch_fn('update', self.touches[t.id])\n\n if t.event_type == 'end' and t.id in self.touches:\n touch = self.touches[t.id]\n touch.move([x, y, t.size()])\n touch.update_time_end()\n dispatch_fn('end', touch)\n del self.touches[t.id]\n\n def stop(self):\n windll.user32.UnregisterTouchWindow(self.hwnd)\n self.new_windProc = SetWindowLong_wrapper(\n self.hwnd, GWL_WNDPROC, self.old_windProc)\n\n # we inject this wndProc into our main window, to process\n # WM_TOUCH and mouse messages before the window manager does\n def _touch_wndProc(self, hwnd, msg, wParam, lParam):\n done = False\n 
if msg == WM_TABLET_QUERYSYSTEMGESTURE:\n return QUERYSYSTEMGESTURE_WNDPROC\n\n if msg == WM_TOUCH:\n done = self._touch_handler(msg, wParam, lParam)\n\n if msg >= WM_MOUSEMOVE and msg <= WM_MOUSELAST:\n done = self._mouse_handler(msg, wParam, lParam)\n\n if not done:\n return windll.user32.CallWindowProcW(self.old_windProc,\n hwnd, msg, wParam,\n lParam)\n return 1\n\n # this on pushes WM_TOUCH messages onto our event stack\n def _touch_handler(self, msg, wParam, lParam):\n touches = (TOUCHINPUT * wParam)()\n windll.user32.GetTouchInputInfo(HANDLE(lParam),\n wParam,\n touches,\n sizeof(TOUCHINPUT))\n for i in range(wParam):\n self.touch_events.appendleft(touches[i])\n windll.user32.CloseTouchInputHandle(HANDLE(lParam))\n return True\n\n # filter fake mouse events, because touch and stylus\n # also make mouse events\n def _mouse_handler(self, msg, wparam, lParam):\n info = windll.user32.GetMessageExtraInfo()\n # its a touch or a pen\n if (info & PEN_OR_TOUCH_MASK) == PEN_OR_TOUCH_SIGNATURE:\n if info & PEN_EVENT_TOUCH_MASK:\n return True\n\n MotionEventFactory.register('wm_touch', WM_MotionEventProvider)\n", "path": "kivy/input/providers/wm_touch.py"}]} | 3,309 | 181 |
gh_patches_debug_16575 | rasdani/github-patches | git_diff | deepchecks__deepchecks-1050 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG][CV] deepchecks should either validate model.eval status or actively create it
**Describe the bug**
Checks can crash or give wrong results if models are set by mistake to training mode.
**To Reproduce**
Run metrics check with model.train() preceding it.
**Expected behavior**
Either we validate that the model is in the correct state, or we actively call model.eval()
</issue>
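To make the reproduction above concrete, here is a hedged sketch; `metrics_check`, `train_ds` and `test_ds` are placeholders rather than names taken from this record, and only the training/evaluation toggles are the point:
```python
# Hypothetical reproduction sketch (placeholder names, not from the repository code).
model.train()                                         # model left in training mode by mistake
result = metrics_check.run(train_ds, test_ds, model)  # checks may crash or report wrong values

model.eval()                                          # expected usage: evaluation state first
result = metrics_check.run(train_ds, test_ds, model)
```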
<code>
[start of deepchecks/vision/context.py]
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """Module for base vision context."""
12 import logging
13 from typing import Mapping, Union, Iterable, Any, Tuple
14
15 import torch
16 from torch import nn
17 from ignite.metrics import Metric
18
19 from deepchecks.core import DatasetKind
20 from deepchecks.vision.vision_data import VisionData, TaskType
21 from deepchecks.vision.utils.validation import apply_to_tensor
22 from deepchecks.core.errors import (
23 DatasetValidationError, DeepchecksNotImplementedError, ModelValidationError,
24 DeepchecksNotSupportedError, DeepchecksValueError
25 )
26
27
28 __all__ = ['Context']
29
30
31 logger = logging.getLogger('deepchecks')
32
33
34 class Batch:
35 """Represents dataset batch returned by the dataloader during iteration."""
36
37 def __init__(
38 self,
39 batch: Tuple[Iterable[Any], Iterable[Any]],
40 context: 'Context',
41 dataset_kind: DatasetKind
42 ):
43 self._context = context
44 self._dataset_kind = dataset_kind
45 self._batch = apply_to_tensor(batch, lambda it: it.to(self._context.device))
46 self._labels = None
47 self._predictions = None
48 self._images = None
49
50 @property
51 def labels(self):
52 if self._labels is None:
53 dataset = self._context.get_data_by_kind(self._dataset_kind)
54 self._labels = dataset.batch_to_labels(self._batch)
55 return self._labels
56
57 @property
58 def predictions(self):
59 if self._predictions is None:
60 dataset = self._context.get_data_by_kind(self._dataset_kind)
61 self._predictions = dataset.infer_on_batch(self._batch, self._context.model, self._context.device)
62 return self._predictions
63
64 @property
65 def images(self):
66 if self._images is None:
67 dataset = self._context.get_data_by_kind(self._dataset_kind)
68 self._images = dataset.batch_to_images(self._batch)
69 return self._images
70
71 def __getitem__(self, index):
72 return self._batch[index]
73
74
75 class Context:
76 """Contains all the data + properties the user has passed to a check/suite, and validates it seamlessly.
77
78 Parameters
79 ----------
80 train : VisionData , default: None
81 Dataset or DataFrame object, representing data an estimator was fitted on
82 test : VisionData , default: None
83 Dataset or DataFrame object, representing data an estimator predicts on
84 model : BasicModel , default: None
85 A scikit-learn-compatible fitted estimator instance
86 model_name: str , default: ''
87 The name of the model
88 scorers : Mapping[str, Metric] , default: None
89 dict of scorers names to a Metric
90 scorers_per_class : Mapping[str, Metric] , default: None
91 dict of scorers for classification without averaging of the classes.
92 See <a href=
93 "https://scikit-learn.org/stable/modules/model_evaluation.html#from-binary-to-multiclass-and-multilabel">
94 scikit-learn docs</a>
95 device : Union[str, torch.device], default: 'cpu'
96 processing unit for use
97 random_state : int
98 A seed to set for pseudo-random functions
99 n_samples : int, default: None
100 """
101
102 def __init__(self,
103 train: VisionData = None,
104 test: VisionData = None,
105 model: nn.Module = None,
106 model_name: str = '',
107 scorers: Mapping[str, Metric] = None,
108 scorers_per_class: Mapping[str, Metric] = None,
109 device: Union[str, torch.device, None] = 'cpu',
110 random_state: int = 42,
111 n_samples: int = None
112 ):
113 # Validations
114 if train is None and test is None and model is None:
115 raise DeepchecksValueError('At least one dataset (or model) must be passed to the method!')
116 if test and not train:
117 raise DatasetValidationError('Can\'t initialize context with only test. if you have single dataset, '
118 'initialize it as train')
119 if train and test:
120 train.validate_shared_label(test)
121
122 self._device = torch.device(device) if isinstance(device, str) else (device if device else torch.device('cpu'))
123
124 if model is not None:
125 for dataset, dataset_type in zip([train, test], ['train', 'test']):
126 if dataset is not None:
127 try:
128 dataset.validate_prediction(next(iter(dataset.data_loader)), model, self._device)
129 except DeepchecksNotImplementedError:
130 logger.warning('validate_prediction() was not implemented in %s dataset, '
131 'some checks will not run', dataset_type)
132
133 # The copy does 2 things: Sample n_samples if parameter exists, and shuffle the data.
134 # we shuffle because the data in VisionData is set to be sampled in a fixed order (in the init), so if the user
135 # wants to run without random_state we need to forcefully shuffle (to have different results on different runs
136 # from the same VisionData object), and if there is a random_state the shuffle will always have same result
137 if train:
138 train = train.copy(shuffle=True, n_samples=n_samples, random_state=random_state)
139 if test:
140 test = test.copy(shuffle=True, n_samples=n_samples, random_state=random_state)
141
142 self._train = train
143 self._test = test
144 self._model = model
145 self._user_scorers = scorers
146 self._user_scorers_per_class = scorers_per_class
147 self._model_name = model_name
148 self.random_state = random_state
149
150 # Properties
151 # Validations note: We know train & test fit each other so all validations can be run only on train
152
153 @property
154 def train(self) -> VisionData:
155 """Return train if exists, otherwise raise error."""
156 if self._train is None:
157 raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without train dataset')
158 return self._train
159
160 @property
161 def test(self) -> VisionData:
162 """Return test if exists, otherwise raise error."""
163 if self._test is None:
164 raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without test dataset')
165 return self._test
166
167 @property
168 def model(self) -> nn.Module:
169 """Return & validate model if model exists, otherwise raise error."""
170 if self._model is None:
171 raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without model')
172 return self._model
173
174 @property
175 def model_name(self):
176 """Return model name."""
177 return self._model_name
178
179 @property
180 def device(self) -> torch.device:
181 """Return device specified by the user."""
182 return self._device
183
184 def have_test(self):
185 """Return whether there is test dataset defined."""
186 return self._test is not None
187
188 def assert_task_type(self, *expected_types: TaskType):
189 """Assert task_type matching given types."""
190 if self.train.task_type not in expected_types:
191 raise ModelValidationError(
192 f'Check is irrelevant for task of type {self.train.task_type}')
193 return True
194
195 def get_data_by_kind(self, kind: DatasetKind):
196 """Return the relevant VisionData by given kind."""
197 if kind == DatasetKind.TRAIN:
198 return self.train
199 elif kind == DatasetKind.TEST:
200 return self.test
201 else:
202 raise DeepchecksValueError(f'Unexpected dataset kind {kind}')
203
[end of deepchecks/vision/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/deepchecks/vision/context.py b/deepchecks/vision/context.py
--- a/deepchecks/vision/context.py
+++ b/deepchecks/vision/context.py
@@ -122,6 +122,13 @@
         self._device = torch.device(device) if isinstance(device, str) else (device if device else torch.device('cpu'))
 
         if model is not None:
+            if not isinstance(model, nn.Module):
+                logger.warning('Model is not a torch.nn.Module. Deepchecks can\'t validate that model is in '
+                               'evaluation state.')
+            else:
+                if model.training:
+                    raise DatasetValidationError('Model is not in evaluation state. Please set model training '
+                                                 'parameter to False or run model.eval() before passing it.')
             for dataset, dataset_type in zip([train, test], ['train', 'test']):
                 if dataset is not None:
                     try:
| {"golden_diff": "diff --git a/deepchecks/vision/context.py b/deepchecks/vision/context.py\n--- a/deepchecks/vision/context.py\n+++ b/deepchecks/vision/context.py\n@@ -122,6 +122,13 @@\n self._device = torch.device(device) if isinstance(device, str) else (device if device else torch.device('cpu'))\n \n if model is not None:\n+ if not isinstance(model, nn.Module):\n+ logger.warning('Model is not a torch.nn.Module. Deepchecks can\\'t validate that model is in '\n+ 'evaluation state.')\n+ else:\n+ if model.training:\n+ raise DatasetValidationError('Model is not in evaluation state. Please set model training '\n+ 'parameter to False or run model.eval() before passing it.')\n for dataset, dataset_type in zip([train, test], ['train', 'test']):\n if dataset is not None:\n try:\n", "issue": "[BUG][CV] deepchecks should either validate model.eval status or actively create it \n**Describe the bug**\r\nChecks can crash or give wrong results if models are set by mistake to training mode. \r\n\r\n**To Reproduce**\r\nRun metrics check with model.train() preceding it.\r\n\r\n**Expected behavior**\r\nEither we validate that the model is in the correct state, or we actively call model.eval()\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Module for base vision context.\"\"\"\nimport logging\nfrom typing import Mapping, Union, Iterable, Any, Tuple\n\nimport torch\nfrom torch import nn\nfrom ignite.metrics import Metric\n\nfrom deepchecks.core import DatasetKind\nfrom deepchecks.vision.vision_data import VisionData, TaskType\nfrom deepchecks.vision.utils.validation import apply_to_tensor\nfrom deepchecks.core.errors import (\n DatasetValidationError, DeepchecksNotImplementedError, ModelValidationError,\n DeepchecksNotSupportedError, DeepchecksValueError\n)\n\n\n__all__ = ['Context']\n\n\nlogger = logging.getLogger('deepchecks')\n\n\nclass Batch:\n \"\"\"Represents dataset batch returned by the dataloader during iteration.\"\"\"\n\n def __init__(\n self,\n batch: Tuple[Iterable[Any], Iterable[Any]],\n context: 'Context',\n dataset_kind: DatasetKind\n ):\n self._context = context\n self._dataset_kind = dataset_kind\n self._batch = apply_to_tensor(batch, lambda it: it.to(self._context.device))\n self._labels = None\n self._predictions = None\n self._images = None\n\n @property\n def labels(self):\n if self._labels is None:\n dataset = self._context.get_data_by_kind(self._dataset_kind)\n self._labels = dataset.batch_to_labels(self._batch)\n return self._labels\n\n @property\n def predictions(self):\n if self._predictions is None:\n dataset = self._context.get_data_by_kind(self._dataset_kind)\n self._predictions = dataset.infer_on_batch(self._batch, self._context.model, self._context.device)\n return self._predictions\n\n @property\n def images(self):\n if self._images is None:\n dataset = self._context.get_data_by_kind(self._dataset_kind)\n self._images = dataset.batch_to_images(self._batch)\n return self._images\n\n def __getitem__(self, index):\n return self._batch[index]\n\n\nclass Context:\n 
\"\"\"Contains all the data + properties the user has passed to a check/suite, and validates it seamlessly.\n\n Parameters\n ----------\n train : VisionData , default: None\n Dataset or DataFrame object, representing data an estimator was fitted on\n test : VisionData , default: None\n Dataset or DataFrame object, representing data an estimator predicts on\n model : BasicModel , default: None\n A scikit-learn-compatible fitted estimator instance\n model_name: str , default: ''\n The name of the model\n scorers : Mapping[str, Metric] , default: None\n dict of scorers names to a Metric\n scorers_per_class : Mapping[str, Metric] , default: None\n dict of scorers for classification without averaging of the classes.\n See <a href=\n \"https://scikit-learn.org/stable/modules/model_evaluation.html#from-binary-to-multiclass-and-multilabel\">\n scikit-learn docs</a>\n device : Union[str, torch.device], default: 'cpu'\n processing unit for use\n random_state : int\n A seed to set for pseudo-random functions\n n_samples : int, default: None\n \"\"\"\n\n def __init__(self,\n train: VisionData = None,\n test: VisionData = None,\n model: nn.Module = None,\n model_name: str = '',\n scorers: Mapping[str, Metric] = None,\n scorers_per_class: Mapping[str, Metric] = None,\n device: Union[str, torch.device, None] = 'cpu',\n random_state: int = 42,\n n_samples: int = None\n ):\n # Validations\n if train is None and test is None and model is None:\n raise DeepchecksValueError('At least one dataset (or model) must be passed to the method!')\n if test and not train:\n raise DatasetValidationError('Can\\'t initialize context with only test. if you have single dataset, '\n 'initialize it as train')\n if train and test:\n train.validate_shared_label(test)\n\n self._device = torch.device(device) if isinstance(device, str) else (device if device else torch.device('cpu'))\n\n if model is not None:\n for dataset, dataset_type in zip([train, test], ['train', 'test']):\n if dataset is not None:\n try:\n dataset.validate_prediction(next(iter(dataset.data_loader)), model, self._device)\n except DeepchecksNotImplementedError:\n logger.warning('validate_prediction() was not implemented in %s dataset, '\n 'some checks will not run', dataset_type)\n\n # The copy does 2 things: Sample n_samples if parameter exists, and shuffle the data.\n # we shuffle because the data in VisionData is set to be sampled in a fixed order (in the init), so if the user\n # wants to run without random_state we need to forcefully shuffle (to have different results on different runs\n # from the same VisionData object), and if there is a random_state the shuffle will always have same result\n if train:\n train = train.copy(shuffle=True, n_samples=n_samples, random_state=random_state)\n if test:\n test = test.copy(shuffle=True, n_samples=n_samples, random_state=random_state)\n\n self._train = train\n self._test = test\n self._model = model\n self._user_scorers = scorers\n self._user_scorers_per_class = scorers_per_class\n self._model_name = model_name\n self.random_state = random_state\n\n # Properties\n # Validations note: We know train & test fit each other so all validations can be run only on train\n\n @property\n def train(self) -> VisionData:\n \"\"\"Return train if exists, otherwise raise error.\"\"\"\n if self._train is None:\n raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without train dataset')\n return self._train\n\n @property\n def test(self) -> VisionData:\n \"\"\"Return test if exists, otherwise raise error.\"\"\"\n 
if self._test is None:\n raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without test dataset')\n return self._test\n\n @property\n def model(self) -> nn.Module:\n \"\"\"Return & validate model if model exists, otherwise raise error.\"\"\"\n if self._model is None:\n raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without model')\n return self._model\n\n @property\n def model_name(self):\n \"\"\"Return model name.\"\"\"\n return self._model_name\n\n @property\n def device(self) -> torch.device:\n \"\"\"Return device specified by the user.\"\"\"\n return self._device\n\n def have_test(self):\n \"\"\"Return whether there is test dataset defined.\"\"\"\n return self._test is not None\n\n def assert_task_type(self, *expected_types: TaskType):\n \"\"\"Assert task_type matching given types.\"\"\"\n if self.train.task_type not in expected_types:\n raise ModelValidationError(\n f'Check is irrelevant for task of type {self.train.task_type}')\n return True\n\n def get_data_by_kind(self, kind: DatasetKind):\n \"\"\"Return the relevant VisionData by given kind.\"\"\"\n if kind == DatasetKind.TRAIN:\n return self.train\n elif kind == DatasetKind.TEST:\n return self.test\n else:\n raise DeepchecksValueError(f'Unexpected dataset kind {kind}')\n", "path": "deepchecks/vision/context.py"}]} | 2,820 | 201 |
gh_patches_debug_18918 | rasdani/github-patches | git_diff | streamlit__streamlit-3501 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Setting `default` on multiselect widget that uses pandas.Series as `options` causes an error
### Summary
[`st.multiselect`](https://docs.streamlit.io/en/stable/api.html?highlight=options#streamlit.multiselect) is supposed to accept `pandas.Series` objects as labels for the select options. Setting a `default` value while using `options=pandas.Series` leads to:
> StreamlitAPIException : Every Multiselect default value must exist in options
### Steps to reproduce
Run the below code snippet.
Code snippet:
```python
import streamlit as st
import pandas as pd
names = pd.DataFrame({'labels':["Green","Yellow","Red","Blue"]})
nameSelect = st.multiselect(
"What are your favorite colors",
options=names['labels'],
default=["Yellow"]
)
```
### Is this a regression?
Possibly a core regression.
### Debug info
- Streamlit version: 0.82.0
- Python version: 3.8.5
- OS version: Ubuntu 20.04.2 LTS
- Browser version: Firefox 89.0 (64-bit)
### Additional information
Original source: https://discuss.streamlit.io/t/setting-default-value-on-multiselect-that-uses-a-series-for-the-options/13630
</issue>
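As a user-side workaround for the snippet above (separate from the merged fix shown later in this record, which coerces `options` to a list inside the widget), passing a plain list instead of the `pandas.Series` avoids the exception:
```python
# Workaround sketch: convert the Series to a list before handing it to st.multiselect.
nameSelect = st.multiselect(
    "What are your favorite colors",
    options=names['labels'].tolist(),
    default=["Yellow"],
)
```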
<code>
[start of lib/streamlit/elements/multiselect.py]
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import cast, List
16
17 import streamlit
18 from streamlit.errors import StreamlitAPIException
19 from streamlit.proto.MultiSelect_pb2 import MultiSelect as MultiSelectProto
20 from streamlit.state.widgets import register_widget
21 from streamlit.type_util import is_type, ensure_iterable
22 from .form import current_form_id
23 from .utils import check_callback_rules, check_session_state_rules
24
25
26 class MultiSelectMixin:
27 def multiselect(
28 self,
29 label,
30 options,
31 default=None,
32 format_func=str,
33 key=None,
34 help=None,
35 on_change=None,
36 args=None,
37 kwargs=None,
38 ):
39 """Display a multiselect widget.
40 The multiselect widget starts as empty.
41
42 Parameters
43 ----------
44 label : str
45 A short label explaining to the user what this select widget is for.
46 options : list, tuple, numpy.ndarray, pandas.Series, or pandas.DataFrame
47 Labels for the select options. This will be cast to str internally
48 by default. For pandas.DataFrame, the first column is selected.
49 default: [str] or None
50 List of default values.
51 format_func : function
52 Function to modify the display of selectbox options. It receives
53 the raw option as an argument and should output the label to be
54 shown for that option. This has no impact on the return value of
55 the selectbox.
56 key : str
57 An optional string to use as the unique key for the widget.
58 If this is omitted, a key will be generated for the widget
59 based on its content. Multiple widgets of the same type may
60 not share the same key.
61 help : str
62 An optional tooltip that gets displayed next to the multiselect.
63 on_change : callable
64 An optional callback invoked when this multiselect's value changes.
65 args : tuple
66 An optional tuple of args to pass to the callback.
67 kwargs : dict
68 An optional dict of kwargs to pass to the callback.
69
70 Returns
71 -------
72 list
73 A list with the selected options
74
75 Example
76 -------
77 >>> options = st.multiselect(
78 ... 'What are your favorite colors',
79 ... ['Green', 'Yellow', 'Red', 'Blue'],
80 ... ['Yellow', 'Red'])
81 >>>
82 >>> st.write('You selected:', options)
83
84 .. note::
85 User experience can be degraded for large lists of `options` (100+), as this widget
86 is not designed to handle arbitrary text search efficiently. See this
87 `thread <https://discuss.streamlit.io/t/streamlit-loading-column-data-takes-too-much-time/1791>`_
88 on the Streamlit community forum for more information and
89 `GitHub issue #1059 <https://github.com/streamlit/streamlit/issues/1059>`_ for updates on the issue.
90
91 """
92 check_callback_rules(self.dg, on_change)
93 check_session_state_rules(default_value=default, key=key)
94
95 options = ensure_iterable(options)
96
97 # Perform validation checks and return indices base on the default values.
98 def _check_and_convert_to_indices(options, default_values):
99 if default_values is None and None not in options:
100 return None
101
102 if not isinstance(default_values, list):
103 # This if is done before others because calling if not x (done
104 # right below) when x is of type pd.Series() or np.array() throws a
105 # ValueError exception.
106 if is_type(default_values, "numpy.ndarray") or is_type(
107 default_values, "pandas.core.series.Series"
108 ):
109 default_values = list(default_values)
110 elif not default_values or default_values in options:
111 default_values = [default_values]
112 else:
113 default_values = list(default_values)
114
115 for value in default_values:
116 if value not in options:
117 raise StreamlitAPIException(
118 "Every Multiselect default value must exist in options"
119 )
120
121 return [options.index(value) for value in default_values]
122
123 indices = _check_and_convert_to_indices(options, default)
124 multiselect_proto = MultiSelectProto()
125 multiselect_proto.label = label
126 default_value = [] if indices is None else indices
127 multiselect_proto.default[:] = default_value
128 multiselect_proto.options[:] = [str(format_func(option)) for option in options]
129 multiselect_proto.form_id = current_form_id(self.dg)
130 if help is not None:
131 multiselect_proto.help = help
132
133 def deserialize_multiselect(ui_value, widget_id="") -> List[str]:
134 current_value = ui_value if ui_value is not None else default_value
135 return [options[i] for i in current_value]
136
137 def serialize_multiselect(value):
138 return _check_and_convert_to_indices(options, value)
139
140 current_value, set_frontend_value = register_widget(
141 "multiselect",
142 multiselect_proto,
143 user_key=key,
144 on_change_handler=on_change,
145 args=args,
146 kwargs=kwargs,
147 deserializer=deserialize_multiselect,
148 serializer=serialize_multiselect,
149 )
150
151 if set_frontend_value:
152 multiselect_proto.value[:] = _check_and_convert_to_indices(
153 options, current_value
154 )
155 multiselect_proto.set_value = True
156
157 self.dg._enqueue("multiselect", multiselect_proto)
158 return current_value
159
160 @property
161 def dg(self) -> "streamlit.delta_generator.DeltaGenerator":
162 """Get our DeltaGenerator."""
163 return cast("streamlit.delta_generator.DeltaGenerator", self)
164
[end of lib/streamlit/elements/multiselect.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/streamlit/elements/multiselect.py b/lib/streamlit/elements/multiselect.py
--- a/lib/streamlit/elements/multiselect.py
+++ b/lib/streamlit/elements/multiselect.py
@@ -19,6 +19,7 @@
from streamlit.proto.MultiSelect_pb2 import MultiSelect as MultiSelectProto
from streamlit.state.widgets import register_widget
from streamlit.type_util import is_type, ensure_iterable
+
from .form import current_form_id
from .utils import check_callback_rules, check_session_state_rules
@@ -112,6 +113,9 @@
else:
default_values = list(default_values)
+ if not isinstance(options, list):
+ options = list(options)
+
for value in default_values:
if value not in options:
raise StreamlitAPIException(
| {"golden_diff": "diff --git a/lib/streamlit/elements/multiselect.py b/lib/streamlit/elements/multiselect.py\n--- a/lib/streamlit/elements/multiselect.py\n+++ b/lib/streamlit/elements/multiselect.py\n@@ -19,6 +19,7 @@\n from streamlit.proto.MultiSelect_pb2 import MultiSelect as MultiSelectProto\n from streamlit.state.widgets import register_widget\n from streamlit.type_util import is_type, ensure_iterable\n+\n from .form import current_form_id\n from .utils import check_callback_rules, check_session_state_rules\n \n@@ -112,6 +113,9 @@\n else:\n default_values = list(default_values)\n \n+ if not isinstance(options, list):\n+ options = list(options)\n+\n for value in default_values:\n if value not in options:\n raise StreamlitAPIException(\n", "issue": "Setting `default` on multiselect widget that uses pandas.Series as `options` causes an error\n### Summary\r\n\r\n[`st.multiselect`](https://docs.streamlit.io/en/stable/api.html?highlight=options#streamlit.multiselect) is supposed to accept `pandas.Series` objects as labels for the select options. Setting a `default` value while using `options=pandas.Series` leads to:\r\n\r\n> StreamlitAPIException : Every Multiselect default value must exist in options \r\n\r\n### Steps to reproduce\r\nRun the below code snippet.\r\n\r\nCode snippet:\r\n\r\n```python\r\nimport streamlit as st\r\nimport pandas as pd\r\n\r\nnames = pd.DataFrame({'labels':[\"Green\",\"Yellow\",\"Red\",\"Blue\"]})\r\nnameSelect = st.multiselect(\r\n \"What are your favorite colors\",\r\n options=names['labels'],\r\n default=[\"Yellow\"]\r\n)\r\n```\r\n\r\n### Is this a regression?\r\n\r\nPossibly a core regression.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 0.82.0\r\n- Python version: 3.8.5\r\n- OS version: Ubuntu 20.04.2 LTS\r\n- Browser version: Firefox 89.0 (64-bit)\r\n\r\n### Additional information\r\n\r\nOriginal source: https://discuss.streamlit.io/t/setting-default-value-on-multiselect-that-uses-a-series-for-the-options/13630\r\n\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import cast, List\n\nimport streamlit\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.MultiSelect_pb2 import MultiSelect as MultiSelectProto\nfrom streamlit.state.widgets import register_widget\nfrom streamlit.type_util import is_type, ensure_iterable\nfrom .form import current_form_id\nfrom .utils import check_callback_rules, check_session_state_rules\n\n\nclass MultiSelectMixin:\n def multiselect(\n self,\n label,\n options,\n default=None,\n format_func=str,\n key=None,\n help=None,\n on_change=None,\n args=None,\n kwargs=None,\n ):\n \"\"\"Display a multiselect widget.\n The multiselect widget starts as empty.\n\n Parameters\n ----------\n label : str\n A short label explaining to the user what this select widget is for.\n options : list, tuple, numpy.ndarray, pandas.Series, or pandas.DataFrame\n Labels for the select options. This will be cast to str internally\n by default. 
For pandas.DataFrame, the first column is selected.\n default: [str] or None\n List of default values.\n format_func : function\n Function to modify the display of selectbox options. It receives\n the raw option as an argument and should output the label to be\n shown for that option. This has no impact on the return value of\n the selectbox.\n key : str\n An optional string to use as the unique key for the widget.\n If this is omitted, a key will be generated for the widget\n based on its content. Multiple widgets of the same type may\n not share the same key.\n help : str\n An optional tooltip that gets displayed next to the multiselect.\n on_change : callable\n An optional callback invoked when this multiselect's value changes.\n args : tuple\n An optional tuple of args to pass to the callback.\n kwargs : dict\n An optional dict of kwargs to pass to the callback.\n\n Returns\n -------\n list\n A list with the selected options\n\n Example\n -------\n >>> options = st.multiselect(\n ... 'What are your favorite colors',\n ... ['Green', 'Yellow', 'Red', 'Blue'],\n ... ['Yellow', 'Red'])\n >>>\n >>> st.write('You selected:', options)\n\n .. note::\n User experience can be degraded for large lists of `options` (100+), as this widget\n is not designed to handle arbitrary text search efficiently. See this\n `thread <https://discuss.streamlit.io/t/streamlit-loading-column-data-takes-too-much-time/1791>`_\n on the Streamlit community forum for more information and\n `GitHub issue #1059 <https://github.com/streamlit/streamlit/issues/1059>`_ for updates on the issue.\n\n \"\"\"\n check_callback_rules(self.dg, on_change)\n check_session_state_rules(default_value=default, key=key)\n\n options = ensure_iterable(options)\n\n # Perform validation checks and return indices base on the default values.\n def _check_and_convert_to_indices(options, default_values):\n if default_values is None and None not in options:\n return None\n\n if not isinstance(default_values, list):\n # This if is done before others because calling if not x (done\n # right below) when x is of type pd.Series() or np.array() throws a\n # ValueError exception.\n if is_type(default_values, \"numpy.ndarray\") or is_type(\n default_values, \"pandas.core.series.Series\"\n ):\n default_values = list(default_values)\n elif not default_values or default_values in options:\n default_values = [default_values]\n else:\n default_values = list(default_values)\n\n for value in default_values:\n if value not in options:\n raise StreamlitAPIException(\n \"Every Multiselect default value must exist in options\"\n )\n\n return [options.index(value) for value in default_values]\n\n indices = _check_and_convert_to_indices(options, default)\n multiselect_proto = MultiSelectProto()\n multiselect_proto.label = label\n default_value = [] if indices is None else indices\n multiselect_proto.default[:] = default_value\n multiselect_proto.options[:] = [str(format_func(option)) for option in options]\n multiselect_proto.form_id = current_form_id(self.dg)\n if help is not None:\n multiselect_proto.help = help\n\n def deserialize_multiselect(ui_value, widget_id=\"\") -> List[str]:\n current_value = ui_value if ui_value is not None else default_value\n return [options[i] for i in current_value]\n\n def serialize_multiselect(value):\n return _check_and_convert_to_indices(options, value)\n\n current_value, set_frontend_value = register_widget(\n \"multiselect\",\n multiselect_proto,\n user_key=key,\n on_change_handler=on_change,\n args=args,\n kwargs=kwargs,\n 
deserializer=deserialize_multiselect,\n serializer=serialize_multiselect,\n )\n\n if set_frontend_value:\n multiselect_proto.value[:] = _check_and_convert_to_indices(\n options, current_value\n )\n multiselect_proto.set_value = True\n\n self.dg._enqueue(\"multiselect\", multiselect_proto)\n return current_value\n\n @property\n def dg(self) -> \"streamlit.delta_generator.DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"streamlit.delta_generator.DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/multiselect.py"}]} | 2,531 | 184 |
gh_patches_debug_11722 | rasdani/github-patches | git_diff | kymatio__kymatio-184 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Warning in `mnist.py`
Specifically, https://github.com/kymatio/kymatio/blob/289bc26551e92456ef7a48fbe83d48e157f7632c/examples/2d/mnist.py#L50 generates a warning saying that `size_average` will be deprecated and says to use `reduction='sum'` instead. Is this ok for us to do?
</issue>
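The change the warning asks for is a one-keyword swap in `F.cross_entropy`. A minimal, self-contained sketch with dummy tensors (not code from this repository) showing the replacement:

```python
import torch
import torch.nn.functional as F

# Dummy stand-ins for the model output and labels used in test().
output = torch.randn(8, 10)           # batch of 8 samples, 10 classes
target = torch.randint(0, 10, (8,))   # integer class labels

# Deprecated spelling (triggers the warning):
#   F.cross_entropy(output, target, size_average=False)
# Replacement: sum the per-sample losses, matching the old behaviour.
loss = F.cross_entropy(output, target, reduction='sum')
print(loss.item())
```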
<code>
[start of examples/2d/mnist.py]
1 """
2 Classification of handwritten digits
3 ====================================
4
5 Based on pytorch example for MNIST
6 """
7
8
9 import torch.nn as nn
10 import torch.optim
11 from torchvision import datasets, transforms
12 import torch.nn.functional as F
13 from kymatio import Scattering2D
14 import kymatio.datasets as scattering_datasets
15 import kymatio
16 import torch
17 import argparse
18 import math
19
20 class View(nn.Module):
21 def __init__(self, *args):
22 super(View, self).__init__()
23 self.shape = args
24
25 def forward(self, x):
26 return x.view(-1,*self.shape)
27
28 def train(model, device, train_loader, optimizer, epoch, scattering):
29 model.train()
30 for batch_idx, (data, target) in enumerate(train_loader):
31 data, target = data.to(device), target.to(device)
32 optimizer.zero_grad()
33 output = model(scattering(data))
34 loss = F.cross_entropy(output, target)
35 loss.backward()
36 optimizer.step()
37 if batch_idx % 50 == 0:
38 print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
39 epoch, batch_idx * len(data), len(train_loader.dataset),
40 100. * batch_idx / len(train_loader), loss.item()))
41
42 def test(model, device, test_loader, scattering):
43 model.eval()
44 test_loss = 0
45 correct = 0
46 with torch.no_grad():
47 for data, target in test_loader:
48 data, target = data.to(device), target.to(device)
49 output = model(scattering(data))
50 test_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss
51 pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
52 correct += pred.eq(target.view_as(pred)).sum().item()
53
54 test_loss /= len(test_loader.dataset)
55 print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
56 test_loss, correct, len(test_loader.dataset),
57 100. * correct / len(test_loader.dataset)))
58
59 def main():
60 """Train a simple Hybrid Scattering + CNN model on MNIST.
61
62 Three models are demoed:
63 'linear' - scattering + linear model
64 'mlp' - scattering + MLP
65 'cnn' - scattering + CNN
66
67 scattering 1st order can also be set by the mode
68 Scattering features are normalized by batch normalization.
69
70 scatter + linear achieves 99.15% in 15 epochs
71 scatter + cnn achieves 99.3% in 15 epochs
72
73 """
74 parser = argparse.ArgumentParser(description='MNIST scattering + hybrid examples')
75 parser.add_argument('--mode', type=int, default=2,help='scattering 1st or 2nd order')
76 parser.add_argument('--classifier', type=str, default='linear',help='classifier model')
77 args = parser.parse_args()
78 assert(args.classifier in ['linear','mlp','cnn'])
79
80 use_cuda = torch.cuda.is_available()
81 device = torch.device("cuda" if use_cuda else "cpu")
82
83 if args.mode == 1:
84 scattering = Scattering2D(M=28, N=28, J=2,order2=False)
85 K = 17
86 else:
87 scattering = Scattering2D(M=28, N=28, J=2)
88 K = 81
89 if use_cuda:
90 scattering = scattering.cuda()
91
92
93
94
95 if args.classifier == 'cnn':
96 model = nn.Sequential(
97 View(K, 7, 7),
98 nn.BatchNorm2d(K),
99 nn.Conv2d(K, 64, 3,padding=1), nn.ReLU(),
100 View(64*7*7),
101 nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
102 nn.Linear(512, 10)
103 ).to(device)
104
105 elif args.classifier == 'mlp':
106 model = nn.Sequential(
107 View(K, 7, 7),
108 nn.BatchNorm2d(K),
109 View(K*7*7),
110 nn.Linear(K*7*7, 512), nn.ReLU(),
111 nn.Linear(512, 512), nn.ReLU(),
112 nn.Linear(512, 10)
113 )
114
115 elif args.classifier == 'linear':
116 model = nn.Sequential(
117 View(K, 7, 7),
118 nn.BatchNorm2d(K),
119 View(K * 7 * 7),
120 nn.Linear(K * 7 * 7, 10)
121 )
122 else:
123 raise ValueError('Classifier should be cnn/mlp/linear')
124
125 model.to(device)
126
127 #initialize
128 for m in model.modules():
129 if isinstance(m, nn.Conv2d):
130 n = m.kernel_size[0] * m.kernel_size[1] * m.in_channels
131 m.weight.data.normal_(0, 2./math.sqrt(n))
132 m.bias.data.zero_()
133 if isinstance(m, nn.Linear):
134 m.weight.data.normal_(0, 2./math.sqrt(m.in_features))
135 m.bias.data.zero_()
136
137 # DataLoaders
138 if use_cuda:
139 num_workers = 4
140 pin_memory = True
141 else:
142 num_workers = None
143 pin_memory = False
144
145 train_loader = torch.utils.data.DataLoader(
146 datasets.MNIST(scattering_datasets.get_dataset_dir('MNIST'), train=True, download=True,
147 transform=transforms.Compose([
148 transforms.ToTensor(),
149 transforms.Normalize((0.1307,), (0.3081,))
150 ])),
151 batch_size=128, shuffle=True, num_workers=num_workers, pin_memory=pin_memory)
152 test_loader = torch.utils.data.DataLoader(
153 datasets.MNIST(scattering_datasets.get_dataset_dir('MNIST'), train=False, transform=transforms.Compose([
154 transforms.ToTensor(),
155 transforms.Normalize((0.1307,), (0.3081,))
156 ])),
157 batch_size=128, shuffle=True, num_workers=num_workers, pin_memory=pin_memory)
158
159 # Optimizer
160 optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
161 weight_decay=0.0005)
162
163 for epoch in range(1, 16):
164 train( model, device, train_loader, optimizer, epoch, scattering)
165 test(model, device, test_loader, scattering)
166
167
168 if __name__ == '__main__':
169 main()
170
[end of examples/2d/mnist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/2d/mnist.py b/examples/2d/mnist.py
--- a/examples/2d/mnist.py
+++ b/examples/2d/mnist.py
@@ -47,7 +47,7 @@
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(scattering(data))
- test_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss
+ test_loss += F.cross_entropy(output, target, reduction='sum').item() # sum up batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
| {"golden_diff": "diff --git a/examples/2d/mnist.py b/examples/2d/mnist.py\n--- a/examples/2d/mnist.py\n+++ b/examples/2d/mnist.py\n@@ -47,7 +47,7 @@\n for data, target in test_loader:\n data, target = data.to(device), target.to(device)\n output = model(scattering(data))\n- test_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss\n+ test_loss += F.cross_entropy(output, target, reduction='sum').item() # sum up batch loss\n pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability\n correct += pred.eq(target.view_as(pred)).sum().item()\n", "issue": "Warning in `mnist.py`\nSpecifically, https://github.com/kymatio/kymatio/blob/289bc26551e92456ef7a48fbe83d48e157f7632c/examples/2d/mnist.py#L50 generates a warning saying that `size_average` will be deprecated and says to use `reduction='sum'` instead. Is this ok for us to do?\n", "before_files": [{"content": "\"\"\"\nClassification of handwritten digits\n====================================\n\nBased on pytorch example for MNIST\n\"\"\"\n\n\nimport torch.nn as nn\nimport torch.optim\nfrom torchvision import datasets, transforms\nimport torch.nn.functional as F\nfrom kymatio import Scattering2D\nimport kymatio.datasets as scattering_datasets\nimport kymatio\nimport torch\nimport argparse\nimport math\n\nclass View(nn.Module):\n def __init__(self, *args):\n super(View, self).__init__()\n self.shape = args\n\n def forward(self, x):\n return x.view(-1,*self.shape)\n\ndef train(model, device, train_loader, optimizer, epoch, scattering):\n model.train()\n for batch_idx, (data, target) in enumerate(train_loader):\n data, target = data.to(device), target.to(device)\n optimizer.zero_grad()\n output = model(scattering(data))\n loss = F.cross_entropy(output, target)\n loss.backward()\n optimizer.step()\n if batch_idx % 50 == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.dataset),\n 100. * batch_idx / len(train_loader), loss.item()))\n\ndef test(model, device, test_loader, scattering):\n model.eval()\n test_loss = 0\n correct = 0\n with torch.no_grad():\n for data, target in test_loader:\n data, target = data.to(device), target.to(device)\n output = model(scattering(data))\n test_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss\n pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability\n correct += pred.eq(target.view_as(pred)).sum().item()\n\n test_loss /= len(test_loader.dataset)\n print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset),\n 100. 
* correct / len(test_loader.dataset)))\n\ndef main():\n \"\"\"Train a simple Hybrid Scattering + CNN model on MNIST.\n\n Three models are demoed:\n 'linear' - scattering + linear model\n 'mlp' - scattering + MLP\n 'cnn' - scattering + CNN\n\n scattering 1st order can also be set by the mode\n Scattering features are normalized by batch normalization.\n\n scatter + linear achieves 99.15% in 15 epochs\n scatter + cnn achieves 99.3% in 15 epochs\n\n \"\"\"\n parser = argparse.ArgumentParser(description='MNIST scattering + hybrid examples')\n parser.add_argument('--mode', type=int, default=2,help='scattering 1st or 2nd order')\n parser.add_argument('--classifier', type=str, default='linear',help='classifier model')\n args = parser.parse_args()\n assert(args.classifier in ['linear','mlp','cnn'])\n\n use_cuda = torch.cuda.is_available()\n device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n\n if args.mode == 1:\n scattering = Scattering2D(M=28, N=28, J=2,order2=False)\n K = 17\n else:\n scattering = Scattering2D(M=28, N=28, J=2)\n K = 81\n if use_cuda:\n scattering = scattering.cuda()\n\n\n\n\n if args.classifier == 'cnn':\n model = nn.Sequential(\n View(K, 7, 7),\n nn.BatchNorm2d(K),\n nn.Conv2d(K, 64, 3,padding=1), nn.ReLU(),\n View(64*7*7),\n nn.Linear(64 * 7 * 7, 512), nn.ReLU(),\n nn.Linear(512, 10)\n ).to(device)\n\n elif args.classifier == 'mlp':\n model = nn.Sequential(\n View(K, 7, 7),\n nn.BatchNorm2d(K),\n View(K*7*7),\n nn.Linear(K*7*7, 512), nn.ReLU(),\n nn.Linear(512, 512), nn.ReLU(),\n nn.Linear(512, 10)\n )\n\n elif args.classifier == 'linear':\n model = nn.Sequential(\n View(K, 7, 7),\n nn.BatchNorm2d(K),\n View(K * 7 * 7),\n nn.Linear(K * 7 * 7, 10)\n )\n else:\n raise ValueError('Classifier should be cnn/mlp/linear')\n\n model.to(device)\n\n #initialize\n for m in model.modules():\n if isinstance(m, nn.Conv2d):\n n = m.kernel_size[0] * m.kernel_size[1] * m.in_channels\n m.weight.data.normal_(0, 2./math.sqrt(n))\n m.bias.data.zero_()\n if isinstance(m, nn.Linear):\n m.weight.data.normal_(0, 2./math.sqrt(m.in_features))\n m.bias.data.zero_()\n\n # DataLoaders\n if use_cuda:\n num_workers = 4\n pin_memory = True\n else:\n num_workers = None\n pin_memory = False\n\n train_loader = torch.utils.data.DataLoader(\n datasets.MNIST(scattering_datasets.get_dataset_dir('MNIST'), train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=128, shuffle=True, num_workers=num_workers, pin_memory=pin_memory)\n test_loader = torch.utils.data.DataLoader(\n datasets.MNIST(scattering_datasets.get_dataset_dir('MNIST'), train=False, transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=128, shuffle=True, num_workers=num_workers, pin_memory=pin_memory)\n\n # Optimizer\n optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,\n weight_decay=0.0005)\n\n for epoch in range(1, 16):\n train( model, device, train_loader, optimizer, epoch, scattering)\n test(model, device, test_loader, scattering)\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/2d/mnist.py"}]} | 2,495 | 170 |
gh_patches_debug_9130 | rasdani/github-patches | git_diff | opsdroid__opsdroid-615 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Python 3.7 support
We need to update opsdroid so that it is fully supported on Python 3.7.
- [x] Test against Python 3.7.
- [x] Travis
- [x] AppVeyor
- [x] Fix any bugs highlighted.
- [x] Add 3.7 to supported versions in `setup.py`.
- [ ] ~Update docker base image to be latest supported version~.
</issue>
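For the `setup.py` checklist item, the change amounts to adding the 3.7 trove classifier. An illustrative fragment (only the Python-version classifiers are shown, not the full list from the file):

```python
# Python-version classifiers in setup.py after adding 3.7 support.
classifiers = [
    'Programming Language :: Python :: 3',
    'Programming Language :: Python :: 3 :: Only',
    'Programming Language :: Python :: 3.5',
    'Programming Language :: Python :: 3.6',
    'Programming Language :: Python :: 3.7',  # new entry
]
```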
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2 import os
3 from setuptools import setup, find_packages
4 from setuptools.command.build_py import build_py
5 from setuptools.command.sdist import sdist
6 from setuptools.command.develop import develop
7 from opsdroid import __version__
8
9 PACKAGE_NAME = 'opsdroid'
10 HERE = os.path.abspath(os.path.dirname(__file__))
11 README = open(os.path.join(HERE, 'README.md'), encoding="utf8").read()
12
13 PACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',
14 'modules.*', 'docs', 'docs.*'])
15
16
17 # For now we simply define the install_requires based on the contents
18 # of requirements.txt. In the future, install_requires may become much
19 # looser than the (automatically) resolved requirements.txt.
20 with open(os.path.join(HERE, 'requirements.txt'), 'r') as fh:
21 REQUIRES = [line.strip() for line in fh]
22
23
24 class Develop(develop):
25 """Custom `develop` command to always build mo files on install -e."""
26
27 def run(self):
28 self.run_command('compile_catalog')
29 develop.run(self) # old style class
30
31
32 class BuildPy(build_py):
33 """Custom `build_py` command to always build mo files for wheels."""
34
35 def run(self):
36 self.run_command('compile_catalog')
37 build_py.run(self) # old style class
38
39
40 class Sdist(sdist):
41 """Custom `sdist` command to ensure that mo files are always created."""
42
43 def run(self):
44 self.run_command('compile_catalog')
45 sdist.run(self) # old style class
46
47
48 setup(
49 name=PACKAGE_NAME,
50 version=__version__,
51 license='Apache License 2.0',
52 url='https://opsdroid.github.io/',
53 download_url='https://github.com/opsdroid/opsdroid/releases',
54 author='Jacob Tomlinson',
55 author_email='[email protected]',
56 description='An open source ChatOps bot framework.',
57 long_description=README,
58 packages=PACKAGES,
59 include_package_data=True,
60 zip_safe=False,
61 platforms='any',
62 classifiers=[
63 'Development Status :: 4 - Beta',
64 'Environment :: Console',
65 'Framework :: AsyncIO',
66 'Intended Audience :: Developers',
67 'Intended Audience :: System Administrators',
68 'Intended Audience :: Information Technology',
69 'License :: OSI Approved :: Apache Software License',
70 'Programming Language :: Python',
71 'Programming Language :: Python :: 3',
72 'Programming Language :: Python :: 3 :: Only',
73 'Programming Language :: Python :: 3.5',
74 'Programming Language :: Python :: 3.6',
75 'Topic :: Communications :: Chat',
76 'Topic :: Scientific/Engineering :: Artificial Intelligence',
77 'Topic :: Software Development :: Libraries :: Python Modules'
78 ],
79 install_requires=REQUIRES,
80 test_suite='tests',
81 keywords=[
82 'bot',
83 'bot-framework',
84 'opsdroid',
85 'botkit',
86 'python3',
87 'asyncio',
88 'chatops',
89 'devops',
90 'nlu'
91 ],
92 setup_requires=['Babel'],
93 cmdclass={'sdist': Sdist, 'build_py': BuildPy, 'develop': Develop},
94 entry_points={
95 'console_scripts': [
96 'opsdroid = opsdroid.__main__:main'
97 ]
98 },
99 )
100
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,6 +72,7 @@
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
+ 'Programming Language :: Python :: 3.7',
'Topic :: Communications :: Chat',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'Topic :: Software Development :: Libraries :: Python Modules'
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,6 +72,7 @@\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n+ 'Programming Language :: Python :: 3.7',\n 'Topic :: Communications :: Chat',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n", "issue": "Add Python 3.7 support\nWe need to update opsdroid to be fully supported in 3.7.\r\n\r\n- [x] Test against Python 3.7.\r\n - [x] Travis\r\n - [x] AppVeyor\r\n- [x] Fix any bugs highlighted.\r\n- [x] Add 3.7 to supported versions in `setup.py`.\r\n- [ ] ~Update docker base image to be latest supported version~.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.develop import develop\nfrom opsdroid import __version__\n\nPACKAGE_NAME = 'opsdroid'\nHERE = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(HERE, 'README.md'), encoding=\"utf8\").read()\n\nPACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',\n 'modules.*', 'docs', 'docs.*'])\n\n\n# For now we simply define the install_requires based on the contents\n# of requirements.txt. In the future, install_requires may become much\n# looser than the (automatically) resolved requirements.txt.\nwith open(os.path.join(HERE, 'requirements.txt'), 'r') as fh:\n REQUIRES = [line.strip() for line in fh]\n\n\nclass Develop(develop):\n \"\"\"Custom `develop` command to always build mo files on install -e.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n develop.run(self) # old style class\n\n\nclass BuildPy(build_py):\n \"\"\"Custom `build_py` command to always build mo files for wheels.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n build_py.run(self) # old style class\n\n\nclass Sdist(sdist):\n \"\"\"Custom `sdist` command to ensure that mo files are always created.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n sdist.run(self) # old style class\n\n\nsetup(\n name=PACKAGE_NAME,\n version=__version__,\n license='Apache License 2.0',\n url='https://opsdroid.github.io/',\n download_url='https://github.com/opsdroid/opsdroid/releases',\n author='Jacob Tomlinson',\n author_email='[email protected]',\n description='An open source ChatOps bot framework.',\n long_description=README,\n packages=PACKAGES,\n include_package_data=True,\n zip_safe=False,\n platforms='any',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Framework :: AsyncIO',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Intended Audience :: Information Technology',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Communications :: Chat',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n ],\n install_requires=REQUIRES,\n test_suite='tests',\n keywords=[\n 'bot',\n 'bot-framework',\n 'opsdroid',\n 'botkit',\n 'python3',\n 'asyncio',\n 'chatops',\n 'devops',\n 'nlu'\n ],\n setup_requires=['Babel'],\n 
cmdclass={'sdist': Sdist, 'build_py': BuildPy, 'develop': Develop},\n entry_points={\n 'console_scripts': [\n 'opsdroid = opsdroid.__main__:main'\n ]\n },\n)\n", "path": "setup.py"}]} | 1,549 | 112 |
gh_patches_debug_43223 | rasdani/github-patches | git_diff | ephios-dev__ephios-80 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Edit users
As a manager, I want to edit a user. I want to have the same options as when creating the user. The user list should offer a corresponding button for each user. The user should be notified of changes by email.
</issue>
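One possible shape for this feature is an `UpdateView` mirroring the existing `UserProfileCreateView` shown below. This is only a sketch: the `send_account_update_info` mail helper is an assumption (the shown `mail.py` only has `send_account_creation_info`) and would have to be added.

```python
from django.contrib import messages
from django.contrib.auth.mixins import PermissionRequiredMixin
from django.urls import reverse
from django.utils.translation import gettext as _
from django.views.generic import UpdateView

from user_management import mail
from user_management.forms import UserProfileForm
from user_management.models import UserProfile


class UserProfileUpdateView(PermissionRequiredMixin, UpdateView):
    model = UserProfile
    permission_required = "user_management.change_userprofile"
    template_name = "user_management/userprofile_form.html"
    form_class = UserProfileForm

    def get_success_url(self):
        messages.success(self.request, _("User updated successfully."))
        return reverse("user_management:user_list")

    def form_valid(self, form):
        response = super().form_valid(form)
        if self.object.is_active:
            # Assumed helper for the "notify the user by email" requirement.
            mail.send_account_update_info(self.object)
        return response
```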
<code>
[start of user_management/mail.py]
1 from django.contrib.auth.tokens import default_token_generator
2 from django.core import mail
3 from django.core.mail import EmailMultiAlternatives
4 from django.template.loader import render_to_string
5 from django.urls import reverse
6 from django.utils.encoding import force_bytes
7 from django.utils.http import urlsafe_base64_encode
8 from django.utils.translation import gettext as _
9
10 from jep.settings import SITE_URL
11
12
13 def send_account_creation_info(userprofile):
14 messages = []
15 subject = _("Welcome to JEP!")
16 uid = urlsafe_base64_encode(force_bytes(userprofile.id))
17 token = default_token_generator.make_token(userprofile)
18 reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
19 text_content = _(
20 "You're receiving this email because a new account has been created for you at JEP.\n"
21 "Please go to the following page and choose a password: {url}{reset_link}\n"
22 "Your username is your email address: {email}\n"
23 ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
24
25 html_content = render_to_string(
26 "user_management/new_account_email.html",
27 {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email,},
28 )
29 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
30 message.attach_alternative(html_content, "text/html")
31 messages.append(message)
32
33 mail.get_connection().send_messages(messages)
34
[end of user_management/mail.py]
[start of user_management/urls.py]
1 from django.urls import path
2
3 from user_management import views
4
5 app_name = "user_management"
6 urlpatterns = [
7 path("profile/", views.ProfileView.as_view(), name="profile"),
8 path("groups/", views.GroupListView.as_view(), name="group_list"),
9 path("groups/<int:pk>/edit", views.GroupUpdateView.as_view(), name="group_edit"),
10 path("groups/<int:pk>/delete", views.GroupDeleteView.as_view(), name="group_delete"),
11 path("groups/create", views.GroupCreateView.as_view(), name="group_add"),
12 path("users/", views.UserProfileListView.as_view(), name="user_list",),
13 path("users/create/", views.UserProfileCreateView.as_view(), name="userprofile_create",),
14 ]
15
[end of user_management/urls.py]
[start of user_management/views.py]
1 from django.contrib import messages
2 from django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin
3 from django.contrib.auth.models import Group
4 from django.urls import reverse
5 from django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView
6 from guardian.shortcuts import get_objects_for_group
7
8 from user_management import mail
9 from user_management.forms import GroupForm, UserProfileForm
10 from django.utils.translation import gettext as _
11
12 from user_management.models import UserProfile
13
14
15 class ProfileView(LoginRequiredMixin, DetailView):
16 def get_object(self, queryset=None):
17 return self.request.user
18
19
20 class UserProfileListView(PermissionRequiredMixin, ListView):
21 model = UserProfile
22 permission_required = "user_management.view_userprofile"
23
24
25 class UserProfileCreateView(PermissionRequiredMixin, CreateView):
26 template_name = "user_management/userprofile_form.html"
27 permission_required = "user_management.add_userprofile"
28 model = UserProfile
29 form_class = UserProfileForm
30
31 def get_success_url(self):
32 messages.success(self.request, _("User added successfully."))
33 return reverse("user_management:user_list")
34
35 def form_valid(self, form):
36 response = super().form_valid(form)
37 userprofile = self.object
38 if userprofile.is_active:
39 mail.send_account_creation_info(userprofile)
40 return response
41
42
43 class GroupListView(PermissionRequiredMixin, ListView):
44 model = Group
45 permission_required = "auth.view_group"
46 template_name = "user_management/group_list.html"
47
48
49 class GroupCreateView(PermissionRequiredMixin, CreateView):
50 model = Group
51 permission_required = "auth.add_group"
52 template_name = "user_management/group_form.html"
53 form_class = GroupForm
54
55 def get_form_kwargs(self):
56 kwargs = super().get_form_kwargs()
57 kwargs["initial"] = {
58 "users": UserProfile.objects.none(),
59 "can_add_event": False,
60 "publish_event_for_group": Group.objects.none(),
61 }
62 return kwargs
63
64 def get_success_url(self):
65 messages.success(self.request, _("Group created successfully."))
66 return reverse("user_management:group_list")
67
68
69 class GroupUpdateView(PermissionRequiredMixin, UpdateView):
70 model = Group
71 permission_required = "auth.change_group"
72 template_name = "user_management/group_form.html"
73 form_class = GroupForm
74
75 def get_form_kwargs(self):
76 kwargs = super().get_form_kwargs()
77 kwargs["initial"] = {
78 "users": self.object.user_set.all(),
79 "can_view_past_event": self.object.permissions.filter(
80 codename="view_past_event"
81 ).exists(),
82 "can_add_event": self.object.permissions.filter(codename="add_event").exists(),
83 "publish_event_for_group": get_objects_for_group(
84 self.object, "publish_event_for_group", klass=Group
85 ),
86 }
87 return kwargs
88
89 def get_success_url(self):
90 return reverse("user_management:group_list")
91
92
93 class GroupDeleteView(PermissionRequiredMixin, DeleteView):
94 model = Group
95 permission_required = "auth.delete_group"
96 template_name = "user_management/group_confirm_delete.html"
97
98 def get_success_url(self):
99 return reverse("user_management:group_list")
100
[end of user_management/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/user_management/mail.py b/user_management/mail.py
--- a/user_management/mail.py
+++ b/user_management/mail.py
@@ -11,7 +11,6 @@
def send_account_creation_info(userprofile):
- messages = []
subject = _("Welcome to JEP!")
uid = urlsafe_base64_encode(force_bytes(userprofile.id))
token = default_token_generator.make_token(userprofile)
@@ -24,10 +23,26 @@
html_content = render_to_string(
"user_management/new_account_email.html",
- {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email,},
+ {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email},
)
message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
message.attach_alternative(html_content, "text/html")
- messages.append(message)
+ message.send()
- mail.get_connection().send_messages(messages)
+
+def send_account_update_info(userprofile):
+ subject = _("JEP account updated")
+ url = reverse("user_management:profile")
+ text_content = _(
+ "You're receiving this email because your account at JEP has been updated.\n"
+ "You can see the changes in your profile: {site_url}{url}\n"
+ "Your username is your email address: {email}\n"
+ ).format(site_url=SITE_URL, url=url, email=userprofile.email)
+
+ html_content = render_to_string(
+ "user_management/account_updated_email.html",
+ {"site_url": SITE_URL, "url": url, "email": userprofile.email},
+ )
+ message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
+ message.attach_alternative(html_content, "text/html")
+ message.send()
diff --git a/user_management/urls.py b/user_management/urls.py
--- a/user_management/urls.py
+++ b/user_management/urls.py
@@ -9,6 +9,7 @@
path("groups/<int:pk>/edit", views.GroupUpdateView.as_view(), name="group_edit"),
path("groups/<int:pk>/delete", views.GroupDeleteView.as_view(), name="group_delete"),
path("groups/create", views.GroupCreateView.as_view(), name="group_add"),
- path("users/", views.UserProfileListView.as_view(), name="user_list",),
+ path("users/", views.UserProfileListView.as_view(), name="userprofile_list",),
+ path("users/<int:pk>/edit", views.UserProfileUpdateView.as_view(), name="userprofile_edit",),
path("users/create/", views.UserProfileCreateView.as_view(), name="userprofile_create",),
]
diff --git a/user_management/views.py b/user_management/views.py
--- a/user_management/views.py
+++ b/user_management/views.py
@@ -30,7 +30,7 @@
def get_success_url(self):
messages.success(self.request, _("User added successfully."))
- return reverse("user_management:user_list")
+ return reverse("user_management:userprofile_list")
def form_valid(self, form):
response = super().form_valid(form)
@@ -40,6 +40,31 @@
return response
+class UserProfileUpdateView(PermissionRequiredMixin, UpdateView):
+ model = UserProfile
+ permission_required = "user_management.change_userprofile"
+ template_name = "user_management/userprofile_form.html"
+ form_class = UserProfileForm
+
+ def get_success_url(self):
+ messages.success(self.request, _("User updated successfully."))
+ return reverse("user_management:userprofile_list")
+
+ def form_valid(self, form):
+ response = super().form_valid(form)
+ userprofile = self.object
+ if userprofile.is_active:
+ mail.send_account_update_info(userprofile)
+ return response
+
+ def get_form_kwargs(self):
+ kwargs = super().get_form_kwargs()
+ kwargs["initial"] = {
+ "groups": self.object.groups.all(),
+ }
+ return kwargs
+
+
class GroupListView(PermissionRequiredMixin, ListView):
model = Group
permission_required = "auth.view_group"
@@ -87,6 +112,7 @@
return kwargs
def get_success_url(self):
+ messages.success(self.request, _("Group updated successfully."))
return reverse("user_management:group_list")
| {"golden_diff": "diff --git a/user_management/mail.py b/user_management/mail.py\n--- a/user_management/mail.py\n+++ b/user_management/mail.py\n@@ -11,7 +11,6 @@\n \n \n def send_account_creation_info(userprofile):\n- messages = []\n subject = _(\"Welcome to JEP!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n@@ -24,10 +23,26 @@\n \n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n- {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email,},\n+ {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n- messages.append(message)\n+ message.send()\n \n- mail.get_connection().send_messages(messages)\n+\n+def send_account_update_info(userprofile):\n+ subject = _(\"JEP account updated\")\n+ url = reverse(\"user_management:profile\")\n+ text_content = _(\n+ \"You're receiving this email because your account at JEP has been updated.\\n\"\n+ \"You can see the changes in your profile: {site_url}{url}\\n\"\n+ \"Your username is your email address: {email}\\n\"\n+ ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n+\n+ html_content = render_to_string(\n+ \"user_management/account_updated_email.html\",\n+ {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n+ )\n+ message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n+ message.attach_alternative(html_content, \"text/html\")\n+ message.send()\ndiff --git a/user_management/urls.py b/user_management/urls.py\n--- a/user_management/urls.py\n+++ b/user_management/urls.py\n@@ -9,6 +9,7 @@\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n- path(\"users/\", views.UserProfileListView.as_view(), name=\"user_list\",),\n+ path(\"users/\", views.UserProfileListView.as_view(), name=\"userprofile_list\",),\n+ path(\"users/<int:pk>/edit\", views.UserProfileUpdateView.as_view(), name=\"userprofile_edit\",),\n path(\"users/create/\", views.UserProfileCreateView.as_view(), name=\"userprofile_create\",),\n ]\ndiff --git a/user_management/views.py b/user_management/views.py\n--- a/user_management/views.py\n+++ b/user_management/views.py\n@@ -30,7 +30,7 @@\n \n def get_success_url(self):\n messages.success(self.request, _(\"User added successfully.\"))\n- return reverse(\"user_management:user_list\")\n+ return reverse(\"user_management:userprofile_list\")\n \n def form_valid(self, form):\n response = super().form_valid(form)\n@@ -40,6 +40,31 @@\n return response\n \n \n+class UserProfileUpdateView(PermissionRequiredMixin, UpdateView):\n+ model = UserProfile\n+ permission_required = \"user_management.change_userprofile\"\n+ template_name = \"user_management/userprofile_form.html\"\n+ form_class = UserProfileForm\n+\n+ def get_success_url(self):\n+ messages.success(self.request, _(\"User updated successfully.\"))\n+ return reverse(\"user_management:userprofile_list\")\n+\n+ def form_valid(self, form):\n+ response = super().form_valid(form)\n+ userprofile = self.object\n+ if userprofile.is_active:\n+ mail.send_account_update_info(userprofile)\n+ return response\n+\n+ def 
get_form_kwargs(self):\n+ kwargs = super().get_form_kwargs()\n+ kwargs[\"initial\"] = {\n+ \"groups\": self.object.groups.all(),\n+ }\n+ return kwargs\n+\n+\n class GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n@@ -87,6 +112,7 @@\n return kwargs\n \n def get_success_url(self):\n+ messages.success(self.request, _(\"Group updated successfully.\"))\n return reverse(\"user_management:group_list\")\n", "issue": "Benutzer bearbeiten\nAls Manager m\u00f6chte ich einen Nutzer bearbeiten. Dabei m\u00f6chte ich die selben Optionen haben wie beim Anlegen des Nutzers. In der Liste der Nutzer soll es f\u00fcr jeden Nutzer eine entsprechende Schaltfl\u00e4che geben. Der Nutzer soll \u00fcber \u00c4nderungen per Mail informiert werden.\n", "before_files": [{"content": "from django.contrib.auth.tokens import default_token_generator\nfrom django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom jep.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n messages = []\n subject = _(\"Welcome to JEP!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at JEP.\\n\"\n \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email,},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n mail.get_connection().send_messages(messages)\n", "path": "user_management/mail.py"}, {"content": "from django.urls import path\n\nfrom user_management import views\n\napp_name = \"user_management\"\nurlpatterns = [\n path(\"profile/\", views.ProfileView.as_view(), name=\"profile\"),\n path(\"groups/\", views.GroupListView.as_view(), name=\"group_list\"),\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n path(\"users/\", views.UserProfileListView.as_view(), name=\"user_list\",),\n path(\"users/create/\", views.UserProfileCreateView.as_view(), name=\"userprofile_create\",),\n]\n", "path": "user_management/urls.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.urls import reverse\nfrom django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView\nfrom guardian.shortcuts import get_objects_for_group\n\nfrom user_management import mail\nfrom user_management.forms import GroupForm, UserProfileForm\nfrom 
django.utils.translation import gettext as _\n\nfrom user_management.models import UserProfile\n\n\nclass ProfileView(LoginRequiredMixin, DetailView):\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass UserProfileListView(PermissionRequiredMixin, ListView):\n model = UserProfile\n permission_required = \"user_management.view_userprofile\"\n\n\nclass UserProfileCreateView(PermissionRequiredMixin, CreateView):\n template_name = \"user_management/userprofile_form.html\"\n permission_required = \"user_management.add_userprofile\"\n model = UserProfile\n form_class = UserProfileForm\n\n def get_success_url(self):\n messages.success(self.request, _(\"User added successfully.\"))\n return reverse(\"user_management:user_list\")\n\n def form_valid(self, form):\n response = super().form_valid(form)\n userprofile = self.object\n if userprofile.is_active:\n mail.send_account_creation_info(userprofile)\n return response\n\n\nclass GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n template_name = \"user_management/group_list.html\"\n\n\nclass GroupCreateView(PermissionRequiredMixin, CreateView):\n model = Group\n permission_required = \"auth.add_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": UserProfile.objects.none(),\n \"can_add_event\": False,\n \"publish_event_for_group\": Group.objects.none(),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group created successfully.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupUpdateView(PermissionRequiredMixin, UpdateView):\n model = Group\n permission_required = \"auth.change_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": self.object.user_set.all(),\n \"can_view_past_event\": self.object.permissions.filter(\n codename=\"view_past_event\"\n ).exists(),\n \"can_add_event\": self.object.permissions.filter(codename=\"add_event\").exists(),\n \"publish_event_for_group\": get_objects_for_group(\n self.object, \"publish_event_for_group\", klass=Group\n ),\n }\n return kwargs\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n\n\nclass GroupDeleteView(PermissionRequiredMixin, DeleteView):\n model = Group\n permission_required = \"auth.delete_group\"\n template_name = \"user_management/group_confirm_delete.html\"\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n", "path": "user_management/views.py"}]} | 2,055 | 985 |
gh_patches_debug_27185 | rasdani/github-patches | git_diff | kubeflow__pipelines-5782 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sdk/client/auth - KeyError: 'id_token' from _auth.py in id_token_from_refresh_token
I got `KeyError: 'id_token'` raised from `File "/Users/gongyuan/miniconda3/envs/mlpipeline/lib/python3.7/site-packages/kfp/_auth.py", line 192, in id_token_from_refresh_token`:
    return (str(json.loads(res.text)[u"id_token"]))
The request to exchange the refresh token for an ID token failed, but the client didn't surface the underlying error message. The HTTP response I got was:
```
{
"error": "invalid_grant",
"error_description": "Bad Request"
}
```
The root cause: `~/.config/kfp/credentials.json` had expired. I deleted it and got a new token: `rm ~/.config/kfp/credentials.json`.
## Solution
At https://github.com/kubeflow/pipelines/blob/2a65eec1fa265ebbda69d5b8b1875e3e4b54ac82/sdk/python/kfp/_auth.py#L184-L185 and https://github.com/kubeflow/pipelines/blob/2a65eec1fa265ebbda69d5b8b1875e3e4b54ac82/sdk/python/kfp/_auth.py#L191-L192, we should first check the response status code with `Response.raise_for_status()`, since the request can fail with `401 Unauthorized`.
</issue>
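In `requests` the check described above is `Response.raise_for_status()`. A minimal sketch of the guard (the function name and payload are placeholders, not the actual kfp code):

```python
import json
import requests


def fetch_id_token(token_uri, payload):
    """Exchange a refresh token for an ID token, surfacing HTTP errors early."""
    res = requests.post(token_uri, data=payload)
    # Raises requests.HTTPError for 4xx/5xx responses, e.g. a 400/401 caused by
    # an expired refresh token, instead of failing later with KeyError: 'id_token'.
    res.raise_for_status()
    body = json.loads(res.text)
    if "id_token" not in body:
        raise RuntimeError("Token endpoint returned no id_token: %s" % res.text)
    return str(body["id_token"])
```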
<code>
[start of sdk/python/kfp/_auth.py]
1 # Copyright 2018 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 import os
17 import google.auth
18 import google.auth.app_engine
19 import google.auth.compute_engine.credentials
20 import google.auth.iam
21 from google.auth.transport.requests import Request
22 import google.oauth2.credentials
23 import google.oauth2.service_account
24 import requests_toolbelt.adapters.appengine
25 from webbrowser import open_new_tab
26 import requests
27 import json
28
29 IAM_SCOPE = 'https://www.googleapis.com/auth/iam'
30 OAUTH_TOKEN_URI = 'https://www.googleapis.com/oauth2/v4/token'
31 LOCAL_KFP_CREDENTIAL = os.path.expanduser('~/.config/kfp/credentials.json')
32
33 def get_gcp_access_token():
34 """Get and return GCP access token for the current Application Default
35 Credentials. If not set, returns None. For more information, see
36 https://cloud.google.com/sdk/gcloud/reference/auth/application-default/print-access-token
37 """
38 token = None
39 try:
40 creds, project = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
41 if not creds.valid:
42 auth_req = Request()
43 creds.refresh(auth_req)
44 if creds.valid:
45 token = creds.token
46 except Exception as e:
47 logging.warning('Failed to get GCP access token: %s', e)
48 return token
49
50 def get_auth_token(client_id, other_client_id, other_client_secret):
51 """Gets auth token from default service account or user account."""
52 if os.path.exists(LOCAL_KFP_CREDENTIAL):
53 # fetch IAP auth token using the locally stored credentials.
54 with open(LOCAL_KFP_CREDENTIAL, 'r') as f:
55 credentials = json.load(f)
56 if client_id in credentials:
57 return id_token_from_refresh_token(credentials[client_id]['other_client_id'],
58 credentials[client_id]['other_client_secret'],
59 credentials[client_id]['refresh_token'],
60 client_id)
61 if other_client_id is None or other_client_secret is None:
62 # fetch IAP auth token: service accounts
63 token = get_auth_token_from_sa(client_id)
64 else:
65 # fetch IAP auth token: user account
66 # Obtain the ID token for provided Client ID with user accounts.
67 # Flow: get authorization code -> exchange for refresh token -> obtain and return ID token
68 refresh_token = get_refresh_token_from_client_id(other_client_id, other_client_secret)
69 credentials = {}
70 if os.path.exists(LOCAL_KFP_CREDENTIAL):
71 with open(LOCAL_KFP_CREDENTIAL, 'r') as f:
72 credentials = json.load(f)
73 credentials[client_id] = {}
74 credentials[client_id]['other_client_id'] = other_client_id
75 credentials[client_id]['other_client_secret'] = other_client_secret
76 credentials[client_id]['refresh_token'] = refresh_token
77 #TODO: handle the case when the refresh_token expires.
78 # which only happens if the refresh_token is not used once for six months.
79 if not os.path.exists(os.path.dirname(LOCAL_KFP_CREDENTIAL)):
80 os.makedirs(os.path.dirname(LOCAL_KFP_CREDENTIAL))
81 with open(LOCAL_KFP_CREDENTIAL, 'w') as f:
82 json.dump(credentials, f)
83 token = id_token_from_refresh_token(other_client_id, other_client_secret, refresh_token, client_id)
84 return token
85
86 def get_auth_token_from_sa(client_id):
87 """Gets auth token from default service account.
88
89 If no service account credential is found, returns None.
90 """
91 service_account_credentials = get_service_account_credentials(client_id)
92 if service_account_credentials:
93 return get_google_open_id_connect_token(service_account_credentials)
94 return None
95
96 def get_service_account_credentials(client_id):
97 # Figure out what environment we're running in and get some preliminary
98 # information about the service account.
99 bootstrap_credentials, _ = google.auth.default(
100 scopes=[IAM_SCOPE])
101 if isinstance(bootstrap_credentials,
102 google.oauth2.credentials.Credentials):
103 logging.info('Found OAuth2 credentials and skip SA auth.')
104 return None
105 elif isinstance(bootstrap_credentials,
106 google.auth.app_engine.Credentials):
107 requests_toolbelt.adapters.appengine.monkeypatch()
108
109 # For service account's using the Compute Engine metadata service,
110 # service_account_email isn't available until refresh is called.
111 bootstrap_credentials.refresh(Request())
112 signer_email = bootstrap_credentials.service_account_email
113 if isinstance(bootstrap_credentials,
114 google.auth.compute_engine.credentials.Credentials):
115 # Since the Compute Engine metadata service doesn't expose the service
116 # account key, we use the IAM signBlob API to sign instead.
117 # In order for this to work:
118 #
119 # 1. Your VM needs the https://www.googleapis.com/auth/iam scope.
120 # You can specify this specific scope when creating a VM
121 # through the API or gcloud. When using Cloud Console,
122 # you'll need to specify the "full access to all Cloud APIs"
123 # scope. A VM's scopes can only be specified at creation time.
124 #
125 # 2. The VM's default service account needs the "Service Account Actor"
126 # role. This can be found under the "Project" category in Cloud
127 # Console, or roles/iam.serviceAccountActor in gcloud.
128 signer = google.auth.iam.Signer(
129 Request(), bootstrap_credentials, signer_email)
130 else:
131 # A Signer object can sign a JWT using the service account's key.
132 signer = bootstrap_credentials.signer
133
134 # Construct OAuth 2.0 service account credentials using the signer
135 # and email acquired from the bootstrap credentials.
136 return google.oauth2.service_account.Credentials(
137 signer, signer_email, token_uri=OAUTH_TOKEN_URI, additional_claims={
138 'target_audience': client_id
139 })
140
141 def get_google_open_id_connect_token(service_account_credentials):
142 """Get an OpenID Connect token issued by Google for the service account.
143 This function:
144 1. Generates a JWT signed with the service account's private key
145 containing a special "target_audience" claim.
146 2. Sends it to the OAUTH_TOKEN_URI endpoint. Because the JWT in #1
147 has a target_audience claim, that endpoint will respond with
148 an OpenID Connect token for the service account -- in other words,
149 a JWT signed by *Google*. The aud claim in this JWT will be
150 set to the value from the target_audience claim in #1.
151 For more information, see
152 https://developers.google.com/identity/protocols/OAuth2ServiceAccount .
153 The HTTP/REST example on that page describes the JWT structure and
154 demonstrates how to call the token endpoint. (The example on that page
155 shows how to get an OAuth2 access token; this code is using a
156 modified version of it to get an OpenID Connect token.)
157 """
158
159 service_account_jwt = (
160 service_account_credentials._make_authorization_grant_assertion())
161 request = google.auth.transport.requests.Request()
162 body = {
163 'assertion': service_account_jwt,
164 'grant_type': google.oauth2._client._JWT_GRANT_TYPE,
165 }
166 token_response = google.oauth2._client._token_endpoint_request(
167 request, OAUTH_TOKEN_URI, body)
168 return token_response['id_token']
169
170 def get_refresh_token_from_client_id(client_id, client_secret):
171 """Obtain the ID token for provided Client ID with user accounts.
172 Flow: get authorization code -> exchange for refresh token -> obtain and return ID token
173 """
174 auth_code = get_auth_code(client_id)
175 return get_refresh_token_from_code(auth_code, client_id, client_secret)
176
177 def get_auth_code(client_id):
178 auth_url = "https://accounts.google.com/o/oauth2/v2/auth?client_id=%s&response_type=code&scope=openid%%20email&access_type=offline&redirect_uri=urn:ietf:wg:oauth:2.0:oob"%client_id
179 print(auth_url)
180 open_new_tab(auth_url)
181 return input("If there's no browser window prompt, please direct to the URL above, then copy and paste the authorization code here: ")
182
183 def get_refresh_token_from_code(auth_code, client_id, client_secret):
184 payload = {"code": auth_code, "client_id": client_id, "client_secret": client_secret,
185 "redirect_uri": "urn:ietf:wg:oauth:2.0:oob", "grant_type": "authorization_code"}
186 res = requests.post(OAUTH_TOKEN_URI, data=payload)
187 return (str(json.loads(res.text)[u"refresh_token"]))
188
189 def id_token_from_refresh_token(client_id, client_secret, refresh_token, audience):
190 payload = {"client_id": client_id, "client_secret": client_secret,
191 "refresh_token": refresh_token, "grant_type": "refresh_token",
192 "audience": audience}
193 res = requests.post(OAUTH_TOKEN_URI, data=payload)
194 return (str(json.loads(res.text)[u"id_token"]))
195
[end of sdk/python/kfp/_auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/kfp/_auth.py b/sdk/python/kfp/_auth.py
--- a/sdk/python/kfp/_auth.py
+++ b/sdk/python/kfp/_auth.py
@@ -180,15 +180,19 @@
open_new_tab(auth_url)
return input("If there's no browser window prompt, please direct to the URL above, then copy and paste the authorization code here: ")
+
def get_refresh_token_from_code(auth_code, client_id, client_secret):
payload = {"code": auth_code, "client_id": client_id, "client_secret": client_secret,
"redirect_uri": "urn:ietf:wg:oauth:2.0:oob", "grant_type": "authorization_code"}
res = requests.post(OAUTH_TOKEN_URI, data=payload)
- return (str(json.loads(res.text)[u"refresh_token"]))
+ res.raise_for_status()
+ return str(json.loads(res.text)[u"refresh_token"])
+
def id_token_from_refresh_token(client_id, client_secret, refresh_token, audience):
payload = {"client_id": client_id, "client_secret": client_secret,
"refresh_token": refresh_token, "grant_type": "refresh_token",
"audience": audience}
res = requests.post(OAUTH_TOKEN_URI, data=payload)
- return (str(json.loads(res.text)[u"id_token"]))
+ res.raise_for_status()
+ return str(json.loads(res.text)[u"id_token"])
| {"golden_diff": "diff --git a/sdk/python/kfp/_auth.py b/sdk/python/kfp/_auth.py\n--- a/sdk/python/kfp/_auth.py\n+++ b/sdk/python/kfp/_auth.py\n@@ -180,15 +180,19 @@\n open_new_tab(auth_url)\n return input(\"If there's no browser window prompt, please direct to the URL above, then copy and paste the authorization code here: \")\n \n+\n def get_refresh_token_from_code(auth_code, client_id, client_secret):\n payload = {\"code\": auth_code, \"client_id\": client_id, \"client_secret\": client_secret,\n \"redirect_uri\": \"urn:ietf:wg:oauth:2.0:oob\", \"grant_type\": \"authorization_code\"}\n res = requests.post(OAUTH_TOKEN_URI, data=payload)\n- return (str(json.loads(res.text)[u\"refresh_token\"]))\n+ res.raise_for_status()\n+ return str(json.loads(res.text)[u\"refresh_token\"])\n+\n \n def id_token_from_refresh_token(client_id, client_secret, refresh_token, audience):\n payload = {\"client_id\": client_id, \"client_secret\": client_secret,\n \"refresh_token\": refresh_token, \"grant_type\": \"refresh_token\",\n \"audience\": audience}\n res = requests.post(OAUTH_TOKEN_URI, data=payload)\n- return (str(json.loads(res.text)[u\"id_token\"]))\n+ res.raise_for_status()\n+ return str(json.loads(res.text)[u\"id_token\"])\n", "issue": "sdk/client/auth - KeyError: 'id_token' from _auth.py in id_token_from_refresh_token\nIf getting \u201cKeyError: 'id_token'\u201d from \u201cFile \"/Users/gongyuan/miniconda3/envs/mlpipeline/lib/python3.7/site-packages/kfp/_auth.py\", line 192, in id_token_from_refresh_token\r\n return (str(json.loads(res.text)[u\"id_token\"]))\r\n\r\nThe request to get id token from refresh token failed, but the client didn\u2019t surface the underlying error message. The http response I got was\r\n```\r\n{\r\n \"error\": \"invalid_grant\",\r\n \"error_description\": \"Bad Request\"\r\n}\r\n```\r\n\r\nAnd root cause: ~/.config/kfp/credentials.json has expired. 
I deleted it and got a new token: `rm ~/.config/kfp/credentials.json`.\r\n\r\n## Solution\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/2a65eec1fa265ebbda69d5b8b1875e3e4b54ac82/sdk/python/kfp/_auth.py#L184-L185 and https://github.com/kubeflow/pipelines/blob/2a65eec1fa265ebbda69d5b8b1875e3e4b54ac82/sdk/python/kfp/_auth.py#L191-L192, we should first check request response status code by `Response.raise_for_exception`, it could be `401 unauthorized`.\n", "before_files": [{"content": "# Copyright 2018 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport os\nimport google.auth\nimport google.auth.app_engine\nimport google.auth.compute_engine.credentials\nimport google.auth.iam\nfrom google.auth.transport.requests import Request\nimport google.oauth2.credentials\nimport google.oauth2.service_account\nimport requests_toolbelt.adapters.appengine\nfrom webbrowser import open_new_tab\nimport requests\nimport json\n\nIAM_SCOPE = 'https://www.googleapis.com/auth/iam'\nOAUTH_TOKEN_URI = 'https://www.googleapis.com/oauth2/v4/token'\nLOCAL_KFP_CREDENTIAL = os.path.expanduser('~/.config/kfp/credentials.json')\n\ndef get_gcp_access_token():\n \"\"\"Get and return GCP access token for the current Application Default\n Credentials. If not set, returns None. 
For more information, see\n https://cloud.google.com/sdk/gcloud/reference/auth/application-default/print-access-token\n \"\"\"\n token = None\n try:\n creds, project = google.auth.default(scopes=[\"https://www.googleapis.com/auth/cloud-platform\"])\n if not creds.valid:\n auth_req = Request()\n creds.refresh(auth_req)\n if creds.valid:\n token = creds.token\n except Exception as e:\n logging.warning('Failed to get GCP access token: %s', e)\n return token\n\ndef get_auth_token(client_id, other_client_id, other_client_secret):\n \"\"\"Gets auth token from default service account or user account.\"\"\"\n if os.path.exists(LOCAL_KFP_CREDENTIAL):\n # fetch IAP auth token using the locally stored credentials.\n with open(LOCAL_KFP_CREDENTIAL, 'r') as f:\n credentials = json.load(f)\n if client_id in credentials:\n return id_token_from_refresh_token(credentials[client_id]['other_client_id'],\n credentials[client_id]['other_client_secret'],\n credentials[client_id]['refresh_token'],\n client_id)\n if other_client_id is None or other_client_secret is None:\n # fetch IAP auth token: service accounts\n token = get_auth_token_from_sa(client_id)\n else:\n # fetch IAP auth token: user account\n # Obtain the ID token for provided Client ID with user accounts.\n # Flow: get authorization code -> exchange for refresh token -> obtain and return ID token\n refresh_token = get_refresh_token_from_client_id(other_client_id, other_client_secret)\n credentials = {}\n if os.path.exists(LOCAL_KFP_CREDENTIAL):\n with open(LOCAL_KFP_CREDENTIAL, 'r') as f:\n credentials = json.load(f)\n credentials[client_id] = {}\n credentials[client_id]['other_client_id'] = other_client_id\n credentials[client_id]['other_client_secret'] = other_client_secret\n credentials[client_id]['refresh_token'] = refresh_token\n #TODO: handle the case when the refresh_token expires.\n # which only happens if the refresh_token is not used once for six months.\n if not os.path.exists(os.path.dirname(LOCAL_KFP_CREDENTIAL)):\n os.makedirs(os.path.dirname(LOCAL_KFP_CREDENTIAL))\n with open(LOCAL_KFP_CREDENTIAL, 'w') as f:\n json.dump(credentials, f)\n token = id_token_from_refresh_token(other_client_id, other_client_secret, refresh_token, client_id)\n return token\n\ndef get_auth_token_from_sa(client_id):\n \"\"\"Gets auth token from default service account.\n\n If no service account credential is found, returns None.\n \"\"\"\n service_account_credentials = get_service_account_credentials(client_id)\n if service_account_credentials:\n return get_google_open_id_connect_token(service_account_credentials)\n return None\n\ndef get_service_account_credentials(client_id):\n # Figure out what environment we're running in and get some preliminary\n # information about the service account.\n bootstrap_credentials, _ = google.auth.default(\n scopes=[IAM_SCOPE])\n if isinstance(bootstrap_credentials,\n google.oauth2.credentials.Credentials):\n logging.info('Found OAuth2 credentials and skip SA auth.')\n return None\n elif isinstance(bootstrap_credentials,\n google.auth.app_engine.Credentials):\n requests_toolbelt.adapters.appengine.monkeypatch()\n\n # For service account's using the Compute Engine metadata service,\n # service_account_email isn't available until refresh is called.\n bootstrap_credentials.refresh(Request())\n signer_email = bootstrap_credentials.service_account_email\n if isinstance(bootstrap_credentials,\n google.auth.compute_engine.credentials.Credentials):\n # Since the Compute Engine metadata service doesn't expose the service\n # account 
key, we use the IAM signBlob API to sign instead.\n # In order for this to work:\n #\n # 1. Your VM needs the https://www.googleapis.com/auth/iam scope.\n # You can specify this specific scope when creating a VM\n # through the API or gcloud. When using Cloud Console,\n # you'll need to specify the \"full access to all Cloud APIs\"\n # scope. A VM's scopes can only be specified at creation time.\n #\n # 2. The VM's default service account needs the \"Service Account Actor\"\n # role. This can be found under the \"Project\" category in Cloud\n # Console, or roles/iam.serviceAccountActor in gcloud.\n signer = google.auth.iam.Signer(\n Request(), bootstrap_credentials, signer_email)\n else:\n # A Signer object can sign a JWT using the service account's key.\n signer = bootstrap_credentials.signer\n\n # Construct OAuth 2.0 service account credentials using the signer\n # and email acquired from the bootstrap credentials.\n return google.oauth2.service_account.Credentials(\n signer, signer_email, token_uri=OAUTH_TOKEN_URI, additional_claims={\n 'target_audience': client_id\n })\n\ndef get_google_open_id_connect_token(service_account_credentials):\n \"\"\"Get an OpenID Connect token issued by Google for the service account.\n This function:\n 1. Generates a JWT signed with the service account's private key\n containing a special \"target_audience\" claim.\n 2. Sends it to the OAUTH_TOKEN_URI endpoint. Because the JWT in #1\n has a target_audience claim, that endpoint will respond with\n an OpenID Connect token for the service account -- in other words,\n a JWT signed by *Google*. The aud claim in this JWT will be\n set to the value from the target_audience claim in #1.\n For more information, see\n https://developers.google.com/identity/protocols/OAuth2ServiceAccount .\n The HTTP/REST example on that page describes the JWT structure and\n demonstrates how to call the token endpoint. 
(The example on that page\n shows how to get an OAuth2 access token; this code is using a\n modified version of it to get an OpenID Connect token.)\n \"\"\"\n\n service_account_jwt = (\n service_account_credentials._make_authorization_grant_assertion())\n request = google.auth.transport.requests.Request()\n body = {\n 'assertion': service_account_jwt,\n 'grant_type': google.oauth2._client._JWT_GRANT_TYPE,\n }\n token_response = google.oauth2._client._token_endpoint_request(\n request, OAUTH_TOKEN_URI, body)\n return token_response['id_token']\n\ndef get_refresh_token_from_client_id(client_id, client_secret):\n \"\"\"Obtain the ID token for provided Client ID with user accounts.\n Flow: get authorization code -> exchange for refresh token -> obtain and return ID token\n \"\"\"\n auth_code = get_auth_code(client_id)\n return get_refresh_token_from_code(auth_code, client_id, client_secret)\n\ndef get_auth_code(client_id):\n auth_url = \"https://accounts.google.com/o/oauth2/v2/auth?client_id=%s&response_type=code&scope=openid%%20email&access_type=offline&redirect_uri=urn:ietf:wg:oauth:2.0:oob\"%client_id\n print(auth_url)\n open_new_tab(auth_url)\n return input(\"If there's no browser window prompt, please direct to the URL above, then copy and paste the authorization code here: \")\n\ndef get_refresh_token_from_code(auth_code, client_id, client_secret):\n payload = {\"code\": auth_code, \"client_id\": client_id, \"client_secret\": client_secret,\n \"redirect_uri\": \"urn:ietf:wg:oauth:2.0:oob\", \"grant_type\": \"authorization_code\"}\n res = requests.post(OAUTH_TOKEN_URI, data=payload)\n return (str(json.loads(res.text)[u\"refresh_token\"]))\n\ndef id_token_from_refresh_token(client_id, client_secret, refresh_token, audience):\n payload = {\"client_id\": client_id, \"client_secret\": client_secret,\n \"refresh_token\": refresh_token, \"grant_type\": \"refresh_token\",\n \"audience\": audience}\n res = requests.post(OAUTH_TOKEN_URI, data=payload)\n return (str(json.loads(res.text)[u\"id_token\"]))\n", "path": "sdk/python/kfp/_auth.py"}]} | 3,375 | 317 |
gh_patches_debug_35503 | rasdani/github-patches | git_diff | falconry__falcon-1925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make JSONHandler customization docs clearer
As pointed out by @Stargateur in https://github.com/falconry/falcon/issues/1906#issuecomment-817374057, our [`JSONHandler`](https://falcon.readthedocs.io/en/stable/api/media.html#falcon.media.JSONHandler) customization docs could be made clearer by separately illustrating different (albeit closely related) concepts:
* Use a custom JSON library (such as the exemplified `rapidjson`). Customize parameters.
* Use the stdlib's `json` module, just provide custom serialization or deserialization parameters. Also link to the ["Prettifying JSON Responses" recipe](https://falcon.readthedocs.io/en/stable/user/recipes/pretty-json.html), which illustrates customization of `dumps` parameters.
* Add a sentence or two about replacing the default JSON handlers, not just toss in a code snippet as it is at the time of writing this. Also link to [Replacing the Default Handlers](https://falcon.readthedocs.io/en/stable/api/media.html#custom-media-handlers) from that explanation.
</issue>
<code>
[start of falcon/media/json.py]
1 from functools import partial
2 import json
3
4 from falcon import errors
5 from falcon import http_error
6 from falcon.media.base import BaseHandler
7 from falcon.media.base import TextBaseHandlerWS
8
9
10 class JSONHandler(BaseHandler):
11 """JSON media handler.
12
13 This handler uses Python's standard :py:mod:`json` library by default, but
14 can be easily configured to use any of a number of third-party JSON
15 libraries, depending on your needs. For example, you can often
16 realize a significant performance boost under CPython by using an
17 alternative library. Good options in this respect include `orjson`,
18 `python-rapidjson`, and `mujson`.
19
20 This handler will raise a :class:`falcon.MediaNotFoundError` when attempting
21 to parse an empty body, or a :class:`falcon.MediaMalformedError`
22 if an error happens while parsing the body.
23
24 Note:
25 If you are deploying to PyPy, we recommend sticking with the standard
26 library's JSON implementation, since it will be faster in most cases
27 as compared to a third-party library.
28
29 Overriding the default JSON implementation is simply a matter of specifying
30 the desired ``dumps`` and ``loads`` functions::
31
32 import falcon
33 from falcon import media
34
35 import rapidjson
36
37 json_handler = media.JSONHandler(
38 dumps=rapidjson.dumps,
39 loads=rapidjson.loads,
40 )
41 extra_handlers = {
42 'application/json': json_handler,
43 }
44
45 app = falcon.App()
46 app.req_options.media_handlers.update(extra_handlers)
47 app.resp_options.media_handlers.update(extra_handlers)
48
49 By default, ``ensure_ascii`` is passed to the ``json.dumps`` function.
50 If you override the ``dumps`` function, you will need to explicitly set
51 ``ensure_ascii`` to ``False`` in order to enable the serialization of
52 Unicode characters to UTF-8. This is easily done by using
53 :any:`functools.partial` to apply the desired keyword argument. In fact, you
54 can use this same technique to customize any option supported by the
55 ``dumps`` and ``loads`` functions::
56
57 from functools import partial
58
59 from falcon import media
60 import rapidjson
61
62 json_handler = media.JSONHandler(
63 dumps=partial(
64 rapidjson.dumps,
65 ensure_ascii=False, sort_keys=True
66 ),
67 )
68
69 Keyword Arguments:
70 dumps (func): Function to use when serializing JSON responses.
71 loads (func): Function to use when deserializing JSON requests.
72 """
73
74 def __init__(self, dumps=None, loads=None):
75 self._dumps = dumps or partial(json.dumps, ensure_ascii=False)
76 self._loads = loads or json.loads
77
78 # PERF(kgriffs): Test dumps once up front so we can set the
79 # proper serialize implementation.
80 result = self._dumps({'message': 'Hello World'})
81 if isinstance(result, str):
82 self.serialize = self._serialize_s
83 self.serialize_async = self._serialize_async_s
84 else:
85 self.serialize = self._serialize_b
86 self.serialize_async = self._serialize_async_b
87
88 # NOTE(kgriffs): To be safe, only enable the optimized protocol when
89 # not subclassed.
90 if type(self) is JSONHandler:
91 self._serialize_sync = self.serialize
92 self._deserialize_sync = self._deserialize
93
94 def _deserialize(self, data):
95 if not data:
96 raise errors.MediaNotFoundError('JSON')
97 try:
98 return self._loads(data.decode())
99 except ValueError as err:
100 raise errors.MediaMalformedError('JSON') from err
101
102 def deserialize(self, stream, content_type, content_length):
103 return self._deserialize(stream.read())
104
105 async def deserialize_async(self, stream, content_type, content_length):
106 return self._deserialize(await stream.read())
107
108 # NOTE(kgriffs): Make content_type a kwarg to support the
109 # Request.render_body() shortcut optimization.
110 def _serialize_s(self, media, content_type=None) -> bytes:
111 return self._dumps(media).encode()
112
113 async def _serialize_async_s(self, media, content_type) -> bytes:
114 return self._dumps(media).encode()
115
116 def _serialize_b(self, media, content_type) -> bytes:
117 return self._dumps(media)
118
119 async def _serialize_async_b(self, media, content_type) -> bytes:
120 return self._dumps(media)
121
122
123 class JSONHandlerWS(TextBaseHandlerWS):
124 """WebSocket media handler for de(serializing) JSON to/from TEXT payloads.
125
126 This handler uses Python's standard :py:mod:`json` library by default, but
127 can be easily configured to use any of a number of third-party JSON
128 libraries, depending on your needs. For example, you can often
129 realize a significant performance boost under CPython by using an
130 alternative library. Good options in this respect include `orjson`,
131 `python-rapidjson`, and `mujson`.
132
133 Note:
134 If you are deploying to PyPy, we recommend sticking with the standard
135 library's JSON implementation, since it will be faster in most cases
136 as compared to a third-party library.
137
138 Overriding the default JSON implementation is simply a matter of specifying
139 the desired ``dumps`` and ``loads`` functions::
140
141 import falcon
142 from falcon import media
143
144 import rapidjson
145
146 json_handler = media.JSONHandlerWS(
147 dumps=rapidjson.dumps,
148 loads=rapidjson.loads,
149 )
150
151 app = falcon.asgi.App()
152 app.ws_options.media_handlers[falcon.WebSocketPayloadType.TEXT] = json_handler
153
154 By default, ``ensure_ascii`` is passed to the ``json.dumps`` function.
155 If you override the ``dumps`` function, you will need to explicitly set
156 ``ensure_ascii`` to ``False`` in order to enable the serialization of
157 Unicode characters to UTF-8. This is easily done by using
158 :any:`functools.partial` to apply the desired keyword argument. In fact, you
159 can use this same technique to customize any option supported by the
160 ``dumps`` and ``loads`` functions::
161
162 from functools import partial
163
164 from falcon import media
165 import rapidjson
166
167 json_handler = media.JSONHandlerWS(
168 dumps=partial(
169 rapidjson.dumps,
170 ensure_ascii=False, sort_keys=True
171 ),
172 )
173
174 Keyword Arguments:
175 dumps (func): Function to use when serializing JSON.
176 loads (func): Function to use when deserializing JSON.
177 """
178
179 __slots__ = ['dumps', 'loads']
180
181 def __init__(self, dumps=None, loads=None):
182 self._dumps = dumps or partial(json.dumps, ensure_ascii=False)
183 self._loads = loads or json.loads
184
185 def serialize(self, media: object) -> str:
186 return self._dumps(media)
187
188 def deserialize(self, payload: str) -> object:
189 return self._loads(payload)
190
191
192 http_error._DEFAULT_JSON_HANDLER = _DEFAULT_JSON_HANDLER = JSONHandler() # type: ignore
193
[end of falcon/media/json.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/falcon/media/json.py b/falcon/media/json.py
--- a/falcon/media/json.py
+++ b/falcon/media/json.py
@@ -26,8 +26,10 @@
library's JSON implementation, since it will be faster in most cases
as compared to a third-party library.
- Overriding the default JSON implementation is simply a matter of specifying
- the desired ``dumps`` and ``loads`` functions::
+ You can replace the default JSON handler by using a custom JSON library
+ (see also: :ref:`custom_media_handlers`). Overriding the default JSON
+ implementation is simply a matter of specifying the desired ``dumps`` and
+ ``loads`` functions::
import falcon
from falcon import media
@@ -46,13 +48,39 @@
app.req_options.media_handlers.update(extra_handlers)
app.resp_options.media_handlers.update(extra_handlers)
+ Even if you decide to stick with the stdlib's :any:`json.dump` and
+ :any:`json.loads`, you can wrap them using :any:`functools.partial` to
+ provide custom serialization or deserialization parameters supported by the
+ ``dumps`` and ``loads`` functions, respectively
+ (see also: :ref:`prettifying-json-responses`)::
+
+ import falcon
+ from falcon import media
+
+ from functools import partial
+
+ json_handler = media.JSONHandler(
+ dumps=partial(
+ json.dumps,
+ default=str,
+ sort_keys=True,
+ ),
+ )
+ extra_handlers = {
+ 'application/json': json_handler,
+ }
+
+ app = falcon.App()
+ app.req_options.media_handlers.update(extra_handlers)
+ app.resp_options.media_handlers.update(extra_handlers)
+
By default, ``ensure_ascii`` is passed to the ``json.dumps`` function.
If you override the ``dumps`` function, you will need to explicitly set
``ensure_ascii`` to ``False`` in order to enable the serialization of
Unicode characters to UTF-8. This is easily done by using
- :any:`functools.partial` to apply the desired keyword argument. In fact, you
- can use this same technique to customize any option supported by the
- ``dumps`` and ``loads`` functions::
+ :any:`functools.partial` to apply the desired keyword argument. As also
+ demonstrated in the previous paragraph, you can use this same technique to
+ customize any option supported by the ``dumps`` and ``loads`` functions::
from functools import partial
| {"golden_diff": "diff --git a/falcon/media/json.py b/falcon/media/json.py\n--- a/falcon/media/json.py\n+++ b/falcon/media/json.py\n@@ -26,8 +26,10 @@\n library's JSON implementation, since it will be faster in most cases\n as compared to a third-party library.\n \n- Overriding the default JSON implementation is simply a matter of specifying\n- the desired ``dumps`` and ``loads`` functions::\n+ You can replace the default JSON handler by using a custom JSON library\n+ (see also: :ref:`custom_media_handlers`). Overriding the default JSON\n+ implementation is simply a matter of specifying the desired ``dumps`` and\n+ ``loads`` functions::\n \n import falcon\n from falcon import media\n@@ -46,13 +48,39 @@\n app.req_options.media_handlers.update(extra_handlers)\n app.resp_options.media_handlers.update(extra_handlers)\n \n+ Even if you decide to stick with the stdlib's :any:`json.dump` and\n+ :any:`json.loads`, you can wrap them using :any:`functools.partial` to\n+ provide custom serialization or deserialization parameters supported by the\n+ ``dumps`` and ``loads`` functions, respectively\n+ (see also: :ref:`prettifying-json-responses`)::\n+\n+ import falcon\n+ from falcon import media\n+\n+ from functools import partial\n+\n+ json_handler = media.JSONHandler(\n+ dumps=partial(\n+ json.dumps,\n+ default=str,\n+ sort_keys=True,\n+ ),\n+ )\n+ extra_handlers = {\n+ 'application/json': json_handler,\n+ }\n+\n+ app = falcon.App()\n+ app.req_options.media_handlers.update(extra_handlers)\n+ app.resp_options.media_handlers.update(extra_handlers)\n+\n By default, ``ensure_ascii`` is passed to the ``json.dumps`` function.\n If you override the ``dumps`` function, you will need to explicitly set\n ``ensure_ascii`` to ``False`` in order to enable the serialization of\n Unicode characters to UTF-8. This is easily done by using\n- :any:`functools.partial` to apply the desired keyword argument. In fact, you\n- can use this same technique to customize any option supported by the\n- ``dumps`` and ``loads`` functions::\n+ :any:`functools.partial` to apply the desired keyword argument. As also\n+ demonstrated in the previous paragraph, you can use this same technique to\n+ customize any option supported by the ``dumps`` and ``loads`` functions::\n \n from functools import partial\n", "issue": "Make JSONHandler customization docs clearer\nAs pointed out by @Stargateur in https://github.com/falconry/falcon/issues/1906#issuecomment-817374057, our [`JSONHandler`](https://falcon.readthedocs.io/en/stable/api/media.html#falcon.media.JSONHandler) customization docs could be made clearer by separately illustrating different (albeit closely related) concepts:\r\n* Use a custom JSON library (such as the exemplified `rapidjson`). Customize parameters.\r\n* Use the stdlib's `json` module, just provide custom serialization or deserialization parameters. Also link to the [\"Prettifying JSON Responses\" recipe](https://falcon.readthedocs.io/en/stable/user/recipes/pretty-json.html), which illustrates customization of `dumps` parameters.\r\n* Add a sentence or two about replacing the default JSON handlers, not just toss in a code snippet as it is at the time of writing this. 
Also link to [Replacing the Default Handlers](https://falcon.readthedocs.io/en/stable/api/media.html#custom-media-handlers) from that explanation.\n", "before_files": [{"content": "from functools import partial\nimport json\n\nfrom falcon import errors\nfrom falcon import http_error\nfrom falcon.media.base import BaseHandler\nfrom falcon.media.base import TextBaseHandlerWS\n\n\nclass JSONHandler(BaseHandler):\n \"\"\"JSON media handler.\n\n This handler uses Python's standard :py:mod:`json` library by default, but\n can be easily configured to use any of a number of third-party JSON\n libraries, depending on your needs. For example, you can often\n realize a significant performance boost under CPython by using an\n alternative library. Good options in this respect include `orjson`,\n `python-rapidjson`, and `mujson`.\n\n This handler will raise a :class:`falcon.MediaNotFoundError` when attempting\n to parse an empty body, or a :class:`falcon.MediaMalformedError`\n if an error happens while parsing the body.\n\n Note:\n If you are deploying to PyPy, we recommend sticking with the standard\n library's JSON implementation, since it will be faster in most cases\n as compared to a third-party library.\n\n Overriding the default JSON implementation is simply a matter of specifying\n the desired ``dumps`` and ``loads`` functions::\n\n import falcon\n from falcon import media\n\n import rapidjson\n\n json_handler = media.JSONHandler(\n dumps=rapidjson.dumps,\n loads=rapidjson.loads,\n )\n extra_handlers = {\n 'application/json': json_handler,\n }\n\n app = falcon.App()\n app.req_options.media_handlers.update(extra_handlers)\n app.resp_options.media_handlers.update(extra_handlers)\n\n By default, ``ensure_ascii`` is passed to the ``json.dumps`` function.\n If you override the ``dumps`` function, you will need to explicitly set\n ``ensure_ascii`` to ``False`` in order to enable the serialization of\n Unicode characters to UTF-8. This is easily done by using\n :any:`functools.partial` to apply the desired keyword argument. 
In fact, you\n can use this same technique to customize any option supported by the\n ``dumps`` and ``loads`` functions::\n\n from functools import partial\n\n from falcon import media\n import rapidjson\n\n json_handler = media.JSONHandler(\n dumps=partial(\n rapidjson.dumps,\n ensure_ascii=False, sort_keys=True\n ),\n )\n\n Keyword Arguments:\n dumps (func): Function to use when serializing JSON responses.\n loads (func): Function to use when deserializing JSON requests.\n \"\"\"\n\n def __init__(self, dumps=None, loads=None):\n self._dumps = dumps or partial(json.dumps, ensure_ascii=False)\n self._loads = loads or json.loads\n\n # PERF(kgriffs): Test dumps once up front so we can set the\n # proper serialize implementation.\n result = self._dumps({'message': 'Hello World'})\n if isinstance(result, str):\n self.serialize = self._serialize_s\n self.serialize_async = self._serialize_async_s\n else:\n self.serialize = self._serialize_b\n self.serialize_async = self._serialize_async_b\n\n # NOTE(kgriffs): To be safe, only enable the optimized protocol when\n # not subclassed.\n if type(self) is JSONHandler:\n self._serialize_sync = self.serialize\n self._deserialize_sync = self._deserialize\n\n def _deserialize(self, data):\n if not data:\n raise errors.MediaNotFoundError('JSON')\n try:\n return self._loads(data.decode())\n except ValueError as err:\n raise errors.MediaMalformedError('JSON') from err\n\n def deserialize(self, stream, content_type, content_length):\n return self._deserialize(stream.read())\n\n async def deserialize_async(self, stream, content_type, content_length):\n return self._deserialize(await stream.read())\n\n # NOTE(kgriffs): Make content_type a kwarg to support the\n # Request.render_body() shortcut optimization.\n def _serialize_s(self, media, content_type=None) -> bytes:\n return self._dumps(media).encode()\n\n async def _serialize_async_s(self, media, content_type) -> bytes:\n return self._dumps(media).encode()\n\n def _serialize_b(self, media, content_type) -> bytes:\n return self._dumps(media)\n\n async def _serialize_async_b(self, media, content_type) -> bytes:\n return self._dumps(media)\n\n\nclass JSONHandlerWS(TextBaseHandlerWS):\n \"\"\"WebSocket media handler for de(serializing) JSON to/from TEXT payloads.\n\n This handler uses Python's standard :py:mod:`json` library by default, but\n can be easily configured to use any of a number of third-party JSON\n libraries, depending on your needs. For example, you can often\n realize a significant performance boost under CPython by using an\n alternative library. Good options in this respect include `orjson`,\n `python-rapidjson`, and `mujson`.\n\n Note:\n If you are deploying to PyPy, we recommend sticking with the standard\n library's JSON implementation, since it will be faster in most cases\n as compared to a third-party library.\n\n Overriding the default JSON implementation is simply a matter of specifying\n the desired ``dumps`` and ``loads`` functions::\n\n import falcon\n from falcon import media\n\n import rapidjson\n\n json_handler = media.JSONHandlerWS(\n dumps=rapidjson.dumps,\n loads=rapidjson.loads,\n )\n\n app = falcon.asgi.App()\n app.ws_options.media_handlers[falcon.WebSocketPayloadType.TEXT] = json_handler\n\n By default, ``ensure_ascii`` is passed to the ``json.dumps`` function.\n If you override the ``dumps`` function, you will need to explicitly set\n ``ensure_ascii`` to ``False`` in order to enable the serialization of\n Unicode characters to UTF-8. 
This is easily done by using\n :any:`functools.partial` to apply the desired keyword argument. In fact, you\n can use this same technique to customize any option supported by the\n ``dumps`` and ``loads`` functions::\n\n from functools import partial\n\n from falcon import media\n import rapidjson\n\n json_handler = media.JSONHandlerWS(\n dumps=partial(\n rapidjson.dumps,\n ensure_ascii=False, sort_keys=True\n ),\n )\n\n Keyword Arguments:\n dumps (func): Function to use when serializing JSON.\n loads (func): Function to use when deserializing JSON.\n \"\"\"\n\n __slots__ = ['dumps', 'loads']\n\n def __init__(self, dumps=None, loads=None):\n self._dumps = dumps or partial(json.dumps, ensure_ascii=False)\n self._loads = loads or json.loads\n\n def serialize(self, media: object) -> str:\n return self._dumps(media)\n\n def deserialize(self, payload: str) -> object:\n return self._loads(payload)\n\n\nhttp_error._DEFAULT_JSON_HANDLER = _DEFAULT_JSON_HANDLER = JSONHandler() # type: ignore\n", "path": "falcon/media/json.py"}]} | 2,792 | 583 |
gh_patches_debug_17857 | rasdani/github-patches | git_diff | python-discord__bot-723 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow for throw away words after the rules command call
fixes #723
This simply catches all strings after a sequence of ints. This allows us to write a message after the list of rules we wish to display.
Example:
`!rules 5 6 We do not allow for paid work, and that will break ToS of x and y`
Disclaimer, didn't get site to respond properly so haven't tested this with bot+site.
</issue>
<code>
[start of bot/cogs/alias.py]
1 import inspect
2 import logging
3 from typing import Union
4
5 from discord import Colour, Embed, Member, User
6 from discord.ext.commands import Cog, Command, Context, clean_content, command, group
7
8 from bot.bot import Bot
9 from bot.cogs.extensions import Extension
10 from bot.cogs.watchchannels.watchchannel import proxy_user
11 from bot.converters import TagNameConverter
12 from bot.pagination import LinePaginator
13
14 log = logging.getLogger(__name__)
15
16
17 class Alias (Cog):
18 """Aliases for commonly used commands."""
19
20 def __init__(self, bot: Bot):
21 self.bot = bot
22
23 async def invoke(self, ctx: Context, cmd_name: str, *args, **kwargs) -> None:
24 """Invokes a command with args and kwargs."""
25 log.debug(f"{cmd_name} was invoked through an alias")
26 cmd = self.bot.get_command(cmd_name)
27 if not cmd:
28 return log.warning(f'Did not find command "{cmd_name}" to invoke.')
29 elif not await cmd.can_run(ctx):
30 return log.warning(
31 f'{str(ctx.author)} tried to run the command "{cmd_name}"'
32 )
33
34 await ctx.invoke(cmd, *args, **kwargs)
35
36 @command(name='aliases')
37 async def aliases_command(self, ctx: Context) -> None:
38 """Show configured aliases on the bot."""
39 embed = Embed(
40 title='Configured aliases',
41 colour=Colour.blue()
42 )
43 await LinePaginator.paginate(
44 (
45 f"• `{ctx.prefix}{value.name}` "
46 f"=> `{ctx.prefix}{name[:-len('_alias')].replace('_', ' ')}`"
47 for name, value in inspect.getmembers(self)
48 if isinstance(value, Command) and name.endswith('_alias')
49 ),
50 ctx, embed, empty=False, max_lines=20
51 )
52
53 @command(name="resources", aliases=("resource",), hidden=True)
54 async def site_resources_alias(self, ctx: Context) -> None:
55 """Alias for invoking <prefix>site resources."""
56 await self.invoke(ctx, "site resources")
57
58 @command(name="tools", hidden=True)
59 async def site_tools_alias(self, ctx: Context) -> None:
60 """Alias for invoking <prefix>site tools."""
61 await self.invoke(ctx, "site tools")
62
63 @command(name="watch", hidden=True)
64 async def bigbrother_watch_alias(self, ctx: Context, user: Union[Member, User, proxy_user], *, reason: str) -> None:
65 """Alias for invoking <prefix>bigbrother watch [user] [reason]."""
66 await self.invoke(ctx, "bigbrother watch", user, reason=reason)
67
68 @command(name="unwatch", hidden=True)
69 async def bigbrother_unwatch_alias(self, ctx: Context, user: Union[User, proxy_user], *, reason: str) -> None:
70 """Alias for invoking <prefix>bigbrother unwatch [user] [reason]."""
71 await self.invoke(ctx, "bigbrother unwatch", user, reason=reason)
72
73 @command(name="home", hidden=True)
74 async def site_home_alias(self, ctx: Context) -> None:
75 """Alias for invoking <prefix>site home."""
76 await self.invoke(ctx, "site home")
77
78 @command(name="faq", hidden=True)
79 async def site_faq_alias(self, ctx: Context) -> None:
80 """Alias for invoking <prefix>site faq."""
81 await self.invoke(ctx, "site faq")
82
83 @command(name="rules", aliases=("rule",), hidden=True)
84 async def site_rules_alias(self, ctx: Context, *rules: int) -> None:
85 """Alias for invoking <prefix>site rules."""
86 await self.invoke(ctx, "site rules", *rules)
87
88 @command(name="reload", hidden=True)
89 async def extensions_reload_alias(self, ctx: Context, *extensions: Extension) -> None:
90 """Alias for invoking <prefix>extensions reload [extensions...]."""
91 await self.invoke(ctx, "extensions reload", *extensions)
92
93 @command(name="defon", hidden=True)
94 async def defcon_enable_alias(self, ctx: Context) -> None:
95 """Alias for invoking <prefix>defcon enable."""
96 await self.invoke(ctx, "defcon enable")
97
98 @command(name="defoff", hidden=True)
99 async def defcon_disable_alias(self, ctx: Context) -> None:
100 """Alias for invoking <prefix>defcon disable."""
101 await self.invoke(ctx, "defcon disable")
102
103 @command(name="exception", hidden=True)
104 async def tags_get_traceback_alias(self, ctx: Context) -> None:
105 """Alias for invoking <prefix>tags get traceback."""
106 await self.invoke(ctx, "tags get", tag_name="traceback")
107
108 @group(name="get",
109 aliases=("show", "g"),
110 hidden=True,
111 invoke_without_command=True)
112 async def get_group_alias(self, ctx: Context) -> None:
113 """Group for reverse aliases for commands like `tags get`, allowing for `get tags` or `get docs`."""
114 pass
115
116 @get_group_alias.command(name="tags", aliases=("tag", "t"), hidden=True)
117 async def tags_get_alias(
118 self, ctx: Context, *, tag_name: TagNameConverter = None
119 ) -> None:
120 """
121 Alias for invoking <prefix>tags get [tag_name].
122
123 tag_name: str - tag to be viewed.
124 """
125 await self.invoke(ctx, "tags get", tag_name=tag_name)
126
127 @get_group_alias.command(name="docs", aliases=("doc", "d"), hidden=True)
128 async def docs_get_alias(
129 self, ctx: Context, symbol: clean_content = None
130 ) -> None:
131 """Alias for invoking <prefix>docs get [symbol]."""
132 await self.invoke(ctx, "docs get", symbol)
133
134 @command(name="nominate", hidden=True)
135 async def nomination_add_alias(self, ctx: Context, user: Union[Member, User, proxy_user], *, reason: str) -> None:
136 """Alias for invoking <prefix>talentpool add [user] [reason]."""
137 await self.invoke(ctx, "talentpool add", user, reason=reason)
138
139 @command(name="unnominate", hidden=True)
140 async def nomination_end_alias(self, ctx: Context, user: Union[User, proxy_user], *, reason: str) -> None:
141 """Alias for invoking <prefix>nomination end [user] [reason]."""
142 await self.invoke(ctx, "nomination end", user, reason=reason)
143
144 @command(name="nominees", hidden=True)
145 async def nominees_alias(self, ctx: Context) -> None:
146 """Alias for invoking <prefix>tp watched."""
147 await self.invoke(ctx, "talentpool watched")
148
149
150 def setup(bot: Bot) -> None:
151 """Load the Alias cog."""
152 bot.add_cog(Alias(bot))
153
[end of bot/cogs/alias.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bot/cogs/alias.py b/bot/cogs/alias.py
--- a/bot/cogs/alias.py
+++ b/bot/cogs/alias.py
@@ -3,7 +3,10 @@
from typing import Union
from discord import Colour, Embed, Member, User
-from discord.ext.commands import Cog, Command, Context, clean_content, command, group
+from discord.ext.commands import (
+ Cog, Command, Context, Greedy,
+ clean_content, command, group,
+)
from bot.bot import Bot
from bot.cogs.extensions import Extension
@@ -81,7 +84,7 @@
await self.invoke(ctx, "site faq")
@command(name="rules", aliases=("rule",), hidden=True)
- async def site_rules_alias(self, ctx: Context, *rules: int) -> None:
+ async def site_rules_alias(self, ctx: Context, rules: Greedy[int], *_: str) -> None:
"""Alias for invoking <prefix>site rules."""
await self.invoke(ctx, "site rules", *rules)
| {"golden_diff": "diff --git a/bot/cogs/alias.py b/bot/cogs/alias.py\n--- a/bot/cogs/alias.py\n+++ b/bot/cogs/alias.py\n@@ -3,7 +3,10 @@\n from typing import Union\n \n from discord import Colour, Embed, Member, User\n-from discord.ext.commands import Cog, Command, Context, clean_content, command, group\n+from discord.ext.commands import (\n+ Cog, Command, Context, Greedy,\n+ clean_content, command, group,\n+)\n \n from bot.bot import Bot\n from bot.cogs.extensions import Extension\n@@ -81,7 +84,7 @@\n await self.invoke(ctx, \"site faq\")\n \n @command(name=\"rules\", aliases=(\"rule\",), hidden=True)\n- async def site_rules_alias(self, ctx: Context, *rules: int) -> None:\n+ async def site_rules_alias(self, ctx: Context, rules: Greedy[int], *_: str) -> None:\n \"\"\"Alias for invoking <prefix>site rules.\"\"\"\n await self.invoke(ctx, \"site rules\", *rules)\n", "issue": "Allow for throw away words after the rules command call\nfixes #723 \r\nThis simply catches all strings after a sequence of ints. This allows us to write a message after the list of rules we wish to display. \r\nExample:\r\n`!rules 5 6 We do not allow for paid work, and that will break ToS of x and y` \r\n\r\nDisclaimer, didn't get site to respond properly so haven't tested this with bot+site.\n", "before_files": [{"content": "import inspect\nimport logging\nfrom typing import Union\n\nfrom discord import Colour, Embed, Member, User\nfrom discord.ext.commands import Cog, Command, Context, clean_content, command, group\n\nfrom bot.bot import Bot\nfrom bot.cogs.extensions import Extension\nfrom bot.cogs.watchchannels.watchchannel import proxy_user\nfrom bot.converters import TagNameConverter\nfrom bot.pagination import LinePaginator\n\nlog = logging.getLogger(__name__)\n\n\nclass Alias (Cog):\n \"\"\"Aliases for commonly used commands.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n async def invoke(self, ctx: Context, cmd_name: str, *args, **kwargs) -> None:\n \"\"\"Invokes a command with args and kwargs.\"\"\"\n log.debug(f\"{cmd_name} was invoked through an alias\")\n cmd = self.bot.get_command(cmd_name)\n if not cmd:\n return log.warning(f'Did not find command \"{cmd_name}\" to invoke.')\n elif not await cmd.can_run(ctx):\n return log.warning(\n f'{str(ctx.author)} tried to run the command \"{cmd_name}\"'\n )\n\n await ctx.invoke(cmd, *args, **kwargs)\n\n @command(name='aliases')\n async def aliases_command(self, ctx: Context) -> None:\n \"\"\"Show configured aliases on the bot.\"\"\"\n embed = Embed(\n title='Configured aliases',\n colour=Colour.blue()\n )\n await LinePaginator.paginate(\n (\n f\"\u2022 `{ctx.prefix}{value.name}` \"\n f\"=> `{ctx.prefix}{name[:-len('_alias')].replace('_', ' ')}`\"\n for name, value in inspect.getmembers(self)\n if isinstance(value, Command) and name.endswith('_alias')\n ),\n ctx, embed, empty=False, max_lines=20\n )\n\n @command(name=\"resources\", aliases=(\"resource\",), hidden=True)\n async def site_resources_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>site resources.\"\"\"\n await self.invoke(ctx, \"site resources\")\n\n @command(name=\"tools\", hidden=True)\n async def site_tools_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>site tools.\"\"\"\n await self.invoke(ctx, \"site tools\")\n\n @command(name=\"watch\", hidden=True)\n async def bigbrother_watch_alias(self, ctx: Context, user: Union[Member, User, proxy_user], *, reason: str) -> None:\n \"\"\"Alias for invoking <prefix>bigbrother watch [user] [reason].\"\"\"\n 
await self.invoke(ctx, \"bigbrother watch\", user, reason=reason)\n\n @command(name=\"unwatch\", hidden=True)\n async def bigbrother_unwatch_alias(self, ctx: Context, user: Union[User, proxy_user], *, reason: str) -> None:\n \"\"\"Alias for invoking <prefix>bigbrother unwatch [user] [reason].\"\"\"\n await self.invoke(ctx, \"bigbrother unwatch\", user, reason=reason)\n\n @command(name=\"home\", hidden=True)\n async def site_home_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>site home.\"\"\"\n await self.invoke(ctx, \"site home\")\n\n @command(name=\"faq\", hidden=True)\n async def site_faq_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>site faq.\"\"\"\n await self.invoke(ctx, \"site faq\")\n\n @command(name=\"rules\", aliases=(\"rule\",), hidden=True)\n async def site_rules_alias(self, ctx: Context, *rules: int) -> None:\n \"\"\"Alias for invoking <prefix>site rules.\"\"\"\n await self.invoke(ctx, \"site rules\", *rules)\n\n @command(name=\"reload\", hidden=True)\n async def extensions_reload_alias(self, ctx: Context, *extensions: Extension) -> None:\n \"\"\"Alias for invoking <prefix>extensions reload [extensions...].\"\"\"\n await self.invoke(ctx, \"extensions reload\", *extensions)\n\n @command(name=\"defon\", hidden=True)\n async def defcon_enable_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>defcon enable.\"\"\"\n await self.invoke(ctx, \"defcon enable\")\n\n @command(name=\"defoff\", hidden=True)\n async def defcon_disable_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>defcon disable.\"\"\"\n await self.invoke(ctx, \"defcon disable\")\n\n @command(name=\"exception\", hidden=True)\n async def tags_get_traceback_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>tags get traceback.\"\"\"\n await self.invoke(ctx, \"tags get\", tag_name=\"traceback\")\n\n @group(name=\"get\",\n aliases=(\"show\", \"g\"),\n hidden=True,\n invoke_without_command=True)\n async def get_group_alias(self, ctx: Context) -> None:\n \"\"\"Group for reverse aliases for commands like `tags get`, allowing for `get tags` or `get docs`.\"\"\"\n pass\n\n @get_group_alias.command(name=\"tags\", aliases=(\"tag\", \"t\"), hidden=True)\n async def tags_get_alias(\n self, ctx: Context, *, tag_name: TagNameConverter = None\n ) -> None:\n \"\"\"\n Alias for invoking <prefix>tags get [tag_name].\n\n tag_name: str - tag to be viewed.\n \"\"\"\n await self.invoke(ctx, \"tags get\", tag_name=tag_name)\n\n @get_group_alias.command(name=\"docs\", aliases=(\"doc\", \"d\"), hidden=True)\n async def docs_get_alias(\n self, ctx: Context, symbol: clean_content = None\n ) -> None:\n \"\"\"Alias for invoking <prefix>docs get [symbol].\"\"\"\n await self.invoke(ctx, \"docs get\", symbol)\n\n @command(name=\"nominate\", hidden=True)\n async def nomination_add_alias(self, ctx: Context, user: Union[Member, User, proxy_user], *, reason: str) -> None:\n \"\"\"Alias for invoking <prefix>talentpool add [user] [reason].\"\"\"\n await self.invoke(ctx, \"talentpool add\", user, reason=reason)\n\n @command(name=\"unnominate\", hidden=True)\n async def nomination_end_alias(self, ctx: Context, user: Union[User, proxy_user], *, reason: str) -> None:\n \"\"\"Alias for invoking <prefix>nomination end [user] [reason].\"\"\"\n await self.invoke(ctx, \"nomination end\", user, reason=reason)\n\n @command(name=\"nominees\", hidden=True)\n async def nominees_alias(self, ctx: Context) -> None:\n \"\"\"Alias for invoking <prefix>tp watched.\"\"\"\n 
await self.invoke(ctx, \"talentpool watched\")\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Alias cog.\"\"\"\n bot.add_cog(Alias(bot))\n", "path": "bot/cogs/alias.py"}]} | 2,479 | 241 |
gh_patches_debug_10891 | rasdani/github-patches | git_diff | openfun__marsha-2578 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
check_live_state pop from empty list
## Bug Report
**Problematic Behavior**
The management command check_live_state has a recurring error, it tries to pop a value from an empty list. This list comes from cloudwatch service : https://github.com/openfun/marsha/blob/29e1f78ed6e288f7bba3c198bb7b7179e7af4fe0/src/backend/marsha/core/management/commands/check_live_state.py#L100
**Expected behavior/code**
This error seems to occur when a live has no activity anymore. We try to compare alerts set and the clear to determine if they are still active.
**Steps to Reproduce**
1. Start a webinar
2. Once started, stop all activity
3. Run the management command `check_live_state`
4. And then the bug happens!
</issue>
<code>
[start of src/backend/marsha/core/management/commands/check_live_state.py]
1 """Check live state management command."""
2
3 from datetime import datetime, timedelta, timezone
4 import json
5 import re
6
7 from django.conf import settings
8 from django.core.management.base import BaseCommand
9
10 import boto3
11 from dateutil.parser import isoparse
12
13 from marsha.core.defaults import RUNNING, STOPPING
14 from marsha.core.models import Video
15 from marsha.core.utils.medialive_utils import stop_live_channel
16
17
18 aws_credentials = {
19 "aws_access_key_id": settings.AWS_ACCESS_KEY_ID,
20 "aws_secret_access_key": settings.AWS_SECRET_ACCESS_KEY,
21 "region_name": settings.AWS_S3_REGION_NAME,
22 }
23
24 # Configure medialive client
25 medialive_client = boto3.client("medialive", **aws_credentials)
26
27 # Configure cloudwatch logs client
28 logs_client = boto3.client("logs", **aws_credentials)
29
30
31 def parse_iso_date(iso_date):
32 """Parse an iso 8601 date and return a datetime object."""
33 return isoparse(iso_date)
34
35
36 def generate_expired_date():
37 """Generate a datetime object 25 minutes in the past."""
38 return datetime.now(tz=timezone.utc) - timedelta(minutes=25)
39
40
41 # pylint: disable=too-many-locals
42 class Command(BaseCommand):
43 """Check every live streaming running state on AWS."""
44
45 help = (
46 "Check activity on AWS for every live streaming running"
47 "and close them if there is not."
48 )
49
50 def handle(self, *args, **options):
51 """Execute management command."""
52 extract_message_pattern = (
53 r"^(?P<ingestion_time>.*)\t"
54 r"(?P<request_id>.*)\t"
55 r"(?P<level>.*)\t"
56 r"Received event:(?P<message>.*)$"
57 )
58 extract_message_regex = re.compile(extract_message_pattern)
59
60 videos = Video.objects.filter(live_state=RUNNING)
61 for video in videos:
62 # For each running live video, we query cloudwatch on the current live
63 # to search messages having detail.alert_type set to `RTMP Has No Audio/Video`.
64 # This alert tell us there is no stream and the live can be stopped if the message is
65 # older than 25 minutes.
66 self.stdout.write(f"Checking video {video.id}")
67 live_info = video.live_info
68 logs = logs_client.filter_log_events(
69 logGroupName=live_info["cloudwatch"]["logGroupName"],
70 startTime=int(int(video.live_info.get("started_at")) * 1000),
71 filterPattern=(
72 "{"
73 '($.detail-type = "MediaLive Channel Alert") && '
74 f"($.resources[0] = \"{live_info['medialive']['channel']['arn']}\") &&"
75 '($.detail.alert_type = "RTMP Has No Audio/Video")'
76 "}"
77 ),
78 )
79
80 pipelines_queue = {"0": [], "1": []}
81
82 for event in logs["events"]:
83 # All events must be parsed to extract the JSON message. When an alert is added,
84 # the `alarm_state` property value is `SET` and when the alert is removed,
85 # the `alarm_state` property value is `CLEARED`.
86 # We have 2 pipelines, a live is over when the 2 pipeline have `SET` value
87 # in `alarm_state`.
88 # Alarm state act like a list with all the event history. It means a `CLEARED`
89 # event is related to a `SET` one. So we have to look over all events, put in
90 # a list all `SET` events and remove it if a `CLEARED` event is here. At the
91 # end if we have 2 `SET` events, the live has no activity and we have to check
92 # the time of the last `SET` event. If this time is older than 25 minutes we
93 # stop the channel.
94 log = extract_message_regex.match(event["message"])
95 message = json.loads(log.group("message"))
96
97 if message["detail"]["alarm_state"] == "SET":
98 pipelines_queue[message["detail"]["pipeline"]].append(message)
99 else:
100 pipelines_queue[message["detail"]["pipeline"]].pop()
101
102 if len(pipelines_queue["0"]) == 1 and len(pipelines_queue["1"]) == 1:
103 # Both pipelines receive no stream, we have to check the more recent one
104 # and if the time is older than 25 minutes we stop the channel.
105 datetime_pipeline0 = parse_iso_date(pipelines_queue["0"][0]["time"])
106 datetime_pipeline1 = parse_iso_date(pipelines_queue["1"][0]["time"])
107 expired_date = generate_expired_date()
108
109 if (
110 datetime_pipeline0 < expired_date
111 or datetime_pipeline1 < expired_date
112 ):
113 # Stop this channel
114 self.stdout.write(
115 f"Stopping channel with id {live_info['medialive']['channel']['id']}"
116 )
117 stop_live_channel(live_info["medialive"]["channel"]["id"])
118
119 video.live_state = STOPPING
120 video.save()
121 self.stdout.write("Channel stopped")
122
[end of src/backend/marsha/core/management/commands/check_live_state.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/backend/marsha/core/management/commands/check_live_state.py b/src/backend/marsha/core/management/commands/check_live_state.py
--- a/src/backend/marsha/core/management/commands/check_live_state.py
+++ b/src/backend/marsha/core/management/commands/check_live_state.py
@@ -67,7 +67,7 @@
live_info = video.live_info
logs = logs_client.filter_log_events(
logGroupName=live_info["cloudwatch"]["logGroupName"],
- startTime=int(int(video.live_info.get("started_at")) * 1000),
+ startTime=int((int(video.live_info.get("started_at")) - 60) * 1000),
filterPattern=(
"{"
'($.detail-type = "MediaLive Channel Alert") && '
| {"golden_diff": "diff --git a/src/backend/marsha/core/management/commands/check_live_state.py b/src/backend/marsha/core/management/commands/check_live_state.py\n--- a/src/backend/marsha/core/management/commands/check_live_state.py\n+++ b/src/backend/marsha/core/management/commands/check_live_state.py\n@@ -67,7 +67,7 @@\n live_info = video.live_info\n logs = logs_client.filter_log_events(\n logGroupName=live_info[\"cloudwatch\"][\"logGroupName\"],\n- startTime=int(int(video.live_info.get(\"started_at\")) * 1000),\n+ startTime=int((int(video.live_info.get(\"started_at\")) - 60) * 1000),\n filterPattern=(\n \"{\"\n '($.detail-type = \"MediaLive Channel Alert\") && '\n", "issue": "check_live_state pop from empty list\n## Bug Report\r\n\r\n**Problematic Behavior**\r\n\r\nThe management command check_live_state has a recurring error, it tries to pop a value from an empty list. This list comes from cloudwatch service : https://github.com/openfun/marsha/blob/29e1f78ed6e288f7bba3c198bb7b7179e7af4fe0/src/backend/marsha/core/management/commands/check_live_state.py#L100\r\n\r\n**Expected behavior/code**\r\n\r\nThis error seems to occur when a live has no activity anymore. We try to compare alerts set and the clear to determine if they are still active.\r\n\r\n\r\n**Steps to Reproduce**\r\n1. Start a webinar\r\n2. Once started, stop all activity\r\n3. Run the management command `check_live_state`\r\n4. And then the bug happens!\r\n\r\n\n", "before_files": [{"content": "\"\"\"Check live state management command.\"\"\"\n\nfrom datetime import datetime, timedelta, timezone\nimport json\nimport re\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\n\nimport boto3\nfrom dateutil.parser import isoparse\n\nfrom marsha.core.defaults import RUNNING, STOPPING\nfrom marsha.core.models import Video\nfrom marsha.core.utils.medialive_utils import stop_live_channel\n\n\naws_credentials = {\n \"aws_access_key_id\": settings.AWS_ACCESS_KEY_ID,\n \"aws_secret_access_key\": settings.AWS_SECRET_ACCESS_KEY,\n \"region_name\": settings.AWS_S3_REGION_NAME,\n}\n\n# Configure medialive client\nmedialive_client = boto3.client(\"medialive\", **aws_credentials)\n\n# Configure cloudwatch logs client\nlogs_client = boto3.client(\"logs\", **aws_credentials)\n\n\ndef parse_iso_date(iso_date):\n \"\"\"Parse an iso 8601 date and return a datetime object.\"\"\"\n return isoparse(iso_date)\n\n\ndef generate_expired_date():\n \"\"\"Generate a datetime object 25 minutes in the past.\"\"\"\n return datetime.now(tz=timezone.utc) - timedelta(minutes=25)\n\n\n# pylint: disable=too-many-locals\nclass Command(BaseCommand):\n \"\"\"Check every live streaming running state on AWS.\"\"\"\n\n help = (\n \"Check activity on AWS for every live streaming running\"\n \"and close them if there is not.\"\n )\n\n def handle(self, *args, **options):\n \"\"\"Execute management command.\"\"\"\n extract_message_pattern = (\n r\"^(?P<ingestion_time>.*)\\t\"\n r\"(?P<request_id>.*)\\t\"\n r\"(?P<level>.*)\\t\"\n r\"Received event:(?P<message>.*)$\"\n )\n extract_message_regex = re.compile(extract_message_pattern)\n\n videos = Video.objects.filter(live_state=RUNNING)\n for video in videos:\n # For each running live video, we query cloudwatch on the current live\n # to search messages having detail.alert_type set to `RTMP Has No Audio/Video`.\n # This alert tell us there is no stream and the live can be stopped if the message is\n # older than 25 minutes.\n self.stdout.write(f\"Checking video {video.id}\")\n live_info = 
video.live_info\n logs = logs_client.filter_log_events(\n logGroupName=live_info[\"cloudwatch\"][\"logGroupName\"],\n startTime=int(int(video.live_info.get(\"started_at\")) * 1000),\n filterPattern=(\n \"{\"\n '($.detail-type = \"MediaLive Channel Alert\") && '\n f\"($.resources[0] = \\\"{live_info['medialive']['channel']['arn']}\\\") &&\"\n '($.detail.alert_type = \"RTMP Has No Audio/Video\")'\n \"}\"\n ),\n )\n\n pipelines_queue = {\"0\": [], \"1\": []}\n\n for event in logs[\"events\"]:\n # All events must be parsed to extract the JSON message. When an alert is added,\n # the `alarm_state` property value is `SET` and when the alert is removed,\n # the `alarm_state` property value is `CLEARED`.\n # We have 2 pipelines, a live is over when the 2 pipeline have `SET` value\n # in `alarm_state`.\n # Alarm state act like a list with all the event history. It means a `CLEARED`\n # event is related to a `SET` one. So we have to look over all events, put in\n # a list all `SET` events and remove it if a `CLEARED` event is here. At the\n # end if we have 2 `SET` events, the live has no activity and we have to check\n # the time of the last `SET` event. If this time is older than 25 minutes we\n # stop the channel.\n log = extract_message_regex.match(event[\"message\"])\n message = json.loads(log.group(\"message\"))\n\n if message[\"detail\"][\"alarm_state\"] == \"SET\":\n pipelines_queue[message[\"detail\"][\"pipeline\"]].append(message)\n else:\n pipelines_queue[message[\"detail\"][\"pipeline\"]].pop()\n\n if len(pipelines_queue[\"0\"]) == 1 and len(pipelines_queue[\"1\"]) == 1:\n # Both pipelines receive no stream, we have to check the more recent one\n # and if the time is older than 25 minutes we stop the channel.\n datetime_pipeline0 = parse_iso_date(pipelines_queue[\"0\"][0][\"time\"])\n datetime_pipeline1 = parse_iso_date(pipelines_queue[\"1\"][0][\"time\"])\n expired_date = generate_expired_date()\n\n if (\n datetime_pipeline0 < expired_date\n or datetime_pipeline1 < expired_date\n ):\n # Stop this channel\n self.stdout.write(\n f\"Stopping channel with id {live_info['medialive']['channel']['id']}\"\n )\n stop_live_channel(live_info[\"medialive\"][\"channel\"][\"id\"])\n\n video.live_state = STOPPING\n video.save()\n self.stdout.write(\"Channel stopped\")\n", "path": "src/backend/marsha/core/management/commands/check_live_state.py"}]} | 2,112 | 177 |
gh_patches_debug_23423 | rasdani/github-patches | git_diff | biolab__orange3-4389 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Transpose remembers old nonexistent data
I have a table file that I overwrite with different values (the rows and columns stay the same). When I use a certain version of the file with a certain workflow, the workflow remembers old data that was previously overwritten (and no longer exists on my disk). I could not replicate this in a new workflow made from scratch or with Orange data sets. However, it occurs even when I reopen Orange or when I copy the workflow to a new file. 
Below are the workflow, the data, and an image of what is happening. 

[transpose_remembering.zip](https://github.com/biolab/orange3/files/4102632/transpose_remembering.zip)
Orange: Last master.
</issue>
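One reading of this report, suggested by the fix shown later in this entry (it adds a warning that categorical features are encoded as numbers), is that the "old" values were actually numeric index encodings of categorical data rather than stale file contents. A plain-Python sketch of that encoding, with invented category names:

```python
# Illustration only: categorical values stored as float indices can look like
# unrelated numeric data when displayed without their category labels.
categories = ["low", "medium", "high"]   # invented labels
stored = [0.0, 2.0, 1.0]                 # what the numeric matrix holds
labels = [categories[int(v)] for v in stored]
print(stored)   # [0.0, 2.0, 1.0]
print(labels)   # ['low', 'high', 'medium']
```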
<code>
[start of Orange/widgets/data/owtranspose.py]
1 from Orange.data import Table, ContinuousVariable, StringVariable
2 from Orange.widgets.settings import (Setting, ContextSetting,
3 DomainContextHandler)
4 from Orange.widgets.utils.itemmodels import DomainModel
5 from Orange.widgets.utils.widgetpreview import WidgetPreview
6 from Orange.widgets.widget import OWWidget, Msg
7 from Orange.widgets import gui
8 from Orange.widgets.widget import Input, Output
9
10
11 class OWTranspose(OWWidget):
12 name = "Transpose"
13 description = "Transpose data table."
14 icon = "icons/Transpose.svg"
15 priority = 2000
16 keywords = []
17
18 class Inputs:
19 data = Input("Data", Table)
20
21 class Outputs:
22 data = Output("Data", Table, dynamic=False)
23
24 GENERIC, FROM_VAR = range(2)
25
26 resizing_enabled = False
27 want_main_area = False
28
29 DEFAULT_PREFIX = "Feature"
30
31 settingsHandler = DomainContextHandler()
32 feature_type = ContextSetting(GENERIC)
33 feature_name = ContextSetting("")
34 feature_names_column = ContextSetting(None)
35 auto_apply = Setting(True)
36
37 class Warning(OWWidget.Warning):
38 duplicate_names = Msg("Values are not unique.\nTo avoid multiple "
39 "features with the same name, values \nof "
40 "'{}' have been augmented with indices.")
41
42 class Error(OWWidget.Error):
43 value_error = Msg("{}")
44
45 def __init__(self):
46 super().__init__()
47 self.data = None
48
49 # self.apply is changed later, pylint: disable=unnecessary-lambda
50 box = gui.radioButtons(
51 self.controlArea, self, "feature_type", box="Feature names",
52 callback=lambda: self.apply())
53
54 button = gui.appendRadioButton(box, "Generic")
55 edit = gui.lineEdit(
56 gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,
57 "feature_name",
58 placeholderText="Type a prefix ...", toolTip="Custom feature name")
59 edit.editingFinished.connect(self._apply_editing)
60
61 self.meta_button = gui.appendRadioButton(box, "From variable:")
62 self.feature_model = DomainModel(
63 valid_types=(ContinuousVariable, StringVariable),
64 alphabetical=False)
65 self.feature_combo = gui.comboBox(
66 gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,
67 "feature_names_column", contentsLength=12,
68 callback=self._feature_combo_changed, model=self.feature_model)
69
70 self.apply_button = gui.auto_apply(self.controlArea, self, box=False, commit=self.apply)
71 self.apply_button.button.setAutoDefault(False)
72
73 self.info.set_output_summary(self.info.NoInput)
74 self.info.set_input_summary(self.info.NoInput)
75
76 self.set_controls()
77
78 def _apply_editing(self):
79 self.feature_type = self.GENERIC
80 self.feature_name = self.feature_name.strip()
81 self.apply()
82
83 def _feature_combo_changed(self):
84 self.feature_type = self.FROM_VAR
85 self.apply()
86
87 @Inputs.data
88 def set_data(self, data):
89 # Skip the context if the combo is empty: a context with
90 # feature_model == None would then match all domains
91 if self.feature_model:
92 self.closeContext()
93 self.data = data
94 if data:
95 self.info.set_input_summary(len(data))
96 else:
97 self.info.set_input_summary(self.info.NoInput)
98 self.set_controls()
99 if self.feature_model:
100 self.openContext(data)
101 self.unconditional_apply()
102
103 def set_controls(self):
104 self.feature_model.set_domain(self.data and self.data.domain)
105 self.meta_button.setEnabled(bool(self.feature_model))
106 if self.feature_model:
107 self.feature_names_column = self.feature_model[0]
108 self.feature_type = self.FROM_VAR
109 else:
110 self.feature_names_column = None
111
112 def apply(self):
113 self.clear_messages()
114 transposed = None
115 if self.data:
116 try:
117 variable = self.feature_type == self.FROM_VAR and \
118 self.feature_names_column
119 transposed = Table.transpose(
120 self.data, variable,
121 feature_name=self.feature_name or self.DEFAULT_PREFIX)
122 if variable:
123 names = self.data.get_column_view(variable)[0]
124 if len(names) != len(set(names)):
125 self.Warning.duplicate_names(variable)
126 self.info.set_output_summary(len(transposed))
127 except ValueError as e:
128 self.Error.value_error(e)
129 else:
130 self.info.set_output_summary(self.info.NoInput)
131 self.Outputs.data.send(transposed)
132
133 def send_report(self):
134 if self.feature_type == self.GENERIC:
135 names = self.feature_name or self.DEFAULT_PREFIX
136 else:
137 names = "from variable"
138 if self.feature_names_column:
139 names += " '{}'".format(self.feature_names_column.name)
140 self.report_items("", [("Feature names", names)])
141 if self.data:
142 self.report_data("Data", self.data)
143
144
145 if __name__ == "__main__": # pragma: no cover
146 WidgetPreview(OWTranspose).run(Table("iris"))
147
[end of Orange/widgets/data/owtranspose.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Orange/widgets/data/owtranspose.py b/Orange/widgets/data/owtranspose.py
--- a/Orange/widgets/data/owtranspose.py
+++ b/Orange/widgets/data/owtranspose.py
@@ -38,6 +38,7 @@
duplicate_names = Msg("Values are not unique.\nTo avoid multiple "
"features with the same name, values \nof "
"'{}' have been augmented with indices.")
+ discrete_attrs = Msg("Categorical features have been encoded as numbers.")
class Error(OWWidget.Error):
value_error = Msg("{}")
@@ -123,6 +124,8 @@
names = self.data.get_column_view(variable)[0]
if len(names) != len(set(names)):
self.Warning.duplicate_names(variable)
+ if self.data.domain.has_discrete_attributes():
+ self.Warning.discrete_attrs()
self.info.set_output_summary(len(transposed))
except ValueError as e:
self.Error.value_error(e)
| {"golden_diff": "diff --git a/Orange/widgets/data/owtranspose.py b/Orange/widgets/data/owtranspose.py\n--- a/Orange/widgets/data/owtranspose.py\n+++ b/Orange/widgets/data/owtranspose.py\n@@ -38,6 +38,7 @@\n duplicate_names = Msg(\"Values are not unique.\\nTo avoid multiple \"\n \"features with the same name, values \\nof \"\n \"'{}' have been augmented with indices.\")\n+ discrete_attrs = Msg(\"Categorical features have been encoded as numbers.\")\n \n class Error(OWWidget.Error):\n value_error = Msg(\"{}\")\n@@ -123,6 +124,8 @@\n names = self.data.get_column_view(variable)[0]\n if len(names) != len(set(names)):\n self.Warning.duplicate_names(variable)\n+ if self.data.domain.has_discrete_attributes():\n+ self.Warning.discrete_attrs()\n self.info.set_output_summary(len(transposed))\n except ValueError as e:\n self.Error.value_error(e)\n", "issue": "Transpose remembers old unexisting data\nI have a table file that I overwrite with different values (rows and columns stay the same). When using a certain version of the file with certain workflow the workflow remembers old data, that was previously overwritten (and does not exist on my disk anymore). I could not replicate this in a new workflow made from scratch or with Orange data sets. However, it occurs even when I reopen Orange or when I copy the workflow to a new file. \r\nBelow are the workflow and the data and an image of what is happening. \r\n\r\n\r\n\r\n[transpose_remembering.zip](https://github.com/biolab/orange3/files/4102632/transpose_remembering.zip)\r\n\r\nOrange: Last master.\r\n\n", "before_files": [{"content": "from Orange.data import Table, ContinuousVariable, StringVariable\nfrom Orange.widgets.settings import (Setting, ContextSetting,\n DomainContextHandler)\nfrom Orange.widgets.utils.itemmodels import DomainModel\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import OWWidget, Msg\nfrom Orange.widgets import gui\nfrom Orange.widgets.widget import Input, Output\n\n\nclass OWTranspose(OWWidget):\n name = \"Transpose\"\n description = \"Transpose data table.\"\n icon = \"icons/Transpose.svg\"\n priority = 2000\n keywords = []\n\n class Inputs:\n data = Input(\"Data\", Table)\n\n class Outputs:\n data = Output(\"Data\", Table, dynamic=False)\n\n GENERIC, FROM_VAR = range(2)\n\n resizing_enabled = False\n want_main_area = False\n\n DEFAULT_PREFIX = \"Feature\"\n\n settingsHandler = DomainContextHandler()\n feature_type = ContextSetting(GENERIC)\n feature_name = ContextSetting(\"\")\n feature_names_column = ContextSetting(None)\n auto_apply = Setting(True)\n\n class Warning(OWWidget.Warning):\n duplicate_names = Msg(\"Values are not unique.\\nTo avoid multiple \"\n \"features with the same name, values \\nof \"\n \"'{}' have been augmented with indices.\")\n\n class Error(OWWidget.Error):\n value_error = Msg(\"{}\")\n\n def __init__(self):\n super().__init__()\n self.data = None\n\n # self.apply is changed later, pylint: disable=unnecessary-lambda\n box = gui.radioButtons(\n self.controlArea, self, \"feature_type\", box=\"Feature names\",\n callback=lambda: self.apply())\n\n button = gui.appendRadioButton(box, \"Generic\")\n edit = gui.lineEdit(\n gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,\n \"feature_name\",\n placeholderText=\"Type a prefix ...\", toolTip=\"Custom feature name\")\n edit.editingFinished.connect(self._apply_editing)\n\n self.meta_button = gui.appendRadioButton(box, \"From variable:\")\n self.feature_model = DomainModel(\n 
valid_types=(ContinuousVariable, StringVariable),\n alphabetical=False)\n self.feature_combo = gui.comboBox(\n gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,\n \"feature_names_column\", contentsLength=12,\n callback=self._feature_combo_changed, model=self.feature_model)\n\n self.apply_button = gui.auto_apply(self.controlArea, self, box=False, commit=self.apply)\n self.apply_button.button.setAutoDefault(False)\n\n self.info.set_output_summary(self.info.NoInput)\n self.info.set_input_summary(self.info.NoInput)\n\n self.set_controls()\n\n def _apply_editing(self):\n self.feature_type = self.GENERIC\n self.feature_name = self.feature_name.strip()\n self.apply()\n\n def _feature_combo_changed(self):\n self.feature_type = self.FROM_VAR\n self.apply()\n\n @Inputs.data\n def set_data(self, data):\n # Skip the context if the combo is empty: a context with\n # feature_model == None would then match all domains\n if self.feature_model:\n self.closeContext()\n self.data = data\n if data:\n self.info.set_input_summary(len(data))\n else:\n self.info.set_input_summary(self.info.NoInput)\n self.set_controls()\n if self.feature_model:\n self.openContext(data)\n self.unconditional_apply()\n\n def set_controls(self):\n self.feature_model.set_domain(self.data and self.data.domain)\n self.meta_button.setEnabled(bool(self.feature_model))\n if self.feature_model:\n self.feature_names_column = self.feature_model[0]\n self.feature_type = self.FROM_VAR\n else:\n self.feature_names_column = None\n\n def apply(self):\n self.clear_messages()\n transposed = None\n if self.data:\n try:\n variable = self.feature_type == self.FROM_VAR and \\\n self.feature_names_column\n transposed = Table.transpose(\n self.data, variable,\n feature_name=self.feature_name or self.DEFAULT_PREFIX)\n if variable:\n names = self.data.get_column_view(variable)[0]\n if len(names) != len(set(names)):\n self.Warning.duplicate_names(variable)\n self.info.set_output_summary(len(transposed))\n except ValueError as e:\n self.Error.value_error(e)\n else:\n self.info.set_output_summary(self.info.NoInput)\n self.Outputs.data.send(transposed)\n\n def send_report(self):\n if self.feature_type == self.GENERIC:\n names = self.feature_name or self.DEFAULT_PREFIX\n else:\n names = \"from variable\"\n if self.feature_names_column:\n names += \" '{}'\".format(self.feature_names_column.name)\n self.report_items(\"\", [(\"Feature names\", names)])\n if self.data:\n self.report_data(\"Data\", self.data)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWTranspose).run(Table(\"iris\"))\n", "path": "Orange/widgets/data/owtranspose.py"}]} | 2,173 | 212 |
gh_patches_debug_38 | rasdani/github-patches | git_diff | ipython__ipython-5701 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move ssh out of external and into lib
This module does not belong in external - it cannot be replaced by an external system module.
</issue>
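The patch shown later in this entry fills the empty `__init__.py` with a compatibility shim: prefer `zmq.ssh` when pyzmq provides it, otherwise fall back to the bundled `tunnel` module. A generic, runnable sketch of that import pattern (the module pair below is only a stand-in chosen so the snippet runs anywhere):

```python
# "Prefer the external implementation, fall back to the bundled/stdlib one";
# simplejson/json stand in here for zmq.ssh and the bundled tunnel module.
try:
    import simplejson as json  # preferred external implementation
except ImportError:
    import json  # fallback kept for compatibility

print(json.dumps({"shim": "works"}))
```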
<code>
[start of IPython/external/ssh/__init__.py]
[end of IPython/external/ssh/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/external/ssh/__init__.py b/IPython/external/ssh/__init__.py
--- a/IPython/external/ssh/__init__.py
+++ b/IPython/external/ssh/__init__.py
@@ -0,0 +1,7 @@
+"""This is a copy of zmq.ssh"""
+
+try:
+ from zmq.ssh import *
+except ImportError:
+ from . import tunnel
+ from .tunnel import *
| {"golden_diff": "diff --git a/IPython/external/ssh/__init__.py b/IPython/external/ssh/__init__.py\n--- a/IPython/external/ssh/__init__.py\n+++ b/IPython/external/ssh/__init__.py\n@@ -0,0 +1,7 @@\n+\"\"\"This is a copy of zmq.ssh\"\"\"\n+\n+try:\n+ from zmq.ssh import *\n+except ImportError:\n+ from . import tunnel\n+ from .tunnel import *\n", "issue": "Move ssh out of external and into lib\nThis module does not belong in external - it cannot be replaced by an external system module.\n\n", "before_files": [{"content": "", "path": "IPython/external/ssh/__init__.py"}]} | 571 | 104 |
gh_patches_debug_37778 | rasdani/github-patches | git_diff | googleapis__python-bigquery-47 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BigQuery: test with all optional dependencies in Python 3.8
Blocked on:
- Apache Arrow: https://issues.apache.org/jira/browse/ARROW-6920
- fastparquet: https://github.com/dask/fastparquet/issues/468
</issue>
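Before re-enabling these sessions, a quick probe (run with the Python 3.8 interpreter under test) can confirm which of the optional extras named in `setup.py` below actually import; the package list is copied from the extras definitions, nothing else is assumed:

```python
# Availability probe for the optional dependencies listed in setup.py extras.
import importlib

for name in ("pyarrow", "fastparquet", "pandas", "tqdm"):
    try:
        module = importlib.import_module(name)
        print(f"{name}: available ({getattr(module, '__version__', 'unknown')})")
    except ImportError as exc:
        print(f"{name}: missing ({exc})")
```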
<code>
[start of noxfile.py]
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 import os
18 import shutil
19
20 import nox
21
22
23 BLACK_PATHS = ("docs", "google", "samples", "tests", "noxfile.py", "setup.py")
24
25
26 def default(session):
27 """Default unit test session.
28
29 This is intended to be run **without** an interpreter set, so
30 that the current ``python`` (on the ``PATH``) or the version of
31 Python corresponding to the ``nox`` binary the ``PATH`` can
32 run the tests.
33 """
34 # Install all test dependencies, then install local packages in-place.
35 session.install("mock", "pytest", "pytest-cov", "freezegun")
36 session.install("grpcio")
37 session.install("-e", "test_utils")
38
39 coverage_fail_under = "--cov-fail-under=97"
40
41 # fastparquet is not included in .[all] because, in general, it's redundant
42 # with pyarrow. We still want to run some unit tests with fastparquet
43 # serialization, though.
44 dev_install = ".[all,fastparquet]"
45
46 # There is no pyarrow or fastparquet wheel for Python 3.8.
47 if session.python == "3.8":
48 # Since many tests are skipped due to missing dependencies, test
49 # coverage is much lower in Python 3.8. Remove once we can test with
50 # pyarrow.
51 coverage_fail_under = "--cov-fail-under=91"
52 dev_install = ".[pandas,tqdm]"
53
54 session.install("-e", dev_install)
55
56 # IPython does not support Python 2 after version 5.x
57 if session.python == "2.7":
58 session.install("ipython==5.5")
59 else:
60 session.install("ipython")
61
62 # Run py.test against the unit tests.
63 session.run(
64 "py.test",
65 "--quiet",
66 "--cov=google.cloud.bigquery",
67 "--cov=tests.unit",
68 "--cov-append",
69 "--cov-config=.coveragerc",
70 "--cov-report=",
71 coverage_fail_under,
72 os.path.join("tests", "unit"),
73 *session.posargs,
74 )
75
76
77 @nox.session(python=["2.7", "3.5", "3.6", "3.7", "3.8"])
78 def unit(session):
79 """Run the unit test suite."""
80 default(session)
81
82
83 @nox.session(python=["2.7", "3.7"])
84 def system(session):
85 """Run the system test suite."""
86
87 # Sanity check: Only run system tests if the environment variable is set.
88 if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
89 session.skip("Credentials must be set via environment variable.")
90
91 # Use pre-release gRPC for system tests.
92 session.install("--pre", "grpcio")
93
94 # Install all test dependencies, then install local packages in place.
95 session.install("mock", "pytest", "psutil")
96 session.install("google-cloud-storage")
97 session.install("fastavro")
98 session.install("-e", "test_utils")
99 session.install("-e", ".[all]")
100
101 # IPython does not support Python 2 after version 5.x
102 if session.python == "2.7":
103 session.install("ipython==5.5")
104 else:
105 session.install("ipython")
106
107 # Run py.test against the system tests.
108 session.run(
109 "py.test", "--quiet", os.path.join("tests", "system.py"), *session.posargs
110 )
111
112
113 @nox.session(python=["2.7", "3.7"])
114 def snippets(session):
115 """Run the snippets test suite."""
116
117 # Sanity check: Only run snippets tests if the environment variable is set.
118 if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
119 session.skip("Credentials must be set via environment variable.")
120
121 # Install all test dependencies, then install local packages in place.
122 session.install("mock", "pytest")
123 session.install("google-cloud-storage")
124 session.install("grpcio")
125 session.install("-e", "test_utils")
126 session.install("-e", ".[all]")
127
128 # Run py.test against the snippets tests.
129 session.run("py.test", os.path.join("docs", "snippets.py"), *session.posargs)
130 session.run("py.test", "samples", *session.posargs)
131
132
133 @nox.session(python="3.7")
134 def cover(session):
135 """Run the final coverage report.
136
137 This outputs the coverage report aggregating coverage from the unit
138 test runs (not system test runs), and then erases coverage data.
139 """
140 session.install("coverage", "pytest-cov")
141 session.run("coverage", "report", "--show-missing", "--fail-under=100")
142 session.run("coverage", "erase")
143
144
145 @nox.session(python="3.7")
146 def lint(session):
147 """Run linters.
148
149 Returns a failure if the linters find linting errors or sufficiently
150 serious code quality issues.
151 """
152
153 session.install("black", "flake8")
154 session.install("-e", ".")
155 session.run("flake8", os.path.join("google", "cloud", "bigquery"))
156 session.run("flake8", "tests")
157 session.run("flake8", os.path.join("docs", "samples"))
158 session.run("flake8", os.path.join("docs", "snippets.py"))
159 session.run("black", "--check", *BLACK_PATHS)
160
161
162 @nox.session(python="3.7")
163 def lint_setup_py(session):
164 """Verify that setup.py is valid (including RST check)."""
165
166 session.install("docutils", "Pygments")
167 session.run("python", "setup.py", "check", "--restructuredtext", "--strict")
168
169
170 @nox.session(python="3.6")
171 def blacken(session):
172 """Run black.
173 Format code to uniform standard.
174
175 This currently uses Python 3.6 due to the automated Kokoro run of synthtool.
176 That run uses an image that doesn't have 3.6 installed. Before updating this
177 check the state of the `gcp_ubuntu_config` we use for that Kokoro run.
178 """
179 session.install("black")
180 session.run("black", *BLACK_PATHS)
181
182
183 @nox.session(python="3.7")
184 def docs(session):
185 """Build the docs."""
186
187 session.install("ipython", "recommonmark", "sphinx", "sphinx_rtd_theme")
188 session.install("google-cloud-storage")
189 session.install("-e", ".[all]")
190
191 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
192 session.run(
193 "sphinx-build",
194 "-W", # warnings as errors
195 "-T", # show full traceback on exception
196 "-N", # no colors
197 "-b",
198 "html",
199 "-d",
200 os.path.join("docs", "_build", "doctrees", ""),
201 os.path.join("docs", ""),
202 os.path.join("docs", "_build", "html", ""),
203 )
204
[end of noxfile.py]
[start of setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25 version = "1.24.0"
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 'enum34; python_version < "3.4"',
33 "google-auth >= 1.9.0, < 2.0dev",
34 "google-api-core >= 1.15.0, < 2.0dev",
35 "google-cloud-core >= 1.1.0, < 2.0dev",
36 "google-resumable-media >= 0.5.0, < 0.6dev",
37 "protobuf >= 3.6.0",
38 "six >=1.13.0,< 2.0.0dev",
39 ]
40 extras = {
41 "bqstorage": [
42 "google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev",
43 "pyarrow>=0.16.0, < 2.0dev",
44 ],
45 "pandas": ["pandas>=0.17.1"],
46 # Exclude PyArrow dependency from Windows Python 2.7.
47 'pyarrow: platform_system != "Windows" or python_version >= "3.4"': [
48 # Bad Linux release for 0.14.0.
49 # https://issues.apache.org/jira/browse/ARROW-5868
50 "pyarrow>=0.4.1, != 0.14.0"
51 ],
52 "tqdm": ["tqdm >= 4.0.0, <5.0.0dev"],
53 "fastparquet": ["fastparquet", "python-snappy"],
54 }
55
56 all_extras = []
57
58 for extra in extras:
59 if extra == "fastparquet":
60 # Skip fastparquet from "all" because it is redundant with pyarrow and
61 # creates a dependency on pre-release versions of numpy. See:
62 # https://github.com/googleapis/google-cloud-python/issues/8549
63 continue
64 all_extras.extend(extras[extra])
65
66 extras["all"] = all_extras
67
68 # Setup boilerplate below this line.
69
70 package_root = os.path.abspath(os.path.dirname(__file__))
71
72 readme_filename = os.path.join(package_root, "README.rst")
73 with io.open(readme_filename, encoding="utf-8") as readme_file:
74 readme = readme_file.read()
75
76 # Only include packages under the 'google' namespace. Do not include tests,
77 # benchmarks, etc.
78 packages = [
79 package for package in setuptools.find_packages() if package.startswith("google")
80 ]
81
82 # Determine which namespaces are needed.
83 namespaces = ["google"]
84 if "google.cloud" in packages:
85 namespaces.append("google.cloud")
86
87
88 setuptools.setup(
89 name=name,
90 version=version,
91 description=description,
92 long_description=readme,
93 author="Google LLC",
94 author_email="[email protected]",
95 license="Apache 2.0",
96 url="https://github.com/googleapis/python-bigquery",
97 classifiers=[
98 release_status,
99 "Intended Audience :: Developers",
100 "License :: OSI Approved :: Apache Software License",
101 "Programming Language :: Python",
102 "Programming Language :: Python :: 2",
103 "Programming Language :: Python :: 2.7",
104 "Programming Language :: Python :: 3",
105 "Programming Language :: Python :: 3.5",
106 "Programming Language :: Python :: 3.6",
107 "Programming Language :: Python :: 3.7",
108 "Operating System :: OS Independent",
109 "Topic :: Internet",
110 ],
111 platforms="Posix; MacOS X; Windows",
112 packages=packages,
113 namespace_packages=namespaces,
114 install_requires=dependencies,
115 extras_require=extras,
116 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
117 include_package_data=True,
118 zip_safe=False,
119 )
120
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -43,14 +43,6 @@
# serialization, though.
dev_install = ".[all,fastparquet]"
- # There is no pyarrow or fastparquet wheel for Python 3.8.
- if session.python == "3.8":
- # Since many tests are skipped due to missing dependencies, test
- # coverage is much lower in Python 3.8. Remove once we can test with
- # pyarrow.
- coverage_fail_under = "--cov-fail-under=91"
- dev_install = ".[pandas,tqdm]"
-
session.install("-e", dev_install)
# IPython does not support Python 2 after version 5.x
@@ -80,7 +72,7 @@
default(session)
[email protected](python=["2.7", "3.7"])
[email protected](python=["2.7", "3.8"])
def system(session):
"""Run the system test suite."""
@@ -110,7 +102,7 @@
)
[email protected](python=["2.7", "3.7"])
[email protected](python=["2.7", "3.8"])
def snippets(session):
"""Run the snippets test suite."""
@@ -130,7 +122,7 @@
session.run("py.test", "samples", *session.posargs)
[email protected](python="3.7")
[email protected](python="3.8")
def cover(session):
"""Run the final coverage report.
@@ -142,7 +134,7 @@
session.run("coverage", "erase")
[email protected](python="3.7")
[email protected](python="3.8")
def lint(session):
"""Run linters.
@@ -159,7 +151,7 @@
session.run("black", "--check", *BLACK_PATHS)
[email protected](python="3.7")
[email protected](python="3.8")
def lint_setup_py(session):
"""Verify that setup.py is valid (including RST check)."""
@@ -180,7 +172,7 @@
session.run("black", *BLACK_PATHS)
[email protected](python="3.7")
[email protected](python="3.8")
def docs(session):
"""Build the docs."""
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -105,6 +105,7 @@
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
"Operating System :: OS Independent",
"Topic :: Internet",
],
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -43,14 +43,6 @@\n # serialization, though.\n dev_install = \".[all,fastparquet]\"\n \n- # There is no pyarrow or fastparquet wheel for Python 3.8.\n- if session.python == \"3.8\":\n- # Since many tests are skipped due to missing dependencies, test\n- # coverage is much lower in Python 3.8. Remove once we can test with\n- # pyarrow.\n- coverage_fail_under = \"--cov-fail-under=91\"\n- dev_install = \".[pandas,tqdm]\"\n-\n session.install(\"-e\", dev_install)\n \n # IPython does not support Python 2 after version 5.x\n@@ -80,7 +72,7 @@\n default(session)\n \n \[email protected](python=[\"2.7\", \"3.7\"])\[email protected](python=[\"2.7\", \"3.8\"])\n def system(session):\n \"\"\"Run the system test suite.\"\"\"\n \n@@ -110,7 +102,7 @@\n )\n \n \[email protected](python=[\"2.7\", \"3.7\"])\[email protected](python=[\"2.7\", \"3.8\"])\n def snippets(session):\n \"\"\"Run the snippets test suite.\"\"\"\n \n@@ -130,7 +122,7 @@\n session.run(\"py.test\", \"samples\", *session.posargs)\n \n \[email protected](python=\"3.7\")\[email protected](python=\"3.8\")\n def cover(session):\n \"\"\"Run the final coverage report.\n \n@@ -142,7 +134,7 @@\n session.run(\"coverage\", \"erase\")\n \n \[email protected](python=\"3.7\")\[email protected](python=\"3.8\")\n def lint(session):\n \"\"\"Run linters.\n \n@@ -159,7 +151,7 @@\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n \n \[email protected](python=\"3.7\")\[email protected](python=\"3.8\")\n def lint_setup_py(session):\n \"\"\"Verify that setup.py is valid (including RST check).\"\"\"\n \n@@ -180,7 +172,7 @@\n session.run(\"black\", *BLACK_PATHS)\n \n \[email protected](python=\"3.7\")\[email protected](python=\"3.8\")\n def docs(session):\n \"\"\"Build the docs.\"\"\"\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -105,6 +105,7 @@\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n+ \"Programming Language :: Python :: 3.8\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n", "issue": "BigQuery: test with all optional dependencies in Python 3.8\nBlocked on:\r\n\r\n- Apache Arrow: https://issues.apache.org/jira/browse/ARROW-6920\r\n- fastparquet: https://github.com/dask/fastparquet/issues/468\n", "before_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport os\nimport shutil\n\nimport nox\n\n\nBLACK_PATHS = (\"docs\", \"google\", \"samples\", \"tests\", \"noxfile.py\", \"setup.py\")\n\n\ndef default(session):\n \"\"\"Default unit test session.\n\n This is intended to be run **without** an interpreter set, so\n that the current ``python`` (on the ``PATH``) or the version of\n Python corresponding to the ``nox`` binary the ``PATH`` can\n run the tests.\n \"\"\"\n # Install all test dependencies, then install local packages 
in-place.\n session.install(\"mock\", \"pytest\", \"pytest-cov\", \"freezegun\")\n session.install(\"grpcio\")\n session.install(\"-e\", \"test_utils\")\n\n coverage_fail_under = \"--cov-fail-under=97\"\n\n # fastparquet is not included in .[all] because, in general, it's redundant\n # with pyarrow. We still want to run some unit tests with fastparquet\n # serialization, though.\n dev_install = \".[all,fastparquet]\"\n\n # There is no pyarrow or fastparquet wheel for Python 3.8.\n if session.python == \"3.8\":\n # Since many tests are skipped due to missing dependencies, test\n # coverage is much lower in Python 3.8. Remove once we can test with\n # pyarrow.\n coverage_fail_under = \"--cov-fail-under=91\"\n dev_install = \".[pandas,tqdm]\"\n\n session.install(\"-e\", dev_install)\n\n # IPython does not support Python 2 after version 5.x\n if session.python == \"2.7\":\n session.install(\"ipython==5.5\")\n else:\n session.install(\"ipython\")\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=google.cloud.bigquery\",\n \"--cov=tests.unit\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n coverage_fail_under,\n os.path.join(\"tests\", \"unit\"),\n *session.posargs,\n )\n\n\[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\"])\ndef unit(session):\n \"\"\"Run the unit test suite.\"\"\"\n default(session)\n\n\[email protected](python=[\"2.7\", \"3.7\"])\ndef system(session):\n \"\"\"Run the system test suite.\"\"\"\n\n # Sanity check: Only run system tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n # Use pre-release gRPC for system tests.\n session.install(\"--pre\", \"grpcio\")\n\n # Install all test dependencies, then install local packages in place.\n session.install(\"mock\", \"pytest\", \"psutil\")\n session.install(\"google-cloud-storage\")\n session.install(\"fastavro\")\n session.install(\"-e\", \"test_utils\")\n session.install(\"-e\", \".[all]\")\n\n # IPython does not support Python 2 after version 5.x\n if session.python == \"2.7\":\n session.install(\"ipython==5.5\")\n else:\n session.install(\"ipython\")\n\n # Run py.test against the system tests.\n session.run(\n \"py.test\", \"--quiet\", os.path.join(\"tests\", \"system.py\"), *session.posargs\n )\n\n\[email protected](python=[\"2.7\", \"3.7\"])\ndef snippets(session):\n \"\"\"Run the snippets test suite.\"\"\"\n\n # Sanity check: Only run snippets tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n # Install all test dependencies, then install local packages in place.\n session.install(\"mock\", \"pytest\")\n session.install(\"google-cloud-storage\")\n session.install(\"grpcio\")\n session.install(\"-e\", \"test_utils\")\n session.install(\"-e\", \".[all]\")\n\n # Run py.test against the snippets tests.\n session.run(\"py.test\", os.path.join(\"docs\", \"snippets.py\"), *session.posargs)\n session.run(\"py.test\", \"samples\", *session.posargs)\n\n\[email protected](python=\"3.7\")\ndef cover(session):\n \"\"\"Run the final coverage report.\n\n This outputs the coverage report aggregating coverage from the unit\n test runs (not system test runs), and then erases coverage data.\n \"\"\"\n session.install(\"coverage\", \"pytest-cov\")\n session.run(\"coverage\", \"report\", 
\"--show-missing\", \"--fail-under=100\")\n session.run(\"coverage\", \"erase\")\n\n\[email protected](python=\"3.7\")\ndef lint(session):\n \"\"\"Run linters.\n\n Returns a failure if the linters find linting errors or sufficiently\n serious code quality issues.\n \"\"\"\n\n session.install(\"black\", \"flake8\")\n session.install(\"-e\", \".\")\n session.run(\"flake8\", os.path.join(\"google\", \"cloud\", \"bigquery\"))\n session.run(\"flake8\", \"tests\")\n session.run(\"flake8\", os.path.join(\"docs\", \"samples\"))\n session.run(\"flake8\", os.path.join(\"docs\", \"snippets.py\"))\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n\n\[email protected](python=\"3.7\")\ndef lint_setup_py(session):\n \"\"\"Verify that setup.py is valid (including RST check).\"\"\"\n\n session.install(\"docutils\", \"Pygments\")\n session.run(\"python\", \"setup.py\", \"check\", \"--restructuredtext\", \"--strict\")\n\n\[email protected](python=\"3.6\")\ndef blacken(session):\n \"\"\"Run black.\n Format code to uniform standard.\n\n This currently uses Python 3.6 due to the automated Kokoro run of synthtool.\n That run uses an image that doesn't have 3.6 installed. Before updating this\n check the state of the `gcp_ubuntu_config` we use for that Kokoro run.\n \"\"\"\n session.install(\"black\")\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=\"3.7\")\ndef docs(session):\n \"\"\"Build the docs.\"\"\"\n\n session.install(\"ipython\", \"recommonmark\", \"sphinx\", \"sphinx_rtd_theme\")\n session.install(\"google-cloud-storage\")\n session.install(\"-e\", \".[all]\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-W\", # warnings as errors\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n", "path": "noxfile.py"}, {"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.24.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-auth >= 1.9.0, < 2.0dev\",\n \"google-api-core >= 1.15.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 0.6dev\",\n \"protobuf >= 3.6.0\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n 'pyarrow: platform_system != \"Windows\" or 
python_version >= \"3.4\"': [\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.4.1, != 0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\"fastparquet\", \"python-snappy\"],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra == \"fastparquet\":\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 4,084 | 665 |
gh_patches_debug_4934 | rasdani/github-patches | git_diff | google__mobly-472 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Log the contents of the config file at the debug level early
This helps in debugging a remote user's malformed JSON/YAML or configs that don't adhere to the schema.
</issue>
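As an illustration only (not necessarily the change that was merged), the request could be met by reading the raw file once, logging it at DEBUG level, and then handing the same text to the YAML parser, for example inside `_load_config_file` from the file below:

```python
import io
import logging

import yaml


def _load_config_file(path):
    """Hypothetical variant of the loader shown below: log the raw contents
    at DEBUG level before parsing, so malformed YAML/JSON from a remote
    user's config is visible in the logs."""
    with io.open(path, 'r', encoding='utf-8') as f:
        content = f.read()
    logging.debug('Test config contents: %s', content)
    # The file below calls yaml.load(f); safe_load is used here only to keep
    # the sketch self-contained and warning-free.
    return yaml.safe_load(content)
```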
<code>
[start of mobly/config_parser.py]
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from builtins import str
16
17 import copy
18 import io
19 import os
20 import yaml
21
22 from mobly import keys
23 from mobly import utils
24
25 # An environment variable defining the base location for Mobly logs.
26 ENV_MOBLY_LOGPATH = 'MOBLY_LOGPATH'
27 _DEFAULT_LOG_PATH = '/tmp/logs/mobly/'
28
29
30 class MoblyConfigError(Exception):
31 """Raised when there is a problem in test configuration file."""
32
33
34 def _validate_test_config(test_config):
35 """Validates the raw configuration loaded from the config file.
36
37 Making sure the required key 'TestBeds' is present.
38 """
39 required_key = keys.Config.key_testbed.value
40 if required_key not in test_config:
41 raise MoblyConfigError(
42 'Required key %s missing in test config.' % required_key)
43
44
45 def _validate_testbed_name(name):
46 """Validates the name of a test bed.
47
48 Since test bed names are used as part of the test run id, it needs to meet
49 certain requirements.
50
51 Args:
52 name: The test bed's name specified in config file.
53
54 Raises:
55 MoblyConfigError: The name does not meet any criteria.
56 """
57 if not name:
58 raise MoblyConfigError("Test bed names can't be empty.")
59 name = str(name)
60 for char in name:
61 if char not in utils.valid_filename_chars:
62 raise MoblyConfigError(
63 'Char "%s" is not allowed in test bed names.' % char)
64
65
66 def _validate_testbed_configs(testbed_configs):
67 """Validates the testbed configurations.
68
69 Args:
70 testbed_configs: A list of testbed configuration dicts.
71
72 Raises:
73 MoblyConfigError: Some parts of the configuration is invalid.
74 """
75 seen_names = set()
76 # Cross checks testbed configs for resource conflicts.
77 for config in testbed_configs:
78 # Check for conflicts between multiple concurrent testbed configs.
79 # No need to call it if there's only one testbed config.
80 name = config[keys.Config.key_testbed_name.value]
81 _validate_testbed_name(name)
82 # Test bed names should be unique.
83 if name in seen_names:
84 raise MoblyConfigError('Duplicate testbed name %s found.' % name)
85 seen_names.add(name)
86
87
88 def load_test_config_file(test_config_path, tb_filters=None):
89     """Processes the test configuration file provided by the user.
90
91 Loads the configuration file into a dict, unpacks each testbed
92 config into its own dict, and validate the configuration in the
93 process.
94
95 Args:
96 test_config_path: Path to the test configuration file.
97 tb_filters: A subset of test bed names to be pulled from the config
98 file. If None, then all test beds will be selected.
99
100 Returns:
101 A list of test configuration dicts to be passed to
102 test_runner.TestRunner.
103 """
104 configs = _load_config_file(test_config_path)
105 if tb_filters:
106 tbs = []
107 for tb in configs[keys.Config.key_testbed.value]:
108 if tb[keys.Config.key_testbed_name.value] in tb_filters:
109 tbs.append(tb)
110 if len(tbs) != len(tb_filters):
111 raise MoblyConfigError(
112 'Expect to find %d test bed configs, found %d. Check if'
113 ' you have the correct test bed names.' % (len(tb_filters),
114 len(tbs)))
115 configs[keys.Config.key_testbed.value] = tbs
116 mobly_params = configs.get(keys.Config.key_mobly_params.value, {})
117 # Decide log path.
118 log_path = mobly_params.get(keys.Config.key_log_path.value,
119 _DEFAULT_LOG_PATH)
120 if ENV_MOBLY_LOGPATH in os.environ:
121 log_path = os.environ[ENV_MOBLY_LOGPATH]
122 log_path = utils.abs_path(log_path)
123 # Validate configs
124 _validate_test_config(configs)
125 _validate_testbed_configs(configs[keys.Config.key_testbed.value])
126 # Transform config dict from user-facing key mapping to internal config object.
127 test_configs = []
128 for original_bed_config in configs[keys.Config.key_testbed.value]:
129 test_run_config = TestRunConfig()
130 test_run_config.test_bed_name = original_bed_config[
131 keys.Config.key_testbed_name.value]
132 test_run_config.log_path = log_path
133 test_run_config.controller_configs = original_bed_config.get(
134 keys.Config.key_testbed_controllers.value, {})
135 test_run_config.user_params = original_bed_config.get(
136 keys.Config.key_testbed_test_params.value, {})
137 test_configs.append(test_run_config)
138 return test_configs
139
140
141 def _load_config_file(path):
142 """Loads a test config file.
143
144 The test config file has to be in YAML format.
145
146 Args:
147 path: A string that is the full path to the config file, including the
148 file name.
149
150 Returns:
151 A dict that represents info in the config file.
152 """
153 with io.open(utils.abs_path(path), 'r', encoding='utf-8') as f:
154 conf = yaml.load(f)
155 return conf
156
157
158 class TestRunConfig(object):
159 """The data class that holds all the information needed for a test run.
160
161 Attributes:
162 log_path: string, specifies the root directory for all logs written by
163 a test run.
164 test_bed_name: string, the name of the test bed used by a test run.
165 controller_configs: dict, configs used for instantiating controller
166 objects.
167 user_params: dict, all the parameters to be consumed by the test logic.
168 register_controller: func, used by test classes to register controller
169 modules.
170 summary_writer: records.TestSummaryWriter, used to write elements to
171 the test result summary file.
172 test_class_name_suffix: string, suffix to append to the class name for
173 reporting. This is used for differentiating the same class
174 executed with different parameters in a suite.
175 """
176
177 def __init__(self):
178 self.log_path = None
179 self.test_bed_name = None
180 self.controller_configs = None
181 self.user_params = None
182 self.register_controller = None
183 self.summary_writer = None
184 self.test_class_name_suffix = None
185
186 def copy(self):
187 """Returns a deep copy of the current config.
188 """
189 return copy.deepcopy(self)
190
191 def __str__(self):
192 return str(self.__dict__)
193
[end of mobly/config_parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mobly/config_parser.py b/mobly/config_parser.py
--- a/mobly/config_parser.py
+++ b/mobly/config_parser.py
@@ -16,6 +16,7 @@
import copy
import io
+import pprint
import os
import yaml
@@ -189,4 +190,7 @@
return copy.deepcopy(self)
def __str__(self):
- return str(self.__dict__)
+ content = dict(self.__dict__)
+ content.pop('summary_writer')
+ content.pop('register_controller')
+ return pprint.pformat(content)
| {"golden_diff": "diff --git a/mobly/config_parser.py b/mobly/config_parser.py\n--- a/mobly/config_parser.py\n+++ b/mobly/config_parser.py\n@@ -16,6 +16,7 @@\n \n import copy\n import io\n+import pprint\n import os\n import yaml\n \n@@ -189,4 +190,7 @@\n return copy.deepcopy(self)\n \n def __str__(self):\n- return str(self.__dict__)\n+ content = dict(self.__dict__)\n+ content.pop('summary_writer')\n+ content.pop('register_controller')\n+ return pprint.pformat(content)\n", "issue": "Log the contents of config file at the debug level early\nThis helps in debugging remote user's malformed json/yaml or configs that don't adhere to schema.\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom builtins import str\n\nimport copy\nimport io\nimport os\nimport yaml\n\nfrom mobly import keys\nfrom mobly import utils\n\n# An environment variable defining the base location for Mobly logs.\nENV_MOBLY_LOGPATH = 'MOBLY_LOGPATH'\n_DEFAULT_LOG_PATH = '/tmp/logs/mobly/'\n\n\nclass MoblyConfigError(Exception):\n \"\"\"Raised when there is a problem in test configuration file.\"\"\"\n\n\ndef _validate_test_config(test_config):\n \"\"\"Validates the raw configuration loaded from the config file.\n\n Making sure the required key 'TestBeds' is present.\n \"\"\"\n required_key = keys.Config.key_testbed.value\n if required_key not in test_config:\n raise MoblyConfigError(\n 'Required key %s missing in test config.' % required_key)\n\n\ndef _validate_testbed_name(name):\n \"\"\"Validates the name of a test bed.\n\n Since test bed names are used as part of the test run id, it needs to meet\n certain requirements.\n\n Args:\n name: The test bed's name specified in config file.\n\n Raises:\n MoblyConfigError: The name does not meet any criteria.\n \"\"\"\n if not name:\n raise MoblyConfigError(\"Test bed names can't be empty.\")\n name = str(name)\n for char in name:\n if char not in utils.valid_filename_chars:\n raise MoblyConfigError(\n 'Char \"%s\" is not allowed in test bed names.' % char)\n\n\ndef _validate_testbed_configs(testbed_configs):\n \"\"\"Validates the testbed configurations.\n\n Args:\n testbed_configs: A list of testbed configuration dicts.\n\n Raises:\n MoblyConfigError: Some parts of the configuration is invalid.\n \"\"\"\n seen_names = set()\n # Cross checks testbed configs for resource conflicts.\n for config in testbed_configs:\n # Check for conflicts between multiple concurrent testbed configs.\n # No need to call it if there's only one testbed config.\n name = config[keys.Config.key_testbed_name.value]\n _validate_testbed_name(name)\n # Test bed names should be unique.\n if name in seen_names:\n raise MoblyConfigError('Duplicate testbed name %s found.' 
% name)\n seen_names.add(name)\n\n\ndef load_test_config_file(test_config_path, tb_filters=None):\n \"\"\"Processes the test configuration file provied by user.\n\n Loads the configuration file into a dict, unpacks each testbed\n config into its own dict, and validate the configuration in the\n process.\n\n Args:\n test_config_path: Path to the test configuration file.\n tb_filters: A subset of test bed names to be pulled from the config\n file. If None, then all test beds will be selected.\n\n Returns:\n A list of test configuration dicts to be passed to\n test_runner.TestRunner.\n \"\"\"\n configs = _load_config_file(test_config_path)\n if tb_filters:\n tbs = []\n for tb in configs[keys.Config.key_testbed.value]:\n if tb[keys.Config.key_testbed_name.value] in tb_filters:\n tbs.append(tb)\n if len(tbs) != len(tb_filters):\n raise MoblyConfigError(\n 'Expect to find %d test bed configs, found %d. Check if'\n ' you have the correct test bed names.' % (len(tb_filters),\n len(tbs)))\n configs[keys.Config.key_testbed.value] = tbs\n mobly_params = configs.get(keys.Config.key_mobly_params.value, {})\n # Decide log path.\n log_path = mobly_params.get(keys.Config.key_log_path.value,\n _DEFAULT_LOG_PATH)\n if ENV_MOBLY_LOGPATH in os.environ:\n log_path = os.environ[ENV_MOBLY_LOGPATH]\n log_path = utils.abs_path(log_path)\n # Validate configs\n _validate_test_config(configs)\n _validate_testbed_configs(configs[keys.Config.key_testbed.value])\n # Transform config dict from user-facing key mapping to internal config object.\n test_configs = []\n for original_bed_config in configs[keys.Config.key_testbed.value]:\n test_run_config = TestRunConfig()\n test_run_config.test_bed_name = original_bed_config[\n keys.Config.key_testbed_name.value]\n test_run_config.log_path = log_path\n test_run_config.controller_configs = original_bed_config.get(\n keys.Config.key_testbed_controllers.value, {})\n test_run_config.user_params = original_bed_config.get(\n keys.Config.key_testbed_test_params.value, {})\n test_configs.append(test_run_config)\n return test_configs\n\n\ndef _load_config_file(path):\n \"\"\"Loads a test config file.\n\n The test config file has to be in YAML format.\n\n Args:\n path: A string that is the full path to the config file, including the\n file name.\n\n Returns:\n A dict that represents info in the config file.\n \"\"\"\n with io.open(utils.abs_path(path), 'r', encoding='utf-8') as f:\n conf = yaml.load(f)\n return conf\n\n\nclass TestRunConfig(object):\n \"\"\"The data class that holds all the information needed for a test run.\n\n Attributes:\n log_path: string, specifies the root directory for all logs written by\n a test run.\n test_bed_name: string, the name of the test bed used by a test run.\n controller_configs: dict, configs used for instantiating controller\n objects.\n user_params: dict, all the parameters to be consumed by the test logic.\n register_controller: func, used by test classes to register controller\n modules.\n summary_writer: records.TestSummaryWriter, used to write elements to\n the test result summary file.\n test_class_name_suffix: string, suffix to append to the class name for\n reporting. 
This is used for differentiating the same class\n executed with different parameters in a suite.\n \"\"\"\n\n def __init__(self):\n self.log_path = None\n self.test_bed_name = None\n self.controller_configs = None\n self.user_params = None\n self.register_controller = None\n self.summary_writer = None\n self.test_class_name_suffix = None\n\n def copy(self):\n \"\"\"Returns a deep copy of the current config.\n \"\"\"\n return copy.deepcopy(self)\n\n def __str__(self):\n return str(self.__dict__)\n", "path": "mobly/config_parser.py"}]} | 2,546 | 134 |
gh_patches_debug_39339 | rasdani/github-patches | git_diff | wagtail__wagtail-10545 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace `total_ordering` usage with comparison functions implementation
### Is your proposal related to a problem?
We have two instances of `total_ordering` usage within the codebase:
https://github.com/wagtail/wagtail/blob/cd5200c8e1ac0d7299fd9c398b2b994606b3c7d2/wagtail/admin/search.py#L12-L13
https://github.com/wagtail/wagtail/blob/cd5200c8e1ac0d7299fd9c398b2b994606b3c7d2/wagtail/admin/widgets/button.py#L11-L12
Even though it's convenient, `total_ordering` is known to be slow. According to [Python's docs](https://docs.python.org/3/library/functools.html#functools.total_ordering):
> **Note**
> While this decorator makes it easy to create well behaved totally ordered types, it does come at the cost of slower execution and more complex stack traces for the derived comparison methods. If performance benchmarking indicates this is a bottleneck for a given application, implementing all six rich comparison methods instead is likely to provide an easy speed boost.
Django recently removed their usage of `total_ordering` in https://github.com/django/django/pull/16958/commits/ee36e101e8f8c0acde4bb148b738ab7034e902a0 (probably not all usages, I haven't checked).
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
Replace `total_ordering` with implementations of `__eq__()`, `__ne__()`, `__lt__()`, `__le__()`, `__gt__()`, and `__ge__()`.
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
Keep using `total_ordering`
### Additional context
I found this while fixing an incorrect import of `total_ordering` in #10525.
</issue>
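To make the proposed solution concrete, here is a hedged sketch of a class that spells out all six rich comparisons over an `(order, label)` key instead of relying on `@total_ordering`. The class name and the `_key` helper are illustrative only; the repository's real change is the golden diff below. Note that defining `__eq__` without `__hash__` makes instances unhashable, which the existing classes already accept.
```python
# Illustrative only: explicit rich comparisons keyed on (order, label),
# mirroring what SearchArea and Button need without functools.total_ordering.
class OrderedByLabel:
    def __init__(self, label, order=1000):
        self.label = label
        self.order = order

    def _key(self):
        return (self.order, self.label)

    def __eq__(self, other):
        if not isinstance(other, OrderedByLabel):
            return NotImplemented
        return self._key() == other._key()

    def __ne__(self, other):
        if not isinstance(other, OrderedByLabel):
            return NotImplemented
        return self._key() != other._key()

    def __lt__(self, other):
        if not isinstance(other, OrderedByLabel):
            return NotImplemented
        return self._key() < other._key()

    def __le__(self, other):
        if not isinstance(other, OrderedByLabel):
            return NotImplemented
        return self._key() <= other._key()

    def __gt__(self, other):
        if not isinstance(other, OrderedByLabel):
            return NotImplemented
        return self._key() > other._key()

    def __ge__(self, other):
        if not isinstance(other, OrderedByLabel):
            return NotImplemented
        return self._key() >= other._key()
```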
<code>
[start of wagtail/admin/widgets/button.py]
1 from functools import total_ordering
2
3 from django.forms.utils import flatatt
4 from django.template.loader import render_to_string
5 from django.utils.functional import cached_property
6 from django.utils.html import format_html
7
8 from wagtail import hooks
9
10
11 @total_ordering
12 class Button:
13 show = True
14
15 def __init__(
16 self, label, url, classes=set(), icon_name=None, attrs={}, priority=1000
17 ):
18 self.label = label
19 self.url = url
20 self.classes = classes
21 self.icon_name = icon_name
22 self.attrs = attrs.copy()
23 self.priority = priority
24
25 def render(self):
26 attrs = {
27 "href": self.url,
28 "class": " ".join(sorted(self.classes)),
29 "title": self.label,
30 }
31 attrs.update(self.attrs)
32 return format_html("<a{}>{}</a>", flatatt(attrs), self.label)
33
34 def __str__(self):
35 return self.render()
36
37 def __repr__(self):
38 return f"<Button: {self.label}>"
39
40 def __lt__(self, other):
41 if not isinstance(other, Button):
42 return NotImplemented
43 return (self.priority, self.label) < (other.priority, other.label)
44
45 def __eq__(self, other):
46 if not isinstance(other, Button):
47 return NotImplemented
48 return (
49 self.label == other.label
50 and self.url == other.url
51 and self.classes == other.classes
52 and self.attrs == other.attrs
53 and self.priority == other.priority
54 )
55
56
57 # Base class for all listing buttons
58 # This is also used by SnippetListingButton defined in wagtail.snippets.widgets
59 class ListingButton(Button):
60 def __init__(self, label, url, classes=set(), **kwargs):
61 classes = {"button", "button-small", "button-secondary"} | set(classes)
62 super().__init__(label, url, classes=classes, **kwargs)
63
64
65 class PageListingButton(ListingButton):
66 pass
67
68
69 class BaseDropdownMenuButton(Button):
70 def __init__(self, *args, **kwargs):
71 super().__init__(*args, url=None, **kwargs)
72
73 @cached_property
74 def dropdown_buttons(self):
75 raise NotImplementedError
76
77 def get_context_data(self):
78 return {
79 "buttons": self.dropdown_buttons,
80 "label": self.label,
81 "title": self.attrs.get("title"),
82 "classes": self.classes,
83 }
84
85 def render(self):
86 return render_to_string(self.template_name, self.get_context_data())
87
88
89 class ButtonWithDropdown(BaseDropdownMenuButton):
90 template_name = "wagtailadmin/pages/listing/_button_with_dropdown.html"
91
92 def __init__(self, *args, **kwargs):
93 self.button_classes = kwargs.pop("button_classes", set())
94 self.buttons_data = kwargs.pop("buttons_data", [])
95 super().__init__(*args, **kwargs)
96
97 def get_context_data(self):
98 context = super().get_context_data()
99 context["button_classes"] = self.button_classes
100 return context
101
102 @cached_property
103 def dropdown_buttons(self):
104 return [Button(**button) for button in self.buttons_data]
105
106
107 class ButtonWithDropdownFromHook(BaseDropdownMenuButton):
108 template_name = "wagtailadmin/pages/listing/_button_with_dropdown.html"
109
110 def __init__(self, label, hook_name, page, page_perms, next_url=None, **kwargs):
111 self.hook_name = hook_name
112 self.page = page
113 self.page_perms = page_perms
114 self.next_url = next_url
115
116 super().__init__(label, **kwargs)
117
118 @property
119 def show(self):
120 return bool(self.dropdown_buttons)
121
122 @cached_property
123 def dropdown_buttons(self):
124 button_hooks = hooks.get_hooks(self.hook_name)
125
126 buttons = []
127 for hook in button_hooks:
128 buttons.extend(hook(self.page, self.page_perms, self.next_url))
129
130 buttons.sort()
131 return buttons
132
[end of wagtail/admin/widgets/button.py]
[start of wagtail/admin/search.py]
1 from functools import total_ordering
2
3 from django.forms import Media, MediaDefiningClass
4 from django.forms.utils import flatatt
5 from django.template.loader import render_to_string
6 from django.utils.functional import cached_property
7 from django.utils.safestring import mark_safe
8 from django.utils.text import slugify
9
10 from wagtail import hooks
11 from wagtail.admin.forms.search import SearchForm
12
13
14 @total_ordering
15 class SearchArea(metaclass=MediaDefiningClass):
16 template = "wagtailadmin/shared/search_area.html"
17
18 def __init__(
19 self, label, url, name=None, classnames="", icon_name="", attrs=None, order=1000
20 ):
21 self.label = label
22 self.url = url
23 self.classnames = classnames
24 self.icon_name = icon_name
25 self.name = name or slugify(str(label))
26 self.order = order
27
28 if attrs:
29 self.attr_string = flatatt(attrs)
30 else:
31 self.attr_string = ""
32
33 def __lt__(self, other):
34 return (self.order, self.label) < (other.order, other.label)
35
36 def __eq__(self, other):
37 return (self.order, self.label) == (other.order, other.label)
38
39 def is_shown(self, request):
40 """
41 Whether this search area should be shown for the given request; permission
42 checks etc should go here. By default, search areas are shown all the time
43 """
44 return True
45
46 def is_active(self, request, current=None):
47 if current is None:
48 return request.path.startswith(self.url)
49 else:
50 return self.name == current
51
52 def render_html(self, request, query, current=None):
53 return render_to_string(
54 self.template,
55 {
56 "name": self.name,
57 "url": self.url,
58 "classnames": self.classnames,
59 "icon_name": self.icon_name,
60 "attr_string": self.attr_string,
61 "label": self.label,
62 "active": self.is_active(request, current),
63 "query_string": query,
64 },
65 request=request,
66 )
67
68
69 class Search:
70 def __init__(self, register_hook_name, construct_hook_name=None):
71 self.register_hook_name = register_hook_name
72 self.construct_hook_name = construct_hook_name
73
74 @cached_property
75 def registered_search_areas(self):
76 return sorted([fn() for fn in hooks.get_hooks(self.register_hook_name)])
77
78 def search_items_for_request(self, request):
79 return [item for item in self.registered_search_areas if item.is_shown(request)]
80
81 def active_search(self, request, current=None):
82 return [
83 item
84 for item in self.search_items_for_request(request)
85 if item.is_active(request, current)
86 ]
87
88 @property
89 def media(self):
90 media = Media()
91 for item in self.registered_search_areas:
92 media += item.media
93 return media
94
95 def render_html(self, request, current=None):
96 search_areas = self.search_items_for_request(request)
97
98 # Get query parameter
99 form = SearchForm(request.GET)
100 query = ""
101 if form.is_valid():
102 query = form.cleaned_data["q"]
103
104 # provide a hook for modifying the search area, if construct_hook_name has been set
105 if self.construct_hook_name:
106 for fn in hooks.get_hooks(self.construct_hook_name):
107 fn(request, search_areas)
108
109 rendered_search_areas = []
110 for item in search_areas:
111 rendered_search_areas.append(item.render_html(request, query, current))
112
113 return mark_safe("".join(rendered_search_areas))
114
115
116 admin_search_areas = Search(
117 register_hook_name="register_admin_search_area",
118 construct_hook_name="construct_search",
119 )
120
[end of wagtail/admin/search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/admin/search.py b/wagtail/admin/search.py
--- a/wagtail/admin/search.py
+++ b/wagtail/admin/search.py
@@ -1,5 +1,3 @@
-from functools import total_ordering
-
from django.forms import Media, MediaDefiningClass
from django.forms.utils import flatatt
from django.template.loader import render_to_string
@@ -11,7 +9,6 @@
from wagtail.admin.forms.search import SearchForm
-@total_ordering
class SearchArea(metaclass=MediaDefiningClass):
template = "wagtailadmin/shared/search_area.html"
@@ -31,9 +28,28 @@
self.attr_string = ""
def __lt__(self, other):
+ if not isinstance(other, SearchArea):
+ return NotImplemented
return (self.order, self.label) < (other.order, other.label)
+ def __le__(self, other):
+ if not isinstance(other, SearchArea):
+ return NotImplemented
+ return (self.order, self.label) <= (other.order, other.label)
+
+ def __gt__(self, other):
+ if not isinstance(other, SearchArea):
+ return NotImplemented
+ return (self.order, self.label) > (other.order, other.label)
+
+ def __ge__(self, other):
+ if not isinstance(other, SearchArea):
+ return NotImplemented
+ return (self.order, self.label) >= (other.order, other.label)
+
def __eq__(self, other):
+ if not isinstance(other, SearchArea):
+ return NotImplemented
return (self.order, self.label) == (other.order, other.label)
def is_shown(self, request):
diff --git a/wagtail/admin/widgets/button.py b/wagtail/admin/widgets/button.py
--- a/wagtail/admin/widgets/button.py
+++ b/wagtail/admin/widgets/button.py
@@ -1,5 +1,3 @@
-from functools import total_ordering
-
from django.forms.utils import flatatt
from django.template.loader import render_to_string
from django.utils.functional import cached_property
@@ -8,7 +6,6 @@
from wagtail import hooks
-@total_ordering
class Button:
show = True
@@ -42,6 +39,21 @@
return NotImplemented
return (self.priority, self.label) < (other.priority, other.label)
+ def __le__(self, other):
+ if not isinstance(other, Button):
+ return NotImplemented
+ return (self.priority, self.label) <= (other.priority, other.label)
+
+ def __gt__(self, other):
+ if not isinstance(other, Button):
+ return NotImplemented
+ return (self.priority, self.label) > (other.priority, other.label)
+
+ def __ge__(self, other):
+ if not isinstance(other, Button):
+ return NotImplemented
+ return (self.priority, self.label) >= (other.priority, other.label)
+
def __eq__(self, other):
if not isinstance(other, Button):
return NotImplemented
| {"golden_diff": "diff --git a/wagtail/admin/search.py b/wagtail/admin/search.py\n--- a/wagtail/admin/search.py\n+++ b/wagtail/admin/search.py\n@@ -1,5 +1,3 @@\n-from functools import total_ordering\n-\n from django.forms import Media, MediaDefiningClass\n from django.forms.utils import flatatt\n from django.template.loader import render_to_string\n@@ -11,7 +9,6 @@\n from wagtail.admin.forms.search import SearchForm\n \n \n-@total_ordering\n class SearchArea(metaclass=MediaDefiningClass):\n template = \"wagtailadmin/shared/search_area.html\"\n \n@@ -31,9 +28,28 @@\n self.attr_string = \"\"\n \n def __lt__(self, other):\n+ if not isinstance(other, SearchArea):\n+ return NotImplemented\n return (self.order, self.label) < (other.order, other.label)\n \n+ def __le__(self, other):\n+ if not isinstance(other, SearchArea):\n+ return NotImplemented\n+ return (self.order, self.label) <= (other.order, other.label)\n+\n+ def __gt__(self, other):\n+ if not isinstance(other, SearchArea):\n+ return NotImplemented\n+ return (self.order, self.label) > (other.order, other.label)\n+\n+ def __ge__(self, other):\n+ if not isinstance(other, SearchArea):\n+ return NotImplemented\n+ return (self.order, self.label) >= (other.order, other.label)\n+\n def __eq__(self, other):\n+ if not isinstance(other, SearchArea):\n+ return NotImplemented\n return (self.order, self.label) == (other.order, other.label)\n \n def is_shown(self, request):\ndiff --git a/wagtail/admin/widgets/button.py b/wagtail/admin/widgets/button.py\n--- a/wagtail/admin/widgets/button.py\n+++ b/wagtail/admin/widgets/button.py\n@@ -1,5 +1,3 @@\n-from functools import total_ordering\n-\n from django.forms.utils import flatatt\n from django.template.loader import render_to_string\n from django.utils.functional import cached_property\n@@ -8,7 +6,6 @@\n from wagtail import hooks\n \n \n-@total_ordering\n class Button:\n show = True\n \n@@ -42,6 +39,21 @@\n return NotImplemented\n return (self.priority, self.label) < (other.priority, other.label)\n \n+ def __le__(self, other):\n+ if not isinstance(other, Button):\n+ return NotImplemented\n+ return (self.priority, self.label) <= (other.priority, other.label)\n+\n+ def __gt__(self, other):\n+ if not isinstance(other, Button):\n+ return NotImplemented\n+ return (self.priority, self.label) > (other.priority, other.label)\n+\n+ def __ge__(self, other):\n+ if not isinstance(other, Button):\n+ return NotImplemented\n+ return (self.priority, self.label) >= (other.priority, other.label)\n+\n def __eq__(self, other):\n if not isinstance(other, Button):\n return NotImplemented\n", "issue": "Replace `total_ordering` usage with comparison functions implementation\n### Is your proposal related to a problem?\r\n\r\nWe have two instances of `total_ordering` usage within the codebase:\r\n\r\nhttps://github.com/wagtail/wagtail/blob/cd5200c8e1ac0d7299fd9c398b2b994606b3c7d2/wagtail/admin/search.py#L12-L13\r\n\r\nhttps://github.com/wagtail/wagtail/blob/cd5200c8e1ac0d7299fd9c398b2b994606b3c7d2/wagtail/admin/widgets/button.py#L11-L12\r\n\r\nEven though it's convenient, `total_ordering` is known to be slow. According to [Python's docs](https://docs.python.org/3/library/functools.html#functools.total_ordering):\r\n\r\n> **Note**\r\n> While this decorator makes it easy to create well behaved totally ordered types, it does come at the cost of slower execution and more complex stack traces for the derived comparison methods. 
If performance benchmarking indicates this is a bottleneck for a given application, implementing all six rich comparison methods instead is likely to provide an easy speed boost.\r\n\r\nDjango recently removed their usage of `total_ordering` in https://github.com/django/django/pull/16958/commits/ee36e101e8f8c0acde4bb148b738ab7034e902a0 (probably not all usages, I haven't checked).\r\n\r\n### Describe the solution you'd like\r\n\r\n<!--\r\n Provide a clear and concise description of what you want to happen.\r\n-->\r\n\r\nReplace `total_ordering` with implementations of `__eq__()`, `__ne__()`, `__lt__()`, `__le__()`, `__gt__()`, and `__ge__()`.\r\n\r\n### Describe alternatives you've considered\r\n\r\n<!--\r\n Let us know about other solutions you've tried or researched.\r\n-->\r\n\r\nKeep using `total_ordering`\r\n\r\n### Additional context\r\n\r\nI found this while fixing an incorrect import of `total_ordering` in #10525.\r\n\n", "before_files": [{"content": "from functools import total_ordering\n\nfrom django.forms.utils import flatatt\nfrom django.template.loader import render_to_string\nfrom django.utils.functional import cached_property\nfrom django.utils.html import format_html\n\nfrom wagtail import hooks\n\n\n@total_ordering\nclass Button:\n show = True\n\n def __init__(\n self, label, url, classes=set(), icon_name=None, attrs={}, priority=1000\n ):\n self.label = label\n self.url = url\n self.classes = classes\n self.icon_name = icon_name\n self.attrs = attrs.copy()\n self.priority = priority\n\n def render(self):\n attrs = {\n \"href\": self.url,\n \"class\": \" \".join(sorted(self.classes)),\n \"title\": self.label,\n }\n attrs.update(self.attrs)\n return format_html(\"<a{}>{}</a>\", flatatt(attrs), self.label)\n\n def __str__(self):\n return self.render()\n\n def __repr__(self):\n return f\"<Button: {self.label}>\"\n\n def __lt__(self, other):\n if not isinstance(other, Button):\n return NotImplemented\n return (self.priority, self.label) < (other.priority, other.label)\n\n def __eq__(self, other):\n if not isinstance(other, Button):\n return NotImplemented\n return (\n self.label == other.label\n and self.url == other.url\n and self.classes == other.classes\n and self.attrs == other.attrs\n and self.priority == other.priority\n )\n\n\n# Base class for all listing buttons\n# This is also used by SnippetListingButton defined in wagtail.snippets.widgets\nclass ListingButton(Button):\n def __init__(self, label, url, classes=set(), **kwargs):\n classes = {\"button\", \"button-small\", \"button-secondary\"} | set(classes)\n super().__init__(label, url, classes=classes, **kwargs)\n\n\nclass PageListingButton(ListingButton):\n pass\n\n\nclass BaseDropdownMenuButton(Button):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, url=None, **kwargs)\n\n @cached_property\n def dropdown_buttons(self):\n raise NotImplementedError\n\n def get_context_data(self):\n return {\n \"buttons\": self.dropdown_buttons,\n \"label\": self.label,\n \"title\": self.attrs.get(\"title\"),\n \"classes\": self.classes,\n }\n\n def render(self):\n return render_to_string(self.template_name, self.get_context_data())\n\n\nclass ButtonWithDropdown(BaseDropdownMenuButton):\n template_name = \"wagtailadmin/pages/listing/_button_with_dropdown.html\"\n\n def __init__(self, *args, **kwargs):\n self.button_classes = kwargs.pop(\"button_classes\", set())\n self.buttons_data = kwargs.pop(\"buttons_data\", [])\n super().__init__(*args, **kwargs)\n\n def get_context_data(self):\n context = 
super().get_context_data()\n context[\"button_classes\"] = self.button_classes\n return context\n\n @cached_property\n def dropdown_buttons(self):\n return [Button(**button) for button in self.buttons_data]\n\n\nclass ButtonWithDropdownFromHook(BaseDropdownMenuButton):\n template_name = \"wagtailadmin/pages/listing/_button_with_dropdown.html\"\n\n def __init__(self, label, hook_name, page, page_perms, next_url=None, **kwargs):\n self.hook_name = hook_name\n self.page = page\n self.page_perms = page_perms\n self.next_url = next_url\n\n super().__init__(label, **kwargs)\n\n @property\n def show(self):\n return bool(self.dropdown_buttons)\n\n @cached_property\n def dropdown_buttons(self):\n button_hooks = hooks.get_hooks(self.hook_name)\n\n buttons = []\n for hook in button_hooks:\n buttons.extend(hook(self.page, self.page_perms, self.next_url))\n\n buttons.sort()\n return buttons\n", "path": "wagtail/admin/widgets/button.py"}, {"content": "from functools import total_ordering\n\nfrom django.forms import Media, MediaDefiningClass\nfrom django.forms.utils import flatatt\nfrom django.template.loader import render_to_string\nfrom django.utils.functional import cached_property\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import slugify\n\nfrom wagtail import hooks\nfrom wagtail.admin.forms.search import SearchForm\n\n\n@total_ordering\nclass SearchArea(metaclass=MediaDefiningClass):\n template = \"wagtailadmin/shared/search_area.html\"\n\n def __init__(\n self, label, url, name=None, classnames=\"\", icon_name=\"\", attrs=None, order=1000\n ):\n self.label = label\n self.url = url\n self.classnames = classnames\n self.icon_name = icon_name\n self.name = name or slugify(str(label))\n self.order = order\n\n if attrs:\n self.attr_string = flatatt(attrs)\n else:\n self.attr_string = \"\"\n\n def __lt__(self, other):\n return (self.order, self.label) < (other.order, other.label)\n\n def __eq__(self, other):\n return (self.order, self.label) == (other.order, other.label)\n\n def is_shown(self, request):\n \"\"\"\n Whether this search area should be shown for the given request; permission\n checks etc should go here. 
By default, search areas are shown all the time\n \"\"\"\n return True\n\n def is_active(self, request, current=None):\n if current is None:\n return request.path.startswith(self.url)\n else:\n return self.name == current\n\n def render_html(self, request, query, current=None):\n return render_to_string(\n self.template,\n {\n \"name\": self.name,\n \"url\": self.url,\n \"classnames\": self.classnames,\n \"icon_name\": self.icon_name,\n \"attr_string\": self.attr_string,\n \"label\": self.label,\n \"active\": self.is_active(request, current),\n \"query_string\": query,\n },\n request=request,\n )\n\n\nclass Search:\n def __init__(self, register_hook_name, construct_hook_name=None):\n self.register_hook_name = register_hook_name\n self.construct_hook_name = construct_hook_name\n\n @cached_property\n def registered_search_areas(self):\n return sorted([fn() for fn in hooks.get_hooks(self.register_hook_name)])\n\n def search_items_for_request(self, request):\n return [item for item in self.registered_search_areas if item.is_shown(request)]\n\n def active_search(self, request, current=None):\n return [\n item\n for item in self.search_items_for_request(request)\n if item.is_active(request, current)\n ]\n\n @property\n def media(self):\n media = Media()\n for item in self.registered_search_areas:\n media += item.media\n return media\n\n def render_html(self, request, current=None):\n search_areas = self.search_items_for_request(request)\n\n # Get query parameter\n form = SearchForm(request.GET)\n query = \"\"\n if form.is_valid():\n query = form.cleaned_data[\"q\"]\n\n # provide a hook for modifying the search area, if construct_hook_name has been set\n if self.construct_hook_name:\n for fn in hooks.get_hooks(self.construct_hook_name):\n fn(request, search_areas)\n\n rendered_search_areas = []\n for item in search_areas:\n rendered_search_areas.append(item.render_html(request, query, current))\n\n return mark_safe(\"\".join(rendered_search_areas))\n\n\nadmin_search_areas = Search(\n register_hook_name=\"register_admin_search_area\",\n construct_hook_name=\"construct_search\",\n)\n", "path": "wagtail/admin/search.py"}]} | 3,236 | 680 |
gh_patches_debug_19799 | rasdani/github-patches | git_diff | mirumee__ariadne-266 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change `make_executable_schema` API to accept multiple bindables args
Currently, the second argument to `make_executable_schema` is a list of `SchemaBindable` objects or a single bindable:
```python
# Single bindable:
schema = make_executable_schema(type_defs, query_type, debug=True)
# Multiple bindables:
schema = make_executable_schema(type_defs, [query_type, mutation_type], debug=True)
```
Looking at Ariadne usage in the wild, a pattern is emerging where developers create dedicated modules/packages in their projects for `scalars`, `mutations` or `types`, and use their `__init__.py` files to gather all bindables into single lists:
```
from .scalars import scalars
from .types import types
from .mutations import mutations
```
Those are then combined into a single list and passed to `make_executable_schema`:
```
schema = make_executable_schema(type_defs, scalars + types + mutations, debug=True)
```
This looks ugly, and it gets uglier when a single bindable such as `fallback_resolvers` is involved:
```
schema = make_executable_schema(type_defs, scalars + types + mutations + [fallback_resolvers], debug=True)
```
We can simplify this by changing bindables to `*bindables`:
```
schema = make_executable_schema(type_defs, scalars, types, mutations, fallback_resolvers, debug=True)
```
</issue>
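A rough sketch of the flattening logic that a `*bindables` signature implies is shown below. The helper name is made up for illustration; the actual signature change lives in `make_executable_schema` and is shown in the golden diff below.
```python
# Hypothetical helper: accept any mix of single bindables and lists of
# bindables as positional arguments and bind them all to the schema.
def bind_all(schema, *bindables):
    for bindable in bindables:
        if isinstance(bindable, list):
            for obj in bindable:
                obj.bind_to_schema(schema)
        else:
            bindable.bind_to_schema(schema)
```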
<code>
[start of ariadne/executable_schema.py]
1 from typing import Dict, List, Type, Union
2
3 from graphql import (
4 DocumentNode,
5 GraphQLSchema,
6 assert_valid_schema,
7 build_ast_schema,
8 extend_schema,
9 parse,
10 validate_schema,
11 )
12
13 from .enums import set_default_enum_values_on_schema
14 from .schema_visitor import SchemaDirectiveVisitor
15 from .types import SchemaBindable
16
17
18 def make_executable_schema(
19 type_defs: Union[str, List[str]],
20 bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,
21 *,
22 directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,
23 ) -> GraphQLSchema:
24 if isinstance(type_defs, list):
25 type_defs = join_type_defs(type_defs)
26
27 ast_document = parse(type_defs)
28 schema = build_and_extend_schema(ast_document)
29 validate_schema(schema)
30
31 if isinstance(bindables, list):
32 for obj in bindables:
33 obj.bind_to_schema(schema)
34 elif bindables:
35 bindables.bind_to_schema(schema)
36
37 set_default_enum_values_on_schema(schema)
38
39 if directives:
40 SchemaDirectiveVisitor.visit_schema_directives(schema, directives)
41
42 assert_valid_schema(schema)
43
44 return schema
45
46
47 def join_type_defs(type_defs: List[str]) -> str:
48 return "\n\n".join(t.strip() for t in type_defs)
49
50
51 def build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:
52 schema = build_ast_schema(ast)
53 extension_ast = extract_extensions(ast)
54
55 if extension_ast.definitions:
56 schema = extend_schema(schema, extension_ast)
57
58 return schema
59
60
61 EXTENSION_KINDS = [
62 "scalar_type_extension",
63 "object_type_extension",
64 "interface_type_extension",
65 "union_type_extension",
66 "enum_type_extension",
67 "input_object_type_extension",
68 ]
69
70
71 def extract_extensions(ast: DocumentNode) -> DocumentNode:
72 extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]
73 return DocumentNode(definitions=extensions)
74
[end of ariadne/executable_schema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py
--- a/ariadne/executable_schema.py
+++ b/ariadne/executable_schema.py
@@ -17,8 +17,7 @@
def make_executable_schema(
type_defs: Union[str, List[str]],
- bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,
- *,
+ *bindables: Union[SchemaBindable, List[SchemaBindable]],
directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,
) -> GraphQLSchema:
if isinstance(type_defs, list):
@@ -28,11 +27,12 @@
schema = build_and_extend_schema(ast_document)
validate_schema(schema)
- if isinstance(bindables, list):
- for obj in bindables:
- obj.bind_to_schema(schema)
- elif bindables:
- bindables.bind_to_schema(schema)
+ for bindable in bindables:
+ if isinstance(bindable, list):
+ for obj in bindable:
+ obj.bind_to_schema(schema)
+ else:
+ bindable.bind_to_schema(schema)
set_default_enum_values_on_schema(schema)
| {"golden_diff": "diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py\n--- a/ariadne/executable_schema.py\n+++ b/ariadne/executable_schema.py\n@@ -17,8 +17,7 @@\n \n def make_executable_schema(\n type_defs: Union[str, List[str]],\n- bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,\n- *,\n+ *bindables: Union[SchemaBindable, List[SchemaBindable]],\n directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,\n ) -> GraphQLSchema:\n if isinstance(type_defs, list):\n@@ -28,11 +27,12 @@\n schema = build_and_extend_schema(ast_document)\n validate_schema(schema)\n \n- if isinstance(bindables, list):\n- for obj in bindables:\n- obj.bind_to_schema(schema)\n- elif bindables:\n- bindables.bind_to_schema(schema)\n+ for bindable in bindables:\n+ if isinstance(bindable, list):\n+ for obj in bindable:\n+ obj.bind_to_schema(schema)\n+ else:\n+ bindable.bind_to_schema(schema)\n \n set_default_enum_values_on_schema(schema)\n", "issue": "Change `make_executable_schema` API to accept multiple bindables args\nCurrently, the second argument to `make_executable_schema` is list of `SchemaBindlables` or single bindable:\r\n\r\n```python\r\n# Single bindable:\r\nschema = make_executable_schema(type_defs, query_type, debug=True)\r\n\r\n# Multiple bindables:\r\nschema = make_executable_schema(type_defs, [query_type, mutation_type], debug=True)\r\n```\r\n\r\nLooking at Ariadne uses in the wild, a pattern is starting to emerge where developers create dedicated modules/packages in their project for `scalars`, `mutations` or `types`, that use their `__init__.py`'s to gather all bindables into single lists:\r\n\r\n```\r\nfrom .scalars import scalars\r\nfrom .types import types\r\nfrom .mutations import mutations\r\n```\r\n\r\nThose are then combined into single list and passed to `make_executable_schema`:\r\n\r\n```\r\nschema = make_executable_schema(type_defs, scalars + types + mutations, debug=True)\r\n```\r\n\r\nThis looks ugly, but things get uglier when there's bindable involved:\r\n\r\n```\r\nschema = make_executable_schema(type_defs, scalars + types + mutations + [fallback_resolvers], debug=True)\r\n```\r\n\r\nWe can simplify this by changing bindables to `*bindables`:\r\n\r\n```\r\nschema = make_executable_schema(type_defs, scalars, types, mutations, fallback_resolvers, debug=True)\r\n```\n", "before_files": [{"content": "from typing import Dict, List, Type, Union\n\nfrom graphql import (\n DocumentNode,\n GraphQLSchema,\n assert_valid_schema,\n build_ast_schema,\n extend_schema,\n parse,\n validate_schema,\n)\n\nfrom .enums import set_default_enum_values_on_schema\nfrom .schema_visitor import SchemaDirectiveVisitor\nfrom .types import SchemaBindable\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]],\n bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,\n *,\n directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n ast_document = parse(type_defs)\n schema = build_and_extend_schema(ast_document)\n validate_schema(schema)\n\n if isinstance(bindables, list):\n for obj in bindables:\n obj.bind_to_schema(schema)\n elif bindables:\n bindables.bind_to_schema(schema)\n\n set_default_enum_values_on_schema(schema)\n\n if directives:\n SchemaDirectiveVisitor.visit_schema_directives(schema, directives)\n\n assert_valid_schema(schema)\n\n return schema\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in 
type_defs)\n\n\ndef build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:\n schema = build_ast_schema(ast)\n extension_ast = extract_extensions(ast)\n\n if extension_ast.definitions:\n schema = extend_schema(schema, extension_ast)\n\n return schema\n\n\nEXTENSION_KINDS = [\n \"scalar_type_extension\",\n \"object_type_extension\",\n \"interface_type_extension\",\n \"union_type_extension\",\n \"enum_type_extension\",\n \"input_object_type_extension\",\n]\n\n\ndef extract_extensions(ast: DocumentNode) -> DocumentNode:\n extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]\n return DocumentNode(definitions=extensions)\n", "path": "ariadne/executable_schema.py"}]} | 1,400 | 267 |
gh_patches_debug_17397 | rasdani/github-patches | git_diff | benoitc__gunicorn-2570 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`threading.Thread.setDaemon` has been deprecated in favor of setting the `daemon` attribute directly in Python 3.10
Ref: python/cpython#25174
https://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L20
https://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L77
</issue>
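The requested change is mechanical; the sketch below contrasts the deprecated call with the attribute assignment. The `Worker` class is a stand-in for gunicorn's reloader threads, not code from the project.
```python
import threading


class Worker(threading.Thread):
    """Stand-in for gunicorn's Reloader/InotifyReloader threads."""

    def __init__(self):
        super().__init__()
        # Deprecated since Python 3.10:
        #   self.setDaemon(True)
        # Preferred, equivalent spelling:
        self.daemon = True

    def run(self):
        pass
```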
<code>
[start of gunicorn/reloader.py]
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5 # pylint: disable=no-else-continue
6
7 import os
8 import os.path
9 import re
10 import sys
11 import time
12 import threading
13
14 COMPILED_EXT_RE = re.compile(r'py[co]$')
15
16
17 class Reloader(threading.Thread):
18 def __init__(self, extra_files=None, interval=1, callback=None):
19 super().__init__()
20 self.setDaemon(True)
21 self._extra_files = set(extra_files or ())
22 self._interval = interval
23 self._callback = callback
24
25 def add_extra_file(self, filename):
26 self._extra_files.add(filename)
27
28 def get_files(self):
29 fnames = [
30 COMPILED_EXT_RE.sub('py', module.__file__)
31 for module in tuple(sys.modules.values())
32 if getattr(module, '__file__', None)
33 ]
34
35 fnames.extend(self._extra_files)
36
37 return fnames
38
39 def run(self):
40 mtimes = {}
41 while True:
42 for filename in self.get_files():
43 try:
44 mtime = os.stat(filename).st_mtime
45 except OSError:
46 continue
47 old_time = mtimes.get(filename)
48 if old_time is None:
49 mtimes[filename] = mtime
50 continue
51 elif mtime > old_time:
52 if self._callback:
53 self._callback(filename)
54 time.sleep(self._interval)
55
56
57 has_inotify = False
58 if sys.platform.startswith('linux'):
59 try:
60 from inotify.adapters import Inotify
61 import inotify.constants
62 has_inotify = True
63 except ImportError:
64 pass
65
66
67 if has_inotify:
68
69 class InotifyReloader(threading.Thread):
70 event_mask = (inotify.constants.IN_CREATE | inotify.constants.IN_DELETE
71 | inotify.constants.IN_DELETE_SELF | inotify.constants.IN_MODIFY
72 | inotify.constants.IN_MOVE_SELF | inotify.constants.IN_MOVED_FROM
73 | inotify.constants.IN_MOVED_TO)
74
75 def __init__(self, extra_files=None, callback=None):
76 super().__init__()
77 self.setDaemon(True)
78 self._callback = callback
79 self._dirs = set()
80 self._watcher = Inotify()
81
82 for extra_file in extra_files:
83 self.add_extra_file(extra_file)
84
85 def add_extra_file(self, filename):
86 dirname = os.path.dirname(filename)
87
88 if dirname in self._dirs:
89 return
90
91 self._watcher.add_watch(dirname, mask=self.event_mask)
92 self._dirs.add(dirname)
93
94 def get_dirs(self):
95 fnames = [
96 os.path.dirname(os.path.abspath(COMPILED_EXT_RE.sub('py', module.__file__)))
97 for module in tuple(sys.modules.values())
98 if getattr(module, '__file__', None)
99 ]
100
101 return set(fnames)
102
103 def run(self):
104 self._dirs = self.get_dirs()
105
106 for dirname in self._dirs:
107 if os.path.isdir(dirname):
108 self._watcher.add_watch(dirname, mask=self.event_mask)
109
110 for event in self._watcher.event_gen():
111 if event is None:
112 continue
113
114 filename = event[3]
115
116 self._callback(filename)
117
118 else:
119
120 class InotifyReloader(object):
121 def __init__(self, callback=None):
122 raise ImportError('You must have the inotify module installed to '
123 'use the inotify reloader')
124
125
126 preferred_reloader = InotifyReloader if has_inotify else Reloader
127
128 reloader_engines = {
129 'auto': preferred_reloader,
130 'poll': Reloader,
131 'inotify': InotifyReloader,
132 }
133
[end of gunicorn/reloader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gunicorn/reloader.py b/gunicorn/reloader.py
--- a/gunicorn/reloader.py
+++ b/gunicorn/reloader.py
@@ -17,7 +17,7 @@
class Reloader(threading.Thread):
def __init__(self, extra_files=None, interval=1, callback=None):
super().__init__()
- self.setDaemon(True)
+ self.daemon = True
self._extra_files = set(extra_files or ())
self._interval = interval
self._callback = callback
@@ -74,7 +74,7 @@
def __init__(self, extra_files=None, callback=None):
super().__init__()
- self.setDaemon(True)
+ self.daemon = True
self._callback = callback
self._dirs = set()
self._watcher = Inotify()
| {"golden_diff": "diff --git a/gunicorn/reloader.py b/gunicorn/reloader.py\n--- a/gunicorn/reloader.py\n+++ b/gunicorn/reloader.py\n@@ -17,7 +17,7 @@\n class Reloader(threading.Thread):\n def __init__(self, extra_files=None, interval=1, callback=None):\n super().__init__()\n- self.setDaemon(True)\n+ self.daemon = True\n self._extra_files = set(extra_files or ())\n self._interval = interval\n self._callback = callback\n@@ -74,7 +74,7 @@\n \n def __init__(self, extra_files=None, callback=None):\n super().__init__()\n- self.setDaemon(True)\n+ self.daemon = True\n self._callback = callback\n self._dirs = set()\n self._watcher = Inotify()\n", "issue": "threading.Thread.setDaemon has been deprecated in favor of setting daemon attribute directly in Python 3.10\nRef : python/cpython#25174\r\n\r\nhttps://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L20\r\n\r\nhttps://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L77\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n# pylint: disable=no-else-continue\n\nimport os\nimport os.path\nimport re\nimport sys\nimport time\nimport threading\n\nCOMPILED_EXT_RE = re.compile(r'py[co]$')\n\n\nclass Reloader(threading.Thread):\n def __init__(self, extra_files=None, interval=1, callback=None):\n super().__init__()\n self.setDaemon(True)\n self._extra_files = set(extra_files or ())\n self._interval = interval\n self._callback = callback\n\n def add_extra_file(self, filename):\n self._extra_files.add(filename)\n\n def get_files(self):\n fnames = [\n COMPILED_EXT_RE.sub('py', module.__file__)\n for module in tuple(sys.modules.values())\n if getattr(module, '__file__', None)\n ]\n\n fnames.extend(self._extra_files)\n\n return fnames\n\n def run(self):\n mtimes = {}\n while True:\n for filename in self.get_files():\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n if self._callback:\n self._callback(filename)\n time.sleep(self._interval)\n\n\nhas_inotify = False\nif sys.platform.startswith('linux'):\n try:\n from inotify.adapters import Inotify\n import inotify.constants\n has_inotify = True\n except ImportError:\n pass\n\n\nif has_inotify:\n\n class InotifyReloader(threading.Thread):\n event_mask = (inotify.constants.IN_CREATE | inotify.constants.IN_DELETE\n | inotify.constants.IN_DELETE_SELF | inotify.constants.IN_MODIFY\n | inotify.constants.IN_MOVE_SELF | inotify.constants.IN_MOVED_FROM\n | inotify.constants.IN_MOVED_TO)\n\n def __init__(self, extra_files=None, callback=None):\n super().__init__()\n self.setDaemon(True)\n self._callback = callback\n self._dirs = set()\n self._watcher = Inotify()\n\n for extra_file in extra_files:\n self.add_extra_file(extra_file)\n\n def add_extra_file(self, filename):\n dirname = os.path.dirname(filename)\n\n if dirname in self._dirs:\n return\n\n self._watcher.add_watch(dirname, mask=self.event_mask)\n self._dirs.add(dirname)\n\n def get_dirs(self):\n fnames = [\n os.path.dirname(os.path.abspath(COMPILED_EXT_RE.sub('py', module.__file__)))\n for module in tuple(sys.modules.values())\n if getattr(module, '__file__', None)\n ]\n\n return set(fnames)\n\n def run(self):\n self._dirs = self.get_dirs()\n\n for dirname in self._dirs:\n if os.path.isdir(dirname):\n 
self._watcher.add_watch(dirname, mask=self.event_mask)\n\n for event in self._watcher.event_gen():\n if event is None:\n continue\n\n filename = event[3]\n\n self._callback(filename)\n\nelse:\n\n class InotifyReloader(object):\n def __init__(self, callback=None):\n raise ImportError('You must have the inotify module installed to '\n 'use the inotify reloader')\n\n\npreferred_reloader = InotifyReloader if has_inotify else Reloader\n\nreloader_engines = {\n 'auto': preferred_reloader,\n 'poll': Reloader,\n 'inotify': InotifyReloader,\n}\n", "path": "gunicorn/reloader.py"}]} | 1,773 | 184 |
gh_patches_debug_9112 | rasdani/github-patches | git_diff | pypi__warehouse-6207 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve webauthn errors
Problems:
- We currently have two pieces of JS that control the display of webauthn errors - some in `index.js`, some in `webauthn.js`
- The errors are not announced to the screen reader (via `role=alert`)
- The errors are not associated with the webauthn label field (on the provisioning page) - we should use `aria-describedby` for this
- The user is able to put text into the label field on the provisioning page - it should be disabled
</issue>
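Most of this fix belongs in the templates and JavaScript, but the last two bullets can be sketched at the form layer. The field name, element id, and use of `render_kw` below are assumptions for illustration, not Warehouse's actual implementation.
```python
import wtforms


class ProvisionWebAuthnSketchForm(wtforms.Form):
    # Hypothetical: point aria-describedby at the container that holds the
    # webauthn error text, and keep the field disabled so the user cannot
    # type into it before provisioning succeeds.
    label = wtforms.StringField(
        "Key name",
        render_kw={
            "aria-describedby": "webauthn-errors",
            "disabled": "disabled",
        },
    )
```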
<code>
[start of warehouse/manage/forms.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import json
14
15 import wtforms
16
17 import warehouse.utils.otp as otp
18 import warehouse.utils.webauthn as webauthn
19
20 from warehouse import forms
21 from warehouse.accounts.forms import (
22 NewEmailMixin,
23 NewPasswordMixin,
24 PasswordMixin,
25 TOTPValueMixin,
26 WebAuthnCredentialMixin,
27 )
28
29
30 class RoleNameMixin:
31
32 role_name = wtforms.SelectField(
33 "Select role",
34 choices=[("Maintainer", "Maintainer"), ("Owner", "Owner")],
35 validators=[wtforms.validators.DataRequired(message="Select role")],
36 )
37
38
39 class UsernameMixin:
40
41 username = wtforms.StringField(
42 validators=[wtforms.validators.DataRequired(message="Specify username")]
43 )
44
45 def validate_username(self, field):
46 userid = self.user_service.find_userid(field.data)
47
48 if userid is None:
49 raise wtforms.validators.ValidationError(
50 "No user found with that username. Try again."
51 )
52
53
54 class CreateRoleForm(RoleNameMixin, UsernameMixin, forms.Form):
55 def __init__(self, *args, user_service, **kwargs):
56 super().__init__(*args, **kwargs)
57 self.user_service = user_service
58
59
60 class ChangeRoleForm(RoleNameMixin, forms.Form):
61 pass
62
63
64 class SaveAccountForm(forms.Form):
65
66 __params__ = ["name"]
67
68 name = wtforms.StringField()
69
70
71 class AddEmailForm(NewEmailMixin, forms.Form):
72
73 __params__ = ["email"]
74
75 def __init__(self, *args, user_service, user_id, **kwargs):
76 super().__init__(*args, **kwargs)
77 self.user_service = user_service
78 self.user_id = user_id
79
80
81 class ChangePasswordForm(PasswordMixin, NewPasswordMixin, forms.Form):
82
83 __params__ = ["password", "new_password", "password_confirm"]
84
85 def __init__(self, *args, user_service, **kwargs):
86 super().__init__(*args, **kwargs)
87 self.user_service = user_service
88
89
90 class DeleteTOTPForm(UsernameMixin, forms.Form):
91
92 __params__ = ["confirm_username"]
93
94 def __init__(self, *args, user_service, **kwargs):
95 super().__init__(*args, **kwargs)
96 self.user_service = user_service
97
98
99 class ProvisionTOTPForm(TOTPValueMixin, forms.Form):
100
101 __params__ = ["totp_value"]
102
103 def __init__(self, *args, totp_secret, **kwargs):
104 super().__init__(*args, **kwargs)
105 self.totp_secret = totp_secret
106
107 def validate_totp_value(self, field):
108 totp_value = field.data.encode("utf8")
109 if not otp.verify_totp(self.totp_secret, totp_value):
110 raise wtforms.validators.ValidationError("Invalid TOTP code. Try again?")
111
112
113 class DeleteWebAuthnForm(forms.Form):
114 __params__ = ["confirm_key_name"]
115
116 label = wtforms.StringField(
117 validators=[
118 wtforms.validators.DataRequired(message="Specify a label"),
119 wtforms.validators.Length(
120 max=64, message=("Label must be 64 characters or less")
121 ),
122 ]
123 )
124
125 def __init__(self, *args, user_service, user_id, **kwargs):
126 super().__init__(*args, **kwargs)
127 self.user_service = user_service
128 self.user_id = user_id
129
130 def validate_label(self, field):
131 label = field.data
132
133 webauthn = self.user_service.get_webauthn_by_label(self.user_id, label)
134 if webauthn is None:
135 raise wtforms.validators.ValidationError("No WebAuthn key with given label")
136 self.webauthn = webauthn
137
138
139 class ProvisionWebAuthnForm(WebAuthnCredentialMixin, forms.Form):
140 __params__ = ["label", "credential"]
141
142 label = wtforms.StringField(
143 validators=[
144 wtforms.validators.DataRequired(message="Specify a label"),
145 wtforms.validators.Length(
146 max=64, message=("Label must be 64 characters or less")
147 ),
148 ]
149 )
150
151 def __init__(
152 self, *args, user_service, user_id, challenge, rp_id, origin, **kwargs
153 ):
154 super().__init__(*args, **kwargs)
155 self.user_service = user_service
156 self.user_id = user_id
157 self.challenge = challenge
158 self.rp_id = rp_id
159 self.origin = origin
160
161 def validate_credential(self, field):
162 try:
163 credential_dict = json.loads(field.data.encode("utf8"))
164 except json.JSONDecodeError:
165 raise wtforms.validators.ValidationError(
166 "Invalid WebAuthn credential: Bad payload"
167 )
168
169 try:
170 validated_credential = self.user_service.verify_webauthn_credential(
171 credential_dict,
172 challenge=self.challenge,
173 rp_id=self.rp_id,
174 origin=self.origin,
175 )
176 except webauthn.RegistrationRejectedException as e:
177 raise wtforms.validators.ValidationError(str(e))
178
179 self.validated_credential = validated_credential
180
181 def validate_label(self, field):
182 label = field.data
183
184 if self.user_service.get_webauthn_by_label(self.user_id, label) is not None:
185 raise wtforms.validators.ValidationError(f"Label '{label}' already in use")
186
[end of warehouse/manage/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/manage/forms.py b/warehouse/manage/forms.py
--- a/warehouse/manage/forms.py
+++ b/warehouse/manage/forms.py
@@ -111,11 +111,11 @@
class DeleteWebAuthnForm(forms.Form):
- __params__ = ["confirm_key_name"]
+ __params__ = ["confirm_device_name"]
label = wtforms.StringField(
validators=[
- wtforms.validators.DataRequired(message="Specify a label"),
+ wtforms.validators.DataRequired(message="Specify a device name"),
wtforms.validators.Length(
max=64, message=("Label must be 64 characters or less")
),
| {"golden_diff": "diff --git a/warehouse/manage/forms.py b/warehouse/manage/forms.py\n--- a/warehouse/manage/forms.py\n+++ b/warehouse/manage/forms.py\n@@ -111,11 +111,11 @@\n \n \n class DeleteWebAuthnForm(forms.Form):\n- __params__ = [\"confirm_key_name\"]\n+ __params__ = [\"confirm_device_name\"]\n \n label = wtforms.StringField(\n validators=[\n- wtforms.validators.DataRequired(message=\"Specify a label\"),\n+ wtforms.validators.DataRequired(message=\"Specify a device name\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n", "issue": "Improve webauthn errors\nProblems:\r\n\r\n- We currently have two pieces of JS that control the display of webauthn errors - some in `index.js`, some in `webauthn.js`\r\n- The errors are not announced to the screenreader (via `role=alert`)\r\n- The errors are not associated with the webauthn label field (on the provisioning page) - we should use `aria-describedby` for this\r\n- The user is able to put text into the label field on the provisioning page - it should be disabled\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\n\nimport wtforms\n\nimport warehouse.utils.otp as otp\nimport warehouse.utils.webauthn as webauthn\n\nfrom warehouse import forms\nfrom warehouse.accounts.forms import (\n NewEmailMixin,\n NewPasswordMixin,\n PasswordMixin,\n TOTPValueMixin,\n WebAuthnCredentialMixin,\n)\n\n\nclass RoleNameMixin:\n\n role_name = wtforms.SelectField(\n \"Select role\",\n choices=[(\"Maintainer\", \"Maintainer\"), (\"Owner\", \"Owner\")],\n validators=[wtforms.validators.DataRequired(message=\"Select role\")],\n )\n\n\nclass UsernameMixin:\n\n username = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Specify username\")]\n )\n\n def validate_username(self, field):\n userid = self.user_service.find_userid(field.data)\n\n if userid is None:\n raise wtforms.validators.ValidationError(\n \"No user found with that username. 
Try again.\"\n )\n\n\nclass CreateRoleForm(RoleNameMixin, UsernameMixin, forms.Form):\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ChangeRoleForm(RoleNameMixin, forms.Form):\n pass\n\n\nclass SaveAccountForm(forms.Form):\n\n __params__ = [\"name\"]\n\n name = wtforms.StringField()\n\n\nclass AddEmailForm(NewEmailMixin, forms.Form):\n\n __params__ = [\"email\"]\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n\nclass ChangePasswordForm(PasswordMixin, NewPasswordMixin, forms.Form):\n\n __params__ = [\"password\", \"new_password\", \"password_confirm\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass DeleteTOTPForm(UsernameMixin, forms.Form):\n\n __params__ = [\"confirm_username\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ProvisionTOTPForm(TOTPValueMixin, forms.Form):\n\n __params__ = [\"totp_value\"]\n\n def __init__(self, *args, totp_secret, **kwargs):\n super().__init__(*args, **kwargs)\n self.totp_secret = totp_secret\n\n def validate_totp_value(self, field):\n totp_value = field.data.encode(\"utf8\")\n if not otp.verify_totp(self.totp_secret, totp_value):\n raise wtforms.validators.ValidationError(\"Invalid TOTP code. Try again?\")\n\n\nclass DeleteWebAuthnForm(forms.Form):\n __params__ = [\"confirm_key_name\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a label\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n def validate_label(self, field):\n label = field.data\n\n webauthn = self.user_service.get_webauthn_by_label(self.user_id, label)\n if webauthn is None:\n raise wtforms.validators.ValidationError(\"No WebAuthn key with given label\")\n self.webauthn = webauthn\n\n\nclass ProvisionWebAuthnForm(WebAuthnCredentialMixin, forms.Form):\n __params__ = [\"label\", \"credential\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a label\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(\n self, *args, user_service, user_id, challenge, rp_id, origin, **kwargs\n ):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n self.challenge = challenge\n self.rp_id = rp_id\n self.origin = origin\n\n def validate_credential(self, field):\n try:\n credential_dict = json.loads(field.data.encode(\"utf8\"))\n except json.JSONDecodeError:\n raise wtforms.validators.ValidationError(\n \"Invalid WebAuthn credential: Bad payload\"\n )\n\n try:\n validated_credential = self.user_service.verify_webauthn_credential(\n credential_dict,\n challenge=self.challenge,\n rp_id=self.rp_id,\n origin=self.origin,\n )\n except webauthn.RegistrationRejectedException as e:\n raise wtforms.validators.ValidationError(str(e))\n\n self.validated_credential = validated_credential\n\n def validate_label(self, field):\n label = field.data\n\n if self.user_service.get_webauthn_by_label(self.user_id, label) is not 
None:\n raise wtforms.validators.ValidationError(f\"Label '{label}' already in use\")\n", "path": "warehouse/manage/forms.py"}]} | 2,357 | 145 |
gh_patches_debug_8487 | rasdani/github-patches | git_diff | pytorch__ignite-785 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tqdm_logger does not take epoch_length into account
## 🐛 `tqdm_logger` does not take `epoch_length` into account
When calling `Engine.run()` with a custom `epoch_length`,
the tqdm progress bar does not adapt and displays the full number of batches in the data.
Here is a minimal example:
```python
from ignite.contrib.handlers.tqdm_logger import ProgressBar
from ignite.engine import Engine
from torch import nn
from torch.utils.data import DataLoader
data = list(range(100))
model = nn.Identity()
engine = Engine(lambda engine, batch: model(batch))
ProgressBar(persist=True).attach(engine)
engine.run(data, epoch_length=50)
```
We have 100 items in `data` but the true end of the epoch is at 50 iterations, yet the progress is displayed over the range of 100 and just ends midway, when I expect it to be displayed over the range of 50, thus ending when the bar is full.
One can not overwrite tqdm's `total` argument by replacing
```python
ProgressBar(persist=True).attach(engine)
```
by
```python
ProgressBar(persist=True, total=50).attach(engine)
```
for it raises `TypeError: type object got multiple values for keyword argument 'total'`.
## Environment
- PyTorch Version : 1.4.0
- Ignite Version : 0.3.0
- OS : Ubuntu 19.04
- Ignite installation method : `pip`
- Python version: 3.7.3
</issue>
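The `TypeError` quoted above is an ordinary keyword collision: the progress bar always passes `total=` itself and then splats the user's `**tqdm_kwargs` into the same call. A minimal, ignite-free reproduction of that collision:

```python
def reset_bar(pbar_total, **tqdm_kwargs):
    # stand-in for ProgressBar._reset(): `total` is always supplied explicitly,
    # so a user-supplied `total` inside tqdm_kwargs collides with it
    return dict(total=pbar_total, **tqdm_kwargs)


print(reset_bar(100, desc="Epoch"))  # fine: {'total': 100, 'desc': 'Epoch'}
try:
    reset_bar(100, total=50)
except TypeError as exc:
    print(exc)  # "... got multiple values for keyword argument 'total'"
```

The fix shown later in this row sizes the bar from `engine.state.epoch_length` rather than `len(engine.state.dataloader)`, so a custom `epoch_length` is respected without the caller ever touching `total`.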
<code>
[start of ignite/contrib/handlers/tqdm_logger.py]
1 # -*- coding: utf-8 -*-
2 import warnings
3
4 import torch
5
6 from ignite.engine import Events
7 from ignite.engine.engine import EventWithFilter
8 from ignite.contrib.handlers.base_logger import BaseLogger, BaseOutputHandler
9
10
11 class ProgressBar(BaseLogger):
12 """
13 TQDM progress bar handler to log training progress and computed metrics.
14
15 Args:
16 persist (bool, optional): set to ``True`` to persist the progress bar after completion (default = ``False``)
17 bar_format (str, optional): Specify a custom bar string formatting. May impact performance.
18 [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]'].
19 Set to ``None`` to use ``tqdm`` default bar formatting: '{l_bar}{bar}{r_bar}', where
20 l_bar='{desc}: {percentage:3.0f}%|' and
21 r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. For more details on the
22 formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.
23 **tqdm_kwargs: kwargs passed to tqdm progress bar.
24 By default, progress bar description displays "Epoch [5/10]" where 5 is the current epoch and 10 is the
25 number of epochs. If tqdm_kwargs defines `desc`, e.g. "Predictions", than the description is
26 "Predictions [5/10]" if number of epochs is more than one otherwise it is simply "Predictions".
27
28 Examples:
29
30 Simple progress bar
31
32 .. code-block:: python
33
34 trainer = create_supervised_trainer(model, optimizer, loss)
35
36 pbar = ProgressBar()
37 pbar.attach(trainer)
38
39 # Progress bar will looks like
40 # Epoch [2/50]: [64/128] 50%|█████ [06:17<12:34]
41
42 Log output to a file instead of stderr (tqdm's default output)
43
44 .. code-block:: python
45
46 trainer = create_supervised_trainer(model, optimizer, loss)
47
48 log_file = open("output.log", "w")
49 pbar = ProgressBar(file=log_file)
50 pbar.attach(trainer)
51
52 Attach metrics that already have been computed at :attr:`~ignite.engine.Events.ITERATION_COMPLETED`
53 (such as :class:`~ignite.metrics.RunningAverage`)
54
55 .. code-block:: python
56
57 trainer = create_supervised_trainer(model, optimizer, loss)
58
59 RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')
60
61 pbar = ProgressBar()
62 pbar.attach(trainer, ['loss'])
63
64 # Progress bar will looks like
65 # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
66
67 Directly attach the engine's output
68
69 .. code-block:: python
70
71 trainer = create_supervised_trainer(model, optimizer, loss)
72
73 pbar = ProgressBar()
74 pbar.attach(trainer, output_transform=lambda x: {'loss': x})
75
76 # Progress bar will looks like
77 # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
78
79 Note:
80 When adding attaching the progress bar to an engine, it is recommend that you replace
81 every print operation in the engine's handlers triggered every iteration with
82 ``pbar.log_message`` to guarantee the correct format of the stdout.
83
84 Note:
85 When using inside jupyter notebook, `ProgressBar` automatically uses `tqdm_notebook`. For correct rendering,
86 please install `ipywidgets <https://ipywidgets.readthedocs.io/en/stable/user_install.html#installation>`_.
87 Due to `tqdm notebook bugs <https://github.com/tqdm/tqdm/issues/594>`_, bar format may be needed to be set
88 to an empty string value.
89
90 """
91
92 _events_order = [
93 Events.STARTED,
94 Events.EPOCH_STARTED,
95 Events.ITERATION_STARTED,
96 Events.ITERATION_COMPLETED,
97 Events.EPOCH_COMPLETED,
98 Events.COMPLETED
99 ]
100
101 def __init__(self, persist=False,
102 bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]',
103 **tqdm_kwargs):
104
105 try:
106 from tqdm.autonotebook import tqdm
107 except ImportError:
108 raise RuntimeError("This contrib module requires tqdm to be installed. "
109 "Please install it with command: \n pip install tqdm")
110
111 self.pbar_cls = tqdm
112 self.pbar = None
113 self.persist = persist
114 self.bar_format = bar_format
115 self.tqdm_kwargs = tqdm_kwargs
116
117 def _reset(self, pbar_total):
118 self.pbar = self.pbar_cls(
119 total=pbar_total,
120 leave=self.persist,
121 bar_format=self.bar_format,
122 initial=1,
123 **self.tqdm_kwargs
124 )
125
126 def _close(self, engine):
127 if self.pbar:
128 self.pbar.close()
129 self.pbar = None
130
131 @staticmethod
132 def _compare_lt(event1, event2):
133 if isinstance(event1, EventWithFilter):
134 event1 = event1.event
135 if isinstance(event2, EventWithFilter):
136 event2 = event2.event
137 i1 = ProgressBar._events_order.index(event1)
138 i2 = ProgressBar._events_order.index(event2)
139 return i1 < i2
140
141 def log_message(self, message):
142 """
143 Logs a message, preserving the progress bar correct output format.
144
145 Args:
146 message (str): string you wish to log.
147 """
148 from tqdm import tqdm
149
150 tqdm.write(message, file=self.tqdm_kwargs.get("file", None))
151
152 def attach(self, engine, metric_names=None, output_transform=None,
153 event_name=Events.ITERATION_COMPLETED,
154 closing_event_name=Events.EPOCH_COMPLETED):
155 """
156 Attaches the progress bar to an engine object.
157
158 Args:
159 engine (Engine): engine object.
160 metric_names (list of str, optional): list of metric names to plot or a string "all" to plot all available
161 metrics.
162 output_transform (callable, optional): a function to select what you want to print from the engine's
163 output. This function may return either a dictionary with entries in the format of ``{name: value}``,
164 or a single scalar, which will be displayed with the default name `output`.
165 event_name: event's name on which the progress bar advances. Valid events are from
166 :class:`~ignite.engine.Events`.
167 closing_event_name: event's name on which the progress bar is closed. Valid events are from
168 :class:`~ignite.engine.Events`.
169
170 Note: accepted output value types are numbers, 0d and 1d torch tensors and strings
171
172 """
173 desc = self.tqdm_kwargs.get("desc", "Epoch")
174
175 if not isinstance(event_name, (Events, EventWithFilter)):
176 raise ValueError("Logging event should be only `ignite.engine.Events`")
177
178 if isinstance(closing_event_name, EventWithFilter):
179 raise ValueError("Closing event should not use any event filter")
180
181 if not self._compare_lt(event_name, closing_event_name):
182 raise ValueError("Logging event {} should be called before closing event {}"
183 .format(event_name, closing_event_name))
184
185 log_handler = _OutputHandler(desc, metric_names, output_transform,
186 closing_event_name=closing_event_name)
187 # if event_name is EventWithFilter, filter is passed here
188 super(ProgressBar, self).attach(engine, log_handler, event_name)
189 engine.add_event_handler(closing_event_name, self._close)
190
191
192 class _OutputHandler(BaseOutputHandler):
193 """Helper handler to log engine's output and/or metrics
194
195 Args:
196 description (str): progress bar description.
197 metric_names (list of str, optional): list of metric names to plot or a string "all" to plot all available
198 metrics.
199 output_transform (callable, optional): output transform function to prepare `engine.state.output` as a number.
200 For example, `output_transform = lambda output: output`
201 This function can also return a dictionary, e.g `{'loss': loss1, 'another_loss': loss2}` to label the plot
202 with corresponding keys.
203 closing_event_name: event's name on which the progress bar is closed. Valid events are from
204 :class:`~ignite.engine.Events` or any `event_name` added by
205 :meth:`~ignite.engine.Engine.register_events`.
206
207 """
208
209 def __init__(self, description, metric_names=None, output_transform=None,
210 closing_event_name=Events.EPOCH_COMPLETED):
211 if metric_names is None and output_transform is None:
212 # This helps to avoid 'Either metric_names or output_transform should be defined' of BaseOutputHandler
213 metric_names = []
214 super(_OutputHandler, self).__init__(description, metric_names, output_transform,
215 another_engine=None, global_step_transform=None)
216 self.closing_event_name = closing_event_name
217
218 @staticmethod
219 def get_max_number_events(event_name, engine):
220 if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):
221 return len(engine.state.dataloader)
222 if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):
223 return engine.state.max_epochs
224 return 1
225
226 def __call__(self, engine, logger, event_name):
227
228 pbar_total = self.get_max_number_events(event_name, engine)
229 if logger.pbar is None:
230 logger._reset(pbar_total=pbar_total)
231
232 desc = self.tag
233 max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)
234 if max_num_of_closing_events > 1:
235 global_step = engine.state.get_event_attrib_value(self.closing_event_name)
236 desc += " [{}/{}]".format(global_step, max_num_of_closing_events)
237 logger.pbar.set_description(desc)
238
239 metrics = self._setup_output_metrics(engine)
240
241 rendered_metrics = {}
242 for key, value in metrics.items():
243 if isinstance(value, torch.Tensor):
244 if value.ndimension() == 0:
245 rendered_metrics[key] = value.item()
246 elif value.ndimension() == 1:
247 for i, v in enumerate(value):
248 k = "{}_{}".format(key, i)
249 rendered_metrics[k] = v.item()
250 else:
251 warnings.warn("ProgressBar can not log "
252 "tensor with {} dimensions".format(value.ndimension()))
253 else:
254 rendered_metrics[key] = value
255
256 if rendered_metrics:
257 logger.pbar.set_postfix(**rendered_metrics)
258
259 global_step = engine.state.get_event_attrib_value(event_name)
260 global_step = (global_step - 1) % pbar_total + 1
261 logger.pbar.update(global_step - logger.pbar.n)
262
[end of ignite/contrib/handlers/tqdm_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py
--- a/ignite/contrib/handlers/tqdm_logger.py
+++ b/ignite/contrib/handlers/tqdm_logger.py
@@ -218,7 +218,7 @@
@staticmethod
def get_max_number_events(event_name, engine):
if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):
- return len(engine.state.dataloader)
+ return engine.state.epoch_length
if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):
return engine.state.max_epochs
return 1
| {"golden_diff": "diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py\n--- a/ignite/contrib/handlers/tqdm_logger.py\n+++ b/ignite/contrib/handlers/tqdm_logger.py\n@@ -218,7 +218,7 @@\n @staticmethod\n def get_max_number_events(event_name, engine):\n if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):\n- return len(engine.state.dataloader)\n+ return engine.state.epoch_length\n if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):\n return engine.state.max_epochs\n return 1\n", "issue": "tqdm_logger does not take epoch_length into account\n## \ud83d\udc1b `tqdm_logger` does not take `epoch_length` into account\r\n\r\nWhen calling `Engine.run()` with a custom `epoch_length`,\r\nthe tqdm progress bar does not adapt and displays the full number of batches in the data.\r\nHere is a minimal example:\r\n```python\r\nfrom ignite.contrib.handlers.tqdm_logger import ProgressBar\r\nfrom ignite.engine import Engine\r\nfrom torch import nn\r\nfrom torch.utils.data import DataLoader\r\n\r\ndata = list(range(100))\r\nmodel = nn.Identity()\r\nengine = Engine(lambda engine, batch: model(batch))\r\n\r\nProgressBar(persist=True).attach(engine)\r\nengine.run(data, epoch_length=50)\r\n```\r\nWe have 100 items in `data` but the true end of the epoch is at 50 iterations, yet the progress is displayed over the range of 100 and just ends midway, when I expect it to be displayed over the range of 50, thus ending when the bar is full.\r\nOne can not overwrite tqdm's `total` argument by replacing\r\n```python\r\nProgressBar(persist=True).attach(engine)\r\n```\r\nby\r\n```python\r\nProgressBar(persist=True, total=50).attach(engine)\r\n```\r\nfor it raises `TypeError: type object got multiple values for keyword argument 'total'`.\r\n\r\n## Environment\r\n - PyTorch Version : 1.4.0 \r\n - Ignite Version : 0.3.0\r\n - OS : Ubuntu 19.04\r\n - Ignite installation method : `pip`\r\n - Python version: 3.7.3\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport warnings\n\nimport torch\n\nfrom ignite.engine import Events\nfrom ignite.engine.engine import EventWithFilter\nfrom ignite.contrib.handlers.base_logger import BaseLogger, BaseOutputHandler\n\n\nclass ProgressBar(BaseLogger):\n \"\"\"\n TQDM progress bar handler to log training progress and computed metrics.\n\n Args:\n persist (bool, optional): set to ``True`` to persist the progress bar after completion (default = ``False``)\n bar_format (str, optional): Specify a custom bar string formatting. May impact performance.\n [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]'].\n Set to ``None`` to use ``tqdm`` default bar formatting: '{l_bar}{bar}{r_bar}', where\n l_bar='{desc}: {percentage:3.0f}%|' and\n r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. For more details on the\n formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.\n **tqdm_kwargs: kwargs passed to tqdm progress bar.\n By default, progress bar description displays \"Epoch [5/10]\" where 5 is the current epoch and 10 is the\n number of epochs. If tqdm_kwargs defines `desc`, e.g. \"Predictions\", than the description is\n \"Predictions [5/10]\" if number of epochs is more than one otherwise it is simply \"Predictions\".\n\n Examples:\n\n Simple progress bar\n\n .. 
code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer)\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 [06:17<12:34]\n\n Log output to a file instead of stderr (tqdm's default output)\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n log_file = open(\"output.log\", \"w\")\n pbar = ProgressBar(file=log_file)\n pbar.attach(trainer)\n\n Attach metrics that already have been computed at :attr:`~ignite.engine.Events.ITERATION_COMPLETED`\n (such as :class:`~ignite.metrics.RunningAverage`)\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')\n\n pbar = ProgressBar()\n pbar.attach(trainer, ['loss'])\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Directly attach the engine's output\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer, output_transform=lambda x: {'loss': x})\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Note:\n When adding attaching the progress bar to an engine, it is recommend that you replace\n every print operation in the engine's handlers triggered every iteration with\n ``pbar.log_message`` to guarantee the correct format of the stdout.\n\n Note:\n When using inside jupyter notebook, `ProgressBar` automatically uses `tqdm_notebook`. For correct rendering,\n please install `ipywidgets <https://ipywidgets.readthedocs.io/en/stable/user_install.html#installation>`_.\n Due to `tqdm notebook bugs <https://github.com/tqdm/tqdm/issues/594>`_, bar format may be needed to be set\n to an empty string value.\n\n \"\"\"\n\n _events_order = [\n Events.STARTED,\n Events.EPOCH_STARTED,\n Events.ITERATION_STARTED,\n Events.ITERATION_COMPLETED,\n Events.EPOCH_COMPLETED,\n Events.COMPLETED\n ]\n\n def __init__(self, persist=False,\n bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]',\n **tqdm_kwargs):\n\n try:\n from tqdm.autonotebook import tqdm\n except ImportError:\n raise RuntimeError(\"This contrib module requires tqdm to be installed. 
\"\n \"Please install it with command: \\n pip install tqdm\")\n\n self.pbar_cls = tqdm\n self.pbar = None\n self.persist = persist\n self.bar_format = bar_format\n self.tqdm_kwargs = tqdm_kwargs\n\n def _reset(self, pbar_total):\n self.pbar = self.pbar_cls(\n total=pbar_total,\n leave=self.persist,\n bar_format=self.bar_format,\n initial=1,\n **self.tqdm_kwargs\n )\n\n def _close(self, engine):\n if self.pbar:\n self.pbar.close()\n self.pbar = None\n\n @staticmethod\n def _compare_lt(event1, event2):\n if isinstance(event1, EventWithFilter):\n event1 = event1.event\n if isinstance(event2, EventWithFilter):\n event2 = event2.event\n i1 = ProgressBar._events_order.index(event1)\n i2 = ProgressBar._events_order.index(event2)\n return i1 < i2\n\n def log_message(self, message):\n \"\"\"\n Logs a message, preserving the progress bar correct output format.\n\n Args:\n message (str): string you wish to log.\n \"\"\"\n from tqdm import tqdm\n\n tqdm.write(message, file=self.tqdm_kwargs.get(\"file\", None))\n\n def attach(self, engine, metric_names=None, output_transform=None,\n event_name=Events.ITERATION_COMPLETED,\n closing_event_name=Events.EPOCH_COMPLETED):\n \"\"\"\n Attaches the progress bar to an engine object.\n\n Args:\n engine (Engine): engine object.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): a function to select what you want to print from the engine's\n output. This function may return either a dictionary with entries in the format of ``{name: value}``,\n or a single scalar, which will be displayed with the default name `output`.\n event_name: event's name on which the progress bar advances. Valid events are from\n :class:`~ignite.engine.Events`.\n closing_event_name: event's name on which the progress bar is closed. Valid events are from\n :class:`~ignite.engine.Events`.\n\n Note: accepted output value types are numbers, 0d and 1d torch tensors and strings\n\n \"\"\"\n desc = self.tqdm_kwargs.get(\"desc\", \"Epoch\")\n\n if not isinstance(event_name, (Events, EventWithFilter)):\n raise ValueError(\"Logging event should be only `ignite.engine.Events`\")\n\n if isinstance(closing_event_name, EventWithFilter):\n raise ValueError(\"Closing event should not use any event filter\")\n\n if not self._compare_lt(event_name, closing_event_name):\n raise ValueError(\"Logging event {} should be called before closing event {}\"\n .format(event_name, closing_event_name))\n\n log_handler = _OutputHandler(desc, metric_names, output_transform,\n closing_event_name=closing_event_name)\n # if event_name is EventWithFilter, filter is passed here\n super(ProgressBar, self).attach(engine, log_handler, event_name)\n engine.add_event_handler(closing_event_name, self._close)\n\n\nclass _OutputHandler(BaseOutputHandler):\n \"\"\"Helper handler to log engine's output and/or metrics\n\n Args:\n description (str): progress bar description.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): output transform function to prepare `engine.state.output` as a number.\n For example, `output_transform = lambda output: output`\n This function can also return a dictionary, e.g `{'loss': loss1, 'another_loss': loss2}` to label the plot\n with corresponding keys.\n closing_event_name: event's name on which the progress bar is closed. 
Valid events are from\n :class:`~ignite.engine.Events` or any `event_name` added by\n :meth:`~ignite.engine.Engine.register_events`.\n\n \"\"\"\n\n def __init__(self, description, metric_names=None, output_transform=None,\n closing_event_name=Events.EPOCH_COMPLETED):\n if metric_names is None and output_transform is None:\n # This helps to avoid 'Either metric_names or output_transform should be defined' of BaseOutputHandler\n metric_names = []\n super(_OutputHandler, self).__init__(description, metric_names, output_transform,\n another_engine=None, global_step_transform=None)\n self.closing_event_name = closing_event_name\n\n @staticmethod\n def get_max_number_events(event_name, engine):\n if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):\n return len(engine.state.dataloader)\n if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):\n return engine.state.max_epochs\n return 1\n\n def __call__(self, engine, logger, event_name):\n\n pbar_total = self.get_max_number_events(event_name, engine)\n if logger.pbar is None:\n logger._reset(pbar_total=pbar_total)\n\n desc = self.tag\n max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)\n if max_num_of_closing_events > 1:\n global_step = engine.state.get_event_attrib_value(self.closing_event_name)\n desc += \" [{}/{}]\".format(global_step, max_num_of_closing_events)\n logger.pbar.set_description(desc)\n\n metrics = self._setup_output_metrics(engine)\n\n rendered_metrics = {}\n for key, value in metrics.items():\n if isinstance(value, torch.Tensor):\n if value.ndimension() == 0:\n rendered_metrics[key] = value.item()\n elif value.ndimension() == 1:\n for i, v in enumerate(value):\n k = \"{}_{}\".format(key, i)\n rendered_metrics[k] = v.item()\n else:\n warnings.warn(\"ProgressBar can not log \"\n \"tensor with {} dimensions\".format(value.ndimension()))\n else:\n rendered_metrics[key] = value\n\n if rendered_metrics:\n logger.pbar.set_postfix(**rendered_metrics)\n\n global_step = engine.state.get_event_attrib_value(event_name)\n global_step = (global_step - 1) % pbar_total + 1\n logger.pbar.update(global_step - logger.pbar.n)\n", "path": "ignite/contrib/handlers/tqdm_logger.py"}]} | 4,012 | 152 |
gh_patches_debug_3191 | rasdani/github-patches | git_diff | weecology__retriever-663 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stop bad scripts from causing errors
Currently when `compile_json` gets run if something goes wrong the retriever errors out even on commands not running the script (e.g., `retriever ls`). What it should do is ignore the bad script and possibly report back that there is an issue with the script but keep running normally otherwise.
</issue>
<code>
[start of lib/compile.py]
1 from builtins import str
2 import json
3 import sys
4 if sys.version_info[0] < 3:
5 from codecs import open
6
7 script_templates = {
8 "default": """#retriever
9 from retriever.lib.templates import BasicTextTemplate
10 from retriever.lib.models import Table, Cleanup, correct_invalid_value
11
12 SCRIPT = BasicTextTemplate(%s)""",
13
14 "html_table": """#retriever
15 from retriever.lib.templates import HtmlTableTemplate
16 from retriever.lib.models import Table, Cleanup, correct_invalid_value
17
18 SCRIPT = HtmlTableTemplate(%s)""",
19 }
20
21
22 def compile_script(script_file):
23 definition = open(script_file + ".script", 'r')
24
25 values = {}
26 urls = {}
27 tables = {}
28 last_table = ""
29 replace = []
30 keys_to_ignore = ["template"]
31
32 for line in [line.strip() for line in definition]:
33 if line and ':' in line and not line[0] == '#':
34 split_line = [a.strip() for a in line.split(":")]
35 key = split_line[0].lower()
36 value = ':'.join(split_line[1:])
37 if key == "table":
38 table_name = value.split(',')[0].strip()
39 last_table = table_name
40 table_url = ','.join(value.split(',')[1:]).strip()
41 urls[table_name] = table_url
42 if replace:
43 try:
44 tables[last_table]
45 except:
46 tables[table_name] = {'replace_columns': str(replace)}
47 elif key == "*nulls":
48 if last_table:
49 nulls = [eval(v) for v in [v.strip()
50 for v in value.split(',')]]
51 try:
52 tables[last_table]
53 except KeyError:
54 if replace:
55 tables[last_table] = {'replace_columns': str(replace)}
56 else:
57 tables[last_table] = {}
58 tables[last_table]['cleanup'] = "Cleanup(correct_invalid_value, nulls=" + str(nulls) + ")"
59 elif key == "replace":
60 replace = [(v.split(',')[0].strip(), v.split(',')[1].strip())
61 for v in [v.strip() for v in value.split(';')]]
62 elif key == "tags":
63 values["tags"] = [v.strip() for v in value.split(',')]
64 elif key == "*ct_names":
65 tables[last_table]["ct_names"] = [v.strip()
66 for v in value.split(',')]
67 elif key == "*column":
68 if last_table:
69 vs = [v.strip() for v in value.split(',')]
70 column = [(vs[0], (vs[1], vs[2]) if len(vs) > 2 else (vs[1],))]
71 try:
72 tables[last_table]
73 except KeyError:
74 tables[last_table] = {}
75
76 try:
77 tables[last_table]['columns'] += column
78 except KeyError:
79 tables[last_table]['columns'] = column
80 elif key[0] == "*":
81 # attribute that should be applied to the most recently
82 # declared table
83 if key[0] == "*":
84 key = key[1:]
85 if last_table:
86 try:
87 tables[last_table]
88 except KeyError:
89 tables[last_table] = {}
90
91 try:
92 e = eval(value)
93 except:
94 e = str(value)
95
96 tables[last_table][key] = "'" + str(e) + "'"
97 else:
98 # general script attributes
99 values[key] = '"' + str(value) + '"'
100
101 if 'shortname' not in list(values.keys()):
102 try:
103 values['shortname'] = values['name']
104 except:
105 pass
106 values['urls'] = str(urls)
107
108 def get_value(key):
109 try:
110 return values[key]
111 except KeyError:
112 return ""
113
114 table_desc = "{"
115 for (key, value) in list(tables.items()):
116 table_desc += "'" + key + "': Table('" + key + "', "
117 table_desc += ','.join([key + "=" + str(value)
118 for key, value, in list(value.items())])
119 table_desc += "),"
120 if table_desc != '{':
121 table_desc = table_desc[:-1]
122 table_desc += "}"
123
124 values['tables'] = table_desc
125
126 script_desc = []
127 for key, value in list(values.items()):
128 if key == "url":
129 key = "ref"
130 if key not in keys_to_ignore:
131 script_desc.append(key + "=" + str(value))
132 script_desc = (',\n' + ' ' * 27).join(script_desc)
133
134 if 'template' in list(values.keys()):
135 template = values["template"]
136 else:
137 template = "default"
138 script_contents = (script_templates[template] % script_desc)
139
140 new_script = open(script_file + '.py', 'w')
141 new_script.write(script_contents)
142 new_script.close()
143
144 definition.close()
145
146
147 def add_dialect(table_dict, table):
148 """
149 Reads dialect key of JSON script and extracts key-value pairs to store them
150 in python script
151
152 Contains properties such 'nulls', delimiter', etc
153 """
154 for (key, val) in table['dialect'].items():
155 # dialect related key-value pairs
156 # copied as is
157 if key == "nulls":
158 table_dict[
159 'cleanup'] = "Cleanup(correct_invalid_value, nulls=" + str(val) + ")"
160
161 elif key == "delimiter":
162 table_dict[key] = "'" + str(val) + "'"
163 else:
164 table_dict[key] = val
165
166
167 def add_schema(table_dict, table):
168 """
169 Reads schema key of JSON script and extracts values to store them in
170 python script
171
172 Contains properties related to table schema, such as 'fields' and cross-tab
173 column name ('ct_column').
174 """
175 for (key, val) in table['schema'].items():
176 # schema related key-value pairs
177
178 if key == "fields":
179 # fields = columns of the table
180
181 # list of column tuples
182 column_list = []
183 for obj in val:
184 # fields is a collection of JSON objects
185 # (similar to a list of dicts in python)
186
187 if "size" in obj:
188 column_list.append((obj["name"],
189 (obj["type"], obj["size"])))
190 else:
191 column_list.append((obj["name"],
192 (obj["type"],)))
193
194 table_dict["columns"] = column_list
195
196 elif key == "ct_column":
197 table_dict[key] = "'" + val + "'"
198
199 else:
200 table_dict[key] = val
201
202
203 def compile_json(json_file):
204 """
205 Function to compile JSON script files to python scripts
206 The scripts are created with `retriever create_json <script_name` using
207 command line
208 """
209 json_object = json.load(open(json_file + ".json", "r"))
210
211 if "retriever" not in json_object.keys():
212 # Compile only files that have retriever key
213 return
214
215 values = {}
216 values['urls'] = {}
217
218 keys_to_ignore = ["template"]
219
220 for (key, value) in json_object.items():
221
222 if key == "title":
223 values["name"] = "\"" + str(value) + "\""
224
225 elif key == "name":
226 values["shortname"] = "\"" + str(value) + "\""
227
228 elif key == "description":
229 values["description"] = "\"" + str(value) + "\""
230
231 elif key == "homepage":
232 values["ref"] = "\"" + str(value) + "\""
233
234 elif key == "citation":
235 values["citation"] = "\"" + str(value) + "\""
236
237 elif key == "keywords":
238 values["tags"] = value
239
240 elif key == "retriever_minimum_version":
241 values["retriever_minimum_version"] = "\"" + value + "\""
242
243 elif key == "resources":
244 # Array of table objects
245 tables = {}
246 for table in value:
247 # Maintain a dict for table keys and values
248 table_dict = {}
249
250 try:
251 values['urls'][table['name']] = table['url']
252 except Exception as e:
253 print(e, "\nError in reading table: " + table)
254 continue
255
256 if table["schema"] == {} and table["dialect"] == {}:
257 continue
258
259 for (t_key, t_val) in table.items():
260
261 if t_key == "dialect":
262 add_dialect(table_dict, table)
263
264 elif t_key == "schema":
265 add_schema(table_dict, table)
266
267 tables[table["name"]] = table_dict
268
269 else:
270 values[key] = value
271
272 # Create a Table object string using the tables dict
273 table_desc = "{"
274 for (key, value) in tables.items():
275 table_desc += "'" + key + "': Table('" + key + "', "
276 table_desc += ','.join([key + "=" + str(value)
277 for key, value, in value.items()])
278 table_desc += "),"
279 if table_desc != '{':
280 table_desc = table_desc[:-1]
281 table_desc += "}"
282
283 values["tables"] = table_desc
284
285 script_desc = []
286 for key, value in values.items():
287 if key not in keys_to_ignore:
288 script_desc.append(key + "=" + str(value))
289 script_desc = (',\n' + ' ' * 27).join(script_desc)
290
291 if 'template' in values.keys():
292 template = values["template"]
293 else:
294 template = "default"
295 script_contents = (script_templates[template] % script_desc)
296
297 new_script = open(json_file + '.py', 'w', encoding='utf-8')
298 new_script.write('# -*- coding: latin-1 -*-\n')
299 new_script.write(script_contents)
300 new_script.close()
301
[end of lib/compile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/compile.py b/lib/compile.py
--- a/lib/compile.py
+++ b/lib/compile.py
@@ -238,7 +238,7 @@
values["tags"] = value
elif key == "retriever_minimum_version":
- values["retriever_minimum_version"] = "\"" + value + "\""
+ values["retriever_minimum_version"] = "\"" + str(value) + "\""
elif key == "resources":
# Array of table objects
| {"golden_diff": "diff --git a/lib/compile.py b/lib/compile.py\n--- a/lib/compile.py\n+++ b/lib/compile.py\n@@ -238,7 +238,7 @@\n values[\"tags\"] = value\n \n elif key == \"retriever_minimum_version\":\n- values[\"retriever_minimum_version\"] = \"\\\"\" + value + \"\\\"\"\n+ values[\"retriever_minimum_version\"] = \"\\\"\" + str(value) + \"\\\"\"\n \n elif key == \"resources\":\n # Array of table objects\n", "issue": "Stop bad scripts from causing errors\nCurrently when `compile_json` gets run if something goes wrong the retriever errors out even on commands not running the script (e.g., `retriever ls`). What it should do is ignore the bad script and possibly report back that there is an issue with the script but keep running normally otherwise.\n\n", "before_files": [{"content": "from builtins import str\nimport json\nimport sys\nif sys.version_info[0] < 3:\n from codecs import open\n\nscript_templates = {\n \"default\": \"\"\"#retriever\nfrom retriever.lib.templates import BasicTextTemplate\nfrom retriever.lib.models import Table, Cleanup, correct_invalid_value\n\nSCRIPT = BasicTextTemplate(%s)\"\"\",\n\n \"html_table\": \"\"\"#retriever\nfrom retriever.lib.templates import HtmlTableTemplate\nfrom retriever.lib.models import Table, Cleanup, correct_invalid_value\n\nSCRIPT = HtmlTableTemplate(%s)\"\"\",\n}\n\n\ndef compile_script(script_file):\n definition = open(script_file + \".script\", 'r')\n\n values = {}\n urls = {}\n tables = {}\n last_table = \"\"\n replace = []\n keys_to_ignore = [\"template\"]\n\n for line in [line.strip() for line in definition]:\n if line and ':' in line and not line[0] == '#':\n split_line = [a.strip() for a in line.split(\":\")]\n key = split_line[0].lower()\n value = ':'.join(split_line[1:])\n if key == \"table\":\n table_name = value.split(',')[0].strip()\n last_table = table_name\n table_url = ','.join(value.split(',')[1:]).strip()\n urls[table_name] = table_url\n if replace:\n try:\n tables[last_table]\n except:\n tables[table_name] = {'replace_columns': str(replace)}\n elif key == \"*nulls\":\n if last_table:\n nulls = [eval(v) for v in [v.strip()\n for v in value.split(',')]]\n try:\n tables[last_table]\n except KeyError:\n if replace:\n tables[last_table] = {'replace_columns': str(replace)}\n else:\n tables[last_table] = {}\n tables[last_table]['cleanup'] = \"Cleanup(correct_invalid_value, nulls=\" + str(nulls) + \")\"\n elif key == \"replace\":\n replace = [(v.split(',')[0].strip(), v.split(',')[1].strip())\n for v in [v.strip() for v in value.split(';')]]\n elif key == \"tags\":\n values[\"tags\"] = [v.strip() for v in value.split(',')]\n elif key == \"*ct_names\":\n tables[last_table][\"ct_names\"] = [v.strip()\n for v in value.split(',')]\n elif key == \"*column\":\n if last_table:\n vs = [v.strip() for v in value.split(',')]\n column = [(vs[0], (vs[1], vs[2]) if len(vs) > 2 else (vs[1],))]\n try:\n tables[last_table]\n except KeyError:\n tables[last_table] = {}\n\n try:\n tables[last_table]['columns'] += column\n except KeyError:\n tables[last_table]['columns'] = column\n elif key[0] == \"*\":\n # attribute that should be applied to the most recently\n # declared table\n if key[0] == \"*\":\n key = key[1:]\n if last_table:\n try:\n tables[last_table]\n except KeyError:\n tables[last_table] = {}\n\n try:\n e = eval(value)\n except:\n e = str(value)\n\n tables[last_table][key] = \"'\" + str(e) + \"'\"\n else:\n # general script attributes\n values[key] = '\"' + str(value) + '\"'\n\n if 'shortname' not in list(values.keys()):\n try:\n 
values['shortname'] = values['name']\n except:\n pass\n values['urls'] = str(urls)\n\n def get_value(key):\n try:\n return values[key]\n except KeyError:\n return \"\"\n\n table_desc = \"{\"\n for (key, value) in list(tables.items()):\n table_desc += \"'\" + key + \"': Table('\" + key + \"', \"\n table_desc += ','.join([key + \"=\" + str(value)\n for key, value, in list(value.items())])\n table_desc += \"),\"\n if table_desc != '{':\n table_desc = table_desc[:-1]\n table_desc += \"}\"\n\n values['tables'] = table_desc\n\n script_desc = []\n for key, value in list(values.items()):\n if key == \"url\":\n key = \"ref\"\n if key not in keys_to_ignore:\n script_desc.append(key + \"=\" + str(value))\n script_desc = (',\\n' + ' ' * 27).join(script_desc)\n\n if 'template' in list(values.keys()):\n template = values[\"template\"]\n else:\n template = \"default\"\n script_contents = (script_templates[template] % script_desc)\n\n new_script = open(script_file + '.py', 'w')\n new_script.write(script_contents)\n new_script.close()\n\n definition.close()\n\n\ndef add_dialect(table_dict, table):\n \"\"\"\n Reads dialect key of JSON script and extracts key-value pairs to store them\n in python script\n\n Contains properties such 'nulls', delimiter', etc\n \"\"\"\n for (key, val) in table['dialect'].items():\n # dialect related key-value pairs\n # copied as is\n if key == \"nulls\":\n table_dict[\n 'cleanup'] = \"Cleanup(correct_invalid_value, nulls=\" + str(val) + \")\"\n\n elif key == \"delimiter\":\n table_dict[key] = \"'\" + str(val) + \"'\"\n else:\n table_dict[key] = val\n\n\ndef add_schema(table_dict, table):\n \"\"\"\n Reads schema key of JSON script and extracts values to store them in\n python script\n\n Contains properties related to table schema, such as 'fields' and cross-tab\n column name ('ct_column').\n \"\"\"\n for (key, val) in table['schema'].items():\n # schema related key-value pairs\n\n if key == \"fields\":\n # fields = columns of the table\n\n # list of column tuples\n column_list = []\n for obj in val:\n # fields is a collection of JSON objects\n # (similar to a list of dicts in python)\n\n if \"size\" in obj:\n column_list.append((obj[\"name\"],\n (obj[\"type\"], obj[\"size\"])))\n else:\n column_list.append((obj[\"name\"],\n (obj[\"type\"],)))\n\n table_dict[\"columns\"] = column_list\n\n elif key == \"ct_column\":\n table_dict[key] = \"'\" + val + \"'\"\n\n else:\n table_dict[key] = val\n\n\ndef compile_json(json_file):\n \"\"\"\n Function to compile JSON script files to python scripts\n The scripts are created with `retriever create_json <script_name` using\n command line\n \"\"\"\n json_object = json.load(open(json_file + \".json\", \"r\"))\n\n if \"retriever\" not in json_object.keys():\n # Compile only files that have retriever key\n return\n\n values = {}\n values['urls'] = {}\n\n keys_to_ignore = [\"template\"]\n\n for (key, value) in json_object.items():\n\n if key == \"title\":\n values[\"name\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"name\":\n values[\"shortname\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"description\":\n values[\"description\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"homepage\":\n values[\"ref\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"citation\":\n values[\"citation\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"keywords\":\n values[\"tags\"] = value\n\n elif key == \"retriever_minimum_version\":\n values[\"retriever_minimum_version\"] = \"\\\"\" + value + \"\\\"\"\n\n elif key == 
\"resources\":\n # Array of table objects\n tables = {}\n for table in value:\n # Maintain a dict for table keys and values\n table_dict = {}\n\n try:\n values['urls'][table['name']] = table['url']\n except Exception as e:\n print(e, \"\\nError in reading table: \" + table)\n continue\n\n if table[\"schema\"] == {} and table[\"dialect\"] == {}:\n continue\n\n for (t_key, t_val) in table.items():\n\n if t_key == \"dialect\":\n add_dialect(table_dict, table)\n\n elif t_key == \"schema\":\n add_schema(table_dict, table)\n\n tables[table[\"name\"]] = table_dict\n\n else:\n values[key] = value\n\n # Create a Table object string using the tables dict\n table_desc = \"{\"\n for (key, value) in tables.items():\n table_desc += \"'\" + key + \"': Table('\" + key + \"', \"\n table_desc += ','.join([key + \"=\" + str(value)\n for key, value, in value.items()])\n table_desc += \"),\"\n if table_desc != '{':\n table_desc = table_desc[:-1]\n table_desc += \"}\"\n\n values[\"tables\"] = table_desc\n\n script_desc = []\n for key, value in values.items():\n if key not in keys_to_ignore:\n script_desc.append(key + \"=\" + str(value))\n script_desc = (',\\n' + ' ' * 27).join(script_desc)\n\n if 'template' in values.keys():\n template = values[\"template\"]\n else:\n template = \"default\"\n script_contents = (script_templates[template] % script_desc)\n\n new_script = open(json_file + '.py', 'w', encoding='utf-8')\n new_script.write('# -*- coding: latin-1 -*-\\n')\n new_script.write(script_contents)\n new_script.close()\n", "path": "lib/compile.py"}]} | 3,525 | 112 |
gh_patches_debug_20304 | rasdani/github-patches | git_diff | frappe__hrms-1583 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IFSC Code showing wrong value in Bank Remittance Report
### Information about bug
IFSC Code showing wrong value in Bank Remittance Report. It is showing the same IFSC Code for all the employees in the list.
### Module
Payroll
### Version
ERPNext: v14.52.1 (HEAD)
Frappe Framework: v14.57.0 (HEAD)
Frappe HR: v14.18.1 (HEAD)
### Installation method
FrappeCloud
### Relevant log output / Stack trace / Full Error Message.
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
</issue>
<code>
[start of hrms/payroll/report/bank_remittance/bank_remittance.py]
1 # Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors
2 # For license information, please see license.txt
3
4
5 import frappe
6 from frappe import _, get_all
7
8
9 def execute(filters=None):
10 columns = [
11 {
12 "label": _("Payroll Number"),
13 "fieldtype": "Link",
14 "fieldname": "payroll_no",
15 "options": "Payroll Entry",
16 "width": 150,
17 },
18 {
19 "label": _("Debit A/C Number"),
20 "fieldtype": "Int",
21 "fieldname": "debit_account",
22 "hidden": 1,
23 "width": 200,
24 },
25 {"label": _("Payment Date"), "fieldtype": "Data", "fieldname": "payment_date", "width": 100},
26 {
27 "label": _("Employee Name"),
28 "fieldtype": "Link",
29 "fieldname": "employee_name",
30 "options": "Employee",
31 "width": 200,
32 },
33 {"label": _("Bank Name"), "fieldtype": "Data", "fieldname": "bank_name", "width": 50},
34 {
35 "label": _("Employee A/C Number"),
36 "fieldtype": "Int",
37 "fieldname": "employee_account_no",
38 "width": 50,
39 },
40 ]
41
42 if frappe.db.has_column("Employee", "ifsc_code"):
43 columns.append(
44 {"label": _("IFSC Code"), "fieldtype": "Data", "fieldname": "bank_code", "width": 100}
45 )
46
47 columns += [
48 {"label": _("Currency"), "fieldtype": "Data", "fieldname": "currency", "width": 50},
49 {
50 "label": _("Net Salary Amount"),
51 "fieldtype": "Currency",
52 "options": "currency",
53 "fieldname": "amount",
54 "width": 100,
55 },
56 ]
57
58 data = []
59
60 accounts = get_bank_accounts()
61 payroll_entries = get_payroll_entries(accounts, filters)
62 salary_slips = get_salary_slips(payroll_entries)
63
64 if frappe.db.has_column("Employee", "ifsc_code"):
65 get_emp_bank_ifsc_code(salary_slips)
66
67 for salary in salary_slips:
68 if (
69 salary.bank_name
70 and salary.bank_account_no
71 and salary.debit_acc_no
72 and salary.status in ["Submitted", "Paid"]
73 ):
74 row = {
75 "payroll_no": salary.payroll_entry,
76 "debit_account": salary.debit_acc_no,
77 "payment_date": frappe.utils.formatdate(salary.modified.strftime("%Y-%m-%d")),
78 "bank_name": salary.bank_name,
79 "employee_account_no": salary.bank_account_no,
80 "bank_code": salary.ifsc_code,
81 "employee_name": salary.employee + ": " + salary.employee_name,
82 "currency": frappe.get_cached_value("Company", filters.company, "default_currency"),
83 "amount": salary.net_pay,
84 }
85 data.append(row)
86
87 return columns, data
88
89
90 def get_bank_accounts():
91 accounts = [d.name for d in get_all("Account", filters={"account_type": "Bank"})]
92 return accounts
93
94
95 def get_payroll_entries(accounts, filters):
96 payroll_filter = [
97 ("payment_account", "IN", accounts),
98 ("number_of_employees", ">", 0),
99 ("Company", "=", filters.company),
100 ]
101 if filters.to_date:
102 payroll_filter.append(("posting_date", "<", filters.to_date))
103
104 if filters.from_date:
105 payroll_filter.append(("posting_date", ">", filters.from_date))
106
107 entries = get_all("Payroll Entry", payroll_filter, ["name", "payment_account"])
108
109 payment_accounts = [d.payment_account for d in entries]
110 entries = set_company_account(payment_accounts, entries)
111 return entries
112
113
114 def get_salary_slips(payroll_entries):
115 payroll = [d.name for d in payroll_entries]
116 salary_slips = get_all(
117 "Salary Slip",
118 filters=[("payroll_entry", "IN", payroll)],
119 fields=[
120 "modified",
121 "net_pay",
122 "bank_name",
123 "bank_account_no",
124 "payroll_entry",
125 "employee",
126 "employee_name",
127 "status",
128 ],
129 )
130
131 payroll_entry_map = {}
132 for entry in payroll_entries:
133 payroll_entry_map[entry.name] = entry
134
135 # appending company debit accounts
136 for slip in salary_slips:
137 if slip.payroll_entry:
138 slip["debit_acc_no"] = payroll_entry_map[slip.payroll_entry]["company_account"]
139 else:
140 slip["debit_acc_no"] = None
141
142 return salary_slips
143
144
145 def get_emp_bank_ifsc_code(salary_slips):
146 emp_names = [d.employee for d in salary_slips]
147 ifsc_codes = get_all("Employee", [("name", "IN", emp_names)], ["ifsc_code", "name"])
148
149 ifsc_codes_map = {}
150 for code in ifsc_codes:
151 ifsc_codes_map[code.name] = code
152
153 for slip in salary_slips:
154 slip["ifsc_code"] = ifsc_codes_map[code.name]["ifsc_code"]
155
156 return salary_slips
157
158
159 def set_company_account(payment_accounts, payroll_entries):
160 company_accounts = get_all(
161 "Bank Account", [("account", "in", payment_accounts)], ["account", "bank_account_no"]
162 )
163 company_accounts_map = {}
164 for acc in company_accounts:
165 company_accounts_map[acc.account] = acc
166
167 for entry in payroll_entries:
168 company_account = ""
169 if entry.payment_account in company_accounts_map:
170 company_account = company_accounts_map[entry.payment_account]["bank_account_no"]
171 entry["company_account"] = company_account
172
173 return payroll_entries
174
[end of hrms/payroll/report/bank_remittance/bank_remittance.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hrms/payroll/report/bank_remittance/bank_remittance.py b/hrms/payroll/report/bank_remittance/bank_remittance.py
--- a/hrms/payroll/report/bank_remittance/bank_remittance.py
+++ b/hrms/payroll/report/bank_remittance/bank_remittance.py
@@ -22,7 +22,12 @@
"hidden": 1,
"width": 200,
},
- {"label": _("Payment Date"), "fieldtype": "Data", "fieldname": "payment_date", "width": 100},
+ {
+ "label": _("Payment Date"),
+ "fieldtype": "Data",
+ "fieldname": "payment_date",
+ "width": 100,
+ },
{
"label": _("Employee Name"),
"fieldtype": "Link",
@@ -146,12 +151,10 @@
emp_names = [d.employee for d in salary_slips]
ifsc_codes = get_all("Employee", [("name", "IN", emp_names)], ["ifsc_code", "name"])
- ifsc_codes_map = {}
- for code in ifsc_codes:
- ifsc_codes_map[code.name] = code
+ ifsc_codes_map = {code.name: code.ifsc_code for code in ifsc_codes}
for slip in salary_slips:
- slip["ifsc_code"] = ifsc_codes_map[code.name]["ifsc_code"]
+ slip["ifsc_code"] = ifsc_codes_map[slip.employee]
return salary_slips
| {"golden_diff": "diff --git a/hrms/payroll/report/bank_remittance/bank_remittance.py b/hrms/payroll/report/bank_remittance/bank_remittance.py\n--- a/hrms/payroll/report/bank_remittance/bank_remittance.py\n+++ b/hrms/payroll/report/bank_remittance/bank_remittance.py\n@@ -22,7 +22,12 @@\n \t\t\t\"hidden\": 1,\n \t\t\t\"width\": 200,\n \t\t},\n-\t\t{\"label\": _(\"Payment Date\"), \"fieldtype\": \"Data\", \"fieldname\": \"payment_date\", \"width\": 100},\n+\t\t{\n+\t\t\t\"label\": _(\"Payment Date\"),\n+\t\t\t\"fieldtype\": \"Data\",\n+\t\t\t\"fieldname\": \"payment_date\",\n+\t\t\t\"width\": 100,\n+\t\t},\n \t\t{\n \t\t\t\"label\": _(\"Employee Name\"),\n \t\t\t\"fieldtype\": \"Link\",\n@@ -146,12 +151,10 @@\n \temp_names = [d.employee for d in salary_slips]\n \tifsc_codes = get_all(\"Employee\", [(\"name\", \"IN\", emp_names)], [\"ifsc_code\", \"name\"])\n \n-\tifsc_codes_map = {}\n-\tfor code in ifsc_codes:\n-\t\tifsc_codes_map[code.name] = code\n+\tifsc_codes_map = {code.name: code.ifsc_code for code in ifsc_codes}\n \n \tfor slip in salary_slips:\n-\t\tslip[\"ifsc_code\"] = ifsc_codes_map[code.name][\"ifsc_code\"]\n+\t\tslip[\"ifsc_code\"] = ifsc_codes_map[slip.employee]\n \n \treturn salary_slips\n", "issue": "IFSC Code showing wrong value in Bank Remittance Report\n### Information about bug\n\nIFSC Code showing wrong value in Bank Remittance Report. It is showing the same IFSC Code for all the employee in the list.\n\n### Module\n\nPayroll\n\n### Version\n\nERPNext: v14.52.1 (HEAD)\r\nFrappe Framework: v14.57.0 (HEAD)\r\nFrappe HR: v14.18.1 (HEAD)\n\n### Installation method\n\nFrappeCloud\n\n### Relevant log output / Stack trace / Full Error Message.\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. 
and contributors\n# For license information, please see license.txt\n\n\nimport frappe\nfrom frappe import _, get_all\n\n\ndef execute(filters=None):\n\tcolumns = [\n\t\t{\n\t\t\t\"label\": _(\"Payroll Number\"),\n\t\t\t\"fieldtype\": \"Link\",\n\t\t\t\"fieldname\": \"payroll_no\",\n\t\t\t\"options\": \"Payroll Entry\",\n\t\t\t\"width\": 150,\n\t\t},\n\t\t{\n\t\t\t\"label\": _(\"Debit A/C Number\"),\n\t\t\t\"fieldtype\": \"Int\",\n\t\t\t\"fieldname\": \"debit_account\",\n\t\t\t\"hidden\": 1,\n\t\t\t\"width\": 200,\n\t\t},\n\t\t{\"label\": _(\"Payment Date\"), \"fieldtype\": \"Data\", \"fieldname\": \"payment_date\", \"width\": 100},\n\t\t{\n\t\t\t\"label\": _(\"Employee Name\"),\n\t\t\t\"fieldtype\": \"Link\",\n\t\t\t\"fieldname\": \"employee_name\",\n\t\t\t\"options\": \"Employee\",\n\t\t\t\"width\": 200,\n\t\t},\n\t\t{\"label\": _(\"Bank Name\"), \"fieldtype\": \"Data\", \"fieldname\": \"bank_name\", \"width\": 50},\n\t\t{\n\t\t\t\"label\": _(\"Employee A/C Number\"),\n\t\t\t\"fieldtype\": \"Int\",\n\t\t\t\"fieldname\": \"employee_account_no\",\n\t\t\t\"width\": 50,\n\t\t},\n\t]\n\n\tif frappe.db.has_column(\"Employee\", \"ifsc_code\"):\n\t\tcolumns.append(\n\t\t\t{\"label\": _(\"IFSC Code\"), \"fieldtype\": \"Data\", \"fieldname\": \"bank_code\", \"width\": 100}\n\t\t)\n\n\tcolumns += [\n\t\t{\"label\": _(\"Currency\"), \"fieldtype\": \"Data\", \"fieldname\": \"currency\", \"width\": 50},\n\t\t{\n\t\t\t\"label\": _(\"Net Salary Amount\"),\n\t\t\t\"fieldtype\": \"Currency\",\n\t\t\t\"options\": \"currency\",\n\t\t\t\"fieldname\": \"amount\",\n\t\t\t\"width\": 100,\n\t\t},\n\t]\n\n\tdata = []\n\n\taccounts = get_bank_accounts()\n\tpayroll_entries = get_payroll_entries(accounts, filters)\n\tsalary_slips = get_salary_slips(payroll_entries)\n\n\tif frappe.db.has_column(\"Employee\", \"ifsc_code\"):\n\t\tget_emp_bank_ifsc_code(salary_slips)\n\n\tfor salary in salary_slips:\n\t\tif (\n\t\t\tsalary.bank_name\n\t\t\tand salary.bank_account_no\n\t\t\tand salary.debit_acc_no\n\t\t\tand salary.status in [\"Submitted\", \"Paid\"]\n\t\t):\n\t\t\trow = {\n\t\t\t\t\"payroll_no\": salary.payroll_entry,\n\t\t\t\t\"debit_account\": salary.debit_acc_no,\n\t\t\t\t\"payment_date\": frappe.utils.formatdate(salary.modified.strftime(\"%Y-%m-%d\")),\n\t\t\t\t\"bank_name\": salary.bank_name,\n\t\t\t\t\"employee_account_no\": salary.bank_account_no,\n\t\t\t\t\"bank_code\": salary.ifsc_code,\n\t\t\t\t\"employee_name\": salary.employee + \": \" + salary.employee_name,\n\t\t\t\t\"currency\": frappe.get_cached_value(\"Company\", filters.company, \"default_currency\"),\n\t\t\t\t\"amount\": salary.net_pay,\n\t\t\t}\n\t\t\tdata.append(row)\n\n\treturn columns, data\n\n\ndef get_bank_accounts():\n\taccounts = [d.name for d in get_all(\"Account\", filters={\"account_type\": \"Bank\"})]\n\treturn accounts\n\n\ndef get_payroll_entries(accounts, filters):\n\tpayroll_filter = [\n\t\t(\"payment_account\", \"IN\", accounts),\n\t\t(\"number_of_employees\", \">\", 0),\n\t\t(\"Company\", \"=\", filters.company),\n\t]\n\tif filters.to_date:\n\t\tpayroll_filter.append((\"posting_date\", \"<\", filters.to_date))\n\n\tif filters.from_date:\n\t\tpayroll_filter.append((\"posting_date\", \">\", filters.from_date))\n\n\tentries = get_all(\"Payroll Entry\", payroll_filter, [\"name\", \"payment_account\"])\n\n\tpayment_accounts = [d.payment_account for d in entries]\n\tentries = set_company_account(payment_accounts, entries)\n\treturn entries\n\n\ndef get_salary_slips(payroll_entries):\n\tpayroll = [d.name for d in 
payroll_entries]\n\tsalary_slips = get_all(\n\t\t\"Salary Slip\",\n\t\tfilters=[(\"payroll_entry\", \"IN\", payroll)],\n\t\tfields=[\n\t\t\t\"modified\",\n\t\t\t\"net_pay\",\n\t\t\t\"bank_name\",\n\t\t\t\"bank_account_no\",\n\t\t\t\"payroll_entry\",\n\t\t\t\"employee\",\n\t\t\t\"employee_name\",\n\t\t\t\"status\",\n\t\t],\n\t)\n\n\tpayroll_entry_map = {}\n\tfor entry in payroll_entries:\n\t\tpayroll_entry_map[entry.name] = entry\n\n\t# appending company debit accounts\n\tfor slip in salary_slips:\n\t\tif slip.payroll_entry:\n\t\t\tslip[\"debit_acc_no\"] = payroll_entry_map[slip.payroll_entry][\"company_account\"]\n\t\telse:\n\t\t\tslip[\"debit_acc_no\"] = None\n\n\treturn salary_slips\n\n\ndef get_emp_bank_ifsc_code(salary_slips):\n\temp_names = [d.employee for d in salary_slips]\n\tifsc_codes = get_all(\"Employee\", [(\"name\", \"IN\", emp_names)], [\"ifsc_code\", \"name\"])\n\n\tifsc_codes_map = {}\n\tfor code in ifsc_codes:\n\t\tifsc_codes_map[code.name] = code\n\n\tfor slip in salary_slips:\n\t\tslip[\"ifsc_code\"] = ifsc_codes_map[code.name][\"ifsc_code\"]\n\n\treturn salary_slips\n\n\ndef set_company_account(payment_accounts, payroll_entries):\n\tcompany_accounts = get_all(\n\t\t\"Bank Account\", [(\"account\", \"in\", payment_accounts)], [\"account\", \"bank_account_no\"]\n\t)\n\tcompany_accounts_map = {}\n\tfor acc in company_accounts:\n\t\tcompany_accounts_map[acc.account] = acc\n\n\tfor entry in payroll_entries:\n\t\tcompany_account = \"\"\n\t\tif entry.payment_account in company_accounts_map:\n\t\t\tcompany_account = company_accounts_map[entry.payment_account][\"bank_account_no\"]\n\t\tentry[\"company_account\"] = company_account\n\n\treturn payroll_entries\n", "path": "hrms/payroll/report/bank_remittance/bank_remittance.py"}]} | 2,478 | 365 |
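The golden diff in the record above fixes a loop-variable reuse bug: `ifsc_codes_map[code.name]` was evaluated after the loop finished, so `code` always referred to the last row and every slip received the same IFSC code. The following sketch is not part of the dataset record; it reproduces the pattern with invented employee data (the `SimpleNamespace` stand-ins and sample codes are assumptions, not values from the repository).

```python
from types import SimpleNamespace

# Hypothetical stand-ins for the Frappe query results; only the two fields
# the report reads (name, ifsc_code) are modelled here.
ifsc_codes = [
    SimpleNamespace(name="EMP-0001", ifsc_code="BANK0000101"),
    SimpleNamespace(name="EMP-0002", ifsc_code="BANK0000202"),
]
salary_slips = [{"employee": "EMP-0001"}, {"employee": "EMP-0002"}]

# Buggy pattern from the original report: `code` keeps its value from the
# last loop iteration, so every slip gets the same IFSC code.
ifsc_codes_map = {}
for code in ifsc_codes:
    ifsc_codes_map[code.name] = code
buggy = [ifsc_codes_map[code.name].ifsc_code for _ in salary_slips]
print(buggy)  # ['BANK0000202', 'BANK0000202']

# Fixed pattern from the patch: map employee name -> IFSC code once, then
# look up each slip by its own employee.
ifsc_codes_map = {code.name: code.ifsc_code for code in ifsc_codes}
fixed = [ifsc_codes_map[slip["employee"]] for slip in salary_slips]
print(fixed)  # ['BANK0000101', 'BANK0000202']
```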
gh_patches_debug_6470 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2306 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix enum_value handling for inputs
A clean and easy solution for fixing the broken enum_value handling for inputs
Closes https://github.com/strawberry-graphql/strawberry/issues/2305
Closes https://github.com/strawberry-graphql/strawberry/pull/2203
Closes https://github.com/strawberry-graphql/strawberry/pull/2185
Closes https://github.com/strawberry-graphql/strawberry/pull/2306
@patrick91 sorry for stealing your release file and tests
</issue>
<code>
[start of strawberry/enum.py]
1 import dataclasses
2 from enum import EnumMeta
3 from typing import (
4 Any,
5 Callable,
6 Iterable,
7 List,
8 Mapping,
9 Optional,
10 TypeVar,
11 Union,
12 overload,
13 )
14
15 from strawberry.type import StrawberryType
16
17 from .exceptions import ObjectIsNotAnEnumError
18
19
20 @dataclasses.dataclass
21 class EnumValue:
22 name: str
23 value: Any
24 deprecation_reason: Optional[str] = None
25 directives: Iterable[object] = ()
26 description: Optional[str] = None
27
28
29 @dataclasses.dataclass
30 class EnumDefinition(StrawberryType):
31 wrapped_cls: EnumMeta
32 name: str
33 values: List[EnumValue]
34 description: Optional[str]
35 directives: Iterable[object] = ()
36
37 def __hash__(self) -> int:
38 # TODO: Is this enough for unique-ness?
39 return hash(self.name)
40
41 def copy_with(
42 self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]
43 ) -> Union[StrawberryType, type]:
44 return super().copy_with(type_var_map)
45
46 @property
47 def is_generic(self) -> bool:
48 return False
49
50
51 # TODO: remove duplication of EnumValueDefinition and EnumValue
52 @dataclasses.dataclass
53 class EnumValueDefinition:
54 value: Any
55 deprecation_reason: Optional[str] = None
56 directives: Iterable[object] = ()
57 description: Optional[str] = None
58
59
60 def enum_value(
61 value: Any,
62 deprecation_reason: Optional[str] = None,
63 directives: Iterable[object] = (),
64 description: Optional[str] = None,
65 ) -> EnumValueDefinition:
66 return EnumValueDefinition(
67 value=value,
68 deprecation_reason=deprecation_reason,
69 directives=directives,
70 description=description,
71 )
72
73
74 EnumType = TypeVar("EnumType", bound=EnumMeta)
75
76
77 def _process_enum(
78 cls: EnumType,
79 name: Optional[str] = None,
80 description: Optional[str] = None,
81 directives: Iterable[object] = (),
82 ) -> EnumType:
83 if not isinstance(cls, EnumMeta):
84 raise ObjectIsNotAnEnumError(cls)
85
86 if not name:
87 name = cls.__name__
88
89 description = description
90
91 values = []
92 for item in cls: # type: ignore
93 item_value = item.value
94 item_name = item.name
95 deprecation_reason = None
96 item_directives: Iterable[object] = ()
97 enum_value_description = None
98
99 if isinstance(item_value, EnumValueDefinition):
100 item_directives = item_value.directives
101 enum_value_description = item_value.description
102 deprecation_reason = item_value.deprecation_reason
103 item_value = item_value.value
104
105 value = EnumValue(
106 item_name,
107 item_value,
108 deprecation_reason=deprecation_reason,
109 directives=item_directives,
110 description=enum_value_description,
111 )
112 values.append(value)
113
114 cls._enum_definition = EnumDefinition( # type: ignore
115 wrapped_cls=cls,
116 name=name,
117 values=values,
118 description=description,
119 directives=directives,
120 )
121
122 return cls
123
124
125 @overload
126 def enum(
127 _cls: EnumType,
128 *,
129 name: Optional[str] = None,
130 description: Optional[str] = None,
131 directives: Iterable[object] = ()
132 ) -> EnumType:
133 ...
134
135
136 @overload
137 def enum(
138 _cls: None = None,
139 *,
140 name: Optional[str] = None,
141 description: Optional[str] = None,
142 directives: Iterable[object] = ()
143 ) -> Callable[[EnumType], EnumType]:
144 ...
145
146
147 def enum(
148 _cls: Optional[EnumType] = None,
149 *,
150 name: Optional[str] = None,
151 description: Optional[str] = None,
152 directives: Iterable[object] = ()
153 ) -> Union[EnumType, Callable[[EnumType], EnumType]]:
154 """Registers the enum in the GraphQL type system.
155
156 If name is passed, the name of the GraphQL type will be
157 the value passed of name instead of the Enum class name.
158 """
159
160 def wrap(cls: EnumType) -> EnumType:
161 return _process_enum(cls, name, description, directives=directives)
162
163 if not _cls:
164 return wrap
165
166 return wrap(_cls)
167
[end of strawberry/enum.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/enum.py b/strawberry/enum.py
--- a/strawberry/enum.py
+++ b/strawberry/enum.py
@@ -102,6 +102,11 @@
deprecation_reason = item_value.deprecation_reason
item_value = item_value.value
+ # update _value2member_map_ so that doing `MyEnum.MY_VALUE` and
+ # `MyEnum['MY_VALUE']` both work
+ cls._value2member_map_[item_value] = item
+ cls._member_map_[item_name]._value_ = item_value
+
value = EnumValue(
item_name,
item_value,
| {"golden_diff": "diff --git a/strawberry/enum.py b/strawberry/enum.py\n--- a/strawberry/enum.py\n+++ b/strawberry/enum.py\n@@ -102,6 +102,11 @@\n deprecation_reason = item_value.deprecation_reason\n item_value = item_value.value\n \n+ # update _value2member_map_ so that doing `MyEnum.MY_VALUE` and\n+ # `MyEnum['MY_VALUE']` both work\n+ cls._value2member_map_[item_value] = item\n+ cls._member_map_[item_name]._value_ = item_value\n+\n value = EnumValue(\n item_name,\n item_value,\n", "issue": "Fix enum_value handling for inputs\nA clean and easy solution for fixing the broken enum_value handling for inputs\r\n\r\nCloses https://github.com/strawberry-graphql/strawberry/issues/2305\r\nCloses https://github.com/strawberry-graphql/strawberry/pull/2203\r\nCloses https://github.com/strawberry-graphql/strawberry/pull/2185\r\nCloses https://github.com/strawberry-graphql/strawberry/pull/2306\r\n\r\n@patrick91 sorry for stealing your release file and tests\n", "before_files": [{"content": "import dataclasses\nfrom enum import EnumMeta\nfrom typing import (\n Any,\n Callable,\n Iterable,\n List,\n Mapping,\n Optional,\n TypeVar,\n Union,\n overload,\n)\n\nfrom strawberry.type import StrawberryType\n\nfrom .exceptions import ObjectIsNotAnEnumError\n\n\[email protected]\nclass EnumValue:\n name: str\n value: Any\n deprecation_reason: Optional[str] = None\n directives: Iterable[object] = ()\n description: Optional[str] = None\n\n\[email protected]\nclass EnumDefinition(StrawberryType):\n wrapped_cls: EnumMeta\n name: str\n values: List[EnumValue]\n description: Optional[str]\n directives: Iterable[object] = ()\n\n def __hash__(self) -> int:\n # TODO: Is this enough for unique-ness?\n return hash(self.name)\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n return super().copy_with(type_var_map)\n\n @property\n def is_generic(self) -> bool:\n return False\n\n\n# TODO: remove duplication of EnumValueDefinition and EnumValue\[email protected]\nclass EnumValueDefinition:\n value: Any\n deprecation_reason: Optional[str] = None\n directives: Iterable[object] = ()\n description: Optional[str] = None\n\n\ndef enum_value(\n value: Any,\n deprecation_reason: Optional[str] = None,\n directives: Iterable[object] = (),\n description: Optional[str] = None,\n) -> EnumValueDefinition:\n return EnumValueDefinition(\n value=value,\n deprecation_reason=deprecation_reason,\n directives=directives,\n description=description,\n )\n\n\nEnumType = TypeVar(\"EnumType\", bound=EnumMeta)\n\n\ndef _process_enum(\n cls: EnumType,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = (),\n) -> EnumType:\n if not isinstance(cls, EnumMeta):\n raise ObjectIsNotAnEnumError(cls)\n\n if not name:\n name = cls.__name__\n\n description = description\n\n values = []\n for item in cls: # type: ignore\n item_value = item.value\n item_name = item.name\n deprecation_reason = None\n item_directives: Iterable[object] = ()\n enum_value_description = None\n\n if isinstance(item_value, EnumValueDefinition):\n item_directives = item_value.directives\n enum_value_description = item_value.description\n deprecation_reason = item_value.deprecation_reason\n item_value = item_value.value\n\n value = EnumValue(\n item_name,\n item_value,\n deprecation_reason=deprecation_reason,\n directives=item_directives,\n description=enum_value_description,\n )\n values.append(value)\n\n cls._enum_definition = EnumDefinition( # type: ignore\n 
wrapped_cls=cls,\n name=name,\n values=values,\n description=description,\n directives=directives,\n )\n\n return cls\n\n\n@overload\ndef enum(\n _cls: EnumType,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> EnumType:\n ...\n\n\n@overload\ndef enum(\n _cls: None = None,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> Callable[[EnumType], EnumType]:\n ...\n\n\ndef enum(\n _cls: Optional[EnumType] = None,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> Union[EnumType, Callable[[EnumType], EnumType]]:\n \"\"\"Registers the enum in the GraphQL type system.\n\n If name is passed, the name of the GraphQL type will be\n the value passed of name instead of the Enum class name.\n \"\"\"\n\n def wrap(cls: EnumType) -> EnumType:\n return _process_enum(cls, name, description, directives=directives)\n\n if not _cls:\n return wrap\n\n return wrap(_cls)\n", "path": "strawberry/enum.py"}]} | 2,012 | 156 |
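The two lines added by the golden diff in the record above patch the wrapped Enum's internal lookup tables so that a member declared with `strawberry.enum_value(...)` resolves by its replacement value. The sketch below is not part of the dataset record; it shows the same idea with a plain standard-library `Enum` (no Strawberry involved). `_value2member_map_` and `_value_` are CPython implementation details that the patch itself relies on, and the `Color` example is invented for illustration.

```python
from enum import Enum

class Color(Enum):
    RED = "placeholder-red"   # stands in for an EnumValueDefinition wrapper
    BLUE = "blue"

# What the patch does for each member: rebind the member's value and register
# it in the value lookup table so that value-based access works again.
member, new_value = Color.RED, "red"
Color._value2member_map_[new_value] = member
member._value_ = new_value

assert Color("red") is Color.RED
assert Color["RED"].value == "red"
print(Color("red"), Color.RED.value)  # Color.RED red
```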
gh_patches_debug_29962 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-3628 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Portrait mode images generates in landscape mode in img2img [Bug]:
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Image in portrait mode shows up fine in the preview, but when the alternative image is generated it is rotated to landscape mode.
### Steps to reproduce the problem
1. Load a image that was taken using a phone in portrait mode.
2. Set a prompt and press generate.
### What should have happened?
It should have generated the output image in portrait mode as well.
### Commit where the problem happens
6bd6154a92eb05c80d66df661a38f8b70cc13729
### What platforms do you use to access UI ?
Windows
### What browsers do you use to access the UI ?
Microsoft Edge
### Command Line Arguments
```Shell
--xformers
```
### Additional information, context and logs
When images are taken in portrait mode, they are often stored as landscape but carry orientation metadata marking them as portrait so that image viewers can display them correctly; this metadata should be used to determine how the output image is generated.
</issue>
<code>
[start of modules/img2img.py]
1 import math
2 import os
3 import sys
4 import traceback
5
6 import numpy as np
7 from PIL import Image, ImageOps, ImageChops
8
9 from modules import devices
10 from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
11 from modules.shared import opts, state
12 import modules.shared as shared
13 import modules.processing as processing
14 from modules.ui import plaintext_to_html
15 import modules.images as images
16 import modules.scripts
17
18
19 def process_batch(p, input_dir, output_dir, args):
20 processing.fix_seed(p)
21
22 images = [file for file in [os.path.join(input_dir, x) for x in os.listdir(input_dir)] if os.path.isfile(file)]
23
24 print(f"Will process {len(images)} images, creating {p.n_iter * p.batch_size} new images for each.")
25
26 save_normally = output_dir == ''
27
28 p.do_not_save_grid = True
29 p.do_not_save_samples = not save_normally
30
31 state.job_count = len(images) * p.n_iter
32
33 for i, image in enumerate(images):
34 state.job = f"{i+1} out of {len(images)}"
35 if state.skipped:
36 state.skipped = False
37
38 if state.interrupted:
39 break
40
41 img = Image.open(image)
42 p.init_images = [img] * p.batch_size
43
44 proc = modules.scripts.scripts_img2img.run(p, *args)
45 if proc is None:
46 proc = process_images(p)
47
48 for n, processed_image in enumerate(proc.images):
49 filename = os.path.basename(image)
50
51 if n > 0:
52 left, right = os.path.splitext(filename)
53 filename = f"{left}-{n}{right}"
54
55 if not save_normally:
56 processed_image.save(os.path.join(output_dir, filename))
57
58
59 def img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, prompt_style2: str, init_img, init_img_with_mask, init_img_inpaint, init_mask_inpaint, mask_mode, steps: int, sampler_index: int, mask_blur: int, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, *args):
60 is_inpaint = mode == 1
61 is_batch = mode == 2
62
63 if is_inpaint:
64 if mask_mode == 0:
65 image = init_img_with_mask['image']
66 mask = init_img_with_mask['mask']
67 alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')
68 mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L')
69 image = image.convert('RGB')
70 else:
71 image = init_img_inpaint
72 mask = init_mask_inpaint
73 else:
74 image = init_img
75 mask = None
76
77 assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'
78
79 p = StableDiffusionProcessingImg2Img(
80 sd_model=shared.sd_model,
81 outpath_samples=opts.outdir_samples or opts.outdir_img2img_samples,
82 outpath_grids=opts.outdir_grids or opts.outdir_img2img_grids,
83 prompt=prompt,
84 negative_prompt=negative_prompt,
85 styles=[prompt_style, prompt_style2],
86 seed=seed,
87 subseed=subseed,
88 subseed_strength=subseed_strength,
89 seed_resize_from_h=seed_resize_from_h,
90 seed_resize_from_w=seed_resize_from_w,
91 seed_enable_extras=seed_enable_extras,
92 sampler_index=sampler_index,
93 batch_size=batch_size,
94 n_iter=n_iter,
95 steps=steps,
96 cfg_scale=cfg_scale,
97 width=width,
98 height=height,
99 restore_faces=restore_faces,
100 tiling=tiling,
101 init_images=[image],
102 mask=mask,
103 mask_blur=mask_blur,
104 inpainting_fill=inpainting_fill,
105 resize_mode=resize_mode,
106 denoising_strength=denoising_strength,
107 inpaint_full_res=inpaint_full_res,
108 inpaint_full_res_padding=inpaint_full_res_padding,
109 inpainting_mask_invert=inpainting_mask_invert,
110 )
111
112 p.scripts = modules.scripts.scripts_txt2img
113 p.script_args = args
114
115 if shared.cmd_opts.enable_console_prompts:
116 print(f"\nimg2img: {prompt}", file=shared.progress_print_out)
117
118 p.extra_generation_params["Mask blur"] = mask_blur
119
120 if is_batch:
121 assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"
122
123 process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, args)
124
125 processed = Processed(p, [], p.seed, "")
126 else:
127 processed = modules.scripts.scripts_img2img.run(p, *args)
128 if processed is None:
129 processed = process_images(p)
130
131 shared.total_tqdm.clear()
132
133 generation_info_js = processed.js()
134 if opts.samples_log_stdout:
135 print(generation_info_js)
136
137 if opts.do_not_show_images:
138 processed.images = []
139
140 return processed.images, generation_info_js, plaintext_to_html(processed.info)
141
[end of modules/img2img.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modules/img2img.py b/modules/img2img.py
--- a/modules/img2img.py
+++ b/modules/img2img.py
@@ -39,6 +39,8 @@
break
img = Image.open(image)
+ # Use the EXIF orientation of photos taken by smartphones.
+ img = ImageOps.exif_transpose(img)
p.init_images = [img] * p.batch_size
proc = modules.scripts.scripts_img2img.run(p, *args)
@@ -61,19 +63,25 @@
is_batch = mode == 2
if is_inpaint:
+ # Drawn mask
if mask_mode == 0:
image = init_img_with_mask['image']
mask = init_img_with_mask['mask']
alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')
mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L')
image = image.convert('RGB')
+ # Uploaded mask
else:
image = init_img_inpaint
mask = init_mask_inpaint
+ # No mask
else:
image = init_img
mask = None
+ # Use the EXIF orientation of photos taken by smartphones.
+ image = ImageOps.exif_transpose(image)
+
assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'
p = StableDiffusionProcessingImg2Img(
| {"golden_diff": "diff --git a/modules/img2img.py b/modules/img2img.py\n--- a/modules/img2img.py\n+++ b/modules/img2img.py\n@@ -39,6 +39,8 @@\n break\r\n \r\n img = Image.open(image)\r\n+ # Use the EXIF orientation of photos taken by smartphones.\r\n+ img = ImageOps.exif_transpose(img) \r\n p.init_images = [img] * p.batch_size\r\n \r\n proc = modules.scripts.scripts_img2img.run(p, *args)\r\n@@ -61,19 +63,25 @@\n is_batch = mode == 2\r\n \r\n if is_inpaint:\r\n+ # Drawn mask\r\n if mask_mode == 0:\r\n image = init_img_with_mask['image']\r\n mask = init_img_with_mask['mask']\r\n alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')\r\n mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L')\r\n image = image.convert('RGB')\r\n+ # Uploaded mask\r\n else:\r\n image = init_img_inpaint\r\n mask = init_mask_inpaint\r\n+ # No mask\r\n else:\r\n image = init_img\r\n mask = None\r\n \r\n+ # Use the EXIF orientation of photos taken by smartphones.\r\n+ image = ImageOps.exif_transpose(image) \r\n+\r\n assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'\r\n \r\n p = StableDiffusionProcessingImg2Img(\n", "issue": "Portrait mode images generates in landscape mode in img2img [Bug]: \n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nImage in portrait mode shows up fine in the preview, but when the alternative image is generated it is rotated to landscape mode.\n\n### Steps to reproduce the problem\n\n1. Load a image that was taken using a phone in portrait mode.\r\n2. Set a prompt and press generate.\r\n\n\n### What should have happened?\n\nIt should have generated the output image in portrait mode as well.\n\n### Commit where the problem happens\n\n6bd6154a92eb05c80d66df661a38f8b70cc13729\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMicrosoft Edge\n\n### Command Line Arguments\n\n```Shell\n--xformers\n```\n\n\n### Additional information, context and logs\n\nWhen images are taken in portrait mode, they are often stored as landscape, but have information that it is portrait so that they can be displayed correctly in image viewers, this should be used to determine how the output image should be generated.\n", "before_files": [{"content": "import math\r\nimport os\r\nimport sys\r\nimport traceback\r\n\r\nimport numpy as np\r\nfrom PIL import Image, ImageOps, ImageChops\r\n\r\nfrom modules import devices\r\nfrom modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images\r\nfrom modules.shared import opts, state\r\nimport modules.shared as shared\r\nimport modules.processing as processing\r\nfrom modules.ui import plaintext_to_html\r\nimport modules.images as images\r\nimport modules.scripts\r\n\r\n\r\ndef process_batch(p, input_dir, output_dir, args):\r\n processing.fix_seed(p)\r\n\r\n images = [file for file in [os.path.join(input_dir, x) for x in os.listdir(input_dir)] if os.path.isfile(file)]\r\n\r\n print(f\"Will process {len(images)} images, creating {p.n_iter * p.batch_size} new images for each.\")\r\n\r\n save_normally = output_dir == ''\r\n\r\n p.do_not_save_grid = True\r\n p.do_not_save_samples = not save_normally\r\n\r\n state.job_count = len(images) * p.n_iter\r\n\r\n for i, image in enumerate(images):\r\n state.job = f\"{i+1} out of {len(images)}\"\r\n if state.skipped:\r\n state.skipped = False\r\n\r\n if 
state.interrupted:\r\n break\r\n\r\n img = Image.open(image)\r\n p.init_images = [img] * p.batch_size\r\n\r\n proc = modules.scripts.scripts_img2img.run(p, *args)\r\n if proc is None:\r\n proc = process_images(p)\r\n\r\n for n, processed_image in enumerate(proc.images):\r\n filename = os.path.basename(image)\r\n\r\n if n > 0:\r\n left, right = os.path.splitext(filename)\r\n filename = f\"{left}-{n}{right}\"\r\n\r\n if not save_normally:\r\n processed_image.save(os.path.join(output_dir, filename))\r\n\r\n\r\ndef img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, prompt_style2: str, init_img, init_img_with_mask, init_img_inpaint, init_mask_inpaint, mask_mode, steps: int, sampler_index: int, mask_blur: int, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, *args):\r\n is_inpaint = mode == 1\r\n is_batch = mode == 2\r\n\r\n if is_inpaint:\r\n if mask_mode == 0:\r\n image = init_img_with_mask['image']\r\n mask = init_img_with_mask['mask']\r\n alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')\r\n mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L')\r\n image = image.convert('RGB')\r\n else:\r\n image = init_img_inpaint\r\n mask = init_mask_inpaint\r\n else:\r\n image = init_img\r\n mask = None\r\n\r\n assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'\r\n\r\n p = StableDiffusionProcessingImg2Img(\r\n sd_model=shared.sd_model,\r\n outpath_samples=opts.outdir_samples or opts.outdir_img2img_samples,\r\n outpath_grids=opts.outdir_grids or opts.outdir_img2img_grids,\r\n prompt=prompt,\r\n negative_prompt=negative_prompt,\r\n styles=[prompt_style, prompt_style2],\r\n seed=seed,\r\n subseed=subseed,\r\n subseed_strength=subseed_strength,\r\n seed_resize_from_h=seed_resize_from_h,\r\n seed_resize_from_w=seed_resize_from_w,\r\n seed_enable_extras=seed_enable_extras,\r\n sampler_index=sampler_index,\r\n batch_size=batch_size,\r\n n_iter=n_iter,\r\n steps=steps,\r\n cfg_scale=cfg_scale,\r\n width=width,\r\n height=height,\r\n restore_faces=restore_faces,\r\n tiling=tiling,\r\n init_images=[image],\r\n mask=mask,\r\n mask_blur=mask_blur,\r\n inpainting_fill=inpainting_fill,\r\n resize_mode=resize_mode,\r\n denoising_strength=denoising_strength,\r\n inpaint_full_res=inpaint_full_res,\r\n inpaint_full_res_padding=inpaint_full_res_padding,\r\n inpainting_mask_invert=inpainting_mask_invert,\r\n )\r\n\r\n p.scripts = modules.scripts.scripts_txt2img\r\n p.script_args = args\r\n\r\n if shared.cmd_opts.enable_console_prompts:\r\n print(f\"\\nimg2img: {prompt}\", file=shared.progress_print_out)\r\n\r\n p.extra_generation_params[\"Mask blur\"] = mask_blur\r\n\r\n if is_batch:\r\n assert not shared.cmd_opts.hide_ui_dir_config, \"Launched with --hide-ui-dir-config, batch img2img disabled\"\r\n\r\n process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, args)\r\n\r\n processed = Processed(p, [], p.seed, \"\")\r\n else:\r\n processed = modules.scripts.scripts_img2img.run(p, *args)\r\n if processed is None:\r\n processed = process_images(p)\r\n\r\n shared.total_tqdm.clear()\r\n\r\n 
generation_info_js = processed.js()\r\n if opts.samples_log_stdout:\r\n print(generation_info_js)\r\n\r\n if opts.do_not_show_images:\r\n processed.images = []\r\n\r\n return processed.images, generation_info_js, plaintext_to_html(processed.info)\r\n", "path": "modules/img2img.py"}]} | 2,395 | 364 |
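The golden diff in the record above fixes the orientation bug by running every input image through Pillow's `ImageOps.exif_transpose` before it reaches the img2img pipeline. The sketch below is not part of the dataset record; it is a minimal usage example, and the file name is a placeholder rather than a file from the repository.

```python
from PIL import Image, ImageOps

# "photo_from_phone.jpg" is a placeholder path; any JPEG taken on a phone in
# portrait orientation will show the effect.
img = Image.open("photo_from_phone.jpg")
print("size as stored on disk:", img.size)   # often landscape plus an EXIF tag

# Apply the EXIF Orientation tag: rotate/flip the pixels and drop the tag,
# which is what the patched img2img code does before building init_images,
# so width/height now match what image viewers display.
img = ImageOps.exif_transpose(img)
print("size after transposition:", img.size)
```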
gh_patches_debug_5348 | rasdani/github-patches | git_diff | localstack__localstack-536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix Java Lambda Handler Static Imports
By mistake, the autoimport functionality imported the wrong static methods. This fixes the issues reported in #534.
It is unrelated to JavaFX, as that was imported by mistake. I will prepare another PR with a few more tests; I don't understand how it could have passed the CI/CD build with the wrong imports.
</issue>
<code>
[start of localstack/constants.py]
1 import os
2 import localstack_client.config
3
4 # LocalStack version
5 VERSION = '0.8.4'
6
7 # default AWS region
8 if 'DEFAULT_REGION' not in os.environ:
9 os.environ['DEFAULT_REGION'] = 'us-east-1'
10 DEFAULT_REGION = os.environ['DEFAULT_REGION']
11
12 # constant to represent the "local" region, i.e., local machine
13 REGION_LOCAL = 'local'
14
15 # dev environment
16 ENV_DEV = 'dev'
17
18 # backend service ports, for services that are behind a proxy (counting down from 4566)
19 DEFAULT_PORT_APIGATEWAY_BACKEND = 4566
20 DEFAULT_PORT_KINESIS_BACKEND = 4565
21 DEFAULT_PORT_DYNAMODB_BACKEND = 4564
22 DEFAULT_PORT_S3_BACKEND = 4563
23 DEFAULT_PORT_SNS_BACKEND = 4562
24 DEFAULT_PORT_SQS_BACKEND = 4561
25 DEFAULT_PORT_ELASTICSEARCH_BACKEND = 4560
26 DEFAULT_PORT_CLOUDFORMATION_BACKEND = 4559
27
28 DEFAULT_PORT_WEB_UI = 8080
29
30 LOCALHOST = 'localhost'
31
32 # version of the Maven dependency with Java utility code
33 LOCALSTACK_MAVEN_VERSION = '0.1.9'
34
35 # map of default service APIs and ports to be spun up (fetch map from localstack_client)
36 DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()
37
38 # host to bind to when starting the services
39 BIND_HOST = '0.0.0.0'
40
41 # AWS user account ID used for tests
42 TEST_AWS_ACCOUNT_ID = '000000000000'
43 os.environ['TEST_AWS_ACCOUNT_ID'] = TEST_AWS_ACCOUNT_ID
44
45 # root code folder
46 LOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))
47
48 # virtualenv folder
49 LOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, '.venv')
50 if not os.path.isdir(LOCALSTACK_VENV_FOLDER):
51 # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/
52 LOCALSTACK_VENV_FOLDER = os.path.realpath(os.path.join(LOCALSTACK_ROOT_FOLDER, '..', '..', '..'))
53
54 # API Gateway path to indicate a user request sent to the gateway
55 PATH_USER_REQUEST = '_user_request_'
56
57 # name of LocalStack Docker image
58 DOCKER_IMAGE_NAME = 'localstack/localstack'
59
60 # environment variable name to tag local test runs
61 ENV_INTERNAL_TEST_RUN = 'LOCALSTACK_INTERNAL_TEST_RUN'
62
63 # content types
64 APPLICATION_AMZ_JSON_1_0 = 'application/x-amz-json-1.0'
65 APPLICATION_AMZ_JSON_1_1 = 'application/x-amz-json-1.1'
66 APPLICATION_JSON = 'application/json'
67
68 # Lambda defaults
69 LAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID
70
71 # installation constants
72 ELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'
73 DYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'
74 ELASTICMQ_JAR_URL = 'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'
75 STS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'
76
77 # API endpoint for analytics events
78 API_ENDPOINT = 'https://api.localstack.cloud/v1'
79
[end of localstack/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/localstack/constants.py b/localstack/constants.py
--- a/localstack/constants.py
+++ b/localstack/constants.py
@@ -30,7 +30,7 @@
LOCALHOST = 'localhost'
# version of the Maven dependency with Java utility code
-LOCALSTACK_MAVEN_VERSION = '0.1.9'
+LOCALSTACK_MAVEN_VERSION = '0.1.10'
# map of default service APIs and ports to be spun up (fetch map from localstack_client)
DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()
| {"golden_diff": "diff --git a/localstack/constants.py b/localstack/constants.py\n--- a/localstack/constants.py\n+++ b/localstack/constants.py\n@@ -30,7 +30,7 @@\n LOCALHOST = 'localhost'\n \n # version of the Maven dependency with Java utility code\n-LOCALSTACK_MAVEN_VERSION = '0.1.9'\n+LOCALSTACK_MAVEN_VERSION = '0.1.10'\n \n # map of default service APIs and ports to be spun up (fetch map from localstack_client)\n DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()\n", "issue": "Fix Java Lambda Handler Static Imports\nBy mistake - autoimport functionality wrong static methods has been imported. This fix the issues reported by in #534 \r\n\r\nIt is unrelated to the JavaFX as that has been imported by mistake. Will prepare another PR with a bit more tests don't understand how it could have passed the CI / CD build with the wrong imports.\r\n\n", "before_files": [{"content": "import os\nimport localstack_client.config\n\n# LocalStack version\nVERSION = '0.8.4'\n\n# default AWS region\nif 'DEFAULT_REGION' not in os.environ:\n os.environ['DEFAULT_REGION'] = 'us-east-1'\nDEFAULT_REGION = os.environ['DEFAULT_REGION']\n\n# constant to represent the \"local\" region, i.e., local machine\nREGION_LOCAL = 'local'\n\n# dev environment\nENV_DEV = 'dev'\n\n# backend service ports, for services that are behind a proxy (counting down from 4566)\nDEFAULT_PORT_APIGATEWAY_BACKEND = 4566\nDEFAULT_PORT_KINESIS_BACKEND = 4565\nDEFAULT_PORT_DYNAMODB_BACKEND = 4564\nDEFAULT_PORT_S3_BACKEND = 4563\nDEFAULT_PORT_SNS_BACKEND = 4562\nDEFAULT_PORT_SQS_BACKEND = 4561\nDEFAULT_PORT_ELASTICSEARCH_BACKEND = 4560\nDEFAULT_PORT_CLOUDFORMATION_BACKEND = 4559\n\nDEFAULT_PORT_WEB_UI = 8080\n\nLOCALHOST = 'localhost'\n\n# version of the Maven dependency with Java utility code\nLOCALSTACK_MAVEN_VERSION = '0.1.9'\n\n# map of default service APIs and ports to be spun up (fetch map from localstack_client)\nDEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()\n\n# host to bind to when starting the services\nBIND_HOST = '0.0.0.0'\n\n# AWS user account ID used for tests\nTEST_AWS_ACCOUNT_ID = '000000000000'\nos.environ['TEST_AWS_ACCOUNT_ID'] = TEST_AWS_ACCOUNT_ID\n\n# root code folder\nLOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))\n\n# virtualenv folder\nLOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, '.venv')\nif not os.path.isdir(LOCALSTACK_VENV_FOLDER):\n # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/\n LOCALSTACK_VENV_FOLDER = os.path.realpath(os.path.join(LOCALSTACK_ROOT_FOLDER, '..', '..', '..'))\n\n# API Gateway path to indicate a user request sent to the gateway\nPATH_USER_REQUEST = '_user_request_'\n\n# name of LocalStack Docker image\nDOCKER_IMAGE_NAME = 'localstack/localstack'\n\n# environment variable name to tag local test runs\nENV_INTERNAL_TEST_RUN = 'LOCALSTACK_INTERNAL_TEST_RUN'\n\n# content types\nAPPLICATION_AMZ_JSON_1_0 = 'application/x-amz-json-1.0'\nAPPLICATION_AMZ_JSON_1_1 = 'application/x-amz-json-1.1'\nAPPLICATION_JSON = 'application/json'\n\n# Lambda defaults\nLAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID\n\n# installation constants\nELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'\nDYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'\nELASTICMQ_JAR_URL = 
'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'\nSTS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'\n\n# API endpoint for analytics events\nAPI_ENDPOINT = 'https://api.localstack.cloud/v1'\n", "path": "localstack/constants.py"}]} | 1,556 | 122 |
gh_patches_debug_4064 | rasdani/github-patches | git_diff | dmlc__dgl-490 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Data missing for JTNN
## ❓ Questions and Help
Hello, I came across the problem that the website where you put your jtnn.zip is missing. Can you offer me a new website?
Thanks!
</issue>
<code>
[start of examples/pytorch/jtnn/jtnn/datautils.py]
1 import torch
2 from torch.utils.data import Dataset
3 import numpy as np
4
5 import dgl
6 from dgl.data.utils import download, extract_archive, get_download_dir
7 from .mol_tree_nx import DGLMolTree
8 from .mol_tree import Vocab
9
10 from .mpn import mol2dgl_single as mol2dgl_enc
11 from .jtmpn import mol2dgl_single as mol2dgl_dec
12 from .jtmpn import ATOM_FDIM as ATOM_FDIM_DEC
13 from .jtmpn import BOND_FDIM as BOND_FDIM_DEC
14
15 _url = 'https://www.dropbox.com/s/4ypr0e0abcbsvoh/jtnn.zip?dl=1'
16
17 def _unpack_field(examples, field):
18 return [e[field] for e in examples]
19
20 def _set_node_id(mol_tree, vocab):
21 wid = []
22 for i, node in enumerate(mol_tree.nodes_dict):
23 mol_tree.nodes_dict[node]['idx'] = i
24 wid.append(vocab.get_index(mol_tree.nodes_dict[node]['smiles']))
25
26 return wid
27
28 class JTNNDataset(Dataset):
29 def __init__(self, data, vocab, training=True):
30 self.dir = get_download_dir()
31 self.zip_file_path='{}/jtnn.zip'.format(self.dir)
32 download(_url, path=self.zip_file_path)
33 extract_archive(self.zip_file_path, '{}/jtnn'.format(self.dir))
34 print('Loading data...')
35 data_file = '{}/jtnn/{}.txt'.format(self.dir, data)
36 with open(data_file) as f:
37 self.data = [line.strip("\r\n ").split()[0] for line in f]
38 self.vocab_file = '{}/jtnn/{}.txt'.format(self.dir, vocab)
39 print('Loading finished.')
40 print('\tNum samples:', len(self.data))
41 print('\tVocab file:', self.vocab_file)
42 self.training = training
43 self.vocab = Vocab([x.strip("\r\n ") for x in open(self.vocab_file)])
44
45 def __len__(self):
46 return len(self.data)
47
48 def __getitem__(self, idx):
49 smiles = self.data[idx]
50 mol_tree = DGLMolTree(smiles)
51 mol_tree.recover()
52 mol_tree.assemble()
53
54 wid = _set_node_id(mol_tree, self.vocab)
55
56 # prebuild the molecule graph
57 mol_graph, atom_x_enc, bond_x_enc = mol2dgl_enc(mol_tree.smiles)
58
59 result = {
60 'mol_tree': mol_tree,
61 'mol_graph': mol_graph,
62 'atom_x_enc': atom_x_enc,
63 'bond_x_enc': bond_x_enc,
64 'wid': wid,
65 }
66
67 if not self.training:
68 return result
69
70 # prebuild the candidate graph list
71 cands = []
72 for node_id, node in mol_tree.nodes_dict.items():
73 # fill in ground truth
74 if node['label'] not in node['cands']:
75 node['cands'].append(node['label'])
76 node['cand_mols'].append(node['label_mol'])
77
78 if node['is_leaf'] or len(node['cands']) == 1:
79 continue
80 cands.extend([(cand, mol_tree, node_id)
81 for cand in node['cand_mols']])
82 if len(cands) > 0:
83 cand_graphs, atom_x_dec, bond_x_dec, tree_mess_src_e, \
84 tree_mess_tgt_e, tree_mess_tgt_n = mol2dgl_dec(cands)
85 else:
86 cand_graphs = []
87 atom_x_dec = torch.zeros(0, ATOM_FDIM_DEC)
88 bond_x_dec = torch.zeros(0, BOND_FDIM_DEC)
89 tree_mess_src_e = torch.zeros(0, 2).long()
90 tree_mess_tgt_e = torch.zeros(0, 2).long()
91 tree_mess_tgt_n = torch.zeros(0).long()
92
93 # prebuild the stereoisomers
94 cands = mol_tree.stereo_cands
95 if len(cands) > 1:
96 if mol_tree.smiles3D not in cands:
97 cands.append(mol_tree.smiles3D)
98
99 stereo_graphs = [mol2dgl_enc(c) for c in cands]
100 stereo_cand_graphs, stereo_atom_x_enc, stereo_bond_x_enc = \
101 zip(*stereo_graphs)
102 stereo_atom_x_enc = torch.cat(stereo_atom_x_enc)
103 stereo_bond_x_enc = torch.cat(stereo_bond_x_enc)
104 stereo_cand_label = [(cands.index(mol_tree.smiles3D), len(cands))]
105 else:
106 stereo_cand_graphs = []
107 stereo_atom_x_enc = torch.zeros(0, atom_x_enc.shape[1])
108 stereo_bond_x_enc = torch.zeros(0, bond_x_enc.shape[1])
109 stereo_cand_label = []
110
111 result.update({
112 'cand_graphs': cand_graphs,
113 'atom_x_dec': atom_x_dec,
114 'bond_x_dec': bond_x_dec,
115 'tree_mess_src_e': tree_mess_src_e,
116 'tree_mess_tgt_e': tree_mess_tgt_e,
117 'tree_mess_tgt_n': tree_mess_tgt_n,
118 'stereo_cand_graphs': stereo_cand_graphs,
119 'stereo_atom_x_enc': stereo_atom_x_enc,
120 'stereo_bond_x_enc': stereo_bond_x_enc,
121 'stereo_cand_label': stereo_cand_label,
122 })
123
124 return result
125
126 class JTNNCollator(object):
127 def __init__(self, vocab, training):
128 self.vocab = vocab
129 self.training = training
130
131 @staticmethod
132 def _batch_and_set(graphs, atom_x, bond_x, flatten):
133 if flatten:
134 graphs = [g for f in graphs for g in f]
135 graph_batch = dgl.batch(graphs)
136 graph_batch.ndata['x'] = atom_x
137 graph_batch.edata.update({
138 'x': bond_x,
139 'src_x': atom_x.new(bond_x.shape[0], atom_x.shape[1]).zero_(),
140 })
141 return graph_batch
142
143 def __call__(self, examples):
144 # get list of trees
145 mol_trees = _unpack_field(examples, 'mol_tree')
146 wid = _unpack_field(examples, 'wid')
147 for _wid, mol_tree in zip(wid, mol_trees):
148 mol_tree.ndata['wid'] = torch.LongTensor(_wid)
149
150 # TODO: either support pickling or get around ctypes pointers using scipy
151 # batch molecule graphs
152 mol_graphs = _unpack_field(examples, 'mol_graph')
153 atom_x = torch.cat(_unpack_field(examples, 'atom_x_enc'))
154 bond_x = torch.cat(_unpack_field(examples, 'bond_x_enc'))
155 mol_graph_batch = self._batch_and_set(mol_graphs, atom_x, bond_x, False)
156
157 result = {
158 'mol_trees': mol_trees,
159 'mol_graph_batch': mol_graph_batch,
160 }
161
162 if not self.training:
163 return result
164
165 # batch candidate graphs
166 cand_graphs = _unpack_field(examples, 'cand_graphs')
167 cand_batch_idx = []
168 atom_x = torch.cat(_unpack_field(examples, 'atom_x_dec'))
169 bond_x = torch.cat(_unpack_field(examples, 'bond_x_dec'))
170 tree_mess_src_e = _unpack_field(examples, 'tree_mess_src_e')
171 tree_mess_tgt_e = _unpack_field(examples, 'tree_mess_tgt_e')
172 tree_mess_tgt_n = _unpack_field(examples, 'tree_mess_tgt_n')
173
174 n_graph_nodes = 0
175 n_tree_nodes = 0
176 for i in range(len(cand_graphs)):
177 tree_mess_tgt_e[i] += n_graph_nodes
178 tree_mess_src_e[i] += n_tree_nodes
179 tree_mess_tgt_n[i] += n_graph_nodes
180 n_graph_nodes += sum(g.number_of_nodes() for g in cand_graphs[i])
181 n_tree_nodes += mol_trees[i].number_of_nodes()
182 cand_batch_idx.extend([i] * len(cand_graphs[i]))
183 tree_mess_tgt_e = torch.cat(tree_mess_tgt_e)
184 tree_mess_src_e = torch.cat(tree_mess_src_e)
185 tree_mess_tgt_n = torch.cat(tree_mess_tgt_n)
186
187 cand_graph_batch = self._batch_and_set(cand_graphs, atom_x, bond_x, True)
188
189 # batch stereoisomers
190 stereo_cand_graphs = _unpack_field(examples, 'stereo_cand_graphs')
191 atom_x = torch.cat(_unpack_field(examples, 'stereo_atom_x_enc'))
192 bond_x = torch.cat(_unpack_field(examples, 'stereo_bond_x_enc'))
193 stereo_cand_batch_idx = []
194 for i in range(len(stereo_cand_graphs)):
195 stereo_cand_batch_idx.extend([i] * len(stereo_cand_graphs[i]))
196
197 if len(stereo_cand_batch_idx) > 0:
198 stereo_cand_labels = [
199 (label, length)
200 for ex in _unpack_field(examples, 'stereo_cand_label')
201 for label, length in ex
202 ]
203 stereo_cand_labels, stereo_cand_lengths = zip(*stereo_cand_labels)
204 stereo_cand_graph_batch = self._batch_and_set(
205 stereo_cand_graphs, atom_x, bond_x, True)
206 else:
207 stereo_cand_labels = []
208 stereo_cand_lengths = []
209 stereo_cand_graph_batch = None
210 stereo_cand_batch_idx = []
211
212 result.update({
213 'cand_graph_batch': cand_graph_batch,
214 'cand_batch_idx': cand_batch_idx,
215 'tree_mess_tgt_e': tree_mess_tgt_e,
216 'tree_mess_src_e': tree_mess_src_e,
217 'tree_mess_tgt_n': tree_mess_tgt_n,
218 'stereo_cand_graph_batch': stereo_cand_graph_batch,
219 'stereo_cand_batch_idx': stereo_cand_batch_idx,
220 'stereo_cand_labels': stereo_cand_labels,
221 'stereo_cand_lengths': stereo_cand_lengths,
222 })
223
224 return result
225
[end of examples/pytorch/jtnn/jtnn/datautils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/pytorch/jtnn/jtnn/datautils.py b/examples/pytorch/jtnn/jtnn/datautils.py
--- a/examples/pytorch/jtnn/jtnn/datautils.py
+++ b/examples/pytorch/jtnn/jtnn/datautils.py
@@ -12,7 +12,7 @@
from .jtmpn import ATOM_FDIM as ATOM_FDIM_DEC
from .jtmpn import BOND_FDIM as BOND_FDIM_DEC
-_url = 'https://www.dropbox.com/s/4ypr0e0abcbsvoh/jtnn.zip?dl=1'
+_url = 'https://s3-ap-southeast-1.amazonaws.com/dgl-data-cn/dataset/jtnn.zip'
def _unpack_field(examples, field):
return [e[field] for e in examples]
| {"golden_diff": "diff --git a/examples/pytorch/jtnn/jtnn/datautils.py b/examples/pytorch/jtnn/jtnn/datautils.py\n--- a/examples/pytorch/jtnn/jtnn/datautils.py\n+++ b/examples/pytorch/jtnn/jtnn/datautils.py\n@@ -12,7 +12,7 @@\n from .jtmpn import ATOM_FDIM as ATOM_FDIM_DEC\n from .jtmpn import BOND_FDIM as BOND_FDIM_DEC\n \n-_url = 'https://www.dropbox.com/s/4ypr0e0abcbsvoh/jtnn.zip?dl=1'\n+_url = 'https://s3-ap-southeast-1.amazonaws.com/dgl-data-cn/dataset/jtnn.zip'\n \n def _unpack_field(examples, field):\n return [e[field] for e in examples]\n", "issue": "Data missing for JTNN\n## \u2753 Questions and Help\r\n\r\n Hello,I come across the problem that the website you put your jtnn.zip is missing.Can you offer me a new website?\r\n thanks!\r\n\n", "before_files": [{"content": "import torch\nfrom torch.utils.data import Dataset\nimport numpy as np\n\nimport dgl\nfrom dgl.data.utils import download, extract_archive, get_download_dir\nfrom .mol_tree_nx import DGLMolTree\nfrom .mol_tree import Vocab\n\nfrom .mpn import mol2dgl_single as mol2dgl_enc\nfrom .jtmpn import mol2dgl_single as mol2dgl_dec\nfrom .jtmpn import ATOM_FDIM as ATOM_FDIM_DEC\nfrom .jtmpn import BOND_FDIM as BOND_FDIM_DEC\n\n_url = 'https://www.dropbox.com/s/4ypr0e0abcbsvoh/jtnn.zip?dl=1'\n\ndef _unpack_field(examples, field):\n return [e[field] for e in examples]\n\ndef _set_node_id(mol_tree, vocab):\n wid = []\n for i, node in enumerate(mol_tree.nodes_dict):\n mol_tree.nodes_dict[node]['idx'] = i\n wid.append(vocab.get_index(mol_tree.nodes_dict[node]['smiles']))\n\n return wid\n\nclass JTNNDataset(Dataset):\n def __init__(self, data, vocab, training=True):\n self.dir = get_download_dir()\n self.zip_file_path='{}/jtnn.zip'.format(self.dir)\n download(_url, path=self.zip_file_path)\n extract_archive(self.zip_file_path, '{}/jtnn'.format(self.dir))\n print('Loading data...')\n data_file = '{}/jtnn/{}.txt'.format(self.dir, data)\n with open(data_file) as f:\n self.data = [line.strip(\"\\r\\n \").split()[0] for line in f]\n self.vocab_file = '{}/jtnn/{}.txt'.format(self.dir, vocab)\n print('Loading finished.')\n print('\\tNum samples:', len(self.data))\n print('\\tVocab file:', self.vocab_file)\n self.training = training\n self.vocab = Vocab([x.strip(\"\\r\\n \") for x in open(self.vocab_file)])\n\n def __len__(self):\n return len(self.data)\n\n def __getitem__(self, idx):\n smiles = self.data[idx]\n mol_tree = DGLMolTree(smiles)\n mol_tree.recover()\n mol_tree.assemble()\n\n wid = _set_node_id(mol_tree, self.vocab)\n\n # prebuild the molecule graph\n mol_graph, atom_x_enc, bond_x_enc = mol2dgl_enc(mol_tree.smiles)\n\n result = {\n 'mol_tree': mol_tree,\n 'mol_graph': mol_graph,\n 'atom_x_enc': atom_x_enc,\n 'bond_x_enc': bond_x_enc,\n 'wid': wid,\n }\n\n if not self.training:\n return result\n\n # prebuild the candidate graph list\n cands = []\n for node_id, node in mol_tree.nodes_dict.items():\n # fill in ground truth\n if node['label'] not in node['cands']:\n node['cands'].append(node['label'])\n node['cand_mols'].append(node['label_mol'])\n\n if node['is_leaf'] or len(node['cands']) == 1:\n continue\n cands.extend([(cand, mol_tree, node_id)\n for cand in node['cand_mols']])\n if len(cands) > 0:\n cand_graphs, atom_x_dec, bond_x_dec, tree_mess_src_e, \\\n tree_mess_tgt_e, tree_mess_tgt_n = mol2dgl_dec(cands)\n else:\n cand_graphs = []\n atom_x_dec = torch.zeros(0, ATOM_FDIM_DEC)\n bond_x_dec = torch.zeros(0, BOND_FDIM_DEC)\n tree_mess_src_e = torch.zeros(0, 2).long()\n tree_mess_tgt_e = torch.zeros(0, 2).long()\n 
tree_mess_tgt_n = torch.zeros(0).long()\n\n # prebuild the stereoisomers\n cands = mol_tree.stereo_cands\n if len(cands) > 1:\n if mol_tree.smiles3D not in cands:\n cands.append(mol_tree.smiles3D)\n\n stereo_graphs = [mol2dgl_enc(c) for c in cands]\n stereo_cand_graphs, stereo_atom_x_enc, stereo_bond_x_enc = \\\n zip(*stereo_graphs)\n stereo_atom_x_enc = torch.cat(stereo_atom_x_enc)\n stereo_bond_x_enc = torch.cat(stereo_bond_x_enc)\n stereo_cand_label = [(cands.index(mol_tree.smiles3D), len(cands))]\n else:\n stereo_cand_graphs = []\n stereo_atom_x_enc = torch.zeros(0, atom_x_enc.shape[1])\n stereo_bond_x_enc = torch.zeros(0, bond_x_enc.shape[1])\n stereo_cand_label = []\n\n result.update({\n 'cand_graphs': cand_graphs,\n 'atom_x_dec': atom_x_dec,\n 'bond_x_dec': bond_x_dec,\n 'tree_mess_src_e': tree_mess_src_e,\n 'tree_mess_tgt_e': tree_mess_tgt_e,\n 'tree_mess_tgt_n': tree_mess_tgt_n,\n 'stereo_cand_graphs': stereo_cand_graphs,\n 'stereo_atom_x_enc': stereo_atom_x_enc,\n 'stereo_bond_x_enc': stereo_bond_x_enc,\n 'stereo_cand_label': stereo_cand_label,\n })\n\n return result\n\nclass JTNNCollator(object):\n def __init__(self, vocab, training):\n self.vocab = vocab\n self.training = training\n\n @staticmethod\n def _batch_and_set(graphs, atom_x, bond_x, flatten):\n if flatten:\n graphs = [g for f in graphs for g in f]\n graph_batch = dgl.batch(graphs)\n graph_batch.ndata['x'] = atom_x\n graph_batch.edata.update({\n 'x': bond_x,\n 'src_x': atom_x.new(bond_x.shape[0], atom_x.shape[1]).zero_(),\n })\n return graph_batch\n\n def __call__(self, examples):\n # get list of trees\n mol_trees = _unpack_field(examples, 'mol_tree')\n wid = _unpack_field(examples, 'wid')\n for _wid, mol_tree in zip(wid, mol_trees):\n mol_tree.ndata['wid'] = torch.LongTensor(_wid)\n\n # TODO: either support pickling or get around ctypes pointers using scipy\n # batch molecule graphs\n mol_graphs = _unpack_field(examples, 'mol_graph')\n atom_x = torch.cat(_unpack_field(examples, 'atom_x_enc'))\n bond_x = torch.cat(_unpack_field(examples, 'bond_x_enc'))\n mol_graph_batch = self._batch_and_set(mol_graphs, atom_x, bond_x, False)\n\n result = {\n 'mol_trees': mol_trees,\n 'mol_graph_batch': mol_graph_batch,\n }\n\n if not self.training:\n return result\n\n # batch candidate graphs\n cand_graphs = _unpack_field(examples, 'cand_graphs')\n cand_batch_idx = []\n atom_x = torch.cat(_unpack_field(examples, 'atom_x_dec'))\n bond_x = torch.cat(_unpack_field(examples, 'bond_x_dec'))\n tree_mess_src_e = _unpack_field(examples, 'tree_mess_src_e')\n tree_mess_tgt_e = _unpack_field(examples, 'tree_mess_tgt_e')\n tree_mess_tgt_n = _unpack_field(examples, 'tree_mess_tgt_n')\n\n n_graph_nodes = 0\n n_tree_nodes = 0\n for i in range(len(cand_graphs)):\n tree_mess_tgt_e[i] += n_graph_nodes\n tree_mess_src_e[i] += n_tree_nodes\n tree_mess_tgt_n[i] += n_graph_nodes\n n_graph_nodes += sum(g.number_of_nodes() for g in cand_graphs[i])\n n_tree_nodes += mol_trees[i].number_of_nodes()\n cand_batch_idx.extend([i] * len(cand_graphs[i]))\n tree_mess_tgt_e = torch.cat(tree_mess_tgt_e)\n tree_mess_src_e = torch.cat(tree_mess_src_e)\n tree_mess_tgt_n = torch.cat(tree_mess_tgt_n)\n\n cand_graph_batch = self._batch_and_set(cand_graphs, atom_x, bond_x, True)\n\n # batch stereoisomers\n stereo_cand_graphs = _unpack_field(examples, 'stereo_cand_graphs')\n atom_x = torch.cat(_unpack_field(examples, 'stereo_atom_x_enc'))\n bond_x = torch.cat(_unpack_field(examples, 'stereo_bond_x_enc'))\n stereo_cand_batch_idx = []\n for i in range(len(stereo_cand_graphs)):\n 
stereo_cand_batch_idx.extend([i] * len(stereo_cand_graphs[i]))\n\n if len(stereo_cand_batch_idx) > 0:\n stereo_cand_labels = [\n (label, length)\n for ex in _unpack_field(examples, 'stereo_cand_label')\n for label, length in ex\n ]\n stereo_cand_labels, stereo_cand_lengths = zip(*stereo_cand_labels)\n stereo_cand_graph_batch = self._batch_and_set(\n stereo_cand_graphs, atom_x, bond_x, True)\n else:\n stereo_cand_labels = []\n stereo_cand_lengths = []\n stereo_cand_graph_batch = None\n stereo_cand_batch_idx = []\n\n result.update({\n 'cand_graph_batch': cand_graph_batch,\n 'cand_batch_idx': cand_batch_idx,\n 'tree_mess_tgt_e': tree_mess_tgt_e,\n 'tree_mess_src_e': tree_mess_src_e,\n 'tree_mess_tgt_n': tree_mess_tgt_n,\n 'stereo_cand_graph_batch': stereo_cand_graph_batch,\n 'stereo_cand_batch_idx': stereo_cand_batch_idx,\n 'stereo_cand_labels': stereo_cand_labels,\n 'stereo_cand_lengths': stereo_cand_lengths,\n })\n\n return result\n", "path": "examples/pytorch/jtnn/jtnn/datautils.py"}]} | 3,410 | 185 |
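A note on the JTNN record above: its golden diff only swaps the dead Dropbox link for an S3 mirror. As a minimal, hedged sketch — standard library only, not part of dgl — the snippet below checks that the replacement `jtnn.zip` URL answers before `JTNNDataset` would try to download it; only the URL is taken from the patch, the helper itself is illustrative.

```python
# Illustrative reachability check for the replacement dataset URL from the
# patch above; the helper is hypothetical and not part of dgl.
import urllib.request

_url = 'https://s3-ap-southeast-1.amazonaws.com/dgl-data-cn/dataset/jtnn.zip'

def url_is_reachable(url, timeout=10):
    """Return True if the server answers a HEAD request with HTTP 200."""
    request = urllib.request.Request(url, method='HEAD')
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status == 200
    except Exception:
        # Treat any network or HTTP error as "not reachable" for this check.
        return False

if __name__ == '__main__':
    print('jtnn.zip reachable:', url_is_reachable(_url))
```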
gh_patches_debug_1472 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-1324 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Stephen Girard] resource list shows sources as None
To be fixed for the 0.13.0 (Stephen Girard) release. An invocation of `dbt run` shows:
```
Found 162 models, 320 tests, 0 archives, 0 analyses, 236 macros, 2 operations, 4 seed files, 34 None
^
|
```
We should also add an assert, as this should fail immediately in development (it's easy to miss!)
</issue>
<code>
[start of core/dbt/compilation.py]
1 import itertools
2 import os
3 import json
4 from collections import OrderedDict, defaultdict
5 import sqlparse
6
7 import dbt.utils
8 import dbt.include
9 import dbt.tracking
10
11 from dbt import deprecations
12 from dbt.utils import get_materialization, NodeType, is_type
13 from dbt.linker import Linker
14
15 import dbt.compat
16 import dbt.context.runtime
17 import dbt.contracts.project
18 import dbt.exceptions
19 import dbt.flags
20 import dbt.loader
21 import dbt.config
22 from dbt.contracts.graph.compiled import CompiledNode, CompiledGraph
23
24 from dbt.clients.system import write_json
25 from dbt.logger import GLOBAL_LOGGER as logger
26
27 graph_file_name = 'graph.gpickle'
28
29
30 def print_compile_stats(stats):
31 names = {
32 NodeType.Model: 'models',
33 NodeType.Test: 'tests',
34 NodeType.Archive: 'archives',
35 NodeType.Analysis: 'analyses',
36 NodeType.Macro: 'macros',
37 NodeType.Operation: 'operations',
38 NodeType.Seed: 'seed files',
39 }
40
41 results = {k: 0 for k in names.keys()}
42 results.update(stats)
43
44 stat_line = ", ".join(
45 ["{} {}".format(ct, names.get(t)) for t, ct in results.items()])
46
47 logger.info("Found {}".format(stat_line))
48
49
50 def _add_prepended_cte(prepended_ctes, new_cte):
51 for dct in prepended_ctes:
52 if dct['id'] == new_cte['id']:
53 dct['sql'] = new_cte['sql']
54 return
55 prepended_ctes.append(new_cte)
56
57
58 def _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):
59 for new_cte in new_prepended_ctes:
60 _add_prepended_cte(prepended_ctes, new_cte)
61
62
63 def prepend_ctes(model, manifest):
64 model, _, manifest = recursively_prepend_ctes(model, manifest)
65
66 return (model, manifest)
67
68
69 def recursively_prepend_ctes(model, manifest):
70 if model.extra_ctes_injected:
71 return (model, model.extra_ctes, manifest)
72
73 if dbt.flags.STRICT_MODE:
74 # ensure that the cte we're adding to is compiled
75 CompiledNode(**model.serialize())
76
77 prepended_ctes = []
78
79 for cte in model.extra_ctes:
80 cte_id = cte['id']
81 cte_to_add = manifest.nodes.get(cte_id)
82 cte_to_add, new_prepended_ctes, manifest = recursively_prepend_ctes(
83 cte_to_add, manifest)
84 _extend_prepended_ctes(prepended_ctes, new_prepended_ctes)
85 new_cte_name = '__dbt__CTE__{}'.format(cte_to_add.get('name'))
86 sql = ' {} as (\n{}\n)'.format(new_cte_name, cte_to_add.compiled_sql)
87 _add_prepended_cte(prepended_ctes, {'id': cte_id, 'sql': sql})
88
89 model.prepend_ctes(prepended_ctes)
90
91 manifest.nodes[model.unique_id] = model
92
93 return (model, prepended_ctes, manifest)
94
95
96 class Compiler(object):
97 def __init__(self, config):
98 self.config = config
99
100 def initialize(self):
101 dbt.clients.system.make_directory(self.config.target_path)
102 dbt.clients.system.make_directory(self.config.modules_path)
103
104 def compile_node(self, node, manifest, extra_context=None):
105 if extra_context is None:
106 extra_context = {}
107
108 logger.debug("Compiling {}".format(node.get('unique_id')))
109
110 data = node.to_dict()
111 data.update({
112 'compiled': False,
113 'compiled_sql': None,
114 'extra_ctes_injected': False,
115 'extra_ctes': [],
116 'injected_sql': None,
117 })
118 compiled_node = CompiledNode(**data)
119
120 context = dbt.context.runtime.generate(
121 compiled_node, self.config, manifest)
122 context.update(extra_context)
123
124 compiled_node.compiled_sql = dbt.clients.jinja.get_rendered(
125 node.get('raw_sql'),
126 context,
127 node)
128
129 compiled_node.compiled = True
130
131 injected_node, _ = prepend_ctes(compiled_node, manifest)
132
133 should_wrap = {NodeType.Test, NodeType.Operation}
134 if injected_node.resource_type in should_wrap:
135 # data tests get wrapped in count(*)
136 # TODO : move this somewhere more reasonable
137 if 'data' in injected_node.tags and \
138 is_type(injected_node, NodeType.Test):
139 injected_node.wrapped_sql = (
140 "select count(*) from (\n{test_sql}\n) sbq").format(
141 test_sql=injected_node.injected_sql)
142 else:
143 # don't wrap schema tests or analyses.
144 injected_node.wrapped_sql = injected_node.injected_sql
145
146 elif is_type(injected_node, NodeType.Archive):
147 # unfortunately we do everything automagically for
148 # archives. in the future it'd be nice to generate
149 # the SQL at the parser level.
150 pass
151
152 elif(is_type(injected_node, NodeType.Model) and
153 get_materialization(injected_node) == 'ephemeral'):
154 pass
155
156 else:
157 injected_node.wrapped_sql = None
158
159 return injected_node
160
161 def write_graph_file(self, linker, manifest):
162 filename = graph_file_name
163 graph_path = os.path.join(self.config.target_path, filename)
164 linker.write_graph(graph_path, manifest)
165
166 def link_node(self, linker, node, manifest):
167 linker.add_node(node.unique_id)
168
169 for dependency in node.depends_on_nodes:
170 if manifest.nodes.get(dependency):
171 linker.dependency(
172 node.unique_id,
173 (manifest.nodes.get(dependency).unique_id))
174 else:
175 dbt.exceptions.dependency_not_found(node, dependency)
176
177 def link_graph(self, linker, manifest):
178 for node in manifest.nodes.values():
179 self.link_node(linker, node, manifest)
180
181 cycle = linker.find_cycles()
182
183 if cycle:
184 raise RuntimeError("Found a cycle: {}".format(cycle))
185
186 def compile(self, manifest):
187 linker = Linker()
188
189 self.link_graph(linker, manifest)
190
191 stats = defaultdict(int)
192
193 for node_name, node in itertools.chain(
194 manifest.nodes.items(),
195 manifest.macros.items()):
196 stats[node.resource_type] += 1
197
198 self.write_graph_file(linker, manifest)
199 print_compile_stats(stats)
200
201 return linker
202
203
204 def compile_manifest(config, manifest):
205 compiler = Compiler(config)
206 compiler.initialize()
207 return compiler.compile(manifest)
208
209
210 def compile_node(adapter, config, node, manifest, extra_context):
211 compiler = Compiler(config)
212 node = compiler.compile_node(node, manifest, extra_context)
213 node = _inject_runtime_config(adapter, node, extra_context)
214
215 if(node.injected_sql is not None and
216 not (dbt.utils.is_type(node, NodeType.Archive))):
217 logger.debug('Writing injected SQL for node "{}"'.format(
218 node.unique_id))
219
220 written_path = dbt.writer.write_node(
221 node,
222 config.target_path,
223 'compiled',
224 node.injected_sql)
225
226 node.build_path = written_path
227
228 return node
229
230
231 def _inject_runtime_config(adapter, node, extra_context):
232 wrapped_sql = node.wrapped_sql
233 context = _node_context(adapter, node)
234 context.update(extra_context)
235 sql = dbt.clients.jinja.get_rendered(wrapped_sql, context)
236 node.wrapped_sql = sql
237 return node
238
239
240 def _node_context(adapter, node):
241 return {
242 "run_started_at": dbt.tracking.active_user.run_started_at,
243 "invocation_id": dbt.tracking.active_user.invocation_id,
244 }
245
[end of core/dbt/compilation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/dbt/compilation.py b/core/dbt/compilation.py
--- a/core/dbt/compilation.py
+++ b/core/dbt/compilation.py
@@ -36,6 +36,7 @@
NodeType.Macro: 'macros',
NodeType.Operation: 'operations',
NodeType.Seed: 'seed files',
+ NodeType.Source: 'sources',
}
results = {k: 0 for k in names.keys()}
| {"golden_diff": "diff --git a/core/dbt/compilation.py b/core/dbt/compilation.py\n--- a/core/dbt/compilation.py\n+++ b/core/dbt/compilation.py\n@@ -36,6 +36,7 @@\n NodeType.Macro: 'macros',\n NodeType.Operation: 'operations',\n NodeType.Seed: 'seed files',\n+ NodeType.Source: 'sources',\n }\n \n results = {k: 0 for k in names.keys()}\n", "issue": "[Stephen Girard] resource list shows sources as None\nTo be fixed for the 0.13.0 (Stephen Girard) release. An invocation of `dbt run` shows:\r\n\r\n```\r\nFound 162 models, 320 tests, 0 archives, 0 analyses, 236 macros, 2 operations, 4 seed files, 34 None\r\n ^\r\n |\r\n```\r\n\r\nWe should also add an assert, as this should fail immediately in development (it's easy to miss!)\n", "before_files": [{"content": "import itertools\nimport os\nimport json\nfrom collections import OrderedDict, defaultdict\nimport sqlparse\n\nimport dbt.utils\nimport dbt.include\nimport dbt.tracking\n\nfrom dbt import deprecations\nfrom dbt.utils import get_materialization, NodeType, is_type\nfrom dbt.linker import Linker\n\nimport dbt.compat\nimport dbt.context.runtime\nimport dbt.contracts.project\nimport dbt.exceptions\nimport dbt.flags\nimport dbt.loader\nimport dbt.config\nfrom dbt.contracts.graph.compiled import CompiledNode, CompiledGraph\n\nfrom dbt.clients.system import write_json\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\ngraph_file_name = 'graph.gpickle'\n\n\ndef print_compile_stats(stats):\n names = {\n NodeType.Model: 'models',\n NodeType.Test: 'tests',\n NodeType.Archive: 'archives',\n NodeType.Analysis: 'analyses',\n NodeType.Macro: 'macros',\n NodeType.Operation: 'operations',\n NodeType.Seed: 'seed files',\n }\n\n results = {k: 0 for k in names.keys()}\n results.update(stats)\n\n stat_line = \", \".join(\n [\"{} {}\".format(ct, names.get(t)) for t, ct in results.items()])\n\n logger.info(\"Found {}\".format(stat_line))\n\n\ndef _add_prepended_cte(prepended_ctes, new_cte):\n for dct in prepended_ctes:\n if dct['id'] == new_cte['id']:\n dct['sql'] = new_cte['sql']\n return\n prepended_ctes.append(new_cte)\n\n\ndef _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):\n for new_cte in new_prepended_ctes:\n _add_prepended_cte(prepended_ctes, new_cte)\n\n\ndef prepend_ctes(model, manifest):\n model, _, manifest = recursively_prepend_ctes(model, manifest)\n\n return (model, manifest)\n\n\ndef recursively_prepend_ctes(model, manifest):\n if model.extra_ctes_injected:\n return (model, model.extra_ctes, manifest)\n\n if dbt.flags.STRICT_MODE:\n # ensure that the cte we're adding to is compiled\n CompiledNode(**model.serialize())\n\n prepended_ctes = []\n\n for cte in model.extra_ctes:\n cte_id = cte['id']\n cte_to_add = manifest.nodes.get(cte_id)\n cte_to_add, new_prepended_ctes, manifest = recursively_prepend_ctes(\n cte_to_add, manifest)\n _extend_prepended_ctes(prepended_ctes, new_prepended_ctes)\n new_cte_name = '__dbt__CTE__{}'.format(cte_to_add.get('name'))\n sql = ' {} as (\\n{}\\n)'.format(new_cte_name, cte_to_add.compiled_sql)\n _add_prepended_cte(prepended_ctes, {'id': cte_id, 'sql': sql})\n\n model.prepend_ctes(prepended_ctes)\n\n manifest.nodes[model.unique_id] = model\n\n return (model, prepended_ctes, manifest)\n\n\nclass Compiler(object):\n def __init__(self, config):\n self.config = config\n\n def initialize(self):\n dbt.clients.system.make_directory(self.config.target_path)\n dbt.clients.system.make_directory(self.config.modules_path)\n\n def compile_node(self, node, manifest, extra_context=None):\n if extra_context is None:\n 
extra_context = {}\n\n logger.debug(\"Compiling {}\".format(node.get('unique_id')))\n\n data = node.to_dict()\n data.update({\n 'compiled': False,\n 'compiled_sql': None,\n 'extra_ctes_injected': False,\n 'extra_ctes': [],\n 'injected_sql': None,\n })\n compiled_node = CompiledNode(**data)\n\n context = dbt.context.runtime.generate(\n compiled_node, self.config, manifest)\n context.update(extra_context)\n\n compiled_node.compiled_sql = dbt.clients.jinja.get_rendered(\n node.get('raw_sql'),\n context,\n node)\n\n compiled_node.compiled = True\n\n injected_node, _ = prepend_ctes(compiled_node, manifest)\n\n should_wrap = {NodeType.Test, NodeType.Operation}\n if injected_node.resource_type in should_wrap:\n # data tests get wrapped in count(*)\n # TODO : move this somewhere more reasonable\n if 'data' in injected_node.tags and \\\n is_type(injected_node, NodeType.Test):\n injected_node.wrapped_sql = (\n \"select count(*) from (\\n{test_sql}\\n) sbq\").format(\n test_sql=injected_node.injected_sql)\n else:\n # don't wrap schema tests or analyses.\n injected_node.wrapped_sql = injected_node.injected_sql\n\n elif is_type(injected_node, NodeType.Archive):\n # unfortunately we do everything automagically for\n # archives. in the future it'd be nice to generate\n # the SQL at the parser level.\n pass\n\n elif(is_type(injected_node, NodeType.Model) and\n get_materialization(injected_node) == 'ephemeral'):\n pass\n\n else:\n injected_node.wrapped_sql = None\n\n return injected_node\n\n def write_graph_file(self, linker, manifest):\n filename = graph_file_name\n graph_path = os.path.join(self.config.target_path, filename)\n linker.write_graph(graph_path, manifest)\n\n def link_node(self, linker, node, manifest):\n linker.add_node(node.unique_id)\n\n for dependency in node.depends_on_nodes:\n if manifest.nodes.get(dependency):\n linker.dependency(\n node.unique_id,\n (manifest.nodes.get(dependency).unique_id))\n else:\n dbt.exceptions.dependency_not_found(node, dependency)\n\n def link_graph(self, linker, manifest):\n for node in manifest.nodes.values():\n self.link_node(linker, node, manifest)\n\n cycle = linker.find_cycles()\n\n if cycle:\n raise RuntimeError(\"Found a cycle: {}\".format(cycle))\n\n def compile(self, manifest):\n linker = Linker()\n\n self.link_graph(linker, manifest)\n\n stats = defaultdict(int)\n\n for node_name, node in itertools.chain(\n manifest.nodes.items(),\n manifest.macros.items()):\n stats[node.resource_type] += 1\n\n self.write_graph_file(linker, manifest)\n print_compile_stats(stats)\n\n return linker\n\n\ndef compile_manifest(config, manifest):\n compiler = Compiler(config)\n compiler.initialize()\n return compiler.compile(manifest)\n\n\ndef compile_node(adapter, config, node, manifest, extra_context):\n compiler = Compiler(config)\n node = compiler.compile_node(node, manifest, extra_context)\n node = _inject_runtime_config(adapter, node, extra_context)\n\n if(node.injected_sql is not None and\n not (dbt.utils.is_type(node, NodeType.Archive))):\n logger.debug('Writing injected SQL for node \"{}\"'.format(\n node.unique_id))\n\n written_path = dbt.writer.write_node(\n node,\n config.target_path,\n 'compiled',\n node.injected_sql)\n\n node.build_path = written_path\n\n return node\n\n\ndef _inject_runtime_config(adapter, node, extra_context):\n wrapped_sql = node.wrapped_sql\n context = _node_context(adapter, node)\n context.update(extra_context)\n sql = dbt.clients.jinja.get_rendered(wrapped_sql, context)\n node.wrapped_sql = sql\n return node\n\n\ndef 
_node_context(adapter, node):\n return {\n \"run_started_at\": dbt.tracking.active_user.run_started_at,\n \"invocation_id\": dbt.tracking.active_user.invocation_id,\n }\n", "path": "core/dbt/compilation.py"}]} | 2,970 | 101 |
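The dbt diff above fixes the "34 None" output by adding `NodeType.Source: 'sources'` to the `names` mapping used by `print_compile_stats`, and the issue also asks for an assert so a missing label fails fast in development. The sketch below is a standalone, hedged illustration of both ideas; plain strings stand in for dbt's `NodeType` members and none of it is dbt's real implementation.

```python
# Standalone sketch: keep the label mapping complete and assert on unknown
# node types instead of printing "None". Strings stand in for NodeType.
import logging

logging.basicConfig(level=logging.INFO, format='%(message)s')
logger = logging.getLogger('compile_stats_sketch')

NAMES = {
    'model': 'models',
    'test': 'tests',
    'source': 'sources',  # the label the original mapping was missing
}

def print_compile_stats(stats):
    results = {node_type: 0 for node_type in NAMES}
    results.update(stats)
    for node_type in results:
        # Fail immediately in development rather than printing "... 34 None".
        assert node_type in NAMES, 'no display label for {!r}'.format(node_type)
    stat_line = ', '.join(
        '{} {}'.format(count, NAMES[node_type])
        for node_type, count in results.items()
    )
    logger.info('Found %s', stat_line)

print_compile_stats({'model': 162, 'test': 320, 'source': 34})
```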
gh_patches_debug_21589 | rasdani/github-patches | git_diff | conan-io__conan-center-index-7891 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] sqlpp11/0.60: scripts are not in the package
sqlpp11 provides some scripts that can be used by the consumer: https://github.com/rbock/sqlpp11/tree/develop/scripts
But these scripts are not in the conan package.
</issue>
<code>
[start of recipes/sqlpp11/all/conanfile.py]
1 from conans import ConanFile, tools
2 import os
3
4 required_conan_version = ">=1.33.0"
5
6
7 class Sqlpp11Conan(ConanFile):
8 name = "sqlpp11"
9 license = "BSD-2-Clause"
10 url = "https://github.com/conan-io/conan-center-index"
11 homepage = "https://github.com/rbock/sqlpp11"
12 description = "A type safe SQL template library for C++"
13 topics = ("SQL", "DSL", "embedded", "data-base")
14 no_copy_source = True
15
16 @property
17 def _source_subfolder(self):
18 return "source_subfolder"
19
20 def requirements(self):
21 self.requires("date/3.0.1")
22
23 def package_id(self):
24 self.info.header_only()
25
26 def source(self):
27 tools.get(**self.conan_data["sources"][self.version],
28 destination=self._source_subfolder, strip_root=True)
29
30 def package(self):
31 self.copy("LICENSE", dst="licenses", src=self._source_subfolder)
32 self.copy("*.h", dst="include", src=os.path.join(self._source_subfolder, "include"))
33
34 def package_info(self):
35 self.cpp_info.filenames["cmake_find_package"] = "Sqlpp11"
36 self.cpp_info.filenames["cmake_find_package_multi"] = "Sqlpp11"
37
[end of recipes/sqlpp11/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/sqlpp11/all/conanfile.py b/recipes/sqlpp11/all/conanfile.py
--- a/recipes/sqlpp11/all/conanfile.py
+++ b/recipes/sqlpp11/all/conanfile.py
@@ -24,13 +24,21 @@
self.info.header_only()
def source(self):
- tools.get(**self.conan_data["sources"][self.version],
- destination=self._source_subfolder, strip_root=True)
+ tools.get(
+ **self.conan_data["sources"][self.version],
+ destination=self._source_subfolder,
+ strip_root=True
+ )
def package(self):
self.copy("LICENSE", dst="licenses", src=self._source_subfolder)
self.copy("*.h", dst="include", src=os.path.join(self._source_subfolder, "include"))
+ self.copy("*", dst="bin", src=os.path.join(self._source_subfolder, "scripts"))
def package_info(self):
self.cpp_info.filenames["cmake_find_package"] = "Sqlpp11"
self.cpp_info.filenames["cmake_find_package_multi"] = "Sqlpp11"
+
+ bindir = os.path.join(self.package_folder, "bin")
+ self.output.info("Appending PATH environment variable: {}".format(bindir))
+ self.env_info.PATH.append(bindir)
| {"golden_diff": "diff --git a/recipes/sqlpp11/all/conanfile.py b/recipes/sqlpp11/all/conanfile.py\n--- a/recipes/sqlpp11/all/conanfile.py\n+++ b/recipes/sqlpp11/all/conanfile.py\n@@ -24,13 +24,21 @@\n self.info.header_only()\n \n def source(self):\n- tools.get(**self.conan_data[\"sources\"][self.version],\n- destination=self._source_subfolder, strip_root=True)\n+ tools.get(\n+ **self.conan_data[\"sources\"][self.version],\n+ destination=self._source_subfolder,\n+ strip_root=True\n+ )\n \n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"*.h\", dst=\"include\", src=os.path.join(self._source_subfolder, \"include\"))\n+ self.copy(\"*\", dst=\"bin\", src=os.path.join(self._source_subfolder, \"scripts\"))\n \n def package_info(self):\n self.cpp_info.filenames[\"cmake_find_package\"] = \"Sqlpp11\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"Sqlpp11\"\n+\n+ bindir = os.path.join(self.package_folder, \"bin\")\n+ self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n+ self.env_info.PATH.append(bindir)\n", "issue": "[package] sqlpp11/0.60: scripts are not in the package\nsqlpp11 provides some scripts that can be used by the consumer: https://github.com/rbock/sqlpp11/tree/develop/scripts \r\nBut these scripts are not in the conan package.\n", "before_files": [{"content": "from conans import ConanFile, tools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass Sqlpp11Conan(ConanFile):\n name = \"sqlpp11\"\n license = \"BSD-2-Clause\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/rbock/sqlpp11\"\n description = \"A type safe SQL template library for C++\"\n topics = (\"SQL\", \"DSL\", \"embedded\", \"data-base\")\n no_copy_source = True\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def requirements(self):\n self.requires(\"date/3.0.1\")\n\n def package_id(self):\n self.info.header_only()\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"*.h\", dst=\"include\", src=os.path.join(self._source_subfolder, \"include\"))\n\n def package_info(self):\n self.cpp_info.filenames[\"cmake_find_package\"] = \"Sqlpp11\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"Sqlpp11\"\n", "path": "recipes/sqlpp11/all/conanfile.py"}]} | 976 | 305 |
gh_patches_debug_29861 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-1786 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.10's upgrade to 1.1.5 broke stuff. ("'NoneType' object is not iterable") w/fix
Hey fwiw in Ubuntu 22.10 (beta), I just got an upgrade of solaar from 1.1.1 to 1.1.5 ([solaar_1.1.5+dfsg-1_all.deb](https://packages.ubuntu.com/kinetic/solaar)) and noticed that solaar was now broken. Running it manually resulted in an error that ended like this:
```
....
File "/usr/share/solaar/lib/logitech_receiver/device.py", line 352, in persister
self._persister = _configuration.persister(self)
File "/usr/share/solaar/lib/solaar/configuration.py", line 214, in persister
_load()
File "/usr/share/solaar/lib/solaar/configuration.py", line 71, in _load
_config = _cleanup_load(loaded_config)
File "/usr/share/solaar/lib/solaar/configuration.py", line 137, in _cleanup_load
for element in c:
TypeError: 'NoneType' object is not iterable
```
This was running as the user (not root) and seemed to be an issue parsing the `~/.config/solaar/config.yaml` file. When I looked at that file, it was completely blank, though there was a `config.json` file there, modified five months back, that looked like this:
```
{
"_version": "1.1.1"
}
```
On a hunch, I set the blank `config.yaml` to instead look like:
```
_version:1.1.5
```
and started solaar and it came back! It repopulated the config.yaml, so I'm guessing it just wanted any values in there so it wouldn't error out.
While this is probably a bug with Ubuntu's packaging and may even be due to me running as a normal user, the `_cleanup_load(c)` function in `configuration.py` should probably behave gracefully when the YAML file has no entries rather than silently crashing.
That is all! This will probably also have to be addressed in the Ubuntu release, but I figured this was upstream, so maybe it should be reported here as well. Thanks!
</issue>
<code>
[start of lib/solaar/configuration.py]
1 # -*- python-mode -*-
2
3 ## Copyright (C) 2012-2013 Daniel Pavel
4 ##
5 ## This program is free software; you can redistribute it and/or modify
6 ## it under the terms of the GNU General Public License as published by
7 ## the Free Software Foundation; either version 2 of the License, or
8 ## (at your option) any later version.
9 ##
10 ## This program is distributed in the hope that it will be useful,
11 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
12 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 ## GNU General Public License for more details.
14 ##
15 ## You should have received a copy of the GNU General Public License along
16 ## with this program; if not, write to the Free Software Foundation, Inc.,
17 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
18
19 import json as _json
20 import os as _os
21 import os.path as _path
22
23 from logging import DEBUG as _DEBUG
24 from logging import INFO as _INFO
25 from logging import getLogger
26 from threading import Lock as _Lock
27 from threading import Timer as _Timer
28
29 import yaml as _yaml
30
31 from gi.repository import GLib
32 from logitech_receiver.common import NamedInt as _NamedInt
33 from solaar import __version__
34
35 _log = getLogger(__name__)
36 del getLogger
37
38 _XDG_CONFIG_HOME = _os.environ.get('XDG_CONFIG_HOME') or _path.expanduser(_path.join('~', '.config'))
39 _file_path = _path.join(_XDG_CONFIG_HOME, 'solaar', 'config.json')
40 _yaml_file_path = _path.join(_XDG_CONFIG_HOME, 'solaar', 'config.yaml')
41
42 _KEY_VERSION = '_version'
43 _KEY_NAME = '_NAME'
44 _KEY_WPID = '_wpid'
45 _KEY_SERIAL = '_serial'
46 _KEY_MODEL_ID = '_modelId'
47 _KEY_UNIT_ID = '_unitId'
48 _KEY_ABSENT = '_absent'
49 _KEY_SENSITIVE = '_sensitive'
50 _config = []
51
52
53 def _load():
54 global _config
55 loaded_config = []
56 if _path.isfile(_yaml_file_path):
57 try:
58 with open(_yaml_file_path) as config_file:
59 loaded_config = _yaml.safe_load(config_file)
60 except Exception as e:
61 _log.error('failed to load from %s: %s', _yaml_file_path, e)
62 elif _path.isfile(_file_path):
63 try:
64 with open(_file_path) as config_file:
65 loaded_config = _json.load(config_file)
66 except Exception as e:
67 _log.error('failed to load from %s: %s', _file_path, e)
68 loaded_config = _convert_json(loaded_config)
69 if _log.isEnabledFor(_DEBUG):
70 _log.debug('load => %s', loaded_config)
71 _config = _cleanup_load(loaded_config)
72
73
74 save_timer = None
75 save_lock = _Lock()
76
77
78 def save(defer=False):
79 global save_timer
80 if not _config:
81 return
82 dirname = _os.path.dirname(_yaml_file_path)
83 if not _path.isdir(dirname):
84 try:
85 _os.makedirs(dirname)
86 except Exception:
87 _log.error('failed to create %s', dirname)
88 return
89 if not defer:
90 do_save()
91 else:
92 with save_lock:
93 if not save_timer:
94 save_timer = _Timer(5.0, lambda: GLib.idle_add(do_save))
95 save_timer.start()
96
97
98 def do_save():
99 global save_timer
100 with save_lock:
101 if save_timer:
102 save_timer.cancel()
103 save_timer = None
104 try:
105 with open(_yaml_file_path, 'w') as config_file:
106 _yaml.dump(_config, config_file, default_flow_style=None, width=150)
107 if _log.isEnabledFor(_INFO):
108 _log.info('saved %s to %s', _config, _yaml_file_path)
109 except Exception as e:
110 _log.error('failed to save to %s: %s', _yaml_file_path, e)
111
112
113 def _convert_json(json_dict):
114 config = [json_dict.get(_KEY_VERSION)]
115 for key, dev in json_dict.items():
116 key = key.split(':')
117 if len(key) == 2:
118 dev[_KEY_WPID] = dev.get(_KEY_WPID) if dev.get(_KEY_WPID) else key[0]
119 dev[_KEY_SERIAL] = dev.get(_KEY_SERIAL) if dev.get(_KEY_SERIAL) else key[1]
120 for k, v in dev.items():
121 if type(k) == str and not k.startswith('_') and type(v) == dict: # convert string keys to ints
122 v = {int(dk) if type(dk) == str else dk: dv for dk, dv in v.items()}
123 dev[k] = v
124 for k in ['mouse-gestures', 'dpi-sliding']:
125 v = dev.get(k, None)
126 if v is True or v is False:
127 dev.pop(k)
128 if '_name' in dev:
129 dev[_KEY_NAME] = dev['_name']
130 dev.pop('_name')
131 config.append(dev)
132 return config
133
134
135 def _cleanup_load(c):
136 _config = [__version__]
137 for element in c:
138 if isinstance(element, dict):
139 divert = element.get('divert-keys')
140 if divert:
141 sliding = element.get('dpi-sliding')
142 if sliding: # convert old-style dpi-sliding setting to divert-keys entry
143 divert[int(sliding)] = 3
144 element.pop('dpi-sliding', None)
145 gestures = element.get('mouse-gestures')
146 if gestures: # convert old-style mouse-gestures setting to divert-keys entry
147 divert[int(gestures)] = 2
148 element.pop('mouse-gestures', None)
149 # remove any string entries (from bad conversions)
150 element['divert-keys'] = {k: v for k, v in divert.items() if isinstance(k, int)}
151 # convert to device entries
152 element = _DeviceEntry(**element)
153 _config.append(element)
154 return _config
155
156
157 class _DeviceEntry(dict):
158 def __init__(self, **kwargs):
159 super().__init__(**kwargs)
160
161 def __setitem__(self, key, value):
162 super().__setitem__(key, value)
163 save(defer=True)
164
165 def update(self, device, modelId):
166 if device.name and device.name != self.get(_KEY_NAME):
167 super().__setitem__(_KEY_NAME, device.name)
168 if device.wpid and device.wpid != self.get(_KEY_WPID):
169 super().__setitem__(_KEY_WPID, device.wpid)
170 if device.serial and device.serial != '?' and device.serial != self.get(_KEY_SERIAL):
171 super().__setitem__(_KEY_SERIAL, device.serial)
172 if modelId and modelId != self.get(_KEY_MODEL_ID):
173 super().__setitem__(_KEY_MODEL_ID, modelId)
174 if device.unitId and device.unitId != self.get(_KEY_UNIT_ID):
175 super().__setitem__(_KEY_UNIT_ID, device.unitId)
176
177 def get_sensitivity(self, name):
178 return self.get(_KEY_SENSITIVE, {}).get(name, False)
179
180 def set_sensitivity(self, name, value):
181 sensitives = self.get(_KEY_SENSITIVE, {})
182 if sensitives.get(name) != value:
183 sensitives[name] = value
184 self.__setitem__(_KEY_SENSITIVE, sensitives)
185
186
187 def device_representer(dumper, data):
188 return dumper.represent_mapping('tag:yaml.org,2002:map', data)
189
190
191 _yaml.add_representer(_DeviceEntry, device_representer)
192
193
194 def named_int_representer(dumper, data):
195 return dumper.represent_scalar('tag:yaml.org,2002:int', str(int(data)))
196
197
198 _yaml.add_representer(_NamedInt, named_int_representer)
199
200
201 # A device can be identified by a combination of WPID and serial number (for receiver-connected devices)
202 # or a combination of modelId and unitId (for direct-connected devices).
203 # But some devices have empty (all zero) modelIds and unitIds. Use the device name as a backup for the modelId.
204 # The worst situation is a receiver-connected device that Solaar has never seen on-line
205 # that is directly connected. Here there is no way to realize that the two devices are the same.
206 # So new entries are not created for unseen off-line receiver-connected devices except for those with protocol 1.0
207 def persister(device):
208 def match(wpid, serial, modelId, unitId, c):
209 return ((wpid and wpid == c.get(_KEY_WPID) and serial and serial == c.get(_KEY_SERIAL)) or (
210 modelId and modelId != '000000000000' and modelId == c.get(_KEY_MODEL_ID) and unitId
211 and unitId == c.get(_KEY_UNIT_ID)
212 ))
213
214 if not _config:
215 _load()
216 entry = None
217 modelId = device.modelId if device.modelId != '000000000000' else device.name if device.modelId else None
218 for c in _config:
219 if isinstance(c, _DeviceEntry) and match(device.wpid, device.serial, modelId, device.unitId, c):
220 entry = c
221 break
222 if not entry:
223 if not device.online and not device.serial: # don't create entry for offline devices without serial number
224 if _log.isEnabledFor(_INFO):
225 _log.info('not setting up persister for offline device %s with missing serial number', device.name)
226 return
227 if _log.isEnabledFor(_INFO):
228 _log.info('setting up persister for device %s', device.name)
229 entry = _DeviceEntry()
230 _config.append(entry)
231 entry.update(device, modelId)
232 return entry
233
234
235 def attach_to(device):
236 pass
237
[end of lib/solaar/configuration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/solaar/configuration.py b/lib/solaar/configuration.py
--- a/lib/solaar/configuration.py
+++ b/lib/solaar/configuration.py
@@ -134,23 +134,26 @@
def _cleanup_load(c):
_config = [__version__]
- for element in c:
- if isinstance(element, dict):
- divert = element.get('divert-keys')
- if divert:
- sliding = element.get('dpi-sliding')
- if sliding: # convert old-style dpi-sliding setting to divert-keys entry
- divert[int(sliding)] = 3
- element.pop('dpi-sliding', None)
- gestures = element.get('mouse-gestures')
- if gestures: # convert old-style mouse-gestures setting to divert-keys entry
- divert[int(gestures)] = 2
- element.pop('mouse-gestures', None)
- # remove any string entries (from bad conversions)
- element['divert-keys'] = {k: v for k, v in divert.items() if isinstance(k, int)}
- # convert to device entries
- element = _DeviceEntry(**element)
- _config.append(element)
+ try:
+ for element in c:
+ if isinstance(element, dict):
+ divert = element.get('divert-keys')
+ if divert:
+ sliding = element.get('dpi-sliding')
+ if sliding: # convert old-style dpi-sliding setting to divert-keys entry
+ divert[int(sliding)] = 3
+ element.pop('dpi-sliding', None)
+ gestures = element.get('mouse-gestures')
+ if gestures: # convert old-style mouse-gestures setting to divert-keys entry
+ divert[int(gestures)] = 2
+ element.pop('mouse-gestures', None)
+ # remove any string entries (from bad conversions)
+ element['divert-keys'] = {k: v for k, v in divert.items() if isinstance(k, int)}
+ # convert to device entries
+ element = _DeviceEntry(**element)
+ _config.append(element)
+ except Exception as e:
+ _log.warn('Exception processing config.yaml file, ignoring contents: %s', e)
return _config
| {"golden_diff": "diff --git a/lib/solaar/configuration.py b/lib/solaar/configuration.py\n--- a/lib/solaar/configuration.py\n+++ b/lib/solaar/configuration.py\n@@ -134,23 +134,26 @@\n \n def _cleanup_load(c):\n _config = [__version__]\n- for element in c:\n- if isinstance(element, dict):\n- divert = element.get('divert-keys')\n- if divert:\n- sliding = element.get('dpi-sliding')\n- if sliding: # convert old-style dpi-sliding setting to divert-keys entry\n- divert[int(sliding)] = 3\n- element.pop('dpi-sliding', None)\n- gestures = element.get('mouse-gestures')\n- if gestures: # convert old-style mouse-gestures setting to divert-keys entry\n- divert[int(gestures)] = 2\n- element.pop('mouse-gestures', None)\n- # remove any string entries (from bad conversions)\n- element['divert-keys'] = {k: v for k, v in divert.items() if isinstance(k, int)}\n- # convert to device entries\n- element = _DeviceEntry(**element)\n- _config.append(element)\n+ try:\n+ for element in c:\n+ if isinstance(element, dict):\n+ divert = element.get('divert-keys')\n+ if divert:\n+ sliding = element.get('dpi-sliding')\n+ if sliding: # convert old-style dpi-sliding setting to divert-keys entry\n+ divert[int(sliding)] = 3\n+ element.pop('dpi-sliding', None)\n+ gestures = element.get('mouse-gestures')\n+ if gestures: # convert old-style mouse-gestures setting to divert-keys entry\n+ divert[int(gestures)] = 2\n+ element.pop('mouse-gestures', None)\n+ # remove any string entries (from bad conversions)\n+ element['divert-keys'] = {k: v for k, v in divert.items() if isinstance(k, int)}\n+ # convert to device entries\n+ element = _DeviceEntry(**element)\n+ _config.append(element)\n+ except Exception as e:\n+ _log.warn('Exception processing config.yaml file, ignoring contents: %s', e)\n return _config\n", "issue": "Ubuntu 22.10's upgrade to 1.1.5 broke stuff. (\"'NoneType' object is not iterable\") w/fix\nHey fwiw in Ubuntu 22.10 (beta), I just got an upgrade of solaar from 1.1.1 to 1.1.5 ([solaar_1.1.5+dfsg-1_all.deb](https://packages.ubuntu.com/kinetic/solaar)) and noticed that solaar was now broken. Running it manually resulted in an error that ended like this:\r\n\r\n```\r\n....\r\n File \"/usr/share/solaar/lib/logitech_receiver/device.py\", line 352, in persister\r\n self._persister = _configuration.persister(self)\r\n File \"/usr/share/solaar/lib/solaar/configuration.py\", line 214, in persister\r\n _load()\r\n File \"/usr/share/solaar/lib/solaar/configuration.py\", line 71, in _load\r\n _config = _cleanup_load(loaded_config)\r\n File \"/usr/share/solaar/lib/solaar/configuration.py\", line 137, in _cleanup_load\r\n for element in c:\r\nTypeError: 'NoneType' object is not iterable\r\n```\r\n\r\nThis was running as the user (not rute) and seemed to be an issue parsing the `~/.config/solaar/config.yaml` file. When I looked at that file, it was completely blank, though there was a file there `config.json` modified five months back that looked like this:\r\n\r\n```\r\n{\r\n \"_version\": \"1.1.1\"\r\n}\r\n```\r\n\r\nOn a hunch, I set the blank `config.yaml` to instead look like:\r\n\r\n```\r\n_version:1.1.5\r\n```\r\n\r\nand started solar and it came back! 
It repopulated the config.yaml, so I'm guessing it just wanted any values in there so it wouldn't error out.\r\n\r\nWhile this is probably a bug w/ubuntu's packaging and may even be due to me running as a normal user, the `configuration.yaml` file's `_cleanup_load(c)` function should probably gracefully behave if there are no yaml entries in there rather than silently crashing.\r\n\r\nThat is all! This is probably going to have to also be addressed in the Ubuntu release, but I figured this was upstream so maybe it should be here as well. Thanks!\r\n\n", "before_files": [{"content": "# -*- python-mode -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nimport json as _json\nimport os as _os\nimport os.path as _path\n\nfrom logging import DEBUG as _DEBUG\nfrom logging import INFO as _INFO\nfrom logging import getLogger\nfrom threading import Lock as _Lock\nfrom threading import Timer as _Timer\n\nimport yaml as _yaml\n\nfrom gi.repository import GLib\nfrom logitech_receiver.common import NamedInt as _NamedInt\nfrom solaar import __version__\n\n_log = getLogger(__name__)\ndel getLogger\n\n_XDG_CONFIG_HOME = _os.environ.get('XDG_CONFIG_HOME') or _path.expanduser(_path.join('~', '.config'))\n_file_path = _path.join(_XDG_CONFIG_HOME, 'solaar', 'config.json')\n_yaml_file_path = _path.join(_XDG_CONFIG_HOME, 'solaar', 'config.yaml')\n\n_KEY_VERSION = '_version'\n_KEY_NAME = '_NAME'\n_KEY_WPID = '_wpid'\n_KEY_SERIAL = '_serial'\n_KEY_MODEL_ID = '_modelId'\n_KEY_UNIT_ID = '_unitId'\n_KEY_ABSENT = '_absent'\n_KEY_SENSITIVE = '_sensitive'\n_config = []\n\n\ndef _load():\n global _config\n loaded_config = []\n if _path.isfile(_yaml_file_path):\n try:\n with open(_yaml_file_path) as config_file:\n loaded_config = _yaml.safe_load(config_file)\n except Exception as e:\n _log.error('failed to load from %s: %s', _yaml_file_path, e)\n elif _path.isfile(_file_path):\n try:\n with open(_file_path) as config_file:\n loaded_config = _json.load(config_file)\n except Exception as e:\n _log.error('failed to load from %s: %s', _file_path, e)\n loaded_config = _convert_json(loaded_config)\n if _log.isEnabledFor(_DEBUG):\n _log.debug('load => %s', loaded_config)\n _config = _cleanup_load(loaded_config)\n\n\nsave_timer = None\nsave_lock = _Lock()\n\n\ndef save(defer=False):\n global save_timer\n if not _config:\n return\n dirname = _os.path.dirname(_yaml_file_path)\n if not _path.isdir(dirname):\n try:\n _os.makedirs(dirname)\n except Exception:\n _log.error('failed to create %s', dirname)\n return\n if not defer:\n do_save()\n else:\n with save_lock:\n if not save_timer:\n save_timer = _Timer(5.0, lambda: GLib.idle_add(do_save))\n save_timer.start()\n\n\ndef do_save():\n global save_timer\n with save_lock:\n if save_timer:\n save_timer.cancel()\n save_timer = None\n try:\n with open(_yaml_file_path, 
'w') as config_file:\n _yaml.dump(_config, config_file, default_flow_style=None, width=150)\n if _log.isEnabledFor(_INFO):\n _log.info('saved %s to %s', _config, _yaml_file_path)\n except Exception as e:\n _log.error('failed to save to %s: %s', _yaml_file_path, e)\n\n\ndef _convert_json(json_dict):\n config = [json_dict.get(_KEY_VERSION)]\n for key, dev in json_dict.items():\n key = key.split(':')\n if len(key) == 2:\n dev[_KEY_WPID] = dev.get(_KEY_WPID) if dev.get(_KEY_WPID) else key[0]\n dev[_KEY_SERIAL] = dev.get(_KEY_SERIAL) if dev.get(_KEY_SERIAL) else key[1]\n for k, v in dev.items():\n if type(k) == str and not k.startswith('_') and type(v) == dict: # convert string keys to ints\n v = {int(dk) if type(dk) == str else dk: dv for dk, dv in v.items()}\n dev[k] = v\n for k in ['mouse-gestures', 'dpi-sliding']:\n v = dev.get(k, None)\n if v is True or v is False:\n dev.pop(k)\n if '_name' in dev:\n dev[_KEY_NAME] = dev['_name']\n dev.pop('_name')\n config.append(dev)\n return config\n\n\ndef _cleanup_load(c):\n _config = [__version__]\n for element in c:\n if isinstance(element, dict):\n divert = element.get('divert-keys')\n if divert:\n sliding = element.get('dpi-sliding')\n if sliding: # convert old-style dpi-sliding setting to divert-keys entry\n divert[int(sliding)] = 3\n element.pop('dpi-sliding', None)\n gestures = element.get('mouse-gestures')\n if gestures: # convert old-style mouse-gestures setting to divert-keys entry\n divert[int(gestures)] = 2\n element.pop('mouse-gestures', None)\n # remove any string entries (from bad conversions)\n element['divert-keys'] = {k: v for k, v in divert.items() if isinstance(k, int)}\n # convert to device entries\n element = _DeviceEntry(**element)\n _config.append(element)\n return _config\n\n\nclass _DeviceEntry(dict):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n def __setitem__(self, key, value):\n super().__setitem__(key, value)\n save(defer=True)\n\n def update(self, device, modelId):\n if device.name and device.name != self.get(_KEY_NAME):\n super().__setitem__(_KEY_NAME, device.name)\n if device.wpid and device.wpid != self.get(_KEY_WPID):\n super().__setitem__(_KEY_WPID, device.wpid)\n if device.serial and device.serial != '?' and device.serial != self.get(_KEY_SERIAL):\n super().__setitem__(_KEY_SERIAL, device.serial)\n if modelId and modelId != self.get(_KEY_MODEL_ID):\n super().__setitem__(_KEY_MODEL_ID, modelId)\n if device.unitId and device.unitId != self.get(_KEY_UNIT_ID):\n super().__setitem__(_KEY_UNIT_ID, device.unitId)\n\n def get_sensitivity(self, name):\n return self.get(_KEY_SENSITIVE, {}).get(name, False)\n\n def set_sensitivity(self, name, value):\n sensitives = self.get(_KEY_SENSITIVE, {})\n if sensitives.get(name) != value:\n sensitives[name] = value\n self.__setitem__(_KEY_SENSITIVE, sensitives)\n\n\ndef device_representer(dumper, data):\n return dumper.represent_mapping('tag:yaml.org,2002:map', data)\n\n\n_yaml.add_representer(_DeviceEntry, device_representer)\n\n\ndef named_int_representer(dumper, data):\n return dumper.represent_scalar('tag:yaml.org,2002:int', str(int(data)))\n\n\n_yaml.add_representer(_NamedInt, named_int_representer)\n\n\n# A device can be identified by a combination of WPID and serial number (for receiver-connected devices)\n# or a combination of modelId and unitId (for direct-connected devices).\n# But some devices have empty (all zero) modelIds and unitIds. 
Use the device name as a backup for the modelId.\n# The worst situation is a receiver-connected device that Solaar has never seen on-line\n# that is directly connected. Here there is no way to realize that the two devices are the same.\n# So new entries are not created for unseen off-line receiver-connected devices except for those with protocol 1.0\ndef persister(device):\n def match(wpid, serial, modelId, unitId, c):\n return ((wpid and wpid == c.get(_KEY_WPID) and serial and serial == c.get(_KEY_SERIAL)) or (\n modelId and modelId != '000000000000' and modelId == c.get(_KEY_MODEL_ID) and unitId\n and unitId == c.get(_KEY_UNIT_ID)\n ))\n\n if not _config:\n _load()\n entry = None\n modelId = device.modelId if device.modelId != '000000000000' else device.name if device.modelId else None\n for c in _config:\n if isinstance(c, _DeviceEntry) and match(device.wpid, device.serial, modelId, device.unitId, c):\n entry = c\n break\n if not entry:\n if not device.online and not device.serial: # don't create entry for offline devices without serial number\n if _log.isEnabledFor(_INFO):\n _log.info('not setting up persister for offline device %s with missing serial number', device.name)\n return\n if _log.isEnabledFor(_INFO):\n _log.info('setting up persister for device %s', device.name)\n entry = _DeviceEntry()\n _config.append(entry)\n entry.update(device, modelId)\n return entry\n\n\ndef attach_to(device):\n pass\n", "path": "lib/solaar/configuration.py"}]} | 3,844 | 510 |
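The Solaar diff above wraps the body of `_cleanup_load` in a try/except so that an empty `config.yaml` — which `yaml.safe_load` turns into `None` — no longer raises the reported `TypeError`. The toy reproduction below shows the same failure mode and defensive fix outside Solaar; it assumes PyYAML is installed and uses a simplified stand-in for `_cleanup_load`, not Solaar's actual code.

```python
# Toy reproduction: an empty config.yaml makes yaml.safe_load return None,
# and iterating over None is exactly the TypeError from the traceback.
# cleanup_load below is a simplified stand-in, not Solaar's implementation.
import yaml

__version__ = '1.1.5'

def cleanup_load(loaded_config):
    config = [__version__]
    try:
        for element in loaded_config:
            if isinstance(element, dict):
                config.append(dict(element))
    except Exception as exc:
        print('ignoring unusable config contents:', exc)
    return config

empty_file_contents = ''                      # what a blank config.yaml holds
loaded = yaml.safe_load(empty_file_contents)  # -> None
print(cleanup_load(loaded))                   # -> ['1.1.5'], no crash
```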
gh_patches_debug_35494 | rasdani/github-patches | git_diff | ycm-core__ycmd-645 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
exit code not correct if importing ycm_core in global config fails
Hi,
I am not sure this is a real bug, but I encountered this while implementing handling of exit code in `emacs-ycmd`.
I had a `import ycm_core` in my global config. If importing fails there the line with `code = CompatibleWithCurrentCore()` in `__main__.py` will never be reached to return the correct exit code and then I just get an exit code 1.
</issue>
<code>
[start of ycmd/extra_conf_store.py]
1 # Copyright (C) 2011, 2012 Google Inc.
2 #
3 # This file is part of ycmd.
4 #
5 # ycmd is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # ycmd is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with ycmd. If not, see <http://www.gnu.org/licenses/>.
17
18 # NOTE: This module is used as a Singleton
19
20 from __future__ import unicode_literals
21 from __future__ import print_function
22 from __future__ import division
23 from __future__ import absolute_import
24 from future import standard_library
25 standard_library.install_aliases()
26 from builtins import * # noqa
27
28 import os
29 import random
30 import string
31 import sys
32 import logging
33 from threading import Lock
34 from ycmd import user_options_store
35 from ycmd.responses import UnknownExtraConf, YCM_EXTRA_CONF_FILENAME
36 from ycmd.utils import LoadPythonSource, PathsToAllParentFolders
37 from fnmatch import fnmatch
38
39
40 # Singleton variables
41 _module_for_module_file = {}
42 _module_for_module_file_lock = Lock()
43 _module_file_for_source_file = {}
44 _module_file_for_source_file_lock = Lock()
45
46
47 def Reset():
48 global _module_for_module_file, _module_file_for_source_file
49 _module_for_module_file = {}
50 _module_file_for_source_file = {}
51
52
53 def ModuleForSourceFile( filename ):
54 return Load( ModuleFileForSourceFile( filename ) )
55
56
57 def ModuleFileForSourceFile( filename ):
58 """This will try all files returned by _ExtraConfModuleSourceFilesForFile in
59 order and return the filename of the first module that was allowed to load.
60 If no module was found or allowed to load, None is returned."""
61
62 with _module_file_for_source_file_lock:
63 if filename not in _module_file_for_source_file:
64 for module_file in _ExtraConfModuleSourceFilesForFile( filename ):
65 if Load( module_file ):
66 _module_file_for_source_file[ filename ] = module_file
67 break
68
69 return _module_file_for_source_file.setdefault( filename )
70
71
72 def CallGlobalExtraConfYcmCorePreloadIfExists():
73 _CallGlobalExtraConfMethod( 'YcmCorePreload' )
74
75
76 def Shutdown():
77 # VimClose is for the sake of backwards compatibility; it's a no-op when it
78 # doesn't exist.
79 _CallGlobalExtraConfMethod( 'VimClose' )
80 _CallGlobalExtraConfMethod( 'Shutdown' )
81
82
83 def _CallGlobalExtraConfMethod( function_name ):
84 logger = _Logger()
85 global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()
86 if not ( global_ycm_extra_conf and
87 os.path.exists( global_ycm_extra_conf ) ):
88 logger.debug( 'No global extra conf, not calling method ' + function_name )
89 return
90
91 module = Load( global_ycm_extra_conf, force = True )
92 if not module or not hasattr( module, function_name ):
93 logger.debug( 'Global extra conf not loaded or no function ' +
94 function_name )
95 return
96
97 logger.info( 'Calling global extra conf method {0} on conf file {1}'.format(
98 function_name, global_ycm_extra_conf ) )
99 getattr( module, function_name )()
100
101
102 def Disable( module_file ):
103 """Disables the loading of a module for the current session."""
104 with _module_for_module_file_lock:
105 _module_for_module_file[ module_file ] = None
106
107
108 def _ShouldLoad( module_file ):
109 """Checks if a module is safe to be loaded. By default this will try to
110 decide using a white-/blacklist and ask the user for confirmation as a
111 fallback."""
112
113 if ( module_file == _GlobalYcmExtraConfFileLocation() or
114 not user_options_store.Value( 'confirm_extra_conf' ) ):
115 return True
116
117 globlist = user_options_store.Value( 'extra_conf_globlist' )
118 for glob in globlist:
119 is_blacklisted = glob[0] == '!'
120 if _MatchesGlobPattern( module_file, glob.lstrip('!') ):
121 return not is_blacklisted
122
123 raise UnknownExtraConf( module_file )
124
125
126 def Load( module_file, force = False ):
127 """Load and return the module contained in a file.
128 Using force = True the module will be loaded regardless
129 of the criteria in _ShouldLoad.
130 This will return None if the module was not allowed to be loaded."""
131
132 if not module_file:
133 return None
134
135 if not force:
136 with _module_for_module_file_lock:
137 if module_file in _module_for_module_file:
138 return _module_for_module_file[ module_file ]
139
140 if not _ShouldLoad( module_file ):
141 Disable( module_file )
142 return None
143
144 # This has to be here because a long time ago, the ycm_extra_conf.py files
145 # used to import clang_helpers.py from the cpp folder. This is not needed
146 # anymore, but there are a lot of old ycm_extra_conf.py files that we don't
147 # want to break.
148 sys.path.insert( 0, _PathToCppCompleterFolder() )
149
150 # By default, the Python interpreter compiles source files into bytecode to
151 # load them faster next time they are run. These *.pyc files are generated
152 # along the source files prior to Python 3.2 or in a __pycache__ folder for
153 # newer versions. We disable the generation of these files when loading
154 # ycm_extra_conf.py files as users do not want them inside their projects.
155 # The drawback is negligible since ycm_extra_conf.py files are generally small
156 # files thus really fast to compile and only loaded once by editing session.
157 old_dont_write_bytecode = sys.dont_write_bytecode
158 sys.dont_write_bytecode = True
159 try:
160 module = LoadPythonSource( _RandomName(), module_file )
161 finally:
162 sys.dont_write_bytecode = old_dont_write_bytecode
163
164 del sys.path[ 0 ]
165
166 with _module_for_module_file_lock:
167 _module_for_module_file[ module_file ] = module
168 return module
169
170
171 def _MatchesGlobPattern( filename, glob ):
172 """Returns true if a filename matches a given pattern. A '~' in glob will be
173 expanded to the home directory and checking will be performed using absolute
174 paths. See the documentation of fnmatch for the supported patterns."""
175
176 abspath = os.path.abspath( filename )
177 return fnmatch( abspath, os.path.abspath( os.path.expanduser( glob ) ) )
178
179
180 def _ExtraConfModuleSourceFilesForFile( filename ):
181 """For a given filename, search all parent folders for YCM_EXTRA_CONF_FILENAME
182 files that will compute the flags necessary to compile the file.
183 If _GlobalYcmExtraConfFileLocation() exists it is returned as a fallback."""
184
185 for folder in PathsToAllParentFolders( filename ):
186 candidate = os.path.join( folder, YCM_EXTRA_CONF_FILENAME )
187 if os.path.exists( candidate ):
188 yield candidate
189 global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()
190 if ( global_ycm_extra_conf
191 and os.path.exists( global_ycm_extra_conf ) ):
192 yield global_ycm_extra_conf
193
194
195 def _PathToCppCompleterFolder():
196 """Returns the path to the 'cpp' completer folder. This is necessary
197 because ycm_extra_conf files need it on the path."""
198 return os.path.join( _DirectoryOfThisScript(), 'completers', 'cpp' )
199
200
201 def _DirectoryOfThisScript():
202 return os.path.dirname( os.path.abspath( __file__ ) )
203
204
205 def _RandomName():
206 """Generates a random module name."""
207 return ''.join( random.choice( string.ascii_lowercase ) for x in range( 15 ) )
208
209
210 def _GlobalYcmExtraConfFileLocation():
211 return os.path.expanduser(
212 user_options_store.Value( 'global_ycm_extra_conf' ) )
213
214
215 def _Logger():
216 return logging.getLogger( __name__ )
217
[end of ycmd/extra_conf_store.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ycmd/extra_conf_store.py b/ycmd/extra_conf_store.py
--- a/ycmd/extra_conf_store.py
+++ b/ycmd/extra_conf_store.py
@@ -1,4 +1,5 @@
-# Copyright (C) 2011, 2012 Google Inc.
+# Copyright (C) 2011-2012 Google Inc.
+# 2016 ycmd contributors
#
# This file is part of ycmd.
#
@@ -37,6 +38,8 @@
from fnmatch import fnmatch
+_logger = logging.getLogger( __name__ )
+
# Singleton variables
_module_for_module_file = {}
_module_for_module_file_lock = Lock()
@@ -81,22 +84,34 @@
def _CallGlobalExtraConfMethod( function_name ):
- logger = _Logger()
global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()
if not ( global_ycm_extra_conf and
os.path.exists( global_ycm_extra_conf ) ):
- logger.debug( 'No global extra conf, not calling method ' + function_name )
+ _logger.debug( 'No global extra conf, '
+ 'not calling method {0}'.format( function_name ) )
+ return
+
+ try:
+ module = Load( global_ycm_extra_conf, force = True )
+ except Exception:
+ _logger.exception( 'Error occurred while loading '
+ 'global extra conf {0}'.format( global_ycm_extra_conf ) )
return
- module = Load( global_ycm_extra_conf, force = True )
if not module or not hasattr( module, function_name ):
- logger.debug( 'Global extra conf not loaded or no function ' +
- function_name )
+ _logger.debug( 'Global extra conf not loaded or no function ' +
+ function_name )
return
- logger.info( 'Calling global extra conf method {0} on conf file {1}'.format(
- function_name, global_ycm_extra_conf ) )
- getattr( module, function_name )()
+ try:
+ _logger.info(
+ 'Calling global extra conf method {0} '
+ 'on conf file {1}'.format( function_name, global_ycm_extra_conf ) )
+ getattr( module, function_name )()
+ except Exception:
+ _logger.exception(
+ 'Error occurred while calling global extra conf method {0} '
+ 'on conf file {1}'.format( function_name, global_ycm_extra_conf ) )
def Disable( module_file ):
@@ -210,7 +225,3 @@
def _GlobalYcmExtraConfFileLocation():
return os.path.expanduser(
user_options_store.Value( 'global_ycm_extra_conf' ) )
-
-
-def _Logger():
- return logging.getLogger( __name__ )
| {"golden_diff": "diff --git a/ycmd/extra_conf_store.py b/ycmd/extra_conf_store.py\n--- a/ycmd/extra_conf_store.py\n+++ b/ycmd/extra_conf_store.py\n@@ -1,4 +1,5 @@\n-# Copyright (C) 2011, 2012 Google Inc.\n+# Copyright (C) 2011-2012 Google Inc.\n+# 2016 ycmd contributors\n #\n # This file is part of ycmd.\n #\n@@ -37,6 +38,8 @@\n from fnmatch import fnmatch\n \n \n+_logger = logging.getLogger( __name__ )\n+\n # Singleton variables\n _module_for_module_file = {}\n _module_for_module_file_lock = Lock()\n@@ -81,22 +84,34 @@\n \n \n def _CallGlobalExtraConfMethod( function_name ):\n- logger = _Logger()\n global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()\n if not ( global_ycm_extra_conf and\n os.path.exists( global_ycm_extra_conf ) ):\n- logger.debug( 'No global extra conf, not calling method ' + function_name )\n+ _logger.debug( 'No global extra conf, '\n+ 'not calling method {0}'.format( function_name ) )\n+ return\n+\n+ try:\n+ module = Load( global_ycm_extra_conf, force = True )\n+ except Exception:\n+ _logger.exception( 'Error occurred while loading '\n+ 'global extra conf {0}'.format( global_ycm_extra_conf ) )\n return\n \n- module = Load( global_ycm_extra_conf, force = True )\n if not module or not hasattr( module, function_name ):\n- logger.debug( 'Global extra conf not loaded or no function ' +\n- function_name )\n+ _logger.debug( 'Global extra conf not loaded or no function ' +\n+ function_name )\n return\n \n- logger.info( 'Calling global extra conf method {0} on conf file {1}'.format(\n- function_name, global_ycm_extra_conf ) )\n- getattr( module, function_name )()\n+ try:\n+ _logger.info(\n+ 'Calling global extra conf method {0} '\n+ 'on conf file {1}'.format( function_name, global_ycm_extra_conf ) )\n+ getattr( module, function_name )()\n+ except Exception:\n+ _logger.exception(\n+ 'Error occurred while calling global extra conf method {0} '\n+ 'on conf file {1}'.format( function_name, global_ycm_extra_conf ) )\n \n \n def Disable( module_file ):\n@@ -210,7 +225,3 @@\n def _GlobalYcmExtraConfFileLocation():\n return os.path.expanduser(\n user_options_store.Value( 'global_ycm_extra_conf' ) )\n-\n-\n-def _Logger():\n- return logging.getLogger( __name__ )\n", "issue": "exit code not correct if importing ycm_core in global config fails\nHi,\n\nI am not sure this is a real bug, but I encountered this while implementing handling of exit code in `emacs-ycmd`.\n\nI had a `import ycm_core` in my global config. If importing fails there the line with `code = CompatibleWithCurrentCore()` in `__main__.py` will never be reached to return the correct exit code and then I just get an exit code 1.\n\n", "before_files": [{"content": "# Copyright (C) 2011, 2012 Google Inc.\n#\n# This file is part of ycmd.\n#\n# ycmd is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# ycmd is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with ycmd. 
If not, see <http://www.gnu.org/licenses/>.\n\n# NOTE: This module is used as a Singleton\n\nfrom __future__ import unicode_literals\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\nfrom future import standard_library\nstandard_library.install_aliases()\nfrom builtins import * # noqa\n\nimport os\nimport random\nimport string\nimport sys\nimport logging\nfrom threading import Lock\nfrom ycmd import user_options_store\nfrom ycmd.responses import UnknownExtraConf, YCM_EXTRA_CONF_FILENAME\nfrom ycmd.utils import LoadPythonSource, PathsToAllParentFolders\nfrom fnmatch import fnmatch\n\n\n# Singleton variables\n_module_for_module_file = {}\n_module_for_module_file_lock = Lock()\n_module_file_for_source_file = {}\n_module_file_for_source_file_lock = Lock()\n\n\ndef Reset():\n global _module_for_module_file, _module_file_for_source_file\n _module_for_module_file = {}\n _module_file_for_source_file = {}\n\n\ndef ModuleForSourceFile( filename ):\n return Load( ModuleFileForSourceFile( filename ) )\n\n\ndef ModuleFileForSourceFile( filename ):\n \"\"\"This will try all files returned by _ExtraConfModuleSourceFilesForFile in\n order and return the filename of the first module that was allowed to load.\n If no module was found or allowed to load, None is returned.\"\"\"\n\n with _module_file_for_source_file_lock:\n if filename not in _module_file_for_source_file:\n for module_file in _ExtraConfModuleSourceFilesForFile( filename ):\n if Load( module_file ):\n _module_file_for_source_file[ filename ] = module_file\n break\n\n return _module_file_for_source_file.setdefault( filename )\n\n\ndef CallGlobalExtraConfYcmCorePreloadIfExists():\n _CallGlobalExtraConfMethod( 'YcmCorePreload' )\n\n\ndef Shutdown():\n # VimClose is for the sake of backwards compatibility; it's a no-op when it\n # doesn't exist.\n _CallGlobalExtraConfMethod( 'VimClose' )\n _CallGlobalExtraConfMethod( 'Shutdown' )\n\n\ndef _CallGlobalExtraConfMethod( function_name ):\n logger = _Logger()\n global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()\n if not ( global_ycm_extra_conf and\n os.path.exists( global_ycm_extra_conf ) ):\n logger.debug( 'No global extra conf, not calling method ' + function_name )\n return\n\n module = Load( global_ycm_extra_conf, force = True )\n if not module or not hasattr( module, function_name ):\n logger.debug( 'Global extra conf not loaded or no function ' +\n function_name )\n return\n\n logger.info( 'Calling global extra conf method {0} on conf file {1}'.format(\n function_name, global_ycm_extra_conf ) )\n getattr( module, function_name )()\n\n\ndef Disable( module_file ):\n \"\"\"Disables the loading of a module for the current session.\"\"\"\n with _module_for_module_file_lock:\n _module_for_module_file[ module_file ] = None\n\n\ndef _ShouldLoad( module_file ):\n \"\"\"Checks if a module is safe to be loaded. 
By default this will try to\n decide using a white-/blacklist and ask the user for confirmation as a\n fallback.\"\"\"\n\n if ( module_file == _GlobalYcmExtraConfFileLocation() or\n not user_options_store.Value( 'confirm_extra_conf' ) ):\n return True\n\n globlist = user_options_store.Value( 'extra_conf_globlist' )\n for glob in globlist:\n is_blacklisted = glob[0] == '!'\n if _MatchesGlobPattern( module_file, glob.lstrip('!') ):\n return not is_blacklisted\n\n raise UnknownExtraConf( module_file )\n\n\ndef Load( module_file, force = False ):\n \"\"\"Load and return the module contained in a file.\n Using force = True the module will be loaded regardless\n of the criteria in _ShouldLoad.\n This will return None if the module was not allowed to be loaded.\"\"\"\n\n if not module_file:\n return None\n\n if not force:\n with _module_for_module_file_lock:\n if module_file in _module_for_module_file:\n return _module_for_module_file[ module_file ]\n\n if not _ShouldLoad( module_file ):\n Disable( module_file )\n return None\n\n # This has to be here because a long time ago, the ycm_extra_conf.py files\n # used to import clang_helpers.py from the cpp folder. This is not needed\n # anymore, but there are a lot of old ycm_extra_conf.py files that we don't\n # want to break.\n sys.path.insert( 0, _PathToCppCompleterFolder() )\n\n # By default, the Python interpreter compiles source files into bytecode to\n # load them faster next time they are run. These *.pyc files are generated\n # along the source files prior to Python 3.2 or in a __pycache__ folder for\n # newer versions. We disable the generation of these files when loading\n # ycm_extra_conf.py files as users do not want them inside their projects.\n # The drawback is negligible since ycm_extra_conf.py files are generally small\n # files thus really fast to compile and only loaded once by editing session.\n old_dont_write_bytecode = sys.dont_write_bytecode\n sys.dont_write_bytecode = True\n try:\n module = LoadPythonSource( _RandomName(), module_file )\n finally:\n sys.dont_write_bytecode = old_dont_write_bytecode\n\n del sys.path[ 0 ]\n\n with _module_for_module_file_lock:\n _module_for_module_file[ module_file ] = module\n return module\n\n\ndef _MatchesGlobPattern( filename, glob ):\n \"\"\"Returns true if a filename matches a given pattern. A '~' in glob will be\n expanded to the home directory and checking will be performed using absolute\n paths. See the documentation of fnmatch for the supported patterns.\"\"\"\n\n abspath = os.path.abspath( filename )\n return fnmatch( abspath, os.path.abspath( os.path.expanduser( glob ) ) )\n\n\ndef _ExtraConfModuleSourceFilesForFile( filename ):\n \"\"\"For a given filename, search all parent folders for YCM_EXTRA_CONF_FILENAME\n files that will compute the flags necessary to compile the file.\n If _GlobalYcmExtraConfFileLocation() exists it is returned as a fallback.\"\"\"\n\n for folder in PathsToAllParentFolders( filename ):\n candidate = os.path.join( folder, YCM_EXTRA_CONF_FILENAME )\n if os.path.exists( candidate ):\n yield candidate\n global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()\n if ( global_ycm_extra_conf\n and os.path.exists( global_ycm_extra_conf ) ):\n yield global_ycm_extra_conf\n\n\ndef _PathToCppCompleterFolder():\n \"\"\"Returns the path to the 'cpp' completer folder. 
This is necessary\n because ycm_extra_conf files need it on the path.\"\"\"\n return os.path.join( _DirectoryOfThisScript(), 'completers', 'cpp' )\n\n\ndef _DirectoryOfThisScript():\n return os.path.dirname( os.path.abspath( __file__ ) )\n\n\ndef _RandomName():\n \"\"\"Generates a random module name.\"\"\"\n return ''.join( random.choice( string.ascii_lowercase ) for x in range( 15 ) )\n\n\ndef _GlobalYcmExtraConfFileLocation():\n return os.path.expanduser(\n user_options_store.Value( 'global_ycm_extra_conf' ) )\n\n\ndef _Logger():\n return logging.getLogger( __name__ )\n", "path": "ycmd/extra_conf_store.py"}]} | 3,040 | 646 |
gh_patches_debug_761 | rasdani/github-patches | git_diff | encode__uvicorn-324 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: __init__() when run "uvicorn app:App"
I'm working on macOS Sierra 10.12.6, with Python 3.7.2 and uvicorn 0.5.1 installed via pip3.
When I run the example `uvicorn app:App`, I get the following error:
Traceback (most recent call last):
File "/usr/local/bin/uvicorn", line 11, in <module>
load_entry_point('uvicorn==0.5.1', 'console_scripts', 'uvicorn')()
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 489, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2793, in load_entry_point
return ep.load()
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2411, in load
return self.resolve()
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2417, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python3.7/site-packages/uvicorn/__init__.py", line 2, in <module>
from uvicorn.main import Server, main, run
File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 212, in <module>
ssl_ciphers: str,
File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 170, in decorator
_param_memo(f, OptionClass(param_decls, **attrs))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1460, in __init__
Parameter.__init__(self, param_decls, type=type, **attrs)
TypeError: __init__() got an unexpected keyword argument 'hidden'
Thank you
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6 import sys
7 import platform
8
9 from setuptools import setup
10
11
12 def get_version(package):
13 """
14 Return package version as listed in `__version__` in `init.py`.
15 """
16 path = os.path.join(package, '__init__.py')
17 init_py = open(path, 'r', encoding='utf8').read()
18 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
19
20
21 def get_long_description():
22 """
23 Return the README.
24 """
25 return open('README.md', 'r', encoding='utf8').read()
26
27
28 def get_packages(package):
29 """
30 Return root package and all sub-packages.
31 """
32 return [dirpath
33 for dirpath, dirnames, filenames in os.walk(package)
34 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
35
36
37 env_marker = (
38 "sys_platform != 'win32'"
39 " and sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'pypy'"
41 )
42
43 requirements = [
44 "click",
45 "h11",
46 "websockets>=6.0",
47 "httptools;" + env_marker,
48 "uvloop;" + env_marker,
49 ]
50
51
52 setup(
53 name='uvicorn',
54 version=get_version('uvicorn'),
55 url='https://github.com/encode/uvicorn',
56 license='BSD',
57 description='The lightning-fast ASGI server.',
58 long_description=get_long_description(),
59 long_description_content_type='text/markdown',
60 author='Tom Christie',
61 author_email='[email protected]',
62 packages=get_packages('uvicorn'),
63 install_requires=requirements,
64 data_files = [("", ["LICENSE.md"])],
65 classifiers=[
66 'Development Status :: 3 - Alpha',
67 'Environment :: Web Environment',
68 'Intended Audience :: Developers',
69 'License :: OSI Approved :: BSD License',
70 'Operating System :: OS Independent',
71 'Topic :: Internet :: WWW/HTTP',
72 'Programming Language :: Python :: 3',
73 'Programming Language :: Python :: 3.5',
74 'Programming Language :: Python :: 3.6',
75 'Programming Language :: Python :: 3.7',
76 'Programming Language :: Python :: Implementation :: CPython',
77 'Programming Language :: Python :: Implementation :: PyPy',
78 ],
79 entry_points="""
80 [console_scripts]
81 uvicorn=uvicorn.main:main
82 """
83 )
84
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,11 +41,11 @@
)
requirements = [
- "click",
- "h11",
- "websockets>=6.0",
- "httptools;" + env_marker,
- "uvloop;" + env_marker,
+ "click==7.*",
+ "h11==0.8.*",
+ "websockets==7.*",
+ "httptools==0.0.13 ;" + env_marker,
+ "uvloop==0.12.* ;" + env_marker,
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,11 +41,11 @@\n )\n \n requirements = [\n- \"click\",\n- \"h11\",\n- \"websockets>=6.0\",\n- \"httptools;\" + env_marker,\n- \"uvloop;\" + env_marker,\n+ \"click==7.*\",\n+ \"h11==0.8.*\",\n+ \"websockets==7.*\",\n+ \"httptools==0.0.13 ;\" + env_marker,\n+ \"uvloop==0.12.* ;\" + env_marker,\n ]\n", "issue": "TypeError: __init__() when run \"uvicorn app:App\"\nI'm working on Mac Os Sierra 10.12.6, python 3.7.2 and uvicorn via pip3 0.5.1.\r\nWhen I run the example uvicorn app:App get the following error:\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/uvicorn\", line 11, in <module>\r\n load_entry_point('uvicorn==0.5.1', 'console_scripts', 'uvicorn')()\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 489, in load_entry_point\r\n return get_distribution(dist).load_entry_point(group, name)\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2793, in load_entry_point\r\n return ep.load()\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2411, in load\r\n return self.resolve()\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2417, in resolve\r\n module = __import__(self.module_name, fromlist=['__name__'], level=0)\r\n File \"/usr/local/lib/python3.7/site-packages/uvicorn/__init__.py\", line 2, in <module>\r\n from uvicorn.main import Server, main, run\r\n File \"/usr/local/lib/python3.7/site-packages/uvicorn/main.py\", line 212, in <module>\r\n ssl_ciphers: str,\r\n File \"/usr/local/lib/python3.7/site-packages/click/decorators.py\", line 170, in decorator\r\n _param_memo(f, OptionClass(param_decls, **attrs))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1460, in __init__\r\n Parameter.__init__(self, param_decls, type=type, **attrs)\r\nTypeError: __init__() got an unexpected keyword argument 'hidden'\r\n\r\nThank you\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click\",\n \"h11\",\n \"websockets>=6.0\",\n \"httptools;\" + env_marker,\n \"uvloop;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 
'Development Status :: 3 - Alpha',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}]} | 1,692 | 148 |
gh_patches_debug_26776 | rasdani/github-patches | git_diff | quantumlib__Cirq-1865 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add QASM importer
As the other leg of #44 and maybe a partial solution to #862 depending on the gate sets - as we discussed on today's sync meeting, a QASM importer would be useful.
I'm happy to design and implement it.
</issue>
<code>
[start of cirq/contrib/qasm_import/__init__.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from cirq.contrib.qasm_import.exception import (QasmException)
16 from cirq.contrib.qasm_import.qasm import (QasmCircuitParser)
17
[end of cirq/contrib/qasm_import/__init__.py]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 # coverage: ignore
3
4 # Configuration file for the Sphinx documentation builder.
5 # See http://www.sphinx-doc.org/en/master/config for help
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 from typing import List, Any
14
15 import os
16 import sys
17
18 import pypandoc
19
20 cirq_root_path = os.path.dirname(os.path.dirname(__file__))
21 sys.path.insert(0, cirq_root_path)
22
23
24 def setup(app):
25 app.add_config_value('pandoc_use_parser', 'markdown', True)
26 app.connect('autodoc-process-docstring', pandoc_process)
27
28
29 def convert_markdown_mathjax_for_rst(lines: List[str]) -> List[str]:
30 if all('$$' not in line for line in lines):
31 return lines
32
33 data = '\n'.join(lines)
34 sections = data.split('$$')
35 if len(sections) % 2 != 1:
36 raise ValueError('Mismatched number of "$$" latex tokens.')
37
38 result = []
39 for i, s in enumerate(sections):
40 if i % 2:
41 # Avoid getting split across divs.
42 s = ' '.join(s.split('\n'))
43 # Avoid intermediate layers turning our newlines into slashes.
44 s = s.replace('\\\\', '\\newline')
45 # Keep the $$ so MathJax can find it.
46 result.append('$${}$$'.format(s))
47 else:
48 # Work around bad table detection in pandoc by concatenating
49 # lines from the same paragraph.
50 s = '\n\n'.join(e.replace('\n', ' ') for e in s.split('\n\n'))
51
52 # Convert markdown to rst.
53 out = pypandoc.convert(s, to='rst', format='markdown_github')
54
55 # Not sure why pandoc is escaping these...
56 out = out.replace(r'\|', '|')
57
58 result.extend(out.split('\n'))
59
60 return result
61
62
63 def pandoc_process(app,
64 what: str,
65 name: str,
66 obj: Any,
67 options,
68 lines: List[str]
69 ) -> None:
70 if not getattr(obj, '__module__', 'cirq').startswith('cirq'):
71 # Don't convert objects from other modules.
72 return
73
74 # Don't convert output from Napoleon extension, which is already rst.
75 i = 0
76 while i < len(lines) and not lines[i].startswith(':'):
77 i += 1
78 if not i:
79 return
80
81 converted_lines = convert_markdown_mathjax_for_rst(lines[:i])
82 kept_lines = lines[i:]
83
84 data = pypandoc.convert(
85 '\n'.join(converted_lines),
86 to='rst',
87 format='markdown_github',
88 )
89
90 lines[:] = data.split('\n') + kept_lines
91
92
93 # -- Project information -----------------------------------------------------
94
95 project = 'Cirq'
96 copyright = '2018, The Cirq Developers' # pylint: disable=redefined-builtin
97 author = 'The Cirq Developers'
98
99 # The full version, including alpha/beta/rc tags
100 __version__ = ''
101 exec(open(os.path.join(cirq_root_path, 'cirq', '_version.py')).read())
102 release = __version__
103
104 # The short X.Y version
105 version = release # '.'.join(release.split('.')[:2])
106
107 # -- General configuration ---------------------------------------------------
108
109 # If your documentation needs a minimal Sphinx version, state it here.
110 # needs_sphinx = '1.0'
111
112 # Add any Sphinx extension module names here, as strings. They can be
113 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
114 # ones.
115 extensions = [
116 'sphinx.ext.autodoc',
117 'sphinx.ext.autosummary',
118 'sphinx.ext.doctest',
119 'sphinx.ext.mathjax',
120 'sphinx.ext.napoleon',
121 'sphinx.ext.viewcode'
122 ]
123
124 # Add any paths that contain templates here, relative to this directory.
125 templates_path = ['_templates']
126
127 # Allow markdown includes.
128 # http://www.sphinx-doc.org/en/master/markdown.html
129 source_parsers = {
130 '.md': 'recommonmark.parser.CommonMarkParser',
131 }
132
133 # The suffix(es) of source filenames.
134 # You can specify multiple suffix as a list of string:
135 #
136 source_suffix = ['.rst', '.md']
137
138 # The master toctree document.
139 master_doc = 'index'
140
141 # The language for content autogenerated by Sphinx. Refer to documentation
142 # for a list of supported languages.
143 #
144 # This is also used if you do content translation via gettext catalogs.
145 # Usually you set "language" from the command line for these cases.
146 language = None
147
148 # List of patterns, relative to source directory, that match files and
149 # directories to ignore when looking for source files.
150 # This pattern also affects html_static_path and html_extra_path .
151 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
152
153 # The name of the Pygments (syntax highlighting) style to use.
154 pygments_style = 'sphinx'
155
156
157 # -- Options for HTML output ---------------------------------------------
158
159 html_theme = 'sphinx_rtd_theme'
160 html_favicon = 'favicon.ico'
161 # html_theme_options = {}
162
163 # Add any paths that contain custom static files (such as style sheets) here,
164 # relative to this directory. They are copied after the builtin static files,
165 # so a file named "default.css" will overwrite the builtin "default.css".
166 # html_static_path = ['_static']
167
168 # Custom sidebar templates, must be a dictionary that maps document names
169 # to template names.
170 #
171 # The default sidebars (for documents that don't match any pattern) are
172 # defined by theme itself. Builtin themes are using these templates by
173 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
174 # 'searchbox.html']``.
175 #
176 # html_sidebars = {}
177
178
179 # -- Options for HTMLHelp output -----------------------------------------
180
181 # Output file base name for HTML help builder.
182 htmlhelp_basename = 'Cirqdoc'
183
184
185 # -- Options for LaTeX output --------------------------------------------
186
187 latex_elements = {
188 # The paper size ('letterpaper' or 'a4paper').
189 'papersize': 'letterpaper',
190
191 # The font size ('10pt', '11pt' or '12pt').
192 # 'pointsize': '10pt',
193
194 # Additional stuff for the LaTeX preamble.
195 # 'preamble': '',
196
197 # Latex figure (float) alignment
198 # 'figure_align': 'htbp',
199 }
200
201 # Grouping the document tree into LaTeX files. List of tuples
202 # (source start file, target name, title,
203 # author, documentclass [howto, manual, or own class]).
204 latex_documents = [
205 (master_doc, 'Cirq.tex', 'Cirq Documentation',
206 'The Cirq Developers', 'manual'),
207 ]
208
209
210 # -- Options for manual page output --------------------------------------
211
212 # One entry per manual page. List of tuples
213 # (source start file, name, description, authors, manual section).
214 man_pages = [
215 (master_doc, 'cirq', 'Cirq Documentation',
216 [author], 1)
217 ]
218
219
220 # -- Options for Texinfo output ------------------------------------------
221
222 # Grouping the document tree into Texinfo files. List of tuples
223 # (source start file, target name, title, author,
224 # dir menu entry, description, category)
225 texinfo_documents = [
226 (master_doc, 'Cirq', 'Cirq Documentation',
227 author, 'Cirq', 'A python library for NISQ circuits.',
228 'Miscellaneous'),
229 ]
230
231
232 # -- Extension configuration -------------------------------------------------
233
234 # Generate subpages for reference docs automatically.
235 # http://www.sphinx-doc.org/en/master/ext/autosummary.html#generating-stub-pages-automatically
236 autosummary_generate = True
237
[end of docs/conf.py]
[start of cirq/contrib/qasm_import/qasm.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from cirq import circuits
16 from cirq.contrib.qasm_import._parser import QasmParser
17
18
19 class QasmCircuitParser:
20 """QasmCircuitParser is currently partially developed, not functional,
21 DO NOT USE.
22 TODO(https://github.com/quantumlib/Cirq/issues/1548)
23 It will serve as the entrypoint for parsing QASM files."""
24
25 def __init__(self):
26 pass
27
28 def parse(self, qasm: str) -> circuits.Circuit:
29 return QasmParser().parse(qasm).circuit
30
[end of cirq/contrib/qasm_import/qasm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/contrib/qasm_import/__init__.py b/cirq/contrib/qasm_import/__init__.py
--- a/cirq/contrib/qasm_import/__init__.py
+++ b/cirq/contrib/qasm_import/__init__.py
@@ -12,5 +12,5 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from cirq.contrib.qasm_import.exception import (QasmException)
-from cirq.contrib.qasm_import.qasm import (QasmCircuitParser)
+from cirq.contrib.qasm_import.exception import QasmException
+from cirq.contrib.qasm_import.qasm import circuit_from_qasm
diff --git a/cirq/contrib/qasm_import/qasm.py b/cirq/contrib/qasm_import/qasm.py
--- a/cirq/contrib/qasm_import/qasm.py
+++ b/cirq/contrib/qasm_import/qasm.py
@@ -16,14 +16,14 @@
from cirq.contrib.qasm_import._parser import QasmParser
-class QasmCircuitParser:
- """QasmCircuitParser is currently partially developed, not functional,
- DO NOT USE.
- TODO(https://github.com/quantumlib/Cirq/issues/1548)
- It will serve as the entrypoint for parsing QASM files."""
+def circuit_from_qasm(qasm: str) -> circuits.Circuit:
+ """Parses an OpenQASM string to `cirq.Circuit`.
- def __init__(self):
- pass
+ Args:
+ qasm: The OpenQASM string
- def parse(self, qasm: str) -> circuits.Circuit:
- return QasmParser().parse(qasm).circuit
+ Returns:
+ The parsed circuit
+ """
+
+ return QasmParser().parse(qasm).circuit
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -118,7 +118,8 @@
'sphinx.ext.doctest',
'sphinx.ext.mathjax',
'sphinx.ext.napoleon',
- 'sphinx.ext.viewcode'
+ 'sphinx.ext.viewcode',
+ 'sphinx_markdown_tables',
]
# Add any paths that contain templates here, relative to this directory.
| {"golden_diff": "diff --git a/cirq/contrib/qasm_import/__init__.py b/cirq/contrib/qasm_import/__init__.py\n--- a/cirq/contrib/qasm_import/__init__.py\n+++ b/cirq/contrib/qasm_import/__init__.py\n@@ -12,5 +12,5 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-from cirq.contrib.qasm_import.exception import (QasmException)\n-from cirq.contrib.qasm_import.qasm import (QasmCircuitParser)\n+from cirq.contrib.qasm_import.exception import QasmException\n+from cirq.contrib.qasm_import.qasm import circuit_from_qasm\ndiff --git a/cirq/contrib/qasm_import/qasm.py b/cirq/contrib/qasm_import/qasm.py\n--- a/cirq/contrib/qasm_import/qasm.py\n+++ b/cirq/contrib/qasm_import/qasm.py\n@@ -16,14 +16,14 @@\n from cirq.contrib.qasm_import._parser import QasmParser\n \n \n-class QasmCircuitParser:\n- \"\"\"QasmCircuitParser is currently partially developed, not functional,\n- DO NOT USE.\n- TODO(https://github.com/quantumlib/Cirq/issues/1548)\n- It will serve as the entrypoint for parsing QASM files.\"\"\"\n+def circuit_from_qasm(qasm: str) -> circuits.Circuit:\n+ \"\"\"Parses an OpenQASM string to `cirq.Circuit`.\n \n- def __init__(self):\n- pass\n+ Args:\n+ qasm: The OpenQASM string\n \n- def parse(self, qasm: str) -> circuits.Circuit:\n- return QasmParser().parse(qasm).circuit\n+ Returns:\n+ The parsed circuit\n+ \"\"\"\n+\n+ return QasmParser().parse(qasm).circuit\ndiff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -118,7 +118,8 @@\n 'sphinx.ext.doctest',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n- 'sphinx.ext.viewcode'\n+ 'sphinx.ext.viewcode',\n+ 'sphinx_markdown_tables',\n ]\n \n # Add any paths that contain templates here, relative to this directory.\n", "issue": "Add QASM importer\nAs the other leg of #44 and maybe a partial solution to #862 depending on the gate sets - as we discussed on today's sync meeting, a QASM importer would be useful. \r\n\r\nI'm happy to design and implement it. \n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom cirq.contrib.qasm_import.exception import (QasmException)\nfrom cirq.contrib.qasm_import.qasm import (QasmCircuitParser)\n", "path": "cirq/contrib/qasm_import/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n# coverage: ignore\n\n# Configuration file for the Sphinx documentation builder.\n# See http://www.sphinx-doc.org/en/master/config for help\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nfrom typing import List, Any\n\nimport os\nimport sys\n\nimport pypandoc\n\ncirq_root_path = os.path.dirname(os.path.dirname(__file__))\nsys.path.insert(0, cirq_root_path)\n\n\ndef setup(app):\n app.add_config_value('pandoc_use_parser', 'markdown', True)\n app.connect('autodoc-process-docstring', pandoc_process)\n\n\ndef convert_markdown_mathjax_for_rst(lines: List[str]) -> List[str]:\n if all('$$' not in line for line in lines):\n return lines\n\n data = '\\n'.join(lines)\n sections = data.split('$$')\n if len(sections) % 2 != 1:\n raise ValueError('Mismatched number of \"$$\" latex tokens.')\n\n result = []\n for i, s in enumerate(sections):\n if i % 2:\n # Avoid getting split across divs.\n s = ' '.join(s.split('\\n'))\n # Avoid intermediate layers turning our newlines into slashes.\n s = s.replace('\\\\\\\\', '\\\\newline')\n # Keep the $$ so MathJax can find it.\n result.append('$${}$$'.format(s))\n else:\n # Work around bad table detection in pandoc by concatenating\n # lines from the same paragraph.\n s = '\\n\\n'.join(e.replace('\\n', ' ') for e in s.split('\\n\\n'))\n\n # Convert markdown to rst.\n out = pypandoc.convert(s, to='rst', format='markdown_github')\n\n # Not sure why pandoc is escaping these...\n out = out.replace(r'\\|', '|')\n\n result.extend(out.split('\\n'))\n\n return result\n\n\ndef pandoc_process(app,\n what: str,\n name: str,\n obj: Any,\n options,\n lines: List[str]\n ) -> None:\n if not getattr(obj, '__module__', 'cirq').startswith('cirq'):\n # Don't convert objects from other modules.\n return\n\n # Don't convert output from Napoleon extension, which is already rst.\n i = 0\n while i < len(lines) and not lines[i].startswith(':'):\n i += 1\n if not i:\n return\n\n converted_lines = convert_markdown_mathjax_for_rst(lines[:i])\n kept_lines = lines[i:]\n\n data = pypandoc.convert(\n '\\n'.join(converted_lines),\n to='rst',\n format='markdown_github',\n )\n\n lines[:] = data.split('\\n') + kept_lines\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Cirq'\ncopyright = '2018, The Cirq Developers' # pylint: disable=redefined-builtin\nauthor = 'The Cirq Developers'\n\n# The full version, including alpha/beta/rc tags\n__version__ = ''\nexec(open(os.path.join(cirq_root_path, 'cirq', '_version.py')).read())\nrelease = __version__\n\n# The short X.Y version\nversion = release # '.'.join(release.split('.')[:2])\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# Allow markdown includes.\n# http://www.sphinx-doc.org/en/master/markdown.html\nsource_parsers = {\n '.md': 'recommonmark.parser.CommonMarkParser',\n}\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# -- Options for HTML output ---------------------------------------------\n\nhtml_theme = 'sphinx_rtd_theme'\nhtml_favicon = 'favicon.ico'\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output -----------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Cirqdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Cirq.tex', 'Cirq Documentation',\n 'The Cirq Developers', 'manual'),\n]\n\n\n# -- Options for manual page output --------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'cirq', 'Cirq Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Cirq', 'Cirq Documentation',\n author, 'Cirq', 'A python library for NISQ circuits.',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# Generate subpages for reference docs automatically.\n# http://www.sphinx-doc.org/en/master/ext/autosummary.html#generating-stub-pages-automatically\nautosummary_generate = True\n", "path": "docs/conf.py"}, {"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom cirq import circuits\nfrom cirq.contrib.qasm_import._parser import QasmParser\n\n\nclass QasmCircuitParser:\n \"\"\"QasmCircuitParser is currently partially developed, not functional,\n DO NOT USE.\n TODO(https://github.com/quantumlib/Cirq/issues/1548)\n It will serve as the entrypoint for parsing QASM files.\"\"\"\n\n def __init__(self):\n pass\n\n def parse(self, qasm: str) -> circuits.Circuit:\n return QasmParser().parse(qasm).circuit\n", "path": "cirq/contrib/qasm_import/qasm.py"}]} | 3,486 | 520 |
gh_patches_debug_18498 | rasdani/github-patches | git_diff | Cog-Creators__Red-DiscordBot-1156 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[V3] [p]mock doesn't work with aliases
Please be sure to read through other issues as well to make sure what you are suggesting/reporting has not already been suggested/reported.
### Type:
- [ ] Suggestion
- [x] Bug
### Brief description of the problem
Attempting to run an alias as another user with `[p]mock` does nothing
### Expected behavior
It should run the command the alias is for
### Actual behavior
It does nothing
### Steps to reproduce
1. Create an alias (for example, `[p]alias add c contact`
2. Try to use mock with it (`[p]mock <user> c`)
3. Nothing happens
</issue>
<code>
[start of redbot/core/dev_commands.py]
1 import asyncio
2 import inspect
3 import io
4 import textwrap
5 import traceback
6 from contextlib import redirect_stdout
7
8 import discord
9 from discord.ext import commands
10 from . import checks
11 from .i18n import CogI18n
12 from .utils.chat_formatting import box, pagify
13 """
14 Notice:
15
16 95% of the below code came from R.Danny which can be found here:
17
18 https://github.com/Rapptz/RoboDanny/blob/master/cogs/repl.py
19 """
20
21 _ = CogI18n("Dev", __file__)
22
23
24 class Dev:
25 """Various development focused utilities."""
26
27 def __init__(self):
28 self._last_result = None
29 self.sessions = set()
30
31 @staticmethod
32 def cleanup_code(content):
33 """Automatically removes code blocks from the code."""
34 # remove ```py\n```
35 if content.startswith('```') and content.endswith('```'):
36 return '\n'.join(content.split('\n')[1:-1])
37
38 # remove `foo`
39 return content.strip('` \n')
40
41 @staticmethod
42 def get_syntax_error(e):
43 """Format a syntax error to send to the user.
44
45 Returns a string representation of the error formatted as a codeblock.
46 """
47 if e.text is None:
48 return box('{0.__class__.__name__}: {0}'.format(e), lang="py")
49 return box(
50 '{0.text}{1:>{0.offset}}\n{2}: {0}'
51 ''.format(e, '^', type(e).__name__),
52 lang="py")
53
54 @staticmethod
55 def get_pages(msg: str):
56 """Pagify the given message for output to the user."""
57 return pagify(msg, delims=["\n", " "], priority=True, shorten_by=10)
58
59 @staticmethod
60 def sanitize_output(ctx: commands.Context, input_: str) -> str:
61 """Hides the bot's token from a string."""
62 token = ctx.bot.http.token
63 r = "[EXPUNGED]"
64 result = input_.replace(token, r)
65 result = result.replace(token.lower(), r)
66 result = result.replace(token.upper(), r)
67 return result
68
69 @commands.command()
70 @checks.is_owner()
71 async def debug(self, ctx, *, code):
72 """Evaluate a statement of python code.
73
74 The bot will always respond with the return value of the code.
75 If the return value of the code is a coroutine, it will be awaited,
76 and the result of that will be the bot's response.
77
78 Note: Only one statement may be evaluated. Using await, yield or
79 similar restricted keywords will result in a syntax error. For multiple
80 lines or asynchronous code, see [p]repl or [p]eval.
81
82 Environment Variables:
83 ctx - command invokation context
84 bot - bot object
85 channel - the current channel object
86 author - command author's member object
87 message - the command's message object
88 discord - discord.py library
89 commands - discord.py commands extension
90 _ - The result of the last dev command.
91 """
92 env = {
93 'bot': ctx.bot,
94 'ctx': ctx,
95 'channel': ctx.channel,
96 'author': ctx.author,
97 'guild': ctx.guild,
98 'message': ctx.message,
99 'discord': discord,
100 'commands': commands,
101 '_': self._last_result
102 }
103
104 code = self.cleanup_code(code)
105
106 try:
107 result = eval(code, env)
108 except SyntaxError as e:
109 await ctx.send(self.get_syntax_error(e))
110 return
111 except Exception as e:
112 await ctx.send(
113 box('{}: {!s}'.format(type(e).__name__, e), lang='py'))
114 return
115
116 if asyncio.iscoroutine(result):
117 result = await result
118
119 self._last_result = result
120
121 result = self.sanitize_output(ctx, str(result))
122
123 await ctx.send_interactive(self.get_pages(result), box_lang="py")
124
125 @commands.command(name='eval')
126 @checks.is_owner()
127 async def _eval(self, ctx, *, body: str):
128 """Execute asynchronous code.
129
130 This command wraps code into the body of an async function and then
131 calls and awaits it. The bot will respond with anything printed to
132 stdout, as well as the return value of the function.
133
134 The code can be within a codeblock, inline code or neither, as long
135 as they are not mixed and they are formatted correctly.
136
137 Environment Variables:
138 ctx - command invokation context
139 bot - bot object
140 channel - the current channel object
141 author - command author's member object
142 message - the command's message object
143 discord - discord.py library
144 commands - discord.py commands extension
145 _ - The result of the last dev command.
146 """
147 env = {
148 'bot': ctx.bot,
149 'ctx': ctx,
150 'channel': ctx.channel,
151 'author': ctx.author,
152 'guild': ctx.guild,
153 'message': ctx.message,
154 'discord': discord,
155 'commands': commands,
156 '_': self._last_result
157 }
158
159 body = self.cleanup_code(body)
160 stdout = io.StringIO()
161
162 to_compile = 'async def func():\n%s' % textwrap.indent(body, ' ')
163
164 try:
165 exec(to_compile, env)
166 except SyntaxError as e:
167 return await ctx.send(self.get_syntax_error(e))
168
169 func = env['func']
170 result = None
171 try:
172 with redirect_stdout(stdout):
173 result = await func()
174 except:
175 printed = "{}{}".format(stdout.getvalue(), traceback.format_exc())
176 else:
177 printed = stdout.getvalue()
178 await ctx.tick()
179
180 if result is not None:
181 self._last_result = result
182 msg = "{}{}".format(printed, result)
183 else:
184 msg = printed
185 msg = self.sanitize_output(ctx, msg)
186
187 await ctx.send_interactive(self.get_pages(msg), box_lang="py")
188
189 @commands.command()
190 @checks.is_owner()
191 async def repl(self, ctx):
192 """Open an interactive REPL.
193
194 The REPL will only recognise code as messages which start with a
195 backtick. This includes codeblocks, and as such multiple lines can be
196 evaluated.
197
198 You may not await any code in this REPL unless you define it inside an
199 async function.
200 """
201 variables = {
202 'ctx': ctx,
203 'bot': ctx.bot,
204 'message': ctx.message,
205 'guild': ctx.guild,
206 'channel': ctx.channel,
207 'author': ctx.author,
208 '_': None,
209 }
210
211 if ctx.channel.id in self.sessions:
212 await ctx.send(_('Already running a REPL session in this channel. '
213 'Exit it with `quit`.'))
214 return
215
216 self.sessions.add(ctx.channel.id)
217 await ctx.send(_('Enter code to execute or evaluate.'
218 ' `exit()` or `quit` to exit.'))
219
220 msg_check = lambda m: (m.author == ctx.author and
221 m.channel == ctx.channel and
222 m.content.startswith('`'))
223
224 while True:
225 response = await ctx.bot.wait_for("message", check=msg_check)
226
227 cleaned = self.cleanup_code(response.content)
228
229 if cleaned in ('quit', 'exit', 'exit()'):
230 await ctx.send('Exiting.')
231 self.sessions.remove(ctx.channel.id)
232 return
233
234 executor = exec
235 if cleaned.count('\n') == 0:
236 # single statement, potentially 'eval'
237 try:
238 code = compile(cleaned, '<repl session>', 'eval')
239 except SyntaxError:
240 pass
241 else:
242 executor = eval
243
244 if executor is exec:
245 try:
246 code = compile(cleaned, '<repl session>', 'exec')
247 except SyntaxError as e:
248 await ctx.send(self.get_syntax_error(e))
249 continue
250
251 variables['message'] = response
252
253 stdout = io.StringIO()
254
255 msg = None
256
257 try:
258 with redirect_stdout(stdout):
259 result = executor(code, variables)
260 if inspect.isawaitable(result):
261 result = await result
262 except:
263 value = stdout.getvalue()
264 msg = "{}{}".format(value, traceback.format_exc())
265 else:
266 value = stdout.getvalue()
267 if result is not None:
268 msg = "{}{}".format(value, result)
269 variables['_'] = result
270 elif value:
271 msg = "{}".format(value)
272
273 msg = self.sanitize_output(ctx, msg)
274
275 try:
276 await ctx.send_interactive(self.get_pages(msg), box_lang="py")
277 except discord.Forbidden:
278 pass
279 except discord.HTTPException as e:
280 await ctx.send(_('Unexpected error: `{}`').format(e))
281
282 @commands.command()
283 @checks.is_owner()
284 async def mock(self, ctx, user: discord.Member, *, command):
285 """Mock another user invoking a command.
286
287 The prefix must not be entered.
288 """
289 # Since we have stateful objects now this might be pretty bad
290 # Sorry Danny
291 old_author = ctx.author
292 old_content = ctx.message.content
293 ctx.message.author = user
294 ctx.message.content = ctx.prefix + command
295
296 await ctx.bot.process_commands(ctx.message)
297
298 ctx.message.author = old_author
299 ctx.message.content = old_content
300
301 @commands.command(name="mockmsg")
302 @checks.is_owner()
303 async def mock_msg(self, ctx, user: discord.Member, *, content: str):
304 """Dispatch a message event as if it were sent by a different user.
305
306 Only reads the raw content of the message. Attachments, embeds etc. are
307 ignored.
308 """
309 old_author = ctx.author
310 old_content = ctx.message.content
311 ctx.message.author = user
312 ctx.message.content = content
313
314 ctx.bot.dispatch("message", ctx.message)
315
316 # If we change the author and content back too quickly,
317 # the bot won't process the mocked message in time.
318 await asyncio.sleep(2)
319 ctx.message.author = old_author
320 ctx.message.content = old_content
321
[end of redbot/core/dev_commands.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redbot/core/dev_commands.py b/redbot/core/dev_commands.py
--- a/redbot/core/dev_commands.py
+++ b/redbot/core/dev_commands.py
@@ -4,6 +4,7 @@
import textwrap
import traceback
from contextlib import redirect_stdout
+from copy import copy
import discord
from discord.ext import commands
@@ -286,17 +287,11 @@
The prefix must not be entered.
"""
- # Since we have stateful objects now this might be pretty bad
- # Sorry Danny
- old_author = ctx.author
- old_content = ctx.message.content
- ctx.message.author = user
- ctx.message.content = ctx.prefix + command
+ msg = copy(ctx.message)
+ msg.author = user
+ msg.content = ctx.prefix + command
- await ctx.bot.process_commands(ctx.message)
-
- ctx.message.author = old_author
- ctx.message.content = old_content
+ ctx.bot.dispatch('message', msg)
@commands.command(name="mockmsg")
@checks.is_owner()
| {"golden_diff": "diff --git a/redbot/core/dev_commands.py b/redbot/core/dev_commands.py\n--- a/redbot/core/dev_commands.py\n+++ b/redbot/core/dev_commands.py\n@@ -4,6 +4,7 @@\n import textwrap\n import traceback\n from contextlib import redirect_stdout\n+from copy import copy\n \n import discord\n from discord.ext import commands\n@@ -286,17 +287,11 @@\n \n The prefix must not be entered.\n \"\"\"\n- # Since we have stateful objects now this might be pretty bad\n- # Sorry Danny\n- old_author = ctx.author\n- old_content = ctx.message.content\n- ctx.message.author = user\n- ctx.message.content = ctx.prefix + command\n+ msg = copy(ctx.message)\n+ msg.author = user\n+ msg.content = ctx.prefix + command\n \n- await ctx.bot.process_commands(ctx.message)\n-\n- ctx.message.author = old_author\n- ctx.message.content = old_content\n+ ctx.bot.dispatch('message', msg)\n \n @commands.command(name=\"mockmsg\")\n @checks.is_owner()\n", "issue": "[V3] [p]mock doesn't work with aliases\nPlease be sure to read through other issues as well to make sure what you are suggesting/reporting has not already\r\nbeen suggested/reported\r\n\r\n### Type:\r\n\r\n- [ ] Suggestion\r\n- [x] Bug\r\n\r\n### Brief description of the problem\r\nAttempting to run an alias as another user with `[p]mock` does nothing\r\n\r\n### Expected behavior\r\nIt should run the command the alias is for\r\n\r\n### Actual behavior\r\nIt does nothing\r\n### Steps to reproduce\r\n\r\n1. Create an alias (for example, `[p]alias add c contact`\r\n2. Try to use mock with it (`[p]mock <user> c`)\r\n3. Nothing happens\r\n\n", "before_files": [{"content": "import asyncio\nimport inspect\nimport io\nimport textwrap\nimport traceback\nfrom contextlib import redirect_stdout\n\nimport discord\nfrom discord.ext import commands\nfrom . 
import checks\nfrom .i18n import CogI18n\nfrom .utils.chat_formatting import box, pagify\n\"\"\"\nNotice:\n\n95% of the below code came from R.Danny which can be found here:\n\nhttps://github.com/Rapptz/RoboDanny/blob/master/cogs/repl.py\n\"\"\"\n\n_ = CogI18n(\"Dev\", __file__)\n\n\nclass Dev:\n \"\"\"Various development focused utilities.\"\"\"\n\n def __init__(self):\n self._last_result = None\n self.sessions = set()\n\n @staticmethod\n def cleanup_code(content):\n \"\"\"Automatically removes code blocks from the code.\"\"\"\n # remove ```py\\n```\n if content.startswith('```') and content.endswith('```'):\n return '\\n'.join(content.split('\\n')[1:-1])\n\n # remove `foo`\n return content.strip('` \\n')\n\n @staticmethod\n def get_syntax_error(e):\n \"\"\"Format a syntax error to send to the user.\n\n Returns a string representation of the error formatted as a codeblock.\n \"\"\"\n if e.text is None:\n return box('{0.__class__.__name__}: {0}'.format(e), lang=\"py\")\n return box(\n '{0.text}{1:>{0.offset}}\\n{2}: {0}'\n ''.format(e, '^', type(e).__name__),\n lang=\"py\")\n\n @staticmethod\n def get_pages(msg: str):\n \"\"\"Pagify the given message for output to the user.\"\"\"\n return pagify(msg, delims=[\"\\n\", \" \"], priority=True, shorten_by=10)\n\n @staticmethod\n def sanitize_output(ctx: commands.Context, input_: str) -> str:\n \"\"\"Hides the bot's token from a string.\"\"\"\n token = ctx.bot.http.token\n r = \"[EXPUNGED]\"\n result = input_.replace(token, r)\n result = result.replace(token.lower(), r)\n result = result.replace(token.upper(), r)\n return result\n\n @commands.command()\n @checks.is_owner()\n async def debug(self, ctx, *, code):\n \"\"\"Evaluate a statement of python code.\n\n The bot will always respond with the return value of the code.\n If the return value of the code is a coroutine, it will be awaited,\n and the result of that will be the bot's response.\n\n Note: Only one statement may be evaluated. Using await, yield or\n similar restricted keywords will result in a syntax error. For multiple\n lines or asynchronous code, see [p]repl or [p]eval.\n\n Environment Variables:\n ctx - command invokation context\n bot - bot object\n channel - the current channel object\n author - command author's member object\n message - the command's message object\n discord - discord.py library\n commands - discord.py commands extension\n _ - The result of the last dev command.\n \"\"\"\n env = {\n 'bot': ctx.bot,\n 'ctx': ctx,\n 'channel': ctx.channel,\n 'author': ctx.author,\n 'guild': ctx.guild,\n 'message': ctx.message,\n 'discord': discord,\n 'commands': commands,\n '_': self._last_result\n }\n\n code = self.cleanup_code(code)\n\n try:\n result = eval(code, env)\n except SyntaxError as e:\n await ctx.send(self.get_syntax_error(e))\n return\n except Exception as e:\n await ctx.send(\n box('{}: {!s}'.format(type(e).__name__, e), lang='py'))\n return\n\n if asyncio.iscoroutine(result):\n result = await result\n\n self._last_result = result\n\n result = self.sanitize_output(ctx, str(result))\n\n await ctx.send_interactive(self.get_pages(result), box_lang=\"py\")\n\n @commands.command(name='eval')\n @checks.is_owner()\n async def _eval(self, ctx, *, body: str):\n \"\"\"Execute asynchronous code.\n\n This command wraps code into the body of an async function and then\n calls and awaits it. 
The bot will respond with anything printed to\n stdout, as well as the return value of the function.\n\n The code can be within a codeblock, inline code or neither, as long\n as they are not mixed and they are formatted correctly.\n\n Environment Variables:\n ctx - command invokation context\n bot - bot object\n channel - the current channel object\n author - command author's member object\n message - the command's message object\n discord - discord.py library\n commands - discord.py commands extension\n _ - The result of the last dev command.\n \"\"\"\n env = {\n 'bot': ctx.bot,\n 'ctx': ctx,\n 'channel': ctx.channel,\n 'author': ctx.author,\n 'guild': ctx.guild,\n 'message': ctx.message,\n 'discord': discord,\n 'commands': commands,\n '_': self._last_result\n }\n\n body = self.cleanup_code(body)\n stdout = io.StringIO()\n\n to_compile = 'async def func():\\n%s' % textwrap.indent(body, ' ')\n\n try:\n exec(to_compile, env)\n except SyntaxError as e:\n return await ctx.send(self.get_syntax_error(e))\n\n func = env['func']\n result = None\n try:\n with redirect_stdout(stdout):\n result = await func()\n except:\n printed = \"{}{}\".format(stdout.getvalue(), traceback.format_exc())\n else:\n printed = stdout.getvalue()\n await ctx.tick()\n\n if result is not None:\n self._last_result = result\n msg = \"{}{}\".format(printed, result)\n else:\n msg = printed\n msg = self.sanitize_output(ctx, msg)\n\n await ctx.send_interactive(self.get_pages(msg), box_lang=\"py\")\n\n @commands.command()\n @checks.is_owner()\n async def repl(self, ctx):\n \"\"\"Open an interactive REPL.\n\n The REPL will only recognise code as messages which start with a\n backtick. This includes codeblocks, and as such multiple lines can be\n evaluated.\n\n You may not await any code in this REPL unless you define it inside an\n async function.\n \"\"\"\n variables = {\n 'ctx': ctx,\n 'bot': ctx.bot,\n 'message': ctx.message,\n 'guild': ctx.guild,\n 'channel': ctx.channel,\n 'author': ctx.author,\n '_': None,\n }\n\n if ctx.channel.id in self.sessions:\n await ctx.send(_('Already running a REPL session in this channel. 
'\n 'Exit it with `quit`.'))\n return\n\n self.sessions.add(ctx.channel.id)\n await ctx.send(_('Enter code to execute or evaluate.'\n ' `exit()` or `quit` to exit.'))\n\n msg_check = lambda m: (m.author == ctx.author and\n m.channel == ctx.channel and\n m.content.startswith('`'))\n\n while True:\n response = await ctx.bot.wait_for(\"message\", check=msg_check)\n\n cleaned = self.cleanup_code(response.content)\n\n if cleaned in ('quit', 'exit', 'exit()'):\n await ctx.send('Exiting.')\n self.sessions.remove(ctx.channel.id)\n return\n\n executor = exec\n if cleaned.count('\\n') == 0:\n # single statement, potentially 'eval'\n try:\n code = compile(cleaned, '<repl session>', 'eval')\n except SyntaxError:\n pass\n else:\n executor = eval\n\n if executor is exec:\n try:\n code = compile(cleaned, '<repl session>', 'exec')\n except SyntaxError as e:\n await ctx.send(self.get_syntax_error(e))\n continue\n\n variables['message'] = response\n\n stdout = io.StringIO()\n\n msg = None\n\n try:\n with redirect_stdout(stdout):\n result = executor(code, variables)\n if inspect.isawaitable(result):\n result = await result\n except:\n value = stdout.getvalue()\n msg = \"{}{}\".format(value, traceback.format_exc())\n else:\n value = stdout.getvalue()\n if result is not None:\n msg = \"{}{}\".format(value, result)\n variables['_'] = result\n elif value:\n msg = \"{}\".format(value)\n\n msg = self.sanitize_output(ctx, msg)\n\n try:\n await ctx.send_interactive(self.get_pages(msg), box_lang=\"py\")\n except discord.Forbidden:\n pass\n except discord.HTTPException as e:\n await ctx.send(_('Unexpected error: `{}`').format(e))\n\n @commands.command()\n @checks.is_owner()\n async def mock(self, ctx, user: discord.Member, *, command):\n \"\"\"Mock another user invoking a command.\n\n The prefix must not be entered.\n \"\"\"\n # Since we have stateful objects now this might be pretty bad\n # Sorry Danny\n old_author = ctx.author\n old_content = ctx.message.content\n ctx.message.author = user\n ctx.message.content = ctx.prefix + command\n\n await ctx.bot.process_commands(ctx.message)\n\n ctx.message.author = old_author\n ctx.message.content = old_content\n\n @commands.command(name=\"mockmsg\")\n @checks.is_owner()\n async def mock_msg(self, ctx, user: discord.Member, *, content: str):\n \"\"\"Dispatch a message event as if it were sent by a different user.\n\n Only reads the raw content of the message. Attachments, embeds etc. are\n ignored.\n \"\"\"\n old_author = ctx.author\n old_content = ctx.message.content\n ctx.message.author = user\n ctx.message.content = content\n\n ctx.bot.dispatch(\"message\", ctx.message)\n\n # If we change the author and content back too quickly,\n # the bot won't process the mocked message in time.\n await asyncio.sleep(2)\n ctx.message.author = old_author\n ctx.message.content = old_content\n", "path": "redbot/core/dev_commands.py"}]} | 3,752 | 240 |
gh_patches_debug_11769 | rasdani/github-patches | git_diff | apache__tvm-2119 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CONTRIB] NNPack Test Flaky
http://ci.tvm.ai:8080/job/tvm/job/PR-2103/1/consoleText
cc @ajtulloch can you take a look?
</issue>
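Editor's note: the issue itself only flags the flaky CI job, but the accepted diff further below hints at the cause by replacing `config()` with an `is_available()` probe (true when `nnp_initialize()` succeeds). A hedged sketch of how a test could use such a probe to avoid flaking on hosts where NNPACK cannot initialize (the pytest usage and test name are illustrative assumptions, not taken from the TVM tree):

```python
import pytest
from tvm.contrib import nnpack

# Skip rather than fail when NNPACK cannot be initialized on this host.
@pytest.mark.skipif(not nnpack.is_available(), reason="NNPACK is not available")
def test_fully_connected_inference():
    ...  # test body omitted in this sketch
```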
<code>
[start of python/tvm/contrib/nnpack.py]
1 """External function interface to NNPACK libraroes."""
2 from __future__ import absolute_import as _abs
3
4 from .. import api as _api
5 from .. import intrin as _intrin
6 from .._ffi.function import _init_api
7
8 def config(nthreads):
9 """Configure the nnpack library.
10
11 Parameters
12 ----------
13 nthreads : int
14 The threads number of nnpack thread pool, must be a nonnegative.
15
16 """
17 _Config(nthreads)
18
19 def fully_connected_inference(lhs, rhs, nthreads=1):
20 """Create an extern op that compute fully connected of 1D tensor lhs and
21 2D tensor rhs with nnpack.
22
23 Parameters
24 ----------
25 lhs : Tensor
26 lhs 1D array input[input_channels] of FP32 elements
27 rhs : Tensor
28 lhs 2D matrix kernel[output_channels][input_channels] of FP32 elements
29
30 Returns
31 -------
32 C : Tensor
33 lhs 1D array out[output_channels] of FP32 elements.
34 """
35 m = rhs.shape[0]
36 return _api.extern(
37 (m, ), [lhs, rhs],
38 lambda ins, outs: _intrin.call_packed(
39 "tvm.contrib.nnpack.fully_connected_inference",
40 ins[0], ins[1], outs[0], nthreads), name="C")
41
42 def fully_connected_output(lhs, rhs, nthreads=1):
43 """Create an extern op that compute fully connected of 2D tensor lhs and
44 2D tensor rhs with nnpack.
45
46 Parameters
47 ----------
48 lhs : Tensor
49 lhs 2D matrix input[batch_size][input_channels] of FP32 elements
50 rhs : Tensor
51 lhs 2D matrix kernel[output_channels][input_channels] of FP32 elements
52
53 Returns
54 -------
55 C : Tensor
56 lhs 2D array out[batch_size][output_channels] of FP32 elements.
57 """
58 n = lhs.shape[0]
59 m = rhs.shape[0]
60 return _api.extern(
61 (n, m), [lhs, rhs],
62 lambda ins, outs: _intrin.call_packed(
63 "tvm.contrib.nnpack.fully_connected_output",
64 ins[0], ins[1], outs[0], nthreads), name="C")
65
66
67 class ConvolutionAlgorithm:
68 AUTO = 0
69 FFT_8x8 = 1
70 FFT_16x16 = 2
71 WT_8x8 = 3
72 IMPLICIT_GEMM = 4
73 DIRECT = 5
74 WT_8x8_FP16 = 6
75
76
77 class ConvolutionTransformStrategy:
78 COMPUTE = 1
79 PRECOMPUTE = 2
80
81
82 def convolution_inference(
83 data, kernel, bias, padding, stride, nthreads=1,
84 algorithm=ConvolutionAlgorithm.AUTO):
85 """Create an extern op to do inference convolution of 4D tensor data and
86 4D tensor kernel and 1D tensor bias with nnpack.
87
88 Parameters
89 ----------
90 data : Tensor
91 data 4D tensor input[batch][input_channels][input_height][input_width] of
92 FP32 elements.
93 kernel : Tensor
94 kernel 4D tensor kernel[output_channels][input_channels][kernel_height]
95 [kernel_width] of FP32 elements.
96 bias : Tensor
97 bias 1D array bias[output_channels][input_channels][kernel_height]
98 [kernel_width] of FP32 elements.
99 padding : list
100 padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right],
101 which indicates the padding around the feature map.
102 stride : list
103 stride A 2-dim list of [stride_height, stride_width], which indicates
104 the stride.
105
106 Returns
107 -------
108 output : Tensor
109 output 4D tensor output[batch][output_channels][output_height][output_width]
110 of FP32 elements.
111 """
112
113 assert isinstance(padding, list) and len(padding) == 4
114 assert isinstance(stride, list) and len(stride) == 2
115 batch, _, input_height, input_width = data.shape
116 output_channels, _, kernel_height, kernel_width = kernel.shape
117 output_height = (input_height + padding[0] + padding[1] - kernel_height) / stride[0] + 1
118 output_width = (input_width + padding[0] + padding[1] - kernel_width) / stride[1] + 1
119
120 return _api.extern(
121 (batch, output_channels, output_height, output_width),
122 [data, kernel, bias] if bias is not None else [data, kernel],
123 lambda ins, outs: _intrin.call_packed(
124 "tvm.contrib.nnpack.convolution_inference",
125 ins[0],
126 ins[1],
127 ins[2] if bias is not None else 0,
128 outs[0], padding[0], padding[1], padding[2], padding[3],
129 stride[0], stride[1], nthreads, algorithm), name="C")
130
131 def convolution_inference_without_weight_transform(
132 data, transformed_kernel, bias, padding, stride, nthreads=1,
133 algorithm=ConvolutionAlgorithm.AUTO):
134 """Create an extern op to do inference convolution of 4D tensor data and
135 4D pre-transformed tensor kernel and 1D tensor bias with nnpack.
136
137 Parameters
138 ----------
139 data : Tensor
140 data 4D tensor input[batch][input_channels][input_height][input_width] of
141 FP32 elements.
142 transformed_kernel : Tensor
143 transformed_kernel 4D tensor kernel[output_channels][input_channels][tile]
144 [tile] of FP32 elements.
145 bias : Tensor
146 bias 1D array bias[output_channels][input_channels][kernel_height]
147 [kernel_width] of FP32 elements.
148 padding : list
149 padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right],
150 which indicates the padding around the feature map.
151 stride : list
152 stride A 2-dim list of [stride_height, stride_width], which indicates
153 the stride.
154
155 Returns
156 -------
157 output : Tensor
158 output 4D tensor output[batch][output_channels][output_height][output_width]
159 of FP32 elements.
160 """
161
162 assert algorithm in (ConvolutionAlgorithm.WT_8x8,
163 ConvolutionAlgorithm.WT_8x8_FP16)
164 assert isinstance(padding, list) and len(padding) == 4
165 assert isinstance(stride, list) and len(stride) == 2
166 batch, _, input_height, input_width = data.shape
167 output_channels, _, _, _ = transformed_kernel.shape
168 kernel_height, kernel_width = (3, 3)
169 output_height = (input_height + padding[0] + padding[1] - kernel_height) / stride[0] + 1
170 output_width = (input_width + padding[0] + padding[1] - kernel_width) / stride[1] + 1
171
172 return _api.extern(
173 (batch, output_channels, output_height, output_width),
174 [data, transformed_kernel, bias] if bias is not None else [data, transformed_kernel],
175 lambda ins, outs: _intrin.call_packed(
176 "tvm.contrib.nnpack.convolution_inference_without_weight_transform",
177 ins[0],
178 ins[1],
179 ins[2] if bias is not None else 0,
180 outs[0], padding[0], padding[1], padding[2], padding[3],
181 stride[0], stride[1], nthreads, algorithm), name="C")
182
183 def convolution_inference_weight_transform(
184 kernel, nthreads=1,
185 algorithm=ConvolutionAlgorithm.AUTO):
186 """Create an extern op to do inference convolution of 3D tensor data and
187 4D tensor kernel and 1D tensor bias with nnpack.
188
189 Parameters
190 ----------
191 kernel : Tensor
192 kernel 4D tensor kernel[output_channels][input_channels][kernel_height]
193 [kernel_width] of FP32 elements.
194
195 Returns
196 -------
197 output : Tensor
198 output 4D tensor output[output_channels][input_channels][tile][tile]
199 of FP32 elements.
200 """
201 assert algorithm in (ConvolutionAlgorithm.WT_8x8, ConvolutionAlgorithm.WT_8x8_FP16)
202 output_channels, input_channels, _, _ = kernel.shape
203
204 transform_tile_size = 8
205 return _api.extern(
206 (output_channels, input_channels, transform_tile_size, transform_tile_size),
207 [kernel],
208 lambda ins, outs: _intrin.call_packed(
209 "tvm.contrib.nnpack.convolution_inference_weight_transform",
210 ins[0], outs[0], nthreads, algorithm), name="transform_kernel")
211
212 def convolution_output(data, kernel, bias, padding, nthreads=1):
213 """Create an extern op to compute convolution of 4D tensor data and
214 4D tensor kernel and 1D tensor bias with nnpack.
215
216 Parameters
217 ----------
218 data : Tensor
219 data 4D tensor input[batch_size][input_channels][input_height]
220 [input_width] of FP32 elements.
221 kernel : Tensor
222 kernel 4D tensor kernel[output_channels][input_channels][kernel_height]
223 [kernel_width] of FP32 elements.
224 bias : Tensor
225 bias 1D array bias[output_channels][input_channels][kernel_height]
226 [kernel_width] of FP32 elements.
227 padding : list
228 padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right],
229 which indicates the padding around the feature map.
230
231 Returns
232 -------
233 output : Tensor
234 output 4D tensor output[batch_size][output_channels][output_height]
235 [output_width] of FP32 elements.
236 """
237
238 assert isinstance(padding, list) and len(padding) == 4
239 batch, _, input_height, input_width = data.shape
240 output_channels, _, kernel_height, kernel_width = kernel.shape
241 output_height = (input_height + padding[0] + padding[1] - kernel_height) + 1
242 output_width = (input_width + padding[0] + padding[1] - kernel_width) + 1
243
244 return _api.extern(
245 (batch, output_channels, output_height, output_width), [data, kernel, bias],
246 lambda ins, outs: _intrin.call_packed(
247 "tvm.contrib.nnpack.convolution_output", ins[0], ins[1], ins[2],
248 outs[0], padding[0], padding[1], padding[2], padding[3], nthreads), name="C")
249
250
251 _init_api("tvm.contrib.nnpack")
252
[end of python/tvm/contrib/nnpack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/tvm/contrib/nnpack.py b/python/tvm/contrib/nnpack.py
--- a/python/tvm/contrib/nnpack.py
+++ b/python/tvm/contrib/nnpack.py
@@ -5,16 +5,11 @@
from .. import intrin as _intrin
from .._ffi.function import _init_api
-def config(nthreads):
- """Configure the nnpack library.
-
- Parameters
- ----------
- nthreads : int
- The threads number of nnpack thread pool, must be a nonnegative.
-
+def is_available():
+ """Check whether NNPACK is available, that is, `nnp_initialize()`
+ returns `nnp_status_success`.
"""
- _Config(nthreads)
+ return _initialize() == 0
def fully_connected_inference(lhs, rhs, nthreads=1):
"""Create an extern op that compute fully connected of 1D tensor lhs and
| {"golden_diff": "diff --git a/python/tvm/contrib/nnpack.py b/python/tvm/contrib/nnpack.py\n--- a/python/tvm/contrib/nnpack.py\n+++ b/python/tvm/contrib/nnpack.py\n@@ -5,16 +5,11 @@\n from .. import intrin as _intrin\n from .._ffi.function import _init_api\n \n-def config(nthreads):\n- \"\"\"Configure the nnpack library.\n-\n- Parameters\n- ----------\n- nthreads : int\n- The threads number of nnpack thread pool, must be a nonnegative.\n-\n+def is_available():\n+ \"\"\"Check whether NNPACK is available, that is, `nnp_initialize()`\n+ returns `nnp_status_success`.\n \"\"\"\n- _Config(nthreads)\n+ return _initialize() == 0\n \n def fully_connected_inference(lhs, rhs, nthreads=1):\n \"\"\"Create an extern op that compute fully connected of 1D tensor lhs and\n", "issue": "[CONTRIB] NNPack Test Flaky\nhttp://ci.tvm.ai:8080/job/tvm/job/PR-2103/1/consoleText\r\n\r\ncc @ajtulloch can you take a look?\n", "before_files": [{"content": "\"\"\"External function interface to NNPACK libraroes.\"\"\"\nfrom __future__ import absolute_import as _abs\n\nfrom .. import api as _api\nfrom .. import intrin as _intrin\nfrom .._ffi.function import _init_api\n\ndef config(nthreads):\n \"\"\"Configure the nnpack library.\n\n Parameters\n ----------\n nthreads : int\n The threads number of nnpack thread pool, must be a nonnegative.\n\n \"\"\"\n _Config(nthreads)\n\ndef fully_connected_inference(lhs, rhs, nthreads=1):\n \"\"\"Create an extern op that compute fully connected of 1D tensor lhs and\n 2D tensor rhs with nnpack.\n\n Parameters\n ----------\n lhs : Tensor\n lhs 1D array input[input_channels] of FP32 elements\n rhs : Tensor\n lhs 2D matrix kernel[output_channels][input_channels] of FP32 elements\n\n Returns\n -------\n C : Tensor\n lhs 1D array out[output_channels] of FP32 elements.\n \"\"\"\n m = rhs.shape[0]\n return _api.extern(\n (m, ), [lhs, rhs],\n lambda ins, outs: _intrin.call_packed(\n \"tvm.contrib.nnpack.fully_connected_inference\",\n ins[0], ins[1], outs[0], nthreads), name=\"C\")\n\ndef fully_connected_output(lhs, rhs, nthreads=1):\n \"\"\"Create an extern op that compute fully connected of 2D tensor lhs and\n 2D tensor rhs with nnpack.\n\n Parameters\n ----------\n lhs : Tensor\n lhs 2D matrix input[batch_size][input_channels] of FP32 elements\n rhs : Tensor\n lhs 2D matrix kernel[output_channels][input_channels] of FP32 elements\n\n Returns\n -------\n C : Tensor\n lhs 2D array out[batch_size][output_channels] of FP32 elements.\n \"\"\"\n n = lhs.shape[0]\n m = rhs.shape[0]\n return _api.extern(\n (n, m), [lhs, rhs],\n lambda ins, outs: _intrin.call_packed(\n \"tvm.contrib.nnpack.fully_connected_output\",\n ins[0], ins[1], outs[0], nthreads), name=\"C\")\n\n\nclass ConvolutionAlgorithm:\n AUTO = 0\n FFT_8x8 = 1\n FFT_16x16 = 2\n WT_8x8 = 3\n IMPLICIT_GEMM = 4\n DIRECT = 5\n WT_8x8_FP16 = 6\n\n\nclass ConvolutionTransformStrategy:\n COMPUTE = 1\n PRECOMPUTE = 2\n\n\ndef convolution_inference(\n data, kernel, bias, padding, stride, nthreads=1,\n algorithm=ConvolutionAlgorithm.AUTO):\n \"\"\"Create an extern op to do inference convolution of 4D tensor data and\n 4D tensor kernel and 1D tensor bias with nnpack.\n\n Parameters\n ----------\n data : Tensor\n data 4D tensor input[batch][input_channels][input_height][input_width] of\n FP32 elements.\n kernel : Tensor\n kernel 4D tensor kernel[output_channels][input_channels][kernel_height]\n [kernel_width] of FP32 elements.\n bias : Tensor\n bias 1D array bias[output_channels][input_channels][kernel_height]\n [kernel_width] of FP32 elements.\n 
padding : list\n padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right],\n which indicates the padding around the feature map.\n stride : list\n stride A 2-dim list of [stride_height, stride_width], which indicates\n the stride.\n\n Returns\n -------\n output : Tensor\n output 4D tensor output[batch][output_channels][output_height][output_width]\n of FP32 elements.\n \"\"\"\n\n assert isinstance(padding, list) and len(padding) == 4\n assert isinstance(stride, list) and len(stride) == 2\n batch, _, input_height, input_width = data.shape\n output_channels, _, kernel_height, kernel_width = kernel.shape\n output_height = (input_height + padding[0] + padding[1] - kernel_height) / stride[0] + 1\n output_width = (input_width + padding[0] + padding[1] - kernel_width) / stride[1] + 1\n\n return _api.extern(\n (batch, output_channels, output_height, output_width),\n [data, kernel, bias] if bias is not None else [data, kernel],\n lambda ins, outs: _intrin.call_packed(\n \"tvm.contrib.nnpack.convolution_inference\",\n ins[0],\n ins[1],\n ins[2] if bias is not None else 0,\n outs[0], padding[0], padding[1], padding[2], padding[3],\n stride[0], stride[1], nthreads, algorithm), name=\"C\")\n\ndef convolution_inference_without_weight_transform(\n data, transformed_kernel, bias, padding, stride, nthreads=1,\n algorithm=ConvolutionAlgorithm.AUTO):\n \"\"\"Create an extern op to do inference convolution of 4D tensor data and\n 4D pre-transformed tensor kernel and 1D tensor bias with nnpack.\n\n Parameters\n ----------\n data : Tensor\n data 4D tensor input[batch][input_channels][input_height][input_width] of\n FP32 elements.\n transformed_kernel : Tensor\n transformed_kernel 4D tensor kernel[output_channels][input_channels][tile]\n [tile] of FP32 elements.\n bias : Tensor\n bias 1D array bias[output_channels][input_channels][kernel_height]\n [kernel_width] of FP32 elements.\n padding : list\n padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right],\n which indicates the padding around the feature map.\n stride : list\n stride A 2-dim list of [stride_height, stride_width], which indicates\n the stride.\n\n Returns\n -------\n output : Tensor\n output 4D tensor output[batch][output_channels][output_height][output_width]\n of FP32 elements.\n \"\"\"\n\n assert algorithm in (ConvolutionAlgorithm.WT_8x8,\n ConvolutionAlgorithm.WT_8x8_FP16)\n assert isinstance(padding, list) and len(padding) == 4\n assert isinstance(stride, list) and len(stride) == 2\n batch, _, input_height, input_width = data.shape\n output_channels, _, _, _ = transformed_kernel.shape\n kernel_height, kernel_width = (3, 3)\n output_height = (input_height + padding[0] + padding[1] - kernel_height) / stride[0] + 1\n output_width = (input_width + padding[0] + padding[1] - kernel_width) / stride[1] + 1\n\n return _api.extern(\n (batch, output_channels, output_height, output_width),\n [data, transformed_kernel, bias] if bias is not None else [data, transformed_kernel],\n lambda ins, outs: _intrin.call_packed(\n \"tvm.contrib.nnpack.convolution_inference_without_weight_transform\",\n ins[0],\n ins[1],\n ins[2] if bias is not None else 0,\n outs[0], padding[0], padding[1], padding[2], padding[3],\n stride[0], stride[1], nthreads, algorithm), name=\"C\")\n\ndef convolution_inference_weight_transform(\n kernel, nthreads=1,\n algorithm=ConvolutionAlgorithm.AUTO):\n \"\"\"Create an extern op to do inference convolution of 3D tensor data and\n 4D tensor kernel and 1D tensor bias with nnpack.\n\n Parameters\n ----------\n kernel : 
Tensor\n kernel 4D tensor kernel[output_channels][input_channels][kernel_height]\n [kernel_width] of FP32 elements.\n\n Returns\n -------\n output : Tensor\n output 4D tensor output[output_channels][input_channels][tile][tile]\n of FP32 elements.\n \"\"\"\n assert algorithm in (ConvolutionAlgorithm.WT_8x8, ConvolutionAlgorithm.WT_8x8_FP16)\n output_channels, input_channels, _, _ = kernel.shape\n\n transform_tile_size = 8\n return _api.extern(\n (output_channels, input_channels, transform_tile_size, transform_tile_size),\n [kernel],\n lambda ins, outs: _intrin.call_packed(\n \"tvm.contrib.nnpack.convolution_inference_weight_transform\",\n ins[0], outs[0], nthreads, algorithm), name=\"transform_kernel\")\n\ndef convolution_output(data, kernel, bias, padding, nthreads=1):\n \"\"\"Create an extern op to compute convolution of 4D tensor data and\n 4D tensor kernel and 1D tensor bias with nnpack.\n\n Parameters\n ----------\n data : Tensor\n data 4D tensor input[batch_size][input_channels][input_height]\n [input_width] of FP32 elements.\n kernel : Tensor\n kernel 4D tensor kernel[output_channels][input_channels][kernel_height]\n [kernel_width] of FP32 elements.\n bias : Tensor\n bias 1D array bias[output_channels][input_channels][kernel_height]\n [kernel_width] of FP32 elements.\n padding : list\n padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right],\n which indicates the padding around the feature map.\n\n Returns\n -------\n output : Tensor\n output 4D tensor output[batch_size][output_channels][output_height]\n [output_width] of FP32 elements.\n \"\"\"\n\n assert isinstance(padding, list) and len(padding) == 4\n batch, _, input_height, input_width = data.shape\n output_channels, _, kernel_height, kernel_width = kernel.shape\n output_height = (input_height + padding[0] + padding[1] - kernel_height) + 1\n output_width = (input_width + padding[0] + padding[1] - kernel_width) + 1\n\n return _api.extern(\n (batch, output_channels, output_height, output_width), [data, kernel, bias],\n lambda ins, outs: _intrin.call_packed(\n \"tvm.contrib.nnpack.convolution_output\", ins[0], ins[1], ins[2],\n outs[0], padding[0], padding[1], padding[2], padding[3], nthreads), name=\"C\")\n\n\n_init_api(\"tvm.contrib.nnpack\")\n", "path": "python/tvm/contrib/nnpack.py"}]} | 3,622 | 211 |
gh_patches_debug_27919 | rasdani/github-patches | git_diff | saulpw__visidata-1059 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[unfurl-col] unfurling a column with TypedWrapper halts unfurl
**Small description**
When unfurling a column that contains a TypedWrapper, sheet loading halts.
**Expected result**
The same response as whatever `options.unfurl_empty` would do for a row without a list.
**Actual result with screenshot**
AttributeError: 'TypedWrapper' object has no attribute 'xyz'

If you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.
**Steps to reproduce with sample data and a .vd**
`echo '[{"a":1,"d":{"b":[1,2,3]}},{"a":2,"d":{"c":[1,2,3]}},{"a":3,"d":{"b":[1,2,3]}}]' | vd -f json`
```
sheet col row longname input keystrokes comment
open-file - o
_ d addcol-expr curcol['b'] = create new column from Python expression, with column names as variables
_ curcol_b_ unfurl-col row-wise expand current column of lists (e.g. [2]) or dicts (e.g. {3}) within that column
```
**Additional context**
v2.6dev
</issue>
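Editor's note: the failure is in `UnfurledSheet.reload()`, where `self.source_col.getValue(row)` can raise for rows on which the expression column has no value, and nothing catches it. The accepted diff further below wraps that call and either keeps the row (wrapped in `TypedExceptionWrapper`, so `z^E` still shows the stacktrace) or reports the exception, depending on `options.unfurl_empty`. A condensed, illustrative sketch of that guard, not the verbatim patch:

```python
try:
    val = self.source_col.getValue(row)
except Exception as e:
    if unfurl_empty:
        # Keep a placeholder row so the error stays visible in the sheet.
        self.addRow([row, TypedExceptionWrapper(None, exception=e),
                     TypedExceptionWrapper(None, exception=e)])
    else:
        vd.exceptionCaught(e)
    continue  # note: the accepted patch structures this fallthrough differently
```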
<code>
[start of visidata/unfurl.py]
1 '''This adds the `unfurl-col` command, to unfurl a column containing iterable values, such as lists and dicts.
2 Unfurling pushes a new sheet, with each key/value pair in the unfurled column values getting its own row, with the rest of the source sheet's columns copied for each of those rows.
3
4 Note: When unfurling a column, non-iterable objects (numbers, and also strings) are treated as single-item lists, so that they too can be unfurled.
5
6 Credit to Jeremy Singer-Vine for the idea and original implementation.
7 '''
8
9 from collections.abc import Iterable, Mapping
10 from visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id
11
12
13 vd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)
14
15
16 class UnfurledSheet(Sheet):
17 @asyncthread
18 def reload(self):
19 # Copy over base sheet, using SubColumnFunc
20 self.columns = []
21 for col in self.source.columns:
22 if col is self.source_col:
23 # Replace iterable column with two columns: keys and values
24 self.cursorVisibleColIndex = len(self.columns)-1
25 self.addColumn(ColumnItem(col.name + "_key", 1))
26 self.addColumn(ColumnItem(col.name + "_value", 2))
27 else:
28 self.addColumn(SubColumnFunc(col.name, col, 0, keycol=col.keycol))
29
30 self.rows = []
31 unfurl_empty = self.options.unfurl_empty
32 for row in Progress(self.source.rows):
33 val = self.source_col.getValue(row)
34
35 if not isinstance(val, Iterable) or isinstance(val, str):
36 val = [ val ]
37
38 if isinstance(val, Mapping):
39 gen = val.items()
40 else:
41 gen = enumerate(val)
42
43 nadded = 0
44 for key, sub_value in gen:
45 new_row = [ row, key, sub_value ]
46 self.addRow(new_row)
47 nadded += 1
48
49 if unfurl_empty and not nadded:
50 self.addRow([row, None, None])
51
52 @Sheet.api
53 def unfurl_col(sheet, col):
54 clean_id = clean_to_id(col.name)
55 vs = UnfurledSheet(f"{sheet.name}_{clean_id}_unfurled", source=sheet, source_col=col)
56 return vs
57
58
59 Sheet.addCommand("zM", "unfurl-col", "vd.push(unfurl_col(cursorCol))", "row-wise expand current column of lists (e.g. [2]) or dicts (e.g. {3}) within that column")
60
[end of visidata/unfurl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/visidata/unfurl.py b/visidata/unfurl.py
--- a/visidata/unfurl.py
+++ b/visidata/unfurl.py
@@ -8,12 +8,14 @@
from collections.abc import Iterable, Mapping
from visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id
+from visidata import stacktrace, TypedExceptionWrapper
vd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)
class UnfurledSheet(Sheet):
+ # rowdef: [row, key, sub_value]
@asyncthread
def reload(self):
# Copy over base sheet, using SubColumnFunc
@@ -30,7 +32,16 @@
self.rows = []
unfurl_empty = self.options.unfurl_empty
for row in Progress(self.source.rows):
- val = self.source_col.getValue(row)
+ try:
+ val = self.source_col.getValue(row)
+ except Exception as e:
+ e.stacktrace = stacktrace()
+ if unfurl_empty:
+ # TypedExceptionWrapper allows the use of z^E to see the stacktrace
+ # the exception on its own lacks clarity
+ self.addRow([row, TypedExceptionWrapper(None, exception=e), TypedExceptionWrapper(None, exception=e)])
+ else:
+ vd.exceptionCaught(e)
if not isinstance(val, Iterable) or isinstance(val, str):
val = [ val ]
| {"golden_diff": "diff --git a/visidata/unfurl.py b/visidata/unfurl.py\n--- a/visidata/unfurl.py\n+++ b/visidata/unfurl.py\n@@ -8,12 +8,14 @@\n \n from collections.abc import Iterable, Mapping\n from visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id\n+from visidata import stacktrace, TypedExceptionWrapper\n \n \n vd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)\n \n \n class UnfurledSheet(Sheet):\n+ # rowdef: [row, key, sub_value]\n @asyncthread\n def reload(self):\n # Copy over base sheet, using SubColumnFunc\n@@ -30,7 +32,16 @@\n self.rows = []\n unfurl_empty = self.options.unfurl_empty\n for row in Progress(self.source.rows):\n- val = self.source_col.getValue(row)\n+ try:\n+ val = self.source_col.getValue(row)\n+ except Exception as e:\n+ e.stacktrace = stacktrace()\n+ if unfurl_empty:\n+ # TypedExceptionWrapper allows the use of z^E to see the stacktrace\n+ # the exception on its own lacks clarity\n+ self.addRow([row, TypedExceptionWrapper(None, exception=e), TypedExceptionWrapper(None, exception=e)])\n+ else:\n+ vd.exceptionCaught(e)\n \n if not isinstance(val, Iterable) or isinstance(val, str):\n val = [ val ]\n", "issue": "[unfurl-col] unfurling a column with TypedWrapper halts unfurl\n**Small description**\r\nWhen unfurling a column which has a TypedWrapper a sheet loading halt occurs\r\n\r\n**Expected result**\r\nThe same response as whatever `options.unfurl_empty` would do for a row without a list.\r\n\r\n**Actual result with screenshot**\r\nAttributeError: 'TypedWrapper' object has no attribute 'xyz'\r\n\r\n\r\nIf you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\n`echo '[{\"a\":1,\"d\":{\"b\":[1,2,3]}},{\"a\":2,\"d\":{\"c\":[1,2,3]}},{\"a\":3,\"d\":{\"b\":[1,2,3]}}]' | vd -f json`\r\n\r\n```\r\nsheet\tcol\trow\tlongname\tinput\tkeystrokes\tcomment\r\n\t\t\topen-file\t-\to\t\r\n_\td\t\taddcol-expr\tcurcol['b']\t=\tcreate new column from Python expression, with column names as variables\r\n_\tcurcol_b_\t\tunfurl-col\t\t\trow-wise expand current column of lists (e.g. [2]) or dicts (e.g. 
{3}) within that column\r\n\r\n```\r\n\r\n**Additional context**\r\nv2.6dev\r\n\n", "before_files": [{"content": "'''This adds the `unfurl-col` command, to unfurl a column containing iterable values, such as lists and dicts.\nUnfurling pushes a new sheet, with each key/value pair in the unfurled column values getting its own row, with the rest of the source sheet's columns copied for each of those rows.\n\nNote: When unfurling a column, non-iterable objects (numbers, and also strings) are treated as single-item lists, so that they too can be unfurled.\n\nCredit to Jeremy Singer-Vine for the idea and original implementation.\n'''\n\nfrom collections.abc import Iterable, Mapping\nfrom visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id\n\n\nvd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)\n\n\nclass UnfurledSheet(Sheet):\n @asyncthread\n def reload(self):\n # Copy over base sheet, using SubColumnFunc\n self.columns = []\n for col in self.source.columns:\n if col is self.source_col:\n # Replace iterable column with two columns: keys and values\n self.cursorVisibleColIndex = len(self.columns)-1\n self.addColumn(ColumnItem(col.name + \"_key\", 1))\n self.addColumn(ColumnItem(col.name + \"_value\", 2))\n else:\n self.addColumn(SubColumnFunc(col.name, col, 0, keycol=col.keycol))\n\n self.rows = []\n unfurl_empty = self.options.unfurl_empty\n for row in Progress(self.source.rows):\n val = self.source_col.getValue(row)\n\n if not isinstance(val, Iterable) or isinstance(val, str):\n val = [ val ]\n\n if isinstance(val, Mapping):\n gen = val.items()\n else:\n gen = enumerate(val)\n\n nadded = 0\n for key, sub_value in gen:\n new_row = [ row, key, sub_value ]\n self.addRow(new_row)\n nadded += 1\n\n if unfurl_empty and not nadded:\n self.addRow([row, None, None])\n\[email protected]\ndef unfurl_col(sheet, col):\n clean_id = clean_to_id(col.name)\n vs = UnfurledSheet(f\"{sheet.name}_{clean_id}_unfurled\", source=sheet, source_col=col)\n return vs\n\n\nSheet.addCommand(\"zM\", \"unfurl-col\", \"vd.push(unfurl_col(cursorCol))\", \"row-wise expand current column of lists (e.g. [2]) or dicts (e.g. {3}) within that column\")\n", "path": "visidata/unfurl.py"}]} | 1,558 | 348 |
gh_patches_debug_11658 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IndexError when "url" not found in args or kwargs.
### Overview
I've found an issue on line 46 of the `call()` method in the `AioHttpClientInstrumentation(...)` class.
https://github.com/elastic/apm-agent-python/blob/da93e7af448abcac367d216e2d20a584051f6e50/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py#L44-L47
I'm getting an `IndexError` exception because "url" is missing from kwargs and args has no second positional element. The reason is that the argument carrying the URL is actually called "str_or_url":
https://github.com/aio-libs/aiohttp/blob/4b59d55e9e79f5a0b1932d6dc9f6b12a33d19266/aiohttp/client.py#L325-L328
By default the code runs fine, but the issue appears whenever someone calls the `ClientSession._request()` method directly AND passes its arguments as keywords.
### How to recreate the bug?
This is a general example of how to recreate the bug. Let's assume that somewhere in my code I want to connect to an external HTTP REST API using the aiohttp library, through a custom session object based on aiohttp's ClientSession.
```python
from typing import Any

from aiohttp import ClientSession, ClientResponse
from aiohttp.typedefs import StrOrURL  # aiohttp's Union[str, URL] alias

class CustomSession(ClientSession):
    async def _request(self, method: str, str_or_url: StrOrURL, **kwargs: Any) -> ClientResponse:
        # put some extra code here, like add retry functionality
        client_response = await super()._request(method=method, str_or_url=str_or_url, **kwargs)  # << ISSUE IS HERE
return client_response
```
The above code is valid and works as long as no APM agent is instrumenting the client. With the APM agent added, the call has to be written with positional arguments, as below:
```python
    client_response = await super()._request(method, str_or_url, **kwargs)
```
### How to fix it?
Search for both `url` and `str_or_url` in kwargs. It's a simple fix that can fit on a single line, I bet.
</issue>
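Editor's note: the tolerant lookup the issue asks for is essentially what the accepted diff further below does — try both keyword spellings before falling back to the positional argument. An illustrative sketch:

```python
# Mirrors the accepted fix shown later in this record.
method = kwargs.get("method", args[0])
url = kwargs.get("url", kwargs.get("str_or_url"))
if url is None:
    url = args[1]
url = str(url)
```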
<code>
[start of elasticapm/instrumentation/packages/asyncio/aiohttp_client.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm import async_capture_span
32 from elasticapm.conf import constants
33 from elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule
34 from elasticapm.traces import DroppedSpan, execution_context
35 from elasticapm.utils import get_host_from_url, sanitize_url
36 from elasticapm.utils.disttracing import TracingOptions
37
38
39 class AioHttpClientInstrumentation(AsyncAbstractInstrumentedModule):
40 name = "aiohttp_client"
41
42 instrument_list = [("aiohttp.client", "ClientSession._request")]
43
44 async def call(self, module, method, wrapped, instance, args, kwargs):
45 method = kwargs["method"] if "method" in kwargs else args[0]
46 url = kwargs["url"] if "url" in kwargs else args[1]
47 url = str(url)
48
49 signature = " ".join([method.upper(), get_host_from_url(url)])
50 url = sanitize_url(url)
51 transaction = execution_context.get_transaction()
52
53 async with async_capture_span(
54 signature,
55 span_type="external",
56 span_subtype="http",
57 extra={"http": {"url": url}},
58 leaf=True,
59 ) as span:
60 leaf_span = span
61 while isinstance(leaf_span, DroppedSpan):
62 leaf_span = leaf_span.parent
63
64 parent_id = leaf_span.id if leaf_span else transaction.id
65 trace_parent = transaction.trace_parent.copy_from(
66 span_id=parent_id, trace_options=TracingOptions(recorded=True)
67 )
68 headers = kwargs.get("headers") or {}
69 self._set_disttracing_headers(headers, trace_parent, transaction)
70 kwargs["headers"] = headers
71 response = await wrapped(*args, **kwargs)
72 if response:
73 if span.context:
74 span.context["http"]["status_code"] = response.status
75 span.set_success() if response.status < 400 else span.set_failure()
76 return response
77
78 def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):
79 # since we don't have a span, we set the span id to the transaction id
80 trace_parent = transaction.trace_parent.copy_from(
81 span_id=transaction.id, trace_options=TracingOptions(recorded=False)
82 )
83
84 headers = kwargs.get("headers") or {}
85 self._set_disttracing_headers(headers, trace_parent, transaction)
86 kwargs["headers"] = headers
87 return args, kwargs
88
89 def _set_disttracing_headers(self, headers, trace_parent, transaction):
90 trace_parent_str = trace_parent.to_string()
91 headers[constants.TRACEPARENT_HEADER_NAME] = trace_parent_str
92 if transaction.tracer.config.use_elastic_traceparent_header:
93 headers[constants.TRACEPARENT_LEGACY_HEADER_NAME] = trace_parent_str
94 if trace_parent.tracestate:
95 headers[constants.TRACESTATE_HEADER_NAME] = trace_parent.tracestate
96
[end of elasticapm/instrumentation/packages/asyncio/aiohttp_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py
--- a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py
+++ b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py
@@ -42,8 +42,10 @@
instrument_list = [("aiohttp.client", "ClientSession._request")]
async def call(self, module, method, wrapped, instance, args, kwargs):
- method = kwargs["method"] if "method" in kwargs else args[0]
- url = kwargs["url"] if "url" in kwargs else args[1]
+ method = kwargs.get("method", args[0])
+ url = kwargs.get("url", kwargs.get("str_or_url", None))
+ if url is None:
+ url = args[1]
url = str(url)
signature = " ".join([method.upper(), get_host_from_url(url)])
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py\n--- a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py\n+++ b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py\n@@ -42,8 +42,10 @@\n instrument_list = [(\"aiohttp.client\", \"ClientSession._request\")]\n \n async def call(self, module, method, wrapped, instance, args, kwargs):\n- method = kwargs[\"method\"] if \"method\" in kwargs else args[0]\n- url = kwargs[\"url\"] if \"url\" in kwargs else args[1]\n+ method = kwargs.get(\"method\", args[0])\n+ url = kwargs.get(\"url\", kwargs.get(\"str_or_url\", None))\n+ if url is None:\n+ url = args[1]\n url = str(url)\n \n signature = \" \".join([method.upper(), get_host_from_url(url)])\n", "issue": "IndexError when \"url\" not found in args or kwargs.\n### Overview\r\n\r\nI've found an issue in line 46 of `call()` method in `AioHttpClientInstrumentation(...)` class.\r\n\r\nhttps://github.com/elastic/apm-agent-python/blob/da93e7af448abcac367d216e2d20a584051f6e50/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py#L44-L47\r\n\r\nI'm getting an `IndexError` exception due to lack or \"url\" in both kwargs and in args[1]. The the reason is that the argument containing urls is called \"str_or_url\".\r\n\r\nhttps://github.com/aio-libs/aiohttp/blob/4b59d55e9e79f5a0b1932d6dc9f6b12a33d19266/aiohttp/client.py#L325-L328\r\n\r\nBy default the code is running fine, but this issue will appear in cases where someone will try to use `ClientSession._request()` method directly AND use keyword-arguments.\r\n\r\n### How to recreate the bug?\r\n\r\nThis is a general example on how to recreate the bug. Lets assume that somewhere in my code I want to connect to some external http rest-api service using aiohttp library. I'll be using custom made session object based on ClientSession object from aiohttp library.\r\n\r\n```python\r\nfrom aiohttp import ClientSession, ClientResponse\r\n\r\nclass CustomSession(ClientSession):\r\n\r\n async def _request(self, method: str, str_or_url: StrOrURL, **kwargs: Any) -> ClientResponse:\r\n # put some extra code here, like add retry functionality\r\n client_response = await super()._request(method=method, str_or_url=url, **kwargs) # << ISSUE IS HERE\r\n return client_response\r\n```\r\nthe above code is valid and it works as long as there is no apm agent running as middleware. With apm agent added, the code has to be written as below:\r\n```python\r\n client_response = await super()._request(method, url, **kwargs)\r\n```\r\n\r\n### How to fix it?\r\n\r\nDo search for both `url` and `str_or_url` in kwargs. 
It's simple fix that can fit in same line, I bet.\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm import async_capture_span\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule\nfrom elasticapm.traces import DroppedSpan, execution_context\nfrom elasticapm.utils import get_host_from_url, sanitize_url\nfrom elasticapm.utils.disttracing import TracingOptions\n\n\nclass AioHttpClientInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"aiohttp_client\"\n\n instrument_list = [(\"aiohttp.client\", \"ClientSession._request\")]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n method = kwargs[\"method\"] if \"method\" in kwargs else args[0]\n url = kwargs[\"url\"] if \"url\" in kwargs else args[1]\n url = str(url)\n\n signature = \" \".join([method.upper(), get_host_from_url(url)])\n url = sanitize_url(url)\n transaction = execution_context.get_transaction()\n\n async with async_capture_span(\n signature,\n span_type=\"external\",\n span_subtype=\"http\",\n extra={\"http\": {\"url\": url}},\n leaf=True,\n ) as span:\n leaf_span = span\n while isinstance(leaf_span, DroppedSpan):\n leaf_span = leaf_span.parent\n\n parent_id = leaf_span.id if leaf_span else transaction.id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=parent_id, trace_options=TracingOptions(recorded=True)\n )\n headers = kwargs.get(\"headers\") or {}\n self._set_disttracing_headers(headers, trace_parent, transaction)\n kwargs[\"headers\"] = headers\n response = await wrapped(*args, **kwargs)\n if response:\n if span.context:\n span.context[\"http\"][\"status_code\"] = response.status\n span.set_success() if response.status < 400 else span.set_failure()\n return response\n\n def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):\n # since we don't have a span, we set the span id to the transaction 
id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=transaction.id, trace_options=TracingOptions(recorded=False)\n )\n\n headers = kwargs.get(\"headers\") or {}\n self._set_disttracing_headers(headers, trace_parent, transaction)\n kwargs[\"headers\"] = headers\n return args, kwargs\n\n def _set_disttracing_headers(self, headers, trace_parent, transaction):\n trace_parent_str = trace_parent.to_string()\n headers[constants.TRACEPARENT_HEADER_NAME] = trace_parent_str\n if transaction.tracer.config.use_elastic_traceparent_header:\n headers[constants.TRACEPARENT_LEGACY_HEADER_NAME] = trace_parent_str\n if trace_parent.tracestate:\n headers[constants.TRACESTATE_HEADER_NAME] = trace_parent.tracestate\n", "path": "elasticapm/instrumentation/packages/asyncio/aiohttp_client.py"}]} | 2,203 | 231 |
gh_patches_debug_17921 | rasdani/github-patches | git_diff | mirumee__ariadne-59 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Drop Python 3.5
Due to the lack of support for [variable type annotations](https://www.python.org/dev/peps/pep-0526/), I suggest dropping support for Python 3.5. This is already a problem in #30, where either the code or mypy fails and the only solution is to remove the hints.
We might consider testing ariadne on 3.7 for future-proofing our project.
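For illustration (a sketch of mine, not part of the original report; the variable names are made up), this is the PEP 526 syntax that Python 3.5 cannot parse, next to the type-comment form it forces on the codebase. It mirrors the change the patch below makes in `merge_resolvers`:

```python
from collections import defaultdict

# Python 3.5: a variable type can only be given as a comment that tools must parse.
counts = defaultdict(dict)  # type: dict

# Python 3.6+ (PEP 526): a real variable annotation; this line is a SyntaxError on 3.5.
totals: dict = defaultdict(dict)
```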
</issue>
<code>
[start of ariadne/executable_schema.py]
1 from collections import defaultdict
2 from itertools import chain
3 from typing import Iterator, List, Union
4
5 from graphql import GraphQLSchema
6
7 from .build_schema import build_schema_from_type_definitions
8 from .resolvers import add_resolve_functions_to_schema
9
10
11 def decompose_maps(resolvers_maps: List[dict]) -> Iterator[tuple]:
12 def flatten(rm):
13 for key, value in rm.items():
14 for resolver_name, resolver in value.items():
15 yield (key, resolver_name, resolver)
16
17 return chain.from_iterable(flatten(m) for m in resolvers_maps)
18
19
20 def merge_resolvers(resolver_list: Iterator[tuple]) -> dict:
21 output = defaultdict(dict) # type: dict
22 for key, resolver_name, resolver in resolver_list:
23 output[key][resolver_name] = resolver
24 return output
25
26
27 def join_type_defs(type_defs: List[str]) -> str:
28 return "\n\n".join(t.strip() for t in type_defs)
29
30
31 def make_executable_schema(
32 type_defs: Union[str, List[str]], resolvers: Union[dict, List[dict]]
33 ) -> GraphQLSchema:
34 if isinstance(type_defs, list):
35 type_defs = join_type_defs(type_defs)
36
37 schema = build_schema_from_type_definitions(type_defs)
38
39 if isinstance(resolvers, list):
40 add_resolve_functions_to_schema(
41 schema, merge_resolvers(decompose_maps(resolvers))
42 )
43 elif isinstance(resolvers, dict):
44 add_resolve_functions_to_schema(schema, resolvers)
45
46 return schema
47
[end of ariadne/executable_schema.py]
[start of setup.py]
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Intended Audience :: Developers",
7 "License :: OSI Approved :: BSD License",
8 "Operating System :: OS Independent",
9 "Programming Language :: Python",
10 "Programming Language :: Python :: 3.5",
11 "Programming Language :: Python :: 3.6",
12 "Programming Language :: Python :: 3.7",
13 "Topic :: Software Development :: Libraries :: Python Modules",
14 ]
15
16 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
17 with open(README_PATH, "r") as f:
18 README = f.read()
19
20 setup(
21 name="ariadne",
22 author="Mirumee Software",
23 author_email="[email protected]",
24 description="Ariadne is a Python library for implementing GraphQL servers.",
25 long_description=README,
26 long_description_content_type="text/markdown",
27 license="BSD",
28 version="0.1.0",
29 url="https://github.com/mirumee/ariadne",
30 packages=["ariadne"],
31 install_requires=["graphql-core>=2.1", "typing>=3.6.0"],
32 classifiers=CLASSIFIERS,
33 platforms=["any"],
34 )
35
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py
--- a/ariadne/executable_schema.py
+++ b/ariadne/executable_schema.py
@@ -18,7 +18,7 @@
def merge_resolvers(resolver_list: Iterator[tuple]) -> dict:
- output = defaultdict(dict) # type: dict
+ output: dict = defaultdict(dict)
for key, resolver_name, resolver in resolver_list:
output[key][resolver_name] = resolver
return output
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,7 +7,6 @@
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Topic :: Software Development :: Libraries :: Python Modules",
| {"golden_diff": "diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py\n--- a/ariadne/executable_schema.py\n+++ b/ariadne/executable_schema.py\n@@ -18,7 +18,7 @@\n \n \n def merge_resolvers(resolver_list: Iterator[tuple]) -> dict:\n- output = defaultdict(dict) # type: dict\n+ output: dict = defaultdict(dict)\n for key, resolver_name, resolver in resolver_list:\n output[key][resolver_name] = resolver\n return output\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,7 +7,6 @@\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n- \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n", "issue": "Drop Python 3.5\nDue to lack of support for [variable type annotations](https://www.python.org/dev/peps/pep-0526/) I suggest to drop support for Python 3.5. This is already a problem in #30 in which either code or mypy is failing and only solution is to remove hints.\r\n\r\nWe might consider testing ariadne on 3.7 for future-proofing our project.\n", "before_files": [{"content": "from collections import defaultdict\nfrom itertools import chain\nfrom typing import Iterator, List, Union\n\nfrom graphql import GraphQLSchema\n\nfrom .build_schema import build_schema_from_type_definitions\nfrom .resolvers import add_resolve_functions_to_schema\n\n\ndef decompose_maps(resolvers_maps: List[dict]) -> Iterator[tuple]:\n def flatten(rm):\n for key, value in rm.items():\n for resolver_name, resolver in value.items():\n yield (key, resolver_name, resolver)\n\n return chain.from_iterable(flatten(m) for m in resolvers_maps)\n\n\ndef merge_resolvers(resolver_list: Iterator[tuple]) -> dict:\n output = defaultdict(dict) # type: dict\n for key, resolver_name, resolver in resolver_list:\n output[key][resolver_name] = resolver\n return output\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in type_defs)\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]], resolvers: Union[dict, List[dict]]\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n schema = build_schema_from_type_definitions(type_defs)\n\n if isinstance(resolvers, list):\n add_resolve_functions_to_schema(\n schema, merge_resolvers(decompose_maps(resolvers))\n )\n elif isinstance(resolvers, dict):\n add_resolve_functions_to_schema(schema, resolvers)\n\n return schema\n", "path": "ariadne/executable_schema.py"}, {"content": "#! 
/usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.1.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n install_requires=[\"graphql-core>=2.1\", \"typing>=3.6.0\"],\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n)\n", "path": "setup.py"}]} | 1,387 | 224 |
gh_patches_debug_4177 | rasdani/github-patches | git_diff | pretix__pretix-808 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Display remaining quota of add-on products
Currently, if you enable "Show number of tickets left" on an event, that only applies to products, not to add-on products. In my opinion, if there is a quota present for an add-on product, the number of tickets left should also be displayed (the quota being exceeded is already displayed).
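A minimal sketch of the requested behaviour (my illustration; `availability_suffix` and its arguments are made up, but the `(state, tickets_left)` tuple and the 20/100 thresholds follow the `check_quotas()` usage in `AddOnsForm._label()` below):

```python
# Toy version of the add-on label suffix, mirroring what the main product list shows.
def availability_suffix(avail, show_quota_left=True):
    state, left = avail                       # as returned by check_quotas()
    if state < 20:
        return ' - SOLD OUT'
    if state < 100:
        return ' - Currently unavailable'
    if left is not None and show_quota_left:  # None means an unlimited quota
        return ' - {} currently available'.format(left)
    return ''

assert availability_suffix((100, 3)) == ' - 3 currently available'
assert availability_suffix((100, None)) == ''
```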
</issue>
<code>
[start of src/pretix/presale/forms/checkout.py]
1 from itertools import chain
2
3 from django import forms
4 from django.core.exceptions import ValidationError
5 from django.db.models import Count, Prefetch, Q
6 from django.utils.encoding import force_text
7 from django.utils.formats import number_format
8 from django.utils.timezone import now
9 from django.utils.translation import ugettext_lazy as _
10
11 from pretix.base.forms.questions import (
12 BaseInvoiceAddressForm, BaseQuestionsForm,
13 )
14 from pretix.base.models import ItemVariation
15 from pretix.base.models.tax import TAXED_ZERO
16 from pretix.base.templatetags.money import money_filter
17 from pretix.base.templatetags.rich_text import rich_text
18 from pretix.base.validators import EmailBlacklistValidator
19 from pretix.presale.signals import contact_form_fields
20
21
22 class ContactForm(forms.Form):
23 required_css_class = 'required'
24 email = forms.EmailField(label=_('E-mail'),
25 help_text=_('Make sure to enter a valid email address. We will send you an order '
26 'confirmation including a link that you need in case you want to make '
27 'modifications to your order or download your ticket later.'),
28 validators=[EmailBlacklistValidator()],
29 widget=forms.EmailInput(attrs={'autofocus': 'autofocus'}))
30
31 def __init__(self, *args, **kwargs):
32 self.event = kwargs.pop('event')
33 self.request = kwargs.pop('request')
34 super().__init__(*args, **kwargs)
35
36 if self.event.settings.order_email_asked_twice:
37 self.fields['email_repeat'] = forms.EmailField(
38 label=_('E-mail address (repeated)'),
39 help_text=_('Please enter the same email address again to make sure you typed it correctly.')
40 )
41
42 responses = contact_form_fields.send(self.event, request=self.request)
43 for r, response in sorted(responses, key=lambda r: str(r[0])):
44 for key, value in response.items():
45 # We need to be this explicit, since OrderedDict.update does not retain ordering
46 self.fields[key] = value
47
48 def clean(self):
49 if self.event.settings.order_email_asked_twice and self.cleaned_data.get('email') and self.cleaned_data.get('email_repeat'):
50 if self.cleaned_data.get('email').lower() != self.cleaned_data.get('email_repeat').lower():
51 raise ValidationError(_('Please enter the same email address twice.'))
52
53
54 class InvoiceAddressForm(BaseInvoiceAddressForm):
55 required_css_class = 'required'
56 vat_warning = True
57
58
59 class QuestionsForm(BaseQuestionsForm):
60 """
61 This form class is responsible for asking order-related questions. This includes
62 the attendee name for admission tickets, if the corresponding setting is enabled,
63 as well as additional questions defined by the organizer.
64 """
65 required_css_class = 'required'
66
67
68 class AddOnRadioSelect(forms.RadioSelect):
69 option_template_name = 'pretixpresale/forms/addon_choice_option.html'
70
71 def optgroups(self, name, value, attrs=None):
72 attrs = attrs or {}
73 groups = []
74 has_selected = False
75 for index, (option_value, option_label, option_desc) in enumerate(chain(self.choices)):
76 if option_value is None:
77 option_value = ''
78 if isinstance(option_label, (list, tuple)):
79 raise TypeError('Choice groups are not supported here')
80 group_name = None
81 subgroup = []
82 groups.append((group_name, subgroup, index))
83
84 selected = (
85 force_text(option_value) in value and
86 (has_selected is False or self.allow_multiple_selected)
87 )
88 if selected is True and has_selected is False:
89 has_selected = True
90 attrs['description'] = option_desc
91 subgroup.append(self.create_option(
92 name, option_value, option_label, selected, index,
93 subindex=None, attrs=attrs,
94 ))
95
96 return groups
97
98
99 class AddOnVariationField(forms.ChoiceField):
100 def valid_value(self, value):
101 text_value = force_text(value)
102 for k, v, d in self.choices:
103 if value == k or text_value == force_text(k):
104 return True
105 return False
106
107
108 class AddOnsForm(forms.Form):
109 """
110 This form class is responsible for selecting add-ons to a product in the cart.
111 """
112
113 def _label(self, event, item_or_variation, avail, override_price=None):
114 if isinstance(item_or_variation, ItemVariation):
115 variation = item_or_variation
116 item = item_or_variation.item
117 price = variation.price
118 label = variation.value
119 else:
120 item = item_or_variation
121 price = item.default_price
122 label = item.name
123
124 if override_price:
125 price = override_price
126
127 if self.price_included:
128 price = TAXED_ZERO
129 else:
130 price = item.tax(price)
131
132 if not price.gross:
133 n = '{name}'.format(
134 name=label
135 )
136 elif not price.rate:
137 n = _('{name} (+ {price})').format(
138 name=label, price=money_filter(price.gross, event.currency)
139 )
140 elif event.settings.display_net_prices:
141 n = _('{name} (+ {price} plus {taxes}% {taxname})').format(
142 name=label, price=money_filter(price.net, event.currency),
143 taxes=number_format(price.rate), taxname=price.name
144 )
145 else:
146 n = _('{name} (+ {price} incl. {taxes}% {taxname})').format(
147 name=label, price=money_filter(price.gross, event.currency),
148 taxes=number_format(price.rate), taxname=price.name
149 )
150
151 if avail[0] < 20:
152 n += ' – {}'.format(_('SOLD OUT'))
153 elif avail[0] < 100:
154 n += ' – {}'.format(_('Currently unavailable'))
155
156 return n
157
158 def __init__(self, *args, **kwargs):
159 """
160 Takes additional keyword arguments:
161
162 :param category: The category to choose from
163 :param event: The event this belongs to
164 :param subevent: The event the parent cart position belongs to
165 :param initial: The current set of add-ons
166 :param quota_cache: A shared dictionary for quota caching
167 :param item_cache: A shared dictionary for item/category caching
168 """
169 category = kwargs.pop('category')
170 event = kwargs.pop('event')
171 subevent = kwargs.pop('subevent')
172 current_addons = kwargs.pop('initial')
173 quota_cache = kwargs.pop('quota_cache')
174 item_cache = kwargs.pop('item_cache')
175 self.price_included = kwargs.pop('price_included')
176
177 super().__init__(*args, **kwargs)
178
179 if subevent:
180 item_price_override = subevent.item_price_overrides
181 var_price_override = subevent.var_price_overrides
182 else:
183 item_price_override = {}
184 var_price_override = {}
185
186 ckey = '{}-{}'.format(subevent.pk if subevent else 0, category.pk)
187 if ckey not in item_cache:
188 # Get all items to possibly show
189 items = category.items.filter(
190 Q(active=True)
191 & Q(Q(available_from__isnull=True) | Q(available_from__lte=now()))
192 & Q(Q(available_until__isnull=True) | Q(available_until__gte=now()))
193 & Q(hide_without_voucher=False)
194 ).select_related('tax_rule').prefetch_related(
195 Prefetch('quotas',
196 to_attr='_subevent_quotas',
197 queryset=event.quotas.filter(subevent=subevent)),
198 Prefetch('variations', to_attr='available_variations',
199 queryset=ItemVariation.objects.filter(active=True, quotas__isnull=False).prefetch_related(
200 Prefetch('quotas',
201 to_attr='_subevent_quotas',
202 queryset=event.quotas.filter(subevent=subevent))
203 ).distinct()),
204 ).annotate(
205 quotac=Count('quotas'),
206 has_variations=Count('variations')
207 ).filter(
208 quotac__gt=0
209 ).order_by('category__position', 'category_id', 'position', 'name')
210 item_cache[ckey] = items
211 else:
212 items = item_cache[ckey]
213
214 for i in items:
215 if i.has_variations:
216 choices = [('', _('no selection'), '')]
217 for v in i.available_variations:
218 cached_availability = v.check_quotas(subevent=subevent, _cache=quota_cache)
219 if v._subevent_quotas:
220 choices.append(
221 (v.pk,
222 self._label(event, v, cached_availability,
223 override_price=var_price_override.get(v.pk)),
224 v.description)
225 )
226
227 field = AddOnVariationField(
228 choices=choices,
229 label=i.name,
230 required=False,
231 widget=AddOnRadioSelect,
232 help_text=rich_text(str(i.description)),
233 initial=current_addons.get(i.pk),
234 )
235 if len(choices) > 1:
236 self.fields['item_%s' % i.pk] = field
237 else:
238 if not i._subevent_quotas:
239 continue
240 cached_availability = i.check_quotas(subevent=subevent, _cache=quota_cache)
241 field = forms.BooleanField(
242 label=self._label(event, i, cached_availability,
243 override_price=item_price_override.get(i.pk)),
244 required=False,
245 initial=i.pk in current_addons,
246 help_text=rich_text(str(i.description)),
247 )
248 self.fields['item_%s' % i.pk] = field
249
[end of src/pretix/presale/forms/checkout.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pretix/presale/forms/checkout.py b/src/pretix/presale/forms/checkout.py
--- a/src/pretix/presale/forms/checkout.py
+++ b/src/pretix/presale/forms/checkout.py
@@ -152,6 +152,9 @@
n += ' – {}'.format(_('SOLD OUT'))
elif avail[0] < 100:
n += ' – {}'.format(_('Currently unavailable'))
+ else:
+ if avail[1] is not None and event.settings.show_quota_left:
+ n += ' – {}'.format(_('%(num)s currently available') % {'num': avail[1]})
return n
| {"golden_diff": "diff --git a/src/pretix/presale/forms/checkout.py b/src/pretix/presale/forms/checkout.py\n--- a/src/pretix/presale/forms/checkout.py\n+++ b/src/pretix/presale/forms/checkout.py\n@@ -152,6 +152,9 @@\n n += ' \u2013 {}'.format(_('SOLD OUT'))\n elif avail[0] < 100:\n n += ' \u2013 {}'.format(_('Currently unavailable'))\n+ else:\n+ if avail[1] is not None and event.settings.show_quota_left:\n+ n += ' \u2013 {}'.format(_('%(num)s currently available') % {'num': avail[1]})\n \n return n\n", "issue": "Display remaining quota of add-on products\nCurrently if you enable \"Show number of tickets left\" on an event that only applies to products, not add-on products. In my opinion if there is an quota presend for add-on products it should also be displayed (if the quota is exceeded that is already displayed).\nDisplay remaining quota of add-on products\nCurrently if you enable \"Show number of tickets left\" on an event that only applies to products, not add-on products. In my opinion if there is an quota presend for add-on products it should also be displayed (if the quota is exceeded that is already displayed).\n", "before_files": [{"content": "from itertools import chain\n\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Count, Prefetch, Q\nfrom django.utils.encoding import force_text\nfrom django.utils.formats import number_format\nfrom django.utils.timezone import now\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pretix.base.forms.questions import (\n BaseInvoiceAddressForm, BaseQuestionsForm,\n)\nfrom pretix.base.models import ItemVariation\nfrom pretix.base.models.tax import TAXED_ZERO\nfrom pretix.base.templatetags.money import money_filter\nfrom pretix.base.templatetags.rich_text import rich_text\nfrom pretix.base.validators import EmailBlacklistValidator\nfrom pretix.presale.signals import contact_form_fields\n\n\nclass ContactForm(forms.Form):\n required_css_class = 'required'\n email = forms.EmailField(label=_('E-mail'),\n help_text=_('Make sure to enter a valid email address. 
We will send you an order '\n 'confirmation including a link that you need in case you want to make '\n 'modifications to your order or download your ticket later.'),\n validators=[EmailBlacklistValidator()],\n widget=forms.EmailInput(attrs={'autofocus': 'autofocus'}))\n\n def __init__(self, *args, **kwargs):\n self.event = kwargs.pop('event')\n self.request = kwargs.pop('request')\n super().__init__(*args, **kwargs)\n\n if self.event.settings.order_email_asked_twice:\n self.fields['email_repeat'] = forms.EmailField(\n label=_('E-mail address (repeated)'),\n help_text=_('Please enter the same email address again to make sure you typed it correctly.')\n )\n\n responses = contact_form_fields.send(self.event, request=self.request)\n for r, response in sorted(responses, key=lambda r: str(r[0])):\n for key, value in response.items():\n # We need to be this explicit, since OrderedDict.update does not retain ordering\n self.fields[key] = value\n\n def clean(self):\n if self.event.settings.order_email_asked_twice and self.cleaned_data.get('email') and self.cleaned_data.get('email_repeat'):\n if self.cleaned_data.get('email').lower() != self.cleaned_data.get('email_repeat').lower():\n raise ValidationError(_('Please enter the same email address twice.'))\n\n\nclass InvoiceAddressForm(BaseInvoiceAddressForm):\n required_css_class = 'required'\n vat_warning = True\n\n\nclass QuestionsForm(BaseQuestionsForm):\n \"\"\"\n This form class is responsible for asking order-related questions. This includes\n the attendee name for admission tickets, if the corresponding setting is enabled,\n as well as additional questions defined by the organizer.\n \"\"\"\n required_css_class = 'required'\n\n\nclass AddOnRadioSelect(forms.RadioSelect):\n option_template_name = 'pretixpresale/forms/addon_choice_option.html'\n\n def optgroups(self, name, value, attrs=None):\n attrs = attrs or {}\n groups = []\n has_selected = False\n for index, (option_value, option_label, option_desc) in enumerate(chain(self.choices)):\n if option_value is None:\n option_value = ''\n if isinstance(option_label, (list, tuple)):\n raise TypeError('Choice groups are not supported here')\n group_name = None\n subgroup = []\n groups.append((group_name, subgroup, index))\n\n selected = (\n force_text(option_value) in value and\n (has_selected is False or self.allow_multiple_selected)\n )\n if selected is True and has_selected is False:\n has_selected = True\n attrs['description'] = option_desc\n subgroup.append(self.create_option(\n name, option_value, option_label, selected, index,\n subindex=None, attrs=attrs,\n ))\n\n return groups\n\n\nclass AddOnVariationField(forms.ChoiceField):\n def valid_value(self, value):\n text_value = force_text(value)\n for k, v, d in self.choices:\n if value == k or text_value == force_text(k):\n return True\n return False\n\n\nclass AddOnsForm(forms.Form):\n \"\"\"\n This form class is responsible for selecting add-ons to a product in the cart.\n \"\"\"\n\n def _label(self, event, item_or_variation, avail, override_price=None):\n if isinstance(item_or_variation, ItemVariation):\n variation = item_or_variation\n item = item_or_variation.item\n price = variation.price\n label = variation.value\n else:\n item = item_or_variation\n price = item.default_price\n label = item.name\n\n if override_price:\n price = override_price\n\n if self.price_included:\n price = TAXED_ZERO\n else:\n price = item.tax(price)\n\n if not price.gross:\n n = '{name}'.format(\n name=label\n )\n elif not price.rate:\n n = _('{name} (+ 
{price})').format(\n name=label, price=money_filter(price.gross, event.currency)\n )\n elif event.settings.display_net_prices:\n n = _('{name} (+ {price} plus {taxes}% {taxname})').format(\n name=label, price=money_filter(price.net, event.currency),\n taxes=number_format(price.rate), taxname=price.name\n )\n else:\n n = _('{name} (+ {price} incl. {taxes}% {taxname})').format(\n name=label, price=money_filter(price.gross, event.currency),\n taxes=number_format(price.rate), taxname=price.name\n )\n\n if avail[0] < 20:\n n += ' \u2013 {}'.format(_('SOLD OUT'))\n elif avail[0] < 100:\n n += ' \u2013 {}'.format(_('Currently unavailable'))\n\n return n\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Takes additional keyword arguments:\n\n :param category: The category to choose from\n :param event: The event this belongs to\n :param subevent: The event the parent cart position belongs to\n :param initial: The current set of add-ons\n :param quota_cache: A shared dictionary for quota caching\n :param item_cache: A shared dictionary for item/category caching\n \"\"\"\n category = kwargs.pop('category')\n event = kwargs.pop('event')\n subevent = kwargs.pop('subevent')\n current_addons = kwargs.pop('initial')\n quota_cache = kwargs.pop('quota_cache')\n item_cache = kwargs.pop('item_cache')\n self.price_included = kwargs.pop('price_included')\n\n super().__init__(*args, **kwargs)\n\n if subevent:\n item_price_override = subevent.item_price_overrides\n var_price_override = subevent.var_price_overrides\n else:\n item_price_override = {}\n var_price_override = {}\n\n ckey = '{}-{}'.format(subevent.pk if subevent else 0, category.pk)\n if ckey not in item_cache:\n # Get all items to possibly show\n items = category.items.filter(\n Q(active=True)\n & Q(Q(available_from__isnull=True) | Q(available_from__lte=now()))\n & Q(Q(available_until__isnull=True) | Q(available_until__gte=now()))\n & Q(hide_without_voucher=False)\n ).select_related('tax_rule').prefetch_related(\n Prefetch('quotas',\n to_attr='_subevent_quotas',\n queryset=event.quotas.filter(subevent=subevent)),\n Prefetch('variations', to_attr='available_variations',\n queryset=ItemVariation.objects.filter(active=True, quotas__isnull=False).prefetch_related(\n Prefetch('quotas',\n to_attr='_subevent_quotas',\n queryset=event.quotas.filter(subevent=subevent))\n ).distinct()),\n ).annotate(\n quotac=Count('quotas'),\n has_variations=Count('variations')\n ).filter(\n quotac__gt=0\n ).order_by('category__position', 'category_id', 'position', 'name')\n item_cache[ckey] = items\n else:\n items = item_cache[ckey]\n\n for i in items:\n if i.has_variations:\n choices = [('', _('no selection'), '')]\n for v in i.available_variations:\n cached_availability = v.check_quotas(subevent=subevent, _cache=quota_cache)\n if v._subevent_quotas:\n choices.append(\n (v.pk,\n self._label(event, v, cached_availability,\n override_price=var_price_override.get(v.pk)),\n v.description)\n )\n\n field = AddOnVariationField(\n choices=choices,\n label=i.name,\n required=False,\n widget=AddOnRadioSelect,\n help_text=rich_text(str(i.description)),\n initial=current_addons.get(i.pk),\n )\n if len(choices) > 1:\n self.fields['item_%s' % i.pk] = field\n else:\n if not i._subevent_quotas:\n continue\n cached_availability = i.check_quotas(subevent=subevent, _cache=quota_cache)\n field = forms.BooleanField(\n label=self._label(event, i, cached_availability,\n override_price=item_price_override.get(i.pk)),\n required=False,\n initial=i.pk in current_addons,\n 
help_text=rich_text(str(i.description)),\n )\n self.fields['item_%s' % i.pk] = field\n", "path": "src/pretix/presale/forms/checkout.py"}]} | 3,354 | 155 |
gh_patches_debug_7699 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-3802 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tab-take should not show the tabs in current window
<!-- If this is a bug report, please remember to mention your version info from
`:open qute:version` or `qutebrowser --version` -->
If I am in one window and type ```:tab-take ```, the completion options for the available tabs pop up. All the tabs in the current window are shown in the options as well. However, a window cannot take its own tab, so I think the tabs of the current window should be removed from the completion options for ```:tab-take```.


.
</issue>
<code>
[start of qutebrowser/completion/models/miscmodels.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2018 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Functions that return miscellaneous completion models."""
21
22 from qutebrowser.config import configdata
23 from qutebrowser.utils import objreg, log
24 from qutebrowser.completion.models import completionmodel, listcategory, util
25
26
27 def command(*, info):
28 """A CompletionModel filled with non-hidden commands and descriptions."""
29 model = completionmodel.CompletionModel(column_widths=(20, 60, 20))
30 cmdlist = util.get_cmd_completions(info, include_aliases=True,
31 include_hidden=False)
32 model.add_category(listcategory.ListCategory("Commands", cmdlist))
33 return model
34
35
36 def helptopic(*, info):
37 """A CompletionModel filled with help topics."""
38 model = completionmodel.CompletionModel()
39
40 cmdlist = util.get_cmd_completions(info, include_aliases=False,
41 include_hidden=True, prefix=':')
42 settings = ((opt.name, opt.description)
43 for opt in configdata.DATA.values())
44
45 model.add_category(listcategory.ListCategory("Commands", cmdlist))
46 model.add_category(listcategory.ListCategory("Settings", settings))
47 return model
48
49
50 def quickmark(*, info=None): # pylint: disable=unused-argument
51 """A CompletionModel filled with all quickmarks."""
52 def delete(data):
53 """Delete a quickmark from the completion menu."""
54 name = data[0]
55 quickmark_manager = objreg.get('quickmark-manager')
56 log.completion.debug('Deleting quickmark {}'.format(name))
57 quickmark_manager.delete(name)
58
59 model = completionmodel.CompletionModel(column_widths=(30, 70, 0))
60 marks = objreg.get('quickmark-manager').marks.items()
61 model.add_category(listcategory.ListCategory('Quickmarks', marks,
62 delete_func=delete,
63 sort=False))
64 return model
65
66
67 def bookmark(*, info=None): # pylint: disable=unused-argument
68 """A CompletionModel filled with all bookmarks."""
69 def delete(data):
70 """Delete a bookmark from the completion menu."""
71 urlstr = data[0]
72 log.completion.debug('Deleting bookmark {}'.format(urlstr))
73 bookmark_manager = objreg.get('bookmark-manager')
74 bookmark_manager.delete(urlstr)
75
76 model = completionmodel.CompletionModel(column_widths=(30, 70, 0))
77 marks = objreg.get('bookmark-manager').marks.items()
78 model.add_category(listcategory.ListCategory('Bookmarks', marks,
79 delete_func=delete,
80 sort=False))
81 return model
82
83
84 def session(*, info=None): # pylint: disable=unused-argument
85 """A CompletionModel filled with session names."""
86 model = completionmodel.CompletionModel()
87 try:
88 manager = objreg.get('session-manager')
89 sessions = ((name,) for name in manager.list_sessions()
90 if not name.startswith('_'))
91 model.add_category(listcategory.ListCategory("Sessions", sessions))
92 except OSError:
93 log.completion.exception("Failed to list sessions!")
94 return model
95
96
97 def _buffer(skip_win_id=None):
98 """Helper to get the completion model for buffer/other_buffer.
99
100 Args:
101 skip_win_id: The id of the window to skip, or None to include all.
102 """
103 def delete_buffer(data):
104 """Close the selected tab."""
105 win_id, tab_index = data[0].split('/')
106 tabbed_browser = objreg.get('tabbed-browser', scope='window',
107 window=int(win_id))
108 tabbed_browser.on_tab_close_requested(int(tab_index) - 1)
109
110 model = completionmodel.CompletionModel(column_widths=(6, 40, 54))
111
112 for win_id in objreg.window_registry:
113 if skip_win_id and win_id == skip_win_id:
114 continue
115 tabbed_browser = objreg.get('tabbed-browser', scope='window',
116 window=win_id)
117 if tabbed_browser.shutting_down:
118 continue
119 tabs = []
120 for idx in range(tabbed_browser.widget.count()):
121 tab = tabbed_browser.widget.widget(idx)
122 tabs.append(("{}/{}".format(win_id, idx + 1),
123 tab.url().toDisplayString(),
124 tabbed_browser.widget.page_title(idx)))
125 cat = listcategory.ListCategory("{}".format(win_id), tabs,
126 delete_func=delete_buffer)
127 model.add_category(cat)
128
129 return model
130
131
132 def buffer(*, info=None): # pylint: disable=unused-argument
133 """A model to complete on open tabs across all windows.
134
135 Used for switching the buffer command.
136 """
137 return _buffer()
138
139
140 def other_buffer(*, info):
141 """A model to complete on open tabs across all windows except the current.
142
143 Used for the tab-take command.
144 """
145 return _buffer(skip_win_id=info.win_id)
146
147
148 def window(*, info):
149 """A model to complete on all open windows."""
150 model = completionmodel.CompletionModel(column_widths=(6, 30, 64))
151
152 windows = []
153
154 for win_id in objreg.window_registry:
155 if win_id == info.win_id:
156 continue
157 tabbed_browser = objreg.get('tabbed-browser', scope='window',
158 window=win_id)
159 tab_titles = (tab.title() for tab in tabbed_browser.widgets())
160 windows.append(("{}".format(win_id),
161 objreg.window_registry[win_id].windowTitle(),
162 ", ".join(tab_titles)))
163
164 model.add_category(listcategory.ListCategory("Windows", windows))
165
166 return model
167
[end of qutebrowser/completion/models/miscmodels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutebrowser/completion/models/miscmodels.py b/qutebrowser/completion/models/miscmodels.py
--- a/qutebrowser/completion/models/miscmodels.py
+++ b/qutebrowser/completion/models/miscmodels.py
@@ -110,7 +110,7 @@
model = completionmodel.CompletionModel(column_widths=(6, 40, 54))
for win_id in objreg.window_registry:
- if skip_win_id and win_id == skip_win_id:
+ if skip_win_id is not None and win_id == skip_win_id:
continue
tabbed_browser = objreg.get('tabbed-browser', scope='window',
window=win_id)
| {"golden_diff": "diff --git a/qutebrowser/completion/models/miscmodels.py b/qutebrowser/completion/models/miscmodels.py\n--- a/qutebrowser/completion/models/miscmodels.py\n+++ b/qutebrowser/completion/models/miscmodels.py\n@@ -110,7 +110,7 @@\n model = completionmodel.CompletionModel(column_widths=(6, 40, 54))\n \n for win_id in objreg.window_registry:\n- if skip_win_id and win_id == skip_win_id:\n+ if skip_win_id is not None and win_id == skip_win_id:\n continue\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=win_id)\n", "issue": "Tab-take should not show the tabs in current window\n<!-- If this is a bug report, please remember to mention your version info from\r\n`:open qute:version` or `qutebrowser --version` -->\r\nIf I am in one window and write ```:tab-take ```, the options of available tabs will pop up. All the tabs in the current window will be shown in the options as well. However, a window can not take its own tab, so I think we should remove the tabs in the current window from the options for the ```:tab-take```\r\n\r\n\r\n\r\n. \n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2018 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Functions that return miscellaneous completion models.\"\"\"\n\nfrom qutebrowser.config import configdata\nfrom qutebrowser.utils import objreg, log\nfrom qutebrowser.completion.models import completionmodel, listcategory, util\n\n\ndef command(*, info):\n \"\"\"A CompletionModel filled with non-hidden commands and descriptions.\"\"\"\n model = completionmodel.CompletionModel(column_widths=(20, 60, 20))\n cmdlist = util.get_cmd_completions(info, include_aliases=True,\n include_hidden=False)\n model.add_category(listcategory.ListCategory(\"Commands\", cmdlist))\n return model\n\n\ndef helptopic(*, info):\n \"\"\"A CompletionModel filled with help topics.\"\"\"\n model = completionmodel.CompletionModel()\n\n cmdlist = util.get_cmd_completions(info, include_aliases=False,\n include_hidden=True, prefix=':')\n settings = ((opt.name, opt.description)\n for opt in configdata.DATA.values())\n\n model.add_category(listcategory.ListCategory(\"Commands\", cmdlist))\n model.add_category(listcategory.ListCategory(\"Settings\", settings))\n return model\n\n\ndef quickmark(*, info=None): # pylint: disable=unused-argument\n \"\"\"A CompletionModel filled with all quickmarks.\"\"\"\n def delete(data):\n \"\"\"Delete a quickmark from the completion menu.\"\"\"\n name = data[0]\n quickmark_manager = objreg.get('quickmark-manager')\n log.completion.debug('Deleting quickmark {}'.format(name))\n quickmark_manager.delete(name)\n\n model = completionmodel.CompletionModel(column_widths=(30, 70, 0))\n marks = objreg.get('quickmark-manager').marks.items()\n model.add_category(listcategory.ListCategory('Quickmarks', marks,\n delete_func=delete,\n sort=False))\n return model\n\n\ndef bookmark(*, info=None): # pylint: disable=unused-argument\n \"\"\"A CompletionModel filled with all bookmarks.\"\"\"\n def delete(data):\n \"\"\"Delete a bookmark from the completion menu.\"\"\"\n urlstr = data[0]\n log.completion.debug('Deleting bookmark {}'.format(urlstr))\n bookmark_manager = objreg.get('bookmark-manager')\n bookmark_manager.delete(urlstr)\n\n model = completionmodel.CompletionModel(column_widths=(30, 70, 0))\n marks = objreg.get('bookmark-manager').marks.items()\n model.add_category(listcategory.ListCategory('Bookmarks', marks,\n delete_func=delete,\n sort=False))\n return model\n\n\ndef session(*, info=None): # pylint: disable=unused-argument\n \"\"\"A CompletionModel filled with session names.\"\"\"\n model = completionmodel.CompletionModel()\n try:\n manager = objreg.get('session-manager')\n sessions = ((name,) for name in manager.list_sessions()\n if not name.startswith('_'))\n model.add_category(listcategory.ListCategory(\"Sessions\", sessions))\n except OSError:\n log.completion.exception(\"Failed to list sessions!\")\n return model\n\n\ndef _buffer(skip_win_id=None):\n \"\"\"Helper to get the completion model for buffer/other_buffer.\n\n Args:\n skip_win_id: The id of the window to skip, or None to include all.\n \"\"\"\n def delete_buffer(data):\n \"\"\"Close the selected tab.\"\"\"\n win_id, tab_index = data[0].split('/')\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=int(win_id))\n tabbed_browser.on_tab_close_requested(int(tab_index) - 1)\n\n model = completionmodel.CompletionModel(column_widths=(6, 40, 54))\n\n for win_id in objreg.window_registry:\n if skip_win_id and win_id == skip_win_id:\n continue\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=win_id)\n if tabbed_browser.shutting_down:\n continue\n 
tabs = []\n for idx in range(tabbed_browser.widget.count()):\n tab = tabbed_browser.widget.widget(idx)\n tabs.append((\"{}/{}\".format(win_id, idx + 1),\n tab.url().toDisplayString(),\n tabbed_browser.widget.page_title(idx)))\n cat = listcategory.ListCategory(\"{}\".format(win_id), tabs,\n delete_func=delete_buffer)\n model.add_category(cat)\n\n return model\n\n\ndef buffer(*, info=None): # pylint: disable=unused-argument\n \"\"\"A model to complete on open tabs across all windows.\n\n Used for switching the buffer command.\n \"\"\"\n return _buffer()\n\n\ndef other_buffer(*, info):\n \"\"\"A model to complete on open tabs across all windows except the current.\n\n Used for the tab-take command.\n \"\"\"\n return _buffer(skip_win_id=info.win_id)\n\n\ndef window(*, info):\n \"\"\"A model to complete on all open windows.\"\"\"\n model = completionmodel.CompletionModel(column_widths=(6, 30, 64))\n\n windows = []\n\n for win_id in objreg.window_registry:\n if win_id == info.win_id:\n continue\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=win_id)\n tab_titles = (tab.title() for tab in tabbed_browser.widgets())\n windows.append((\"{}\".format(win_id),\n objreg.window_registry[win_id].windowTitle(),\n \", \".join(tab_titles)))\n\n model.add_category(listcategory.ListCategory(\"Windows\", windows))\n\n return model\n", "path": "qutebrowser/completion/models/miscmodels.py"}]} | 2,605 | 152 |
gh_patches_debug_10782 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1076 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Type refactoring has a regression with inheritance and explicit fields.
```python
@strawberry.input
class A:
a: str = strawberry.field(default='', desc='')
@strawberry.input
class B(A):
b: Optional[str] = strawberry.field(default=None, desc='')
@strawberry.type
class Query:
@strawberry.field
def field(self, arg: B) -> str:
return ''
schema = strawberry.Schema(query=Query)
result = schema.execute_sync('{ field(arg: {}) }')
assert not result.errors
```
raises `TypeError: B fields cannot be resolved. unhashable type: 'StrawberryAnnotation'`.
`StrawberryAnnotation` has a custom `__eq__` without a `__hash__`, causing a set lookup to fail. However, adding a suitable `__hash__` just led to the next `TypeError`.
`StrawberryOptional` likely has the same problem.
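For background, a minimal sketch of the underlying Python behaviour (the `Annotation` class here is mine, not Strawberry's): defining `__eq__` without `__hash__` implicitly sets `__hash__` to `None`, so instances cannot be used in sets or as dict keys, which is exactly the failure above.

```python
class Annotation:
    def __init__(self, annotation):
        self.annotation = annotation

    def __eq__(self, other):  # defining __eq__ alone sets __hash__ to None
        return isinstance(other, Annotation) and self.annotation == other.annotation


a = Annotation(str)
try:
    {a}  # set membership requires hashing
except TypeError as exc:
    print(exc)  # unhashable type: 'Annotation'
```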
</issue>
<code>
[start of strawberry/types/type_resolver.py]
1 import dataclasses
2 import sys
3 from typing import Dict, List, Type
4
5 from strawberry.annotation import StrawberryAnnotation
6 from strawberry.exceptions import (
7 FieldWithResolverAndDefaultFactoryError,
8 FieldWithResolverAndDefaultValueError,
9 PrivateStrawberryFieldError,
10 )
11 from strawberry.field import StrawberryField
12 from strawberry.private import Private
13 from strawberry.utils.str_converters import to_camel_case
14
15 from ..arguments import UNSET
16
17
18 def _get_fields(cls: Type) -> List[StrawberryField]:
19 """Get all the strawberry fields off a strawberry.type cls
20
21 This function returns a list of StrawberryFields (one for each field item), while
22 also paying attention the name and typing of the field.
23
24 StrawberryFields can be defined on a strawberry.type class as either a dataclass-
25 style field or using strawberry.field as a decorator.
26
27 >>> import strawberry
28 >>> @strawberry.type
29 ... class Query:
30 ... type_1a: int = 5
31 ... type_1b: int = strawberry.field(...)
32 ... type_1c: int = strawberry.field(resolver=...)
33 ...
34 ... @strawberry.field
35 ... def type_2(self) -> int:
36 ... ...
37
38 Type #1:
39 A pure dataclass-style field. Will not have a StrawberryField; one will need to
40 be created in this function. Type annotation is required.
41
42 Type #2:
43 A field defined using @strawberry.field as a decorator around the resolver. The
44 resolver must be type-annotated.
45
46 The StrawberryField.python_name value will be assigned to the field's name on the
47 class if one is not set by either using an explicit strawberry.field(name=...) or by
48 passing a named function (i.e. not an anonymous lambda) to strawberry.field
49 (typically as a decorator).
50 """
51 # Deferred import to avoid import cycles
52 from strawberry.field import StrawberryField
53
54 fields: Dict[str, StrawberryField] = {}
55
56 # before trying to find any fields, let's first add the fields defined in
57 # parent classes, we do this by checking if parents have a type definition
58 for base in cls.__bases__:
59 if hasattr(base, "_type_definition"):
60 base_fields = {
61 field.graphql_name: field
62 # TODO: we need to rename _fields to something else
63 for field in base._type_definition._fields # type: ignore
64 }
65
66 # Add base's fields to cls' fields
67 fields = {**fields, **base_fields}
68
69 # then we can proceed with finding the fields for the current class
70 for field in dataclasses.fields(cls):
71
72 if isinstance(field, StrawberryField):
73 # Check that the field type is not Private
74 if isinstance(field.type, Private):
75 raise PrivateStrawberryFieldError(field.python_name, cls.__name__)
76
77 # Check that default is not set if a resolver is defined
78 if field.default != dataclasses.MISSING and field.base_resolver is not None:
79 raise FieldWithResolverAndDefaultValueError(
80 field.python_name, cls.__name__
81 )
82
83 # Check that default_factory is not set if a resolver is defined
84 # Note: using getattr because of this issue:
85 # https://github.com/python/mypy/issues/6910
86 if (
87 getattr(field, "default_factory") != dataclasses.MISSING # noqa
88 and field.base_resolver is not None
89 ):
90 raise FieldWithResolverAndDefaultFactoryError(
91 field.python_name, cls.__name__
92 )
93
94 # we make sure that the origin is either the field's resolver when
95 # called as:
96 #
97 # >>> @strawberry.field
98 # ... def x(self): ...
99 #
100 # or the class where this field was defined, so we always have
101 # the correct origin for determining field types when resolving
102 # the types.
103 field.origin = field.origin or cls
104
105 # Make sure types are StrawberryAnnotations
106 if not isinstance(field.type, StrawberryAnnotation):
107 module = sys.modules[field.origin.__module__]
108 field.type_annotation = StrawberryAnnotation(
109 annotation=field.type_annotation, namespace=module.__dict__
110 )
111
112 # Create a StrawberryField for fields that didn't use strawberry.field
113 else:
114 # Only ignore Private fields that weren't defined using StrawberryFields
115 if isinstance(field.type, Private):
116 continue
117
118 field_type = field.type
119
120 module = sys.modules[cls.__module__]
121
122 # Create a StrawberryField, for fields of Types #1 and #2a
123 field = StrawberryField(
124 python_name=field.name,
125 graphql_name=to_camel_case(field.name),
126 type_annotation=StrawberryAnnotation(
127 annotation=field_type,
128 namespace=module.__dict__,
129 ),
130 origin=cls,
131 default=getattr(cls, field.name, UNSET),
132 )
133
134 field_name = field.graphql_name
135
136 assert_message = "Field must have a name by the time the schema is generated"
137 assert field_name is not None, assert_message
138
139 # TODO: Raise exception if field_name already in fields
140 fields[field_name] = field
141
142 return list(fields.values())
143
[end of strawberry/types/type_resolver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/types/type_resolver.py b/strawberry/types/type_resolver.py
--- a/strawberry/types/type_resolver.py
+++ b/strawberry/types/type_resolver.py
@@ -103,7 +103,7 @@
field.origin = field.origin or cls
# Make sure types are StrawberryAnnotations
- if not isinstance(field.type, StrawberryAnnotation):
+ if not isinstance(field.type_annotation, StrawberryAnnotation):
module = sys.modules[field.origin.__module__]
field.type_annotation = StrawberryAnnotation(
annotation=field.type_annotation, namespace=module.__dict__
| {"golden_diff": "diff --git a/strawberry/types/type_resolver.py b/strawberry/types/type_resolver.py\n--- a/strawberry/types/type_resolver.py\n+++ b/strawberry/types/type_resolver.py\n@@ -103,7 +103,7 @@\n field.origin = field.origin or cls\n \n # Make sure types are StrawberryAnnotations\n- if not isinstance(field.type, StrawberryAnnotation):\n+ if not isinstance(field.type_annotation, StrawberryAnnotation):\n module = sys.modules[field.origin.__module__]\n field.type_annotation = StrawberryAnnotation(\n annotation=field.type_annotation, namespace=module.__dict__\n", "issue": "Type refactoring has a regression with inheritance and explicit fields.\n```python\r\[email protected]\r\nclass A:\r\n a: str = strawberry.field(default='', desc='')\r\n\r\[email protected]\r\nclass B(A):\r\n b: Optional[str] = strawberry.field(default=None, desc='')\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n def field(self, arg: B) -> str:\r\n return ''\r\n\r\nschema = strawberry.Schema(query=Query)\r\nresult = schema.execute_sync('{ field(arg: {}) }')\r\nassert not result.errors\r\n```\r\nraises `TypeError: B fields cannot be resolved. unhashable type: 'StrawberryAnnotation'`.\r\n\r\n`StrawberryAnnotation` has a custom `__eq__` without a `__hash__`, causing a set lookup to fail. However adding a suitable `__hash__` just lead to the next `TypeError`.\r\n\r\n`StrawberryOptional` likely has the same problem.\r\n\n", "before_files": [{"content": "import dataclasses\nimport sys\nfrom typing import Dict, List, Type\n\nfrom strawberry.annotation import StrawberryAnnotation\nfrom strawberry.exceptions import (\n FieldWithResolverAndDefaultFactoryError,\n FieldWithResolverAndDefaultValueError,\n PrivateStrawberryFieldError,\n)\nfrom strawberry.field import StrawberryField\nfrom strawberry.private import Private\nfrom strawberry.utils.str_converters import to_camel_case\n\nfrom ..arguments import UNSET\n\n\ndef _get_fields(cls: Type) -> List[StrawberryField]:\n \"\"\"Get all the strawberry fields off a strawberry.type cls\n\n This function returns a list of StrawberryFields (one for each field item), while\n also paying attention the name and typing of the field.\n\n StrawberryFields can be defined on a strawberry.type class as either a dataclass-\n style field or using strawberry.field as a decorator.\n\n >>> import strawberry\n >>> @strawberry.type\n ... class Query:\n ... type_1a: int = 5\n ... type_1b: int = strawberry.field(...)\n ... type_1c: int = strawberry.field(resolver=...)\n ...\n ... @strawberry.field\n ... def type_2(self) -> int:\n ... ...\n\n Type #1:\n A pure dataclass-style field. Will not have a StrawberryField; one will need to\n be created in this function. Type annotation is required.\n\n Type #2:\n A field defined using @strawberry.field as a decorator around the resolver. The\n resolver must be type-annotated.\n\n The StrawberryField.python_name value will be assigned to the field's name on the\n class if one is not set by either using an explicit strawberry.field(name=...) or by\n passing a named function (i.e. 
not an anonymous lambda) to strawberry.field\n (typically as a decorator).\n \"\"\"\n # Deferred import to avoid import cycles\n from strawberry.field import StrawberryField\n\n fields: Dict[str, StrawberryField] = {}\n\n # before trying to find any fields, let's first add the fields defined in\n # parent classes, we do this by checking if parents have a type definition\n for base in cls.__bases__:\n if hasattr(base, \"_type_definition\"):\n base_fields = {\n field.graphql_name: field\n # TODO: we need to rename _fields to something else\n for field in base._type_definition._fields # type: ignore\n }\n\n # Add base's fields to cls' fields\n fields = {**fields, **base_fields}\n\n # then we can proceed with finding the fields for the current class\n for field in dataclasses.fields(cls):\n\n if isinstance(field, StrawberryField):\n # Check that the field type is not Private\n if isinstance(field.type, Private):\n raise PrivateStrawberryFieldError(field.python_name, cls.__name__)\n\n # Check that default is not set if a resolver is defined\n if field.default != dataclasses.MISSING and field.base_resolver is not None:\n raise FieldWithResolverAndDefaultValueError(\n field.python_name, cls.__name__\n )\n\n # Check that default_factory is not set if a resolver is defined\n # Note: using getattr because of this issue:\n # https://github.com/python/mypy/issues/6910\n if (\n getattr(field, \"default_factory\") != dataclasses.MISSING # noqa\n and field.base_resolver is not None\n ):\n raise FieldWithResolverAndDefaultFactoryError(\n field.python_name, cls.__name__\n )\n\n # we make sure that the origin is either the field's resolver when\n # called as:\n #\n # >>> @strawberry.field\n # ... def x(self): ...\n #\n # or the class where this field was defined, so we always have\n # the correct origin for determining field types when resolving\n # the types.\n field.origin = field.origin or cls\n\n # Make sure types are StrawberryAnnotations\n if not isinstance(field.type, StrawberryAnnotation):\n module = sys.modules[field.origin.__module__]\n field.type_annotation = StrawberryAnnotation(\n annotation=field.type_annotation, namespace=module.__dict__\n )\n\n # Create a StrawberryField for fields that didn't use strawberry.field\n else:\n # Only ignore Private fields that weren't defined using StrawberryFields\n if isinstance(field.type, Private):\n continue\n\n field_type = field.type\n\n module = sys.modules[cls.__module__]\n\n # Create a StrawberryField, for fields of Types #1 and #2a\n field = StrawberryField(\n python_name=field.name,\n graphql_name=to_camel_case(field.name),\n type_annotation=StrawberryAnnotation(\n annotation=field_type,\n namespace=module.__dict__,\n ),\n origin=cls,\n default=getattr(cls, field.name, UNSET),\n )\n\n field_name = field.graphql_name\n\n assert_message = \"Field must have a name by the time the schema is generated\"\n assert field_name is not None, assert_message\n\n # TODO: Raise exception if field_name already in fields\n fields[field_name] = field\n\n return list(fields.values())\n", "path": "strawberry/types/type_resolver.py"}]} | 2,191 | 131 |
gh_patches_debug_15599 | rasdani/github-patches | git_diff | getsentry__sentry-22143 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sentry ratelimit cannot be changed when using self-hosted
## Important Details
How are you running Sentry?
<!-- Please pick one of the following -->
On-Premise without Docker, version 20.8.0
## Description
Sentry ratelimit cannot be changed when running on premises
## Steps to Reproduce
1. Go to web-interface, Admin/Settings
2. Set a non-zero ratelimit.
3. Get an error.
````
Oct 6 07:18:49 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:18:49 +0000] "GET /api/0/internal/options/ HTTP/1.1" 200 20407 "https://sentry.findmykids.org/manage/settings/" "Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0"
Oct 6 07:19:09 jsentry sentry[4128]: Traceback (most recent call last):
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py", line 134, in handle_exception
Oct 6 07:19:09 jsentry sentry[4128]: response = super(Endpoint, self).handle_exception(exc)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/djangorestframework-3.6.4-py2.7.egg/rest_framework/views.py", line 449, in handle_exception
Oct 6 07:19:09 jsentry sentry[4128]: self.raise_uncaught_exception(exc)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py", line 247, in dispatch
Oct 6 07:19:09 jsentry sentry[4128]: response = handler(request, *args, **kwargs)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/endpoints/system_options.py", line 74, in put
Oct 6 07:19:09 jsentry sentry[4128]: options.set(k, v)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/options/manager.py", line 83, in set
Oct 6 07:19:09 jsentry sentry[4128]: "%r cannot be changed at runtime because it is configured on disk" % key
Oct 6 07:19:09 jsentry sentry[4128]: AssertionError: u'system.url-prefix' cannot be changed at runtime because it is configured on disk
Oct 6 07:19:09 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:19:09 +0000] "PUT /api/0/internal/options/ HTTP/1.1" 500 746 "https://sentry.findmykids.org/manage/settings/" "Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0"
````
### What you expected to happen
The rate limit should be changeable from the web interface.
### Possible Solution
Stop sending `system.url-prefix` in the PUT request?
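A rough sketch of that workaround, assuming the settings page builds its PUT payload from the GET `/api/0/internal/options/` response shown below (the actual client code is not part of this report): skip any option the server already marks as disabled (e.g. `diskPriority`), so keys like `system.url-prefix` are never sent back.
```python
# Hypothetical client-side helper; the key names mirror the GET response
# built by SystemOptionsEndpoint.get() (value / field / disabled / disabledReason).
def build_put_payload(options_response, changes):
    payload = {}
    for key, value in changes.items():
        field = options_response.get(key, {}).get("field", {})
        if field.get("disabled"):
            # e.g. disabledReason == "diskPriority": configured on disk,
            # so options.set() would refuse it at runtime.
            continue
        payload[key] = value
    return payload
```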
</issue>
<code>
[start of src/sentry/api/endpoints/system_options.py]
1 from __future__ import absolute_import
2
3 import six
4
5 import sentry
6
7 from django.conf import settings
8 from rest_framework.response import Response
9
10 from sentry import options
11 from sentry.api.base import Endpoint
12 from sentry.api.permissions import SuperuserPermission
13 from sentry.utils.email import is_smtp_enabled
14
15
16 class SystemOptionsEndpoint(Endpoint):
17 permission_classes = (SuperuserPermission,)
18
19 def get(self, request):
20 query = request.GET.get("query")
21 if query == "is:required":
22 option_list = options.filter(flag=options.FLAG_REQUIRED)
23 elif query:
24 return Response(u"{} is not a supported search query".format(query), status=400)
25 else:
26 option_list = options.all()
27
28 smtp_disabled = not is_smtp_enabled()
29
30 results = {}
31 for k in option_list:
32 disabled, disabled_reason = False, None
33
34 if smtp_disabled and k.name[:5] == "mail.":
35 disabled_reason, disabled = "smtpDisabled", True
36 elif bool(
37 k.flags & options.FLAG_PRIORITIZE_DISK and settings.SENTRY_OPTIONS.get(k.name)
38 ):
39 # TODO(mattrobenolt): Expose this as a property on Key.
40 disabled_reason, disabled = "diskPriority", True
41
42 # TODO(mattrobenolt): help, placeholder, title, type
43 results[k.name] = {
44 "value": options.get(k.name),
45 "field": {
46 "default": k.default(),
47 "required": bool(k.flags & options.FLAG_REQUIRED),
48 "disabled": disabled,
49 "disabledReason": disabled_reason,
50 "isSet": options.isset(k.name),
51 "allowEmpty": bool(k.flags & options.FLAG_ALLOW_EMPTY),
52 },
53 }
54
55 return Response(results)
56
57 def put(self, request):
58 # TODO(dcramer): this should validate options before saving them
59 for k, v in six.iteritems(request.data):
60 if v and isinstance(v, six.string_types):
61 v = v.strip()
62 try:
63 option = options.lookup_key(k)
64 except options.UnknownOption:
65 # TODO(dcramer): unify API errors
66 return Response(
67 {"error": "unknown_option", "errorDetail": {"option": k}}, status=400
68 )
69
70 try:
71 if not (option.flags & options.FLAG_ALLOW_EMPTY) and not v:
72 options.delete(k)
73 else:
74 options.set(k, v)
75 except TypeError as e:
76 return Response(
77 {
78 "error": "invalid_type",
79 "errorDetail": {"option": k, "message": six.text_type(e)},
80 },
81 status=400,
82 )
83 # TODO(dcramer): this has nothing to do with configuring options and
84 # should not be set here
85 options.set("sentry:version-configured", sentry.get_version())
86 return Response(status=200)
87
[end of src/sentry/api/endpoints/system_options.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/api/endpoints/system_options.py b/src/sentry/api/endpoints/system_options.py
--- a/src/sentry/api/endpoints/system_options.py
+++ b/src/sentry/api/endpoints/system_options.py
@@ -72,10 +72,13 @@
options.delete(k)
else:
options.set(k, v)
- except TypeError as e:
+ except (TypeError, AssertionError) as e:
+ # TODO(chadwhitacre): Use a custom exception for the
+ # immutability case, especially since asserts disappear with
+ # `python -O`.
return Response(
{
- "error": "invalid_type",
+ "error": "invalid_type" if type(e) is TypeError else "immutable_option",
"errorDetail": {"option": k, "message": six.text_type(e)},
},
status=400,
| {"golden_diff": "diff --git a/src/sentry/api/endpoints/system_options.py b/src/sentry/api/endpoints/system_options.py\n--- a/src/sentry/api/endpoints/system_options.py\n+++ b/src/sentry/api/endpoints/system_options.py\n@@ -72,10 +72,13 @@\n options.delete(k)\n else:\n options.set(k, v)\n- except TypeError as e:\n+ except (TypeError, AssertionError) as e:\n+ # TODO(chadwhitacre): Use a custom exception for the\n+ # immutability case, especially since asserts disappear with\n+ # `python -O`.\n return Response(\n {\n- \"error\": \"invalid_type\",\n+ \"error\": \"invalid_type\" if type(e) is TypeError else \"immutable_option\",\n \"errorDetail\": {\"option\": k, \"message\": six.text_type(e)},\n },\n status=400,\n", "issue": "Sentry ratelimit cannot be changed when using self-hosted\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n<!-- Please pick one of the following -->\r\nOn-Premise wo/ Docker, version 20.8.0\r\n\r\n## Description\r\nSentry ratelimit cannot be changed when running on premises\r\n\r\n## Steps to Reproduce\r\n\r\n1. Go to web-interface, Admin/Settings\r\n2. Set a non-zero ratelimit.\r\n3. Get an error.\r\n\r\n````\r\nOct 6 07:18:49 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:18:49 +0000] \"GET /api/0/internal/options/ HTTP/1.1\" 200 20407 \"https://sentry.findmykids.org/manage/settings/\" \"Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0\"\r\nOct 6 07:19:09 jsentry sentry[4128]: Traceback (most recent call last):\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py\", line 134, in handle_exception\r\nOct 6 07:19:09 jsentry sentry[4128]: response = super(Endpoint, self).handle_exception(exc)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/djangorestframework-3.6.4-py2.7.egg/rest_framework/views.py\", line 449, in handle_exception\r\nOct 6 07:19:09 jsentry sentry[4128]: self.raise_uncaught_exception(exc)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py\", line 247, in dispatch\r\nOct 6 07:19:09 jsentry sentry[4128]: response = handler(request, *args, **kwargs)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/endpoints/system_options.py\", line 74, in put\r\nOct 6 07:19:09 jsentry sentry[4128]: options.set(k, v)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/options/manager.py\", line 83, in set\r\nOct 6 07:19:09 jsentry sentry[4128]: \"%r cannot be changed at runtime because it is configured on disk\" % key\r\nOct 6 07:19:09 jsentry sentry[4128]: AssertionError: u'system.url-prefix' cannot be changed at runtime because it is configured on disk\r\nOct 6 07:19:09 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:19:09 +0000] \"PUT /api/0/internal/options/ HTTP/1.1\" 500 746 \"https://sentry.findmykids.org/manage/settings/\" \"Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0\"\r\n````\r\n\r\n### What you expected to happen\r\nRatelimit should be changeable from web-interface.\r\n\r\n### Possible Solution\r\nStop sending system.url-prefix in PUT request ?\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport six\n\nimport sentry\n\nfrom django.conf import settings\nfrom rest_framework.response import Response\n\nfrom sentry import options\nfrom 
sentry.api.base import Endpoint\nfrom sentry.api.permissions import SuperuserPermission\nfrom sentry.utils.email import is_smtp_enabled\n\n\nclass SystemOptionsEndpoint(Endpoint):\n permission_classes = (SuperuserPermission,)\n\n def get(self, request):\n query = request.GET.get(\"query\")\n if query == \"is:required\":\n option_list = options.filter(flag=options.FLAG_REQUIRED)\n elif query:\n return Response(u\"{} is not a supported search query\".format(query), status=400)\n else:\n option_list = options.all()\n\n smtp_disabled = not is_smtp_enabled()\n\n results = {}\n for k in option_list:\n disabled, disabled_reason = False, None\n\n if smtp_disabled and k.name[:5] == \"mail.\":\n disabled_reason, disabled = \"smtpDisabled\", True\n elif bool(\n k.flags & options.FLAG_PRIORITIZE_DISK and settings.SENTRY_OPTIONS.get(k.name)\n ):\n # TODO(mattrobenolt): Expose this as a property on Key.\n disabled_reason, disabled = \"diskPriority\", True\n\n # TODO(mattrobenolt): help, placeholder, title, type\n results[k.name] = {\n \"value\": options.get(k.name),\n \"field\": {\n \"default\": k.default(),\n \"required\": bool(k.flags & options.FLAG_REQUIRED),\n \"disabled\": disabled,\n \"disabledReason\": disabled_reason,\n \"isSet\": options.isset(k.name),\n \"allowEmpty\": bool(k.flags & options.FLAG_ALLOW_EMPTY),\n },\n }\n\n return Response(results)\n\n def put(self, request):\n # TODO(dcramer): this should validate options before saving them\n for k, v in six.iteritems(request.data):\n if v and isinstance(v, six.string_types):\n v = v.strip()\n try:\n option = options.lookup_key(k)\n except options.UnknownOption:\n # TODO(dcramer): unify API errors\n return Response(\n {\"error\": \"unknown_option\", \"errorDetail\": {\"option\": k}}, status=400\n )\n\n try:\n if not (option.flags & options.FLAG_ALLOW_EMPTY) and not v:\n options.delete(k)\n else:\n options.set(k, v)\n except TypeError as e:\n return Response(\n {\n \"error\": \"invalid_type\",\n \"errorDetail\": {\"option\": k, \"message\": six.text_type(e)},\n },\n status=400,\n )\n # TODO(dcramer): this has nothing to do with configuring options and\n # should not be set here\n options.set(\"sentry:version-configured\", sentry.get_version())\n return Response(status=200)\n", "path": "src/sentry/api/endpoints/system_options.py"}]} | 2,321 | 196 |