problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_2014 | rasdani/github-patches | git_diff | pandas-dev__pandas-7007 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Matplotlib cursor position wrong after using asfreq method to change freq of DateTimeIndex from None to something
After using the `asfreq` method to change the frequency of a time-series DataFrame from `None` to something e.g. `15Min` the cursor position in matplotlib graphs of that DataFrame is no longer correct (usually shows a datetime just after the unix epoch). The following demonstrates this (NB dt in df1 is not a constant):
```
df1 = pandas.read_csv('tseries1.csv', names=['tstamp', 'Q'], parse_dates=True,
index_col='tstamp').clip_lower(0).fillna(0)
df1['T'] = pandas.read_csv('tseries2.csv', names=['tstamp', 'T'], parse_dates=True,
index_col='tstamp', squeeze=True).clip_lower(0).fillna(0)
df2 = df1.asfreq(freq='15Min', method='ffill')
# NB df1.index.freq is None
# NB df2.index.freq is <15 * Minutes>
df1.plot()
df2.plot()
plt.show()
```
I find the Matplotlib cursor position to be invaluable when looking for features in very long time-series.
Versions:
- pandas master (commit ID 764b444)
- numpy 1.8
- matplotlib 1.3.0
Matplotlib cursor position wrong after using asfreq method to change freq of DateTimeIndex from None to something
After using the `asfreq` method to change the frequency of a time-series DataFrame from `None` to something e.g. `15Min` the cursor position in matplotlib graphs of that DataFrame is no longer correct (usually shows a datetime just after the unix epoch). The following demonstrates this (NB dt in df1 is not a constant):
```
df1 = pandas.read_csv('tseries1.csv', names=['tstamp', 'Q'], parse_dates=True,
index_col='tstamp').clip_lower(0).fillna(0)
df1['T'] = pandas.read_csv('tseries2.csv', names=['tstamp', 'T'], parse_dates=True,
index_col='tstamp', squeeze=True).clip_lower(0).fillna(0)
df2 = df1.asfreq(freq='15Min', method='ffill')
# NB df1.index.freq is None
# NB df2.index.freq is <15 * Minutes>
df1.plot()
df2.plot()
plt.show()
```
I find the Matplotlib cursor position to be invaluable when looking for features in very long time-series.
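For reference, the kind of mapping I would expect is roughly the following (hypothetical sketch, not code from pandas itself; it assumes the x values on these dynamic time-series plots are Period ordinals rather than unix timestamps):
```
import matplotlib.pyplot as plt
from pandas import Period

fig, ax = plt.subplots()
# ... plot df2 on ax; with freq='15Min' the x-axis carries Period ordinals ...
freq = '15Min'  # illustrative only; in practice taken from the plotted index

def format_coord(t, y):
    # interpret the cursor x-position as a Period ordinal, not epoch seconds
    return 't = {} y = {:8f}'.format(Period(ordinal=int(t), freq=freq), y)

ax.format_coord = format_coord
plt.show()
```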
Versions:
- pandas master (commit ID 764b444)
- numpy 1.8
- matplotlib 1.3.0
</issue>
<code>
[start of pandas/tseries/plotting.py]
1 """
2 Period formatters and locators adapted from scikits.timeseries by
3 Pierre GF Gerard-Marchant & Matt Knox
4 """
5
6 #!!! TODO: Use the fact that axis can have units to simplify the process
7 import datetime as pydt
8 from datetime import datetime
9
10 from matplotlib import pylab
11 import matplotlib.units as units
12
13 import numpy as np
14
15 from pandas import isnull
16 from pandas.tseries.period import Period
17 from pandas.tseries.offsets import DateOffset
18 import pandas.tseries.frequencies as frequencies
19 from pandas.tseries.index import DatetimeIndex
20 import pandas.core.common as com
21
22 from pandas.tseries.converter import (PeriodConverter, TimeSeries_DateLocator,
23 TimeSeries_DateFormatter)
24
25 #----------------------------------------------------------------------
26 # Plotting functions and monkey patches
27
28
29 def tsplot(series, plotf, **kwargs):
30 """
31 Plots a Series on the given Matplotlib axes or the current axes
32
33 Parameters
34 ----------
35 axes : Axes
36 series : Series
37
38 Notes
39 _____
40 Supports same kwargs as Axes.plot
41
42 """
43 # Used inferred freq is possible, need a test case for inferred
44 if 'ax' in kwargs:
45 ax = kwargs.pop('ax')
46 else:
47 import matplotlib.pyplot as plt
48 ax = plt.gca()
49
50 freq = _get_freq(ax, series)
51 # resample against axes freq if necessary
52 if freq is None: # pragma: no cover
53 raise ValueError('Cannot use dynamic axis without frequency info')
54 else:
55 # Convert DatetimeIndex to PeriodIndex
56 if isinstance(series.index, DatetimeIndex):
57 series = series.to_period(freq=freq)
58 freq, ax_freq, series = _maybe_resample(series, ax, freq, plotf,
59 kwargs)
60
61 # Set ax with freq info
62 _decorate_axes(ax, freq, kwargs)
63
64 # mask missing values
65 args = _maybe_mask(series)
66
67 # how to make sure ax.clear() flows through?
68 if not hasattr(ax, '_plot_data'):
69 ax._plot_data = []
70 ax._plot_data.append((series, kwargs))
71
72 # styles
73 style = kwargs.pop('style', None)
74 if style is not None:
75 args.append(style)
76
77 lines = plotf(ax, *args, **kwargs)
78 label = kwargs.get('label', None)
79
80 # set date formatter, locators and rescale limits
81 format_dateaxis(ax, ax.freq)
82 left, right = _get_xlim(ax.get_lines())
83 ax.set_xlim(left, right)
84
85 # x and y coord info
86 tz = series.index.to_datetime().tz
87 ax.format_coord = lambda t, y : "t = {} y = {:8f}".format(datetime.fromtimestamp(t, tz), y)
88
89 return lines
90
91
92 def _maybe_resample(series, ax, freq, plotf, kwargs):
93 ax_freq = _get_ax_freq(ax)
94 if ax_freq is not None and freq != ax_freq:
95 if frequencies.is_superperiod(freq, ax_freq): # upsample input
96 series = series.copy()
97 series.index = series.index.asfreq(ax_freq, how='s')
98 freq = ax_freq
99 elif _is_sup(freq, ax_freq): # one is weekly
100 how = kwargs.pop('how', 'last')
101 series = series.resample('D', how=how).dropna()
102 series = series.resample(ax_freq, how=how).dropna()
103 freq = ax_freq
104 elif frequencies.is_subperiod(freq, ax_freq) or _is_sub(freq, ax_freq):
105 _upsample_others(ax, freq, plotf, kwargs)
106 ax_freq = freq
107 else: # pragma: no cover
108 raise ValueError('Incompatible frequency conversion')
109 return freq, ax_freq, series
110
111
112 def _get_ax_freq(ax):
113 ax_freq = getattr(ax, 'freq', None)
114 if ax_freq is None:
115 if hasattr(ax, 'left_ax'):
116 ax_freq = getattr(ax.left_ax, 'freq', None)
117 if hasattr(ax, 'right_ax'):
118 ax_freq = getattr(ax.right_ax, 'freq', None)
119 return ax_freq
120
121
122 def _is_sub(f1, f2):
123 return ((f1.startswith('W') and frequencies.is_subperiod('D', f2)) or
124 (f2.startswith('W') and frequencies.is_subperiod(f1, 'D')))
125
126
127 def _is_sup(f1, f2):
128 return ((f1.startswith('W') and frequencies.is_superperiod('D', f2)) or
129 (f2.startswith('W') and frequencies.is_superperiod(f1, 'D')))
130
131
132 def _upsample_others(ax, freq, plotf, kwargs):
133 legend = ax.get_legend()
134 lines, labels = _replot_ax(ax, freq, plotf, kwargs)
135
136 other_ax = None
137 if hasattr(ax, 'left_ax'):
138 other_ax = ax.left_ax
139 if hasattr(ax, 'right_ax'):
140 other_ax = ax.right_ax
141
142 if other_ax is not None:
143 rlines, rlabels = _replot_ax(other_ax, freq, plotf, kwargs)
144 lines.extend(rlines)
145 labels.extend(rlabels)
146
147 if (legend is not None and kwargs.get('legend', True) and
148 len(lines) > 0):
149 title = legend.get_title().get_text()
150 if title == 'None':
151 title = None
152 ax.legend(lines, labels, loc='best', title=title)
153
154
155 def _replot_ax(ax, freq, plotf, kwargs):
156 data = getattr(ax, '_plot_data', None)
157 ax._plot_data = []
158 ax.clear()
159 _decorate_axes(ax, freq, kwargs)
160
161 lines = []
162 labels = []
163 if data is not None:
164 for series, kwds in data:
165 series = series.copy()
166 idx = series.index.asfreq(freq, how='S')
167 series.index = idx
168 ax._plot_data.append(series)
169 args = _maybe_mask(series)
170 lines.append(plotf(ax, *args, **kwds)[0])
171 labels.append(com.pprint_thing(series.name))
172
173 return lines, labels
174
175
176 def _decorate_axes(ax, freq, kwargs):
177 ax.freq = freq
178 xaxis = ax.get_xaxis()
179 xaxis.freq = freq
180 if not hasattr(ax, 'legendlabels'):
181 ax.legendlabels = [kwargs.get('label', None)]
182 else:
183 ax.legendlabels.append(kwargs.get('label', None))
184 ax.view_interval = None
185 ax.date_axis_info = None
186
187
188 def _maybe_mask(series):
189 mask = isnull(series)
190 if mask.any():
191 masked_array = np.ma.array(series.values)
192 masked_array = np.ma.masked_where(mask, masked_array)
193 args = [series.index, masked_array]
194 else:
195 args = [series.index, series.values]
196 return args
197
198
199 def _get_freq(ax, series):
200 # get frequency from data
201 freq = getattr(series.index, 'freq', None)
202 if freq is None:
203 freq = getattr(series.index, 'inferred_freq', None)
204
205 ax_freq = getattr(ax, 'freq', None)
206
207 # use axes freq if no data freq
208 if freq is None:
209 freq = ax_freq
210
211 # get the period frequency
212 if isinstance(freq, DateOffset):
213 freq = freq.rule_code
214 else:
215 freq = frequencies.get_base_alias(freq)
216
217 freq = frequencies.get_period_alias(freq)
218
219 return freq
220
221
222 def _get_xlim(lines):
223 left, right = np.inf, -np.inf
224 for l in lines:
225 x = l.get_xdata()
226 left = min(x[0].ordinal, left)
227 right = max(x[-1].ordinal, right)
228 return left, right
229
230 # Patch methods for subplot. Only format_dateaxis is currently used.
231 # Do we need the rest for convenience?
232
233
234 def format_dateaxis(subplot, freq):
235 """
236 Pretty-formats the date axis (x-axis).
237
238 Major and minor ticks are automatically set for the frequency of the
239 current underlying series. As the dynamic mode is activated by
240 default, changing the limits of the x axis will intelligently change
241 the positions of the ticks.
242 """
243 majlocator = TimeSeries_DateLocator(freq, dynamic_mode=True,
244 minor_locator=False,
245 plot_obj=subplot)
246 minlocator = TimeSeries_DateLocator(freq, dynamic_mode=True,
247 minor_locator=True,
248 plot_obj=subplot)
249 subplot.xaxis.set_major_locator(majlocator)
250 subplot.xaxis.set_minor_locator(minlocator)
251
252 majformatter = TimeSeries_DateFormatter(freq, dynamic_mode=True,
253 minor_locator=False,
254 plot_obj=subplot)
255 minformatter = TimeSeries_DateFormatter(freq, dynamic_mode=True,
256 minor_locator=True,
257 plot_obj=subplot)
258 subplot.xaxis.set_major_formatter(majformatter)
259 subplot.xaxis.set_minor_formatter(minformatter)
260 pylab.draw_if_interactive()
261
[end of pandas/tseries/plotting.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -83,8 +83,7 @@
ax.set_xlim(left, right)
# x and y coord info
- tz = series.index.to_datetime().tz
- ax.format_coord = lambda t, y : "t = {} y = {:8f}".format(datetime.fromtimestamp(t, tz), y)
+ ax.format_coord = lambda t, y: "t = {} y = {:8f}".format(Period(ordinal=int(t), freq=ax.freq), y)
return lines
| {"golden_diff": "diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py\n--- a/pandas/tseries/plotting.py\n+++ b/pandas/tseries/plotting.py\n@@ -83,8 +83,7 @@\n ax.set_xlim(left, right)\n \n # x and y coord info\n- tz = series.index.to_datetime().tz\n- ax.format_coord = lambda t, y : \"t = {} y = {:8f}\".format(datetime.fromtimestamp(t, tz), y)\n+ ax.format_coord = lambda t, y: \"t = {} y = {:8f}\".format(Period(ordinal=int(t), freq=ax.freq), y)\n \n return lines\n", "issue": "Matplotlib cursor position wrong after using asfreq method to change freq of DateTimeIndex from None to something\nAfter using the `asfreq` method to change the frequency of a time-series DataFrame from `None` to something e.g. `15Min` the cursor position in matplotlib graphs of that DataFrame is no longer correct (usually shows a datetime just after the unix epoch). The following demonstrates this (NB dt in df1 is not a constant):\n\n```\ndf1 = pandas.read_csv('tseries1.csv', names=['tstamp', 'Q'], parse_dates=True, \n index_col='tstamp').clip_lower(0).fillna(0)\ndf1['T'] = pandas.read_csv('tseries2.csv', names=['tstamp', 'T'], parse_dates=True, \n index_col='tstamp', squeeze=True).clip_lower(0).fillna(0)\n\ndf2 = df1.asfreq(freq='15Min', method='ffill')\n# NB df1.index.freq is None\n# NB df2.index.freq is <15 * Minutes>\ndf1.plot()\ndf2.plot()\nplt.show()\n```\n\nI find the Matplotlib cursor position to be invaluable when looking for features in very long time-series.\n\nVersions:\n- pandas master (commit ID 764b444)\n- numpy 1.8\n- matplotlib 1.3.0\n\nMatplotlib cursor position wrong after using asfreq method to change freq of DateTimeIndex from None to something\nAfter using the `asfreq` method to change the frequency of a time-series DataFrame from `None` to something e.g. `15Min` the cursor position in matplotlib graphs of that DataFrame is no longer correct (usually shows a datetime just after the unix epoch). The following demonstrates this (NB dt in df1 is not a constant):\n\n```\ndf1 = pandas.read_csv('tseries1.csv', names=['tstamp', 'Q'], parse_dates=True, \n index_col='tstamp').clip_lower(0).fillna(0)\ndf1['T'] = pandas.read_csv('tseries2.csv', names=['tstamp', 'T'], parse_dates=True, \n index_col='tstamp', squeeze=True).clip_lower(0).fillna(0)\n\ndf2 = df1.asfreq(freq='15Min', method='ffill')\n# NB df1.index.freq is None\n# NB df2.index.freq is <15 * Minutes>\ndf1.plot()\ndf2.plot()\nplt.show()\n```\n\nI find the Matplotlib cursor position to be invaluable when looking for features in very long time-series.\n\nVersions:\n- pandas master (commit ID 764b444)\n- numpy 1.8\n- matplotlib 1.3.0\n\n", "before_files": [{"content": "\"\"\"\nPeriod formatters and locators adapted from scikits.timeseries by\nPierre GF Gerard-Marchant & Matt Knox\n\"\"\"\n\n#!!! 
TODO: Use the fact that axis can have units to simplify the process\nimport datetime as pydt\nfrom datetime import datetime\n\nfrom matplotlib import pylab\nimport matplotlib.units as units\n\nimport numpy as np\n\nfrom pandas import isnull\nfrom pandas.tseries.period import Period\nfrom pandas.tseries.offsets import DateOffset\nimport pandas.tseries.frequencies as frequencies\nfrom pandas.tseries.index import DatetimeIndex\nimport pandas.core.common as com\n\nfrom pandas.tseries.converter import (PeriodConverter, TimeSeries_DateLocator,\n TimeSeries_DateFormatter)\n\n#----------------------------------------------------------------------\n# Plotting functions and monkey patches\n\n\ndef tsplot(series, plotf, **kwargs):\n \"\"\"\n Plots a Series on the given Matplotlib axes or the current axes\n\n Parameters\n ----------\n axes : Axes\n series : Series\n\n Notes\n _____\n Supports same kwargs as Axes.plot\n\n \"\"\"\n # Used inferred freq is possible, need a test case for inferred\n if 'ax' in kwargs:\n ax = kwargs.pop('ax')\n else:\n import matplotlib.pyplot as plt\n ax = plt.gca()\n\n freq = _get_freq(ax, series)\n # resample against axes freq if necessary\n if freq is None: # pragma: no cover\n raise ValueError('Cannot use dynamic axis without frequency info')\n else:\n # Convert DatetimeIndex to PeriodIndex\n if isinstance(series.index, DatetimeIndex):\n series = series.to_period(freq=freq)\n freq, ax_freq, series = _maybe_resample(series, ax, freq, plotf,\n kwargs)\n\n # Set ax with freq info\n _decorate_axes(ax, freq, kwargs)\n\n # mask missing values\n args = _maybe_mask(series)\n\n # how to make sure ax.clear() flows through?\n if not hasattr(ax, '_plot_data'):\n ax._plot_data = []\n ax._plot_data.append((series, kwargs))\n\n # styles\n style = kwargs.pop('style', None)\n if style is not None:\n args.append(style)\n\n lines = plotf(ax, *args, **kwargs)\n label = kwargs.get('label', None)\n\n # set date formatter, locators and rescale limits\n format_dateaxis(ax, ax.freq)\n left, right = _get_xlim(ax.get_lines())\n ax.set_xlim(left, right)\n\n # x and y coord info\n tz = series.index.to_datetime().tz\n ax.format_coord = lambda t, y : \"t = {} y = {:8f}\".format(datetime.fromtimestamp(t, tz), y)\n\n return lines\n\n\ndef _maybe_resample(series, ax, freq, plotf, kwargs):\n ax_freq = _get_ax_freq(ax)\n if ax_freq is not None and freq != ax_freq:\n if frequencies.is_superperiod(freq, ax_freq): # upsample input\n series = series.copy()\n series.index = series.index.asfreq(ax_freq, how='s')\n freq = ax_freq\n elif _is_sup(freq, ax_freq): # one is weekly\n how = kwargs.pop('how', 'last')\n series = series.resample('D', how=how).dropna()\n series = series.resample(ax_freq, how=how).dropna()\n freq = ax_freq\n elif frequencies.is_subperiod(freq, ax_freq) or _is_sub(freq, ax_freq):\n _upsample_others(ax, freq, plotf, kwargs)\n ax_freq = freq\n else: # pragma: no cover\n raise ValueError('Incompatible frequency conversion')\n return freq, ax_freq, series\n\n\ndef _get_ax_freq(ax):\n ax_freq = getattr(ax, 'freq', None)\n if ax_freq is None:\n if hasattr(ax, 'left_ax'):\n ax_freq = getattr(ax.left_ax, 'freq', None)\n if hasattr(ax, 'right_ax'):\n ax_freq = getattr(ax.right_ax, 'freq', None)\n return ax_freq\n\n\ndef _is_sub(f1, f2):\n return ((f1.startswith('W') and frequencies.is_subperiod('D', f2)) or\n (f2.startswith('W') and frequencies.is_subperiod(f1, 'D')))\n\n\ndef _is_sup(f1, f2):\n return ((f1.startswith('W') and frequencies.is_superperiod('D', f2)) or\n (f2.startswith('W') and 
frequencies.is_superperiod(f1, 'D')))\n\n\ndef _upsample_others(ax, freq, plotf, kwargs):\n legend = ax.get_legend()\n lines, labels = _replot_ax(ax, freq, plotf, kwargs)\n\n other_ax = None\n if hasattr(ax, 'left_ax'):\n other_ax = ax.left_ax\n if hasattr(ax, 'right_ax'):\n other_ax = ax.right_ax\n\n if other_ax is not None:\n rlines, rlabels = _replot_ax(other_ax, freq, plotf, kwargs)\n lines.extend(rlines)\n labels.extend(rlabels)\n\n if (legend is not None and kwargs.get('legend', True) and\n len(lines) > 0):\n title = legend.get_title().get_text()\n if title == 'None':\n title = None\n ax.legend(lines, labels, loc='best', title=title)\n\n\ndef _replot_ax(ax, freq, plotf, kwargs):\n data = getattr(ax, '_plot_data', None)\n ax._plot_data = []\n ax.clear()\n _decorate_axes(ax, freq, kwargs)\n\n lines = []\n labels = []\n if data is not None:\n for series, kwds in data:\n series = series.copy()\n idx = series.index.asfreq(freq, how='S')\n series.index = idx\n ax._plot_data.append(series)\n args = _maybe_mask(series)\n lines.append(plotf(ax, *args, **kwds)[0])\n labels.append(com.pprint_thing(series.name))\n\n return lines, labels\n\n\ndef _decorate_axes(ax, freq, kwargs):\n ax.freq = freq\n xaxis = ax.get_xaxis()\n xaxis.freq = freq\n if not hasattr(ax, 'legendlabels'):\n ax.legendlabels = [kwargs.get('label', None)]\n else:\n ax.legendlabels.append(kwargs.get('label', None))\n ax.view_interval = None\n ax.date_axis_info = None\n\n\ndef _maybe_mask(series):\n mask = isnull(series)\n if mask.any():\n masked_array = np.ma.array(series.values)\n masked_array = np.ma.masked_where(mask, masked_array)\n args = [series.index, masked_array]\n else:\n args = [series.index, series.values]\n return args\n\n\ndef _get_freq(ax, series):\n # get frequency from data\n freq = getattr(series.index, 'freq', None)\n if freq is None:\n freq = getattr(series.index, 'inferred_freq', None)\n\n ax_freq = getattr(ax, 'freq', None)\n\n # use axes freq if no data freq\n if freq is None:\n freq = ax_freq\n\n # get the period frequency\n if isinstance(freq, DateOffset):\n freq = freq.rule_code\n else:\n freq = frequencies.get_base_alias(freq)\n\n freq = frequencies.get_period_alias(freq)\n\n return freq\n\n\ndef _get_xlim(lines):\n left, right = np.inf, -np.inf\n for l in lines:\n x = l.get_xdata()\n left = min(x[0].ordinal, left)\n right = max(x[-1].ordinal, right)\n return left, right\n\n# Patch methods for subplot. Only format_dateaxis is currently used.\n# Do we need the rest for convenience?\n\n\ndef format_dateaxis(subplot, freq):\n \"\"\"\n Pretty-formats the date axis (x-axis).\n\n Major and minor ticks are automatically set for the frequency of the\n current underlying series. 
As the dynamic mode is activated by\n default, changing the limits of the x axis will intelligently change\n the positions of the ticks.\n \"\"\"\n majlocator = TimeSeries_DateLocator(freq, dynamic_mode=True,\n minor_locator=False,\n plot_obj=subplot)\n minlocator = TimeSeries_DateLocator(freq, dynamic_mode=True,\n minor_locator=True,\n plot_obj=subplot)\n subplot.xaxis.set_major_locator(majlocator)\n subplot.xaxis.set_minor_locator(minlocator)\n\n majformatter = TimeSeries_DateFormatter(freq, dynamic_mode=True,\n minor_locator=False,\n plot_obj=subplot)\n minformatter = TimeSeries_DateFormatter(freq, dynamic_mode=True,\n minor_locator=True,\n plot_obj=subplot)\n subplot.xaxis.set_major_formatter(majformatter)\n subplot.xaxis.set_minor_formatter(minformatter)\n pylab.draw_if_interactive()\n", "path": "pandas/tseries/plotting.py"}]} | 3,732 | 157 |
gh_patches_debug_10861 | rasdani/github-patches | git_diff | horovod__horovod-2039 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running horovod.spark.run with env=os.environ fails
Example:
horovod.spark.run(fn, num_proc=2, env=os.environ)
That `env` is an object, not a dictionary. It cannot be pickled:
```
Traceback (most recent call last):
File "horovod/run/common/util/tiny_shell_exec.py", line 32, in execute
exit_code = safe_shell_exec.execute(command, env=env, stdout=output, stderr=output)
File "horovod/run/common/util/safe_shell_exec.py", line 183, in execute
middleman.start()
File "multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_createenviron.<locals>.encode'
```
It works with
horovod.spark.run(fn, num_proc=2, env=os.environ.copy())
The `run` function needs to copy `env` itself first.
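A minimal way to see the difference, independent of Horovod (illustrative snippet only):
```
import os
import pickle

# os.environ is an os._Environ mapping whose attributes include local helper
# functions, so pickling it fails with the same AttributeError as above
try:
    pickle.dumps(os.environ)
except AttributeError as err:
    print(err)

# a plain dict copy pickles without trouble
pickle.dumps(os.environ.copy())
```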
</issue>
<code>
[start of horovod/run/mpi_run.py]
1 # Copyright 2019 Uber Technologies, Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15
16 import copy
17 import os
18 import sys
19
20 from shlex import quote
21
22 from horovod.run.common.util import env as env_util, safe_shell_exec, tiny_shell_exec
23
24 # MPI implementations
25 _OMPI_IMPL = 'OpenMPI'
26 _SMPI_IMPL = 'SpectrumMPI'
27 _MPICH_IMPL = 'MPICH'
28 _UNKNOWN_IMPL = 'Unknown'
29 _MISSING_IMPL = 'Missing'
30
31 # Open MPI Flags
32 _OMPI_FLAGS = ['-mca pml ob1', '-mca btl ^openib']
33 # Spectrum MPI Flags
34 _SMPI_FLAGS = []
35 _SMPI_FLAGS_TCP = ['-tcp']
36 # MPICH Flags
37 _MPICH_FLAGS = []
38
39 # Threshold for large cluster MPI issues:
40 _LARGE_CLUSTER_THRESHOLD = 64
41 # No process binding args
42 _NO_BINDING_ARGS = ['-bind-to none', '-map-by slot']
43 # Process socket binding args
44 _SOCKET_BINDING_ARGS = ['-bind-to socket', '-map-by socket', '-rank-by core']
45
46 # MPI not found error message
47 _MPI_NOT_FOUND_ERROR_MSG= ('horovod does not find an installed MPI.\n\n'
48 'Choose one of:\n'
49 '1. Install Open MPI 4.0.0+ or IBM Spectrum MPI or MPICH and re-install Horovod '
50 '(use --no-cache-dir pip option).\n'
51 '2. Run distributed '
52 'training script using the standard way provided by your'
53 ' MPI distribution (usually mpirun, srun, or jsrun).\n'
54 '3. Use built-in gloo option (horovodrun --gloo ...).')
55
56
57 def mpi_available(env=None):
58 return _get_mpi_implementation(env) not in {_UNKNOWN_IMPL, _MISSING_IMPL}
59
60
61 def is_open_mpi(env=None):
62 return _get_mpi_implementation(env) == _OMPI_IMPL
63
64
65 def is_spectrum_mpi(env=None):
66 return _get_mpi_implementation(env) == _SMPI_IMPL
67
68
69 def is_mpich(env=None):
70 return _get_mpi_implementation(env) == _MPICH_IMPL
71
72
73 def _get_mpi_implementation(env=None):
74 """
75 Detects the available MPI implementation by invoking `mpirun --version`.
76 This command is executed by the given execute function, which takes the
77 command as the only argument and returns (output, exit code). Output
78 represents the stdout and stderr as a string.
79
80 Returns one of:
81 - _OMPI_IMPL, _SMPI_IMPL or _MPICH_IMPL for known implementations
82 - _UNKNOWN_IMPL for any unknown implementation
83 - _MISSING_IMPL if `mpirun --version` could not be executed.
84
85 :param env: environment variable to use to run mpirun
86 :return: string representing identified implementation
87 """
88 command = 'mpirun --version'
89 res = tiny_shell_exec.execute(command, env)
90 if res is None:
91 return _MISSING_IMPL
92 (output, exit_code) = res
93
94 if exit_code == 0:
95 if 'Open MPI' in output or 'OpenRTE' in output:
96 return _OMPI_IMPL
97 elif 'IBM Spectrum MPI' in output:
98 return _SMPI_IMPL
99 elif 'MPICH' in output:
100 return _MPICH_IMPL
101
102 print('Unknown MPI implementation given in output of mpirun --version:', file=sys.stderr)
103 print(output, file=sys.stderr)
104 return _UNKNOWN_IMPL
105 else:
106 print('Was unable to run {command}:'.format(command=command), file=sys.stderr)
107 print(output, file=sys.stderr)
108 return _MISSING_IMPL
109
110
111 def _get_mpi_implementation_flags(tcp_flag, env=None):
112 if is_open_mpi(env):
113 return list(_OMPI_FLAGS), list(_NO_BINDING_ARGS)
114 elif is_spectrum_mpi(env):
115 return list(_SMPI_FLAGS) if not tcp_flag else list(_SMPI_FLAGS_TCP), list(_SOCKET_BINDING_ARGS)
116 elif is_mpich(env):
117 return list(_MPICH_FLAGS), list(_NO_BINDING_ARGS)
118 else:
119 return None, None
120
121
122 def mpi_run(settings, nics, env, command, stdout=None, stderr=None):
123 """
124 Runs mpi_run.
125
126 Args:
127 settings: Settings for running MPI.
128 Note: settings.num_proc and settings.hosts must not be None.
129 nics: Interfaces to include by MPI.
130 env: Environment dictionary to use for running command.
131 command: Command and arguments to run as a list of string.
132 stdout: Stdout of the mpi process.
133 Only used when settings.run_func_mode is True.
134 stderr: Stderr of the mpi process.
135 Only used when settings.run_func_mode is True.
136 """
137 mpi_impl_flags, impl_binding_args = _get_mpi_implementation_flags(settings.tcp_flag, env=env)
138 if mpi_impl_flags is None:
139 raise Exception(_MPI_NOT_FOUND_ERROR_MSG)
140
141 ssh_port_arg = '-mca plm_rsh_args \"-p {ssh_port}\"'.format(
142 ssh_port=settings.ssh_port) if settings.ssh_port else ''
143
144 # if user does not specify any hosts, mpirun by default uses local host.
145 # There is no need to specify localhost.
146 hosts_arg = '-H {hosts}'.format(hosts=settings.hosts)
147
148 tcp_intf_arg = '-mca btl_tcp_if_include {nics}'.format(
149 nics=','.join(nics)) if nics else ''
150 nccl_socket_intf_arg = '-x NCCL_SOCKET_IFNAME={nics}'.format(
151 nics=','.join(nics)) if nics else ''
152
153 # On large cluster runs (e.g. Summit), we need extra settings to work around OpenMPI issues
154 if settings.num_hosts and settings.num_hosts >= _LARGE_CLUSTER_THRESHOLD:
155 mpi_impl_flags.append('-mca plm_rsh_no_tree_spawn true')
156 mpi_impl_flags.append('-mca plm_rsh_num_concurrent {}'.format(settings.num_hosts))
157
158 binding_args = settings.binding_args if settings.binding_args else ' '.join(impl_binding_args)
159
160 # Pass all the env variables to the mpirun command.
161 mpirun_command = (
162 'mpirun --allow-run-as-root --tag-output '
163 '-np {num_proc} {hosts_arg} '
164 '{binding_args} '
165 '{mpi_args} '
166 '{ssh_port_arg} '
167 '{tcp_intf_arg} '
168 '{nccl_socket_intf_arg} '
169 '{output_filename_arg} '
170 '{env} {extra_mpi_args} {command}' # expect a lot of environment variables
171 .format(num_proc=settings.num_proc,
172 hosts_arg=hosts_arg,
173 binding_args=binding_args,
174 mpi_args=' '.join(mpi_impl_flags),
175 tcp_intf_arg=tcp_intf_arg,
176 nccl_socket_intf_arg=nccl_socket_intf_arg,
177 ssh_port_arg=ssh_port_arg,
178 output_filename_arg='--output-filename ' + settings.output_filename
179 if settings.output_filename else '',
180 env=' '.join('-x %s' % key for key in sorted(env.keys())
181 if env_util.is_exportable(key)),
182
183 extra_mpi_args=settings.extra_mpi_args if settings.extra_mpi_args else '',
184 command=' '.join(quote(par) for par in command))
185 )
186
187 if settings.verbose >= 2:
188 print(mpirun_command)
189
190 # we need the driver's PATH in env to run mpirun,
191 # env for mpirun is different to env encoded in mpirun_command
192 if 'PATH' not in env and 'PATH' in os.environ:
193 env = copy.copy(env) # copy env so we do not leak env modifications
194 env['PATH'] = os.environ['PATH']
195
196 # Execute the mpirun command.
197 if settings.run_func_mode:
198 exit_code = safe_shell_exec.execute(mpirun_command, env=env, stdout=stdout, stderr=stderr)
199 if exit_code != 0:
200 raise RuntimeError("mpirun failed with exit code {exit_code}".format(exit_code=exit_code))
201 else:
202 os.execve('/bin/sh', ['/bin/sh', '-c', mpirun_command], env)
203
[end of horovod/run/mpi_run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/horovod/run/mpi_run.py b/horovod/run/mpi_run.py
--- a/horovod/run/mpi_run.py
+++ b/horovod/run/mpi_run.py
@@ -134,6 +134,10 @@
stderr: Stderr of the mpi process.
Only used when settings.run_func_mode is True.
"""
+ if env is not None and not isinstance(env, dict):
+ raise Exception('env argument must be a dict, not {type}: {env}'
+ .format(type=type(env), env=env))
+
mpi_impl_flags, impl_binding_args = _get_mpi_implementation_flags(settings.tcp_flag, env=env)
if mpi_impl_flags is None:
raise Exception(_MPI_NOT_FOUND_ERROR_MSG)
| {"golden_diff": "diff --git a/horovod/run/mpi_run.py b/horovod/run/mpi_run.py\n--- a/horovod/run/mpi_run.py\n+++ b/horovod/run/mpi_run.py\n@@ -134,6 +134,10 @@\n stderr: Stderr of the mpi process.\n Only used when settings.run_func_mode is True.\n \"\"\"\n+ if env is not None and not isinstance(env, dict):\n+ raise Exception('env argument must be a dict, not {type}: {env}'\n+ .format(type=type(env), env=env))\n+\n mpi_impl_flags, impl_binding_args = _get_mpi_implementation_flags(settings.tcp_flag, env=env)\n if mpi_impl_flags is None:\n raise Exception(_MPI_NOT_FOUND_ERROR_MSG)\n", "issue": "Running horovod.spark.run with env=os.environ fails\nExample:\r\n\r\n horovod.spark.run(fn, num_proc=2, env=os.environ)\r\n\r\nThat `env` is an object, not a dictionary. It cannot be pickled:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"horovod/run/common/util/tiny_shell_exec.py\", line 32, in execute\r\n exit_code = safe_shell_exec.execute(command, env=env, stdout=output, stderr=output)\r\n File \"horovod/run/common/util/safe_shell_exec.py\", line 183, in execute\r\n middleman.start()\r\n File \"multiprocessing/process.py\", line 105, in start\r\n self._popen = self._Popen(self)\r\n File \"multiprocessing/context.py\", line 284, in _Popen\r\n return Popen(process_obj)\r\n File \"multiprocessing/popen_spawn_posix.py\", line 32, in __init__\r\n super().__init__(process_obj)\r\n File \"multiprocessing/popen_fork.py\", line 19, in __init__\r\n self._launch(process_obj)\r\n File \"multiprocessing/popen_spawn_posix.py\", line 47, in _launch\r\n reduction.dump(process_obj, fp)\r\n File \"multiprocessing/reduction.py\", line 60, in dump\r\n ForkingPickler(file, protocol).dump(obj)\r\nAttributeError: Can't pickle local object '_createenviron.<locals>.encode'\r\n```\r\n\r\nIt works with\r\n\r\n horovod.spark.run(fn, num_proc=2, env=os.environ.copy())\r\n\r\nThe `run` function needs to copy `env` itself first.\n", "before_files": [{"content": "# Copyright 2019 Uber Technologies, Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nimport copy\nimport os\nimport sys\n\nfrom shlex import quote\n\nfrom horovod.run.common.util import env as env_util, safe_shell_exec, tiny_shell_exec\n\n# MPI implementations\n_OMPI_IMPL = 'OpenMPI'\n_SMPI_IMPL = 'SpectrumMPI'\n_MPICH_IMPL = 'MPICH'\n_UNKNOWN_IMPL = 'Unknown'\n_MISSING_IMPL = 'Missing'\n\n# Open MPI Flags\n_OMPI_FLAGS = ['-mca pml ob1', '-mca btl ^openib']\n# Spectrum MPI Flags\n_SMPI_FLAGS = []\n_SMPI_FLAGS_TCP = ['-tcp']\n# MPICH Flags\n_MPICH_FLAGS = []\n\n# Threshold for large cluster MPI issues:\n_LARGE_CLUSTER_THRESHOLD = 64\n# No process binding args\n_NO_BINDING_ARGS = ['-bind-to none', '-map-by slot']\n# Process socket binding args\n_SOCKET_BINDING_ARGS = ['-bind-to socket', '-map-by socket', '-rank-by core']\n\n# MPI not found error message\n_MPI_NOT_FOUND_ERROR_MSG= ('horovod does not find an installed MPI.\\n\\n'\n 'Choose one of:\\n'\n '1. Install Open MPI 4.0.0+ or IBM Spectrum MPI or MPICH and re-install Horovod '\n '(use --no-cache-dir pip option).\\n'\n '2. Run distributed '\n 'training script using the standard way provided by your'\n ' MPI distribution (usually mpirun, srun, or jsrun).\\n'\n '3. Use built-in gloo option (horovodrun --gloo ...).')\n\n\ndef mpi_available(env=None):\n return _get_mpi_implementation(env) not in {_UNKNOWN_IMPL, _MISSING_IMPL}\n\n\ndef is_open_mpi(env=None):\n return _get_mpi_implementation(env) == _OMPI_IMPL\n\n\ndef is_spectrum_mpi(env=None):\n return _get_mpi_implementation(env) == _SMPI_IMPL\n\n\ndef is_mpich(env=None):\n return _get_mpi_implementation(env) == _MPICH_IMPL\n\n\ndef _get_mpi_implementation(env=None):\n \"\"\"\n Detects the available MPI implementation by invoking `mpirun --version`.\n This command is executed by the given execute function, which takes the\n command as the only argument and returns (output, exit code). 
Output\n represents the stdout and stderr as a string.\n\n Returns one of:\n - _OMPI_IMPL, _SMPI_IMPL or _MPICH_IMPL for known implementations\n - _UNKNOWN_IMPL for any unknown implementation\n - _MISSING_IMPL if `mpirun --version` could not be executed.\n\n :param env: environment variable to use to run mpirun\n :return: string representing identified implementation\n \"\"\"\n command = 'mpirun --version'\n res = tiny_shell_exec.execute(command, env)\n if res is None:\n return _MISSING_IMPL\n (output, exit_code) = res\n\n if exit_code == 0:\n if 'Open MPI' in output or 'OpenRTE' in output:\n return _OMPI_IMPL\n elif 'IBM Spectrum MPI' in output:\n return _SMPI_IMPL\n elif 'MPICH' in output:\n return _MPICH_IMPL\n\n print('Unknown MPI implementation given in output of mpirun --version:', file=sys.stderr)\n print(output, file=sys.stderr)\n return _UNKNOWN_IMPL\n else:\n print('Was unable to run {command}:'.format(command=command), file=sys.stderr)\n print(output, file=sys.stderr)\n return _MISSING_IMPL\n\n\ndef _get_mpi_implementation_flags(tcp_flag, env=None):\n if is_open_mpi(env):\n return list(_OMPI_FLAGS), list(_NO_BINDING_ARGS)\n elif is_spectrum_mpi(env):\n return list(_SMPI_FLAGS) if not tcp_flag else list(_SMPI_FLAGS_TCP), list(_SOCKET_BINDING_ARGS)\n elif is_mpich(env):\n return list(_MPICH_FLAGS), list(_NO_BINDING_ARGS)\n else:\n return None, None\n\n\ndef mpi_run(settings, nics, env, command, stdout=None, stderr=None):\n \"\"\"\n Runs mpi_run.\n\n Args:\n settings: Settings for running MPI.\n Note: settings.num_proc and settings.hosts must not be None.\n nics: Interfaces to include by MPI.\n env: Environment dictionary to use for running command.\n command: Command and arguments to run as a list of string.\n stdout: Stdout of the mpi process.\n Only used when settings.run_func_mode is True.\n stderr: Stderr of the mpi process.\n Only used when settings.run_func_mode is True.\n \"\"\"\n mpi_impl_flags, impl_binding_args = _get_mpi_implementation_flags(settings.tcp_flag, env=env)\n if mpi_impl_flags is None:\n raise Exception(_MPI_NOT_FOUND_ERROR_MSG)\n\n ssh_port_arg = '-mca plm_rsh_args \\\"-p {ssh_port}\\\"'.format(\n ssh_port=settings.ssh_port) if settings.ssh_port else ''\n\n # if user does not specify any hosts, mpirun by default uses local host.\n # There is no need to specify localhost.\n hosts_arg = '-H {hosts}'.format(hosts=settings.hosts)\n\n tcp_intf_arg = '-mca btl_tcp_if_include {nics}'.format(\n nics=','.join(nics)) if nics else ''\n nccl_socket_intf_arg = '-x NCCL_SOCKET_IFNAME={nics}'.format(\n nics=','.join(nics)) if nics else ''\n\n # On large cluster runs (e.g. 
Summit), we need extra settings to work around OpenMPI issues\n if settings.num_hosts and settings.num_hosts >= _LARGE_CLUSTER_THRESHOLD:\n mpi_impl_flags.append('-mca plm_rsh_no_tree_spawn true')\n mpi_impl_flags.append('-mca plm_rsh_num_concurrent {}'.format(settings.num_hosts))\n\n binding_args = settings.binding_args if settings.binding_args else ' '.join(impl_binding_args)\n\n # Pass all the env variables to the mpirun command.\n mpirun_command = (\n 'mpirun --allow-run-as-root --tag-output '\n '-np {num_proc} {hosts_arg} '\n '{binding_args} '\n '{mpi_args} '\n '{ssh_port_arg} '\n '{tcp_intf_arg} '\n '{nccl_socket_intf_arg} '\n '{output_filename_arg} '\n '{env} {extra_mpi_args} {command}' # expect a lot of environment variables\n .format(num_proc=settings.num_proc,\n hosts_arg=hosts_arg,\n binding_args=binding_args,\n mpi_args=' '.join(mpi_impl_flags),\n tcp_intf_arg=tcp_intf_arg,\n nccl_socket_intf_arg=nccl_socket_intf_arg,\n ssh_port_arg=ssh_port_arg,\n output_filename_arg='--output-filename ' + settings.output_filename\n if settings.output_filename else '',\n env=' '.join('-x %s' % key for key in sorted(env.keys())\n if env_util.is_exportable(key)),\n\n extra_mpi_args=settings.extra_mpi_args if settings.extra_mpi_args else '',\n command=' '.join(quote(par) for par in command))\n )\n\n if settings.verbose >= 2:\n print(mpirun_command)\n\n # we need the driver's PATH in env to run mpirun,\n # env for mpirun is different to env encoded in mpirun_command\n if 'PATH' not in env and 'PATH' in os.environ:\n env = copy.copy(env) # copy env so we do not leak env modifications\n env['PATH'] = os.environ['PATH']\n\n # Execute the mpirun command.\n if settings.run_func_mode:\n exit_code = safe_shell_exec.execute(mpirun_command, env=env, stdout=stdout, stderr=stderr)\n if exit_code != 0:\n raise RuntimeError(\"mpirun failed with exit code {exit_code}\".format(exit_code=exit_code))\n else:\n os.execve('/bin/sh', ['/bin/sh', '-c', mpirun_command], env)\n", "path": "horovod/run/mpi_run.py"}]} | 3,291 | 174 |
gh_patches_debug_23834 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-380 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Yaml loader should be replaced by safe_loader or other more secure loader
To load yaml files from unknown source, we should avoid using yaml's loader. A better way is to use either safe_loader or other mechanism.
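For illustration, in the places that parse yaml files the change would look roughly like this (sketch only):
```
import yaml

# current pattern: the full Loader can construct arbitrary Python objects
# from a crafted yaml file
project_dict = yaml.load(open("project.yml", "r"), Loader=yaml.Loader)

# safer: safe_load restricts parsing to plain tags (dicts, lists, strings, numbers)
project_dict = yaml.safe_load(open("project.yml", "r"))
```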
</issue>
<code>
[start of nvflare/lighter/provision.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 import argparse
18 import os
19 import pathlib
20 import shutil
21 import sys
22 import webbrowser
23
24 import yaml
25
26 from nvflare.fuel.utils.class_utils import instantiate_class
27 from nvflare.lighter.spec import Participant, Project, Provisioner
28
29
30 def main():
31 parser = argparse.ArgumentParser()
32 parser.add_argument("-p", "--project_file", type=str, default="project.yml", help="file to describe FL project")
33 parser.add_argument("-w", "--workspace", type=str, default="workspace", help="directory used by provision")
34 parser.add_argument("-c", "--custom_folder", type=str, default=".", help="additional folder to load python codes")
35 parser.add_argument(
36 "-u",
37 "--ui_tool",
38 action="store_true",
39 help="Run provisioning UI tool to generate project.yml file",
40 )
41
42 args = parser.parse_args()
43
44 file_path = pathlib.Path(__file__).parent.absolute()
45 current_path = os.getcwd()
46 custom_folder_path = os.path.join(current_path, args.custom_folder)
47 sys.path.append(custom_folder_path)
48 print("Path list (sys.path) for python codes loading: {}".format(sys.path))
49
50 # main project file
51 project_file = args.project_file
52 current_project_yml = os.path.join(current_path, "project.yml")
53 if len(sys.argv) == 1 and not os.path.exists(current_project_yml):
54 answer = input(
55 f"No project.yml found in current folder. Is it OK to generate one at {current_project_yml} for you? (y/N) "
56 )
57 if answer.strip().upper() == "Y":
58 shutil.copyfile(os.path.join(file_path, "project.yml"), current_project_yml)
59 print(f"{current_project_yml} was created. Please edit it to fit your FL configuration.")
60 exit(0)
61
62 if args.ui_tool:
63 ui_helper_path = os.path.join(file_path, "provision_helper.html")
64 ui_helper_url = f"file://{ui_helper_path}"
65 webbrowser.open_new_tab(ui_helper_url)
66 print(
67 "\n******\n"
68 "Now launching provisioning UI tool.\n"
69 "After generating project.yml in the browser and saving it to your local folder,\n"
70 "please re-run provision with -p option, pointing to the generated project.yml, to generate all packages.\n******\n"
71 )
72 exit(0)
73
74 workspace = args.workspace
75 workspace_full_path = os.path.join(current_path, workspace)
76
77 project_full_path = os.path.join(current_path, project_file)
78 print(f"Project yaml file: {project_full_path}.")
79
80 project_dict = yaml.load(open(project_full_path, "r"), Loader=yaml.Loader)
81 api_version = project_dict.get("api_version")
82 if api_version not in [3]:
83 raise ValueError(f"API version expected 3 but found {api_version}")
84
85 project_name = project_dict.get("name")
86 project_description = project_dict.get("description", "")
87 participants = list()
88 for p in project_dict.get("participants"):
89 participants.append(Participant(**p))
90 project = Project(name=project_name, description=project_description, participants=participants)
91
92 builders = list()
93 for b in project_dict.get("builders"):
94 path = b.get("path")
95 args = b.get("args")
96 builders.append(instantiate_class(path, args))
97
98 provisioner = Provisioner(workspace_full_path, builders)
99
100 provisioner.provision(project)
101
102
103 if __name__ == "__main__":
104 main()
105
[end of nvflare/lighter/provision.py]
[start of nvflare/lighter/utils.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import random
17 from base64 import b64encode
18
19 import yaml
20 from cryptography.hazmat.primitives import hashes
21 from cryptography.hazmat.primitives.asymmetric import padding
22
23
24 def generate_password():
25 s = "abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ"
26 passlen = 16
27 p = "".join(random.sample(s, passlen))
28 return p
29
30
31 def sign_all(content_folder, signing_pri_key):
32 signatures = dict()
33 for f in os.listdir(content_folder):
34 path = os.path.join(content_folder, f)
35 if os.path.isfile(path):
36 signature = signing_pri_key.sign(
37 data=open(path, "rb").read(),
38 padding=padding.PSS(
39 mgf=padding.MGF1(hashes.SHA256()),
40 salt_length=padding.PSS.MAX_LENGTH,
41 ),
42 algorithm=hashes.SHA256(),
43 )
44 signatures[f] = b64encode(signature).decode("utf-8")
45 return signatures
46
47
48 def load_yaml(file_name):
49 return yaml.load(open(file_name, "r"), Loader=yaml.Loader)
50
51
52 def sh_replace(src, mapping_dict):
53 result = src
54 for k, v in mapping_dict.items():
55 result = result.replace("{~~" + k + "~~}", str(v))
56 return result
57
[end of nvflare/lighter/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nvflare/lighter/provision.py b/nvflare/lighter/provision.py
--- a/nvflare/lighter/provision.py
+++ b/nvflare/lighter/provision.py
@@ -21,10 +21,9 @@
import sys
import webbrowser
-import yaml
-
from nvflare.fuel.utils.class_utils import instantiate_class
from nvflare.lighter.spec import Participant, Project, Provisioner
+from nvflare.lighter.utils import load_yaml
def main():
@@ -77,7 +76,7 @@
project_full_path = os.path.join(current_path, project_file)
print(f"Project yaml file: {project_full_path}.")
- project_dict = yaml.load(open(project_full_path, "r"), Loader=yaml.Loader)
+ project_dict = load_yaml(project_full_path)
api_version = project_dict.get("api_version")
if api_version not in [3]:
raise ValueError(f"API version expected 3 but found {api_version}")
diff --git a/nvflare/lighter/utils.py b/nvflare/lighter/utils.py
--- a/nvflare/lighter/utils.py
+++ b/nvflare/lighter/utils.py
@@ -46,7 +46,7 @@
def load_yaml(file_name):
- return yaml.load(open(file_name, "r"), Loader=yaml.Loader)
+ return yaml.safe_load(open(file_name, "r"))
def sh_replace(src, mapping_dict):
| {"golden_diff": "diff --git a/nvflare/lighter/provision.py b/nvflare/lighter/provision.py\n--- a/nvflare/lighter/provision.py\n+++ b/nvflare/lighter/provision.py\n@@ -21,10 +21,9 @@\n import sys\n import webbrowser\n \n-import yaml\n-\n from nvflare.fuel.utils.class_utils import instantiate_class\n from nvflare.lighter.spec import Participant, Project, Provisioner\n+from nvflare.lighter.utils import load_yaml\n \n \n def main():\n@@ -77,7 +76,7 @@\n project_full_path = os.path.join(current_path, project_file)\n print(f\"Project yaml file: {project_full_path}.\")\n \n- project_dict = yaml.load(open(project_full_path, \"r\"), Loader=yaml.Loader)\n+ project_dict = load_yaml(project_full_path)\n api_version = project_dict.get(\"api_version\")\n if api_version not in [3]:\n raise ValueError(f\"API version expected 3 but found {api_version}\")\ndiff --git a/nvflare/lighter/utils.py b/nvflare/lighter/utils.py\n--- a/nvflare/lighter/utils.py\n+++ b/nvflare/lighter/utils.py\n@@ -46,7 +46,7 @@\n \n \n def load_yaml(file_name):\n- return yaml.load(open(file_name, \"r\"), Loader=yaml.Loader)\n+ return yaml.safe_load(open(file_name, \"r\"))\n \n \n def sh_replace(src, mapping_dict):\n", "issue": "Yaml loader should be replaced by safe_loader or other more secure loader\nTo load yaml files from unknown source, we should avoid using yaml's loader. A better way is to use either safe_loader or other mechanism.\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport argparse\nimport os\nimport pathlib\nimport shutil\nimport sys\nimport webbrowser\n\nimport yaml\n\nfrom nvflare.fuel.utils.class_utils import instantiate_class\nfrom nvflare.lighter.spec import Participant, Project, Provisioner\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-p\", \"--project_file\", type=str, default=\"project.yml\", help=\"file to describe FL project\")\n parser.add_argument(\"-w\", \"--workspace\", type=str, default=\"workspace\", help=\"directory used by provision\")\n parser.add_argument(\"-c\", \"--custom_folder\", type=str, default=\".\", help=\"additional folder to load python codes\")\n parser.add_argument(\n \"-u\",\n \"--ui_tool\",\n action=\"store_true\",\n help=\"Run provisioning UI tool to generate project.yml file\",\n )\n\n args = parser.parse_args()\n\n file_path = pathlib.Path(__file__).parent.absolute()\n current_path = os.getcwd()\n custom_folder_path = os.path.join(current_path, args.custom_folder)\n sys.path.append(custom_folder_path)\n print(\"Path list (sys.path) for python codes loading: {}\".format(sys.path))\n\n # main project file\n project_file = args.project_file\n current_project_yml = os.path.join(current_path, \"project.yml\")\n if len(sys.argv) == 1 and not os.path.exists(current_project_yml):\n answer = input(\n f\"No project.yml found in current folder. Is it OK to generate one at {current_project_yml} for you? 
(y/N) \"\n )\n if answer.strip().upper() == \"Y\":\n shutil.copyfile(os.path.join(file_path, \"project.yml\"), current_project_yml)\n print(f\"{current_project_yml} was created. Please edit it to fit your FL configuration.\")\n exit(0)\n\n if args.ui_tool:\n ui_helper_path = os.path.join(file_path, \"provision_helper.html\")\n ui_helper_url = f\"file://{ui_helper_path}\"\n webbrowser.open_new_tab(ui_helper_url)\n print(\n \"\\n******\\n\"\n \"Now launching provisioning UI tool.\\n\"\n \"After generating project.yml in the browser and saving it to your local folder,\\n\"\n \"please re-run provision with -p option, pointing to the generated project.yml, to generate all packages.\\n******\\n\"\n )\n exit(0)\n\n workspace = args.workspace\n workspace_full_path = os.path.join(current_path, workspace)\n\n project_full_path = os.path.join(current_path, project_file)\n print(f\"Project yaml file: {project_full_path}.\")\n\n project_dict = yaml.load(open(project_full_path, \"r\"), Loader=yaml.Loader)\n api_version = project_dict.get(\"api_version\")\n if api_version not in [3]:\n raise ValueError(f\"API version expected 3 but found {api_version}\")\n\n project_name = project_dict.get(\"name\")\n project_description = project_dict.get(\"description\", \"\")\n participants = list()\n for p in project_dict.get(\"participants\"):\n participants.append(Participant(**p))\n project = Project(name=project_name, description=project_description, participants=participants)\n\n builders = list()\n for b in project_dict.get(\"builders\"):\n path = b.get(\"path\")\n args = b.get(\"args\")\n builders.append(instantiate_class(path, args))\n\n provisioner = Provisioner(workspace_full_path, builders)\n\n provisioner.provision(project)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/lighter/provision.py"}, {"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport random\nfrom base64 import b64encode\n\nimport yaml\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric import padding\n\n\ndef generate_password():\n s = \"abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n passlen = 16\n p = \"\".join(random.sample(s, passlen))\n return p\n\n\ndef sign_all(content_folder, signing_pri_key):\n signatures = dict()\n for f in os.listdir(content_folder):\n path = os.path.join(content_folder, f)\n if os.path.isfile(path):\n signature = signing_pri_key.sign(\n data=open(path, \"rb\").read(),\n padding=padding.PSS(\n mgf=padding.MGF1(hashes.SHA256()),\n salt_length=padding.PSS.MAX_LENGTH,\n ),\n algorithm=hashes.SHA256(),\n )\n signatures[f] = b64encode(signature).decode(\"utf-8\")\n return signatures\n\n\ndef load_yaml(file_name):\n return yaml.load(open(file_name, \"r\"), Loader=yaml.Loader)\n\n\ndef sh_replace(src, mapping_dict):\n result = src\n for k, v in mapping_dict.items():\n result = result.replace(\"{~~\" + k + \"~~}\", str(v))\n return result\n", "path": "nvflare/lighter/utils.py"}]} | 2,260 | 315 |
gh_patches_debug_16051 | rasdani/github-patches | git_diff | docker__docker-py-812 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
requests 2.8.0 re-introduces bug #647
</issue>
<code>
[start of docker/unixconn/unixconn.py]
1 # Copyright 2013 dotCloud inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import six
15 import requests.adapters
16 import socket
17
18 if six.PY3:
19 import http.client as httplib
20 else:
21 import httplib
22
23 try:
24 import requests.packages.urllib3 as urllib3
25 except ImportError:
26 import urllib3
27
28 RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer
29
30
31 class UnixHTTPConnection(httplib.HTTPConnection, object):
32 def __init__(self, base_url, unix_socket, timeout=60):
33 httplib.HTTPConnection.__init__(self, 'localhost', timeout=timeout)
34 self.base_url = base_url
35 self.unix_socket = unix_socket
36 self.timeout = timeout
37
38 def connect(self):
39 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
40 sock.settimeout(self.timeout)
41 sock.connect(self.unix_socket)
42 self.sock = sock
43
44
45 class UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
46 def __init__(self, base_url, socket_path, timeout=60):
47 urllib3.connectionpool.HTTPConnectionPool.__init__(
48 self, 'localhost', timeout=timeout
49 )
50 self.base_url = base_url
51 self.socket_path = socket_path
52 self.timeout = timeout
53
54 def _new_conn(self):
55 return UnixHTTPConnection(self.base_url, self.socket_path,
56 self.timeout)
57
58
59 class UnixAdapter(requests.adapters.HTTPAdapter):
60 def __init__(self, socket_url, timeout=60):
61 socket_path = socket_url.replace('http+unix://', '')
62 if not socket_path.startswith('/'):
63 socket_path = '/' + socket_path
64 self.socket_path = socket_path
65 self.timeout = timeout
66 self.pools = RecentlyUsedContainer(10,
67 dispose_func=lambda p: p.close())
68 super(UnixAdapter, self).__init__()
69
70 def get_connection(self, url, proxies=None):
71 with self.pools.lock:
72 pool = self.pools.get(url)
73 if pool:
74 return pool
75
76 pool = UnixHTTPConnectionPool(url,
77 self.socket_path,
78 self.timeout)
79 self.pools[url] = pool
80
81 return pool
82
83 def close(self):
84 self.pools.clear()
85
[end of docker/unixconn/unixconn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/unixconn/unixconn.py b/docker/unixconn/unixconn.py
--- a/docker/unixconn/unixconn.py
+++ b/docker/unixconn/unixconn.py
@@ -73,12 +73,20 @@
if pool:
return pool
- pool = UnixHTTPConnectionPool(url,
- self.socket_path,
- self.timeout)
+ pool = UnixHTTPConnectionPool(
+ url, self.socket_path, self.timeout
+ )
self.pools[url] = pool
return pool
+ def request_url(self, request, proxies):
+ # The select_proxy utility in requests errors out when the provided URL
+ # doesn't have a hostname, like is the case when using a UNIX socket.
+ # Since proxies are an irrelevant notion in the case of UNIX sockets
+ # anyway, we simply return the path URL directly.
+ # See also: https://github.com/docker/docker-py/issues/811
+ return request.path_url
+
def close(self):
self.pools.clear()
| {"golden_diff": "diff --git a/docker/unixconn/unixconn.py b/docker/unixconn/unixconn.py\n--- a/docker/unixconn/unixconn.py\n+++ b/docker/unixconn/unixconn.py\n@@ -73,12 +73,20 @@\n if pool:\n return pool\n \n- pool = UnixHTTPConnectionPool(url,\n- self.socket_path,\n- self.timeout)\n+ pool = UnixHTTPConnectionPool(\n+ url, self.socket_path, self.timeout\n+ )\n self.pools[url] = pool\n \n return pool\n \n+ def request_url(self, request, proxies):\n+ # The select_proxy utility in requests errors out when the provided URL\n+ # doesn't have a hostname, like is the case when using a UNIX socket.\n+ # Since proxies are an irrelevant notion in the case of UNIX sockets\n+ # anyway, we simply return the path URL directly.\n+ # See also: https://github.com/docker/docker-py/issues/811\n+ return request.path_url\n+\n def close(self):\n self.pools.clear()\n", "issue": "requests 2.8.0 re-introduces bug #647\n\n", "before_files": [{"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport six\nimport requests.adapters\nimport socket\n\nif six.PY3:\n import http.client as httplib\nelse:\n import httplib\n\ntry:\n import requests.packages.urllib3 as urllib3\nexcept ImportError:\n import urllib3\n\nRecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer\n\n\nclass UnixHTTPConnection(httplib.HTTPConnection, object):\n def __init__(self, base_url, unix_socket, timeout=60):\n httplib.HTTPConnection.__init__(self, 'localhost', timeout=timeout)\n self.base_url = base_url\n self.unix_socket = unix_socket\n self.timeout = timeout\n\n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.settimeout(self.timeout)\n sock.connect(self.unix_socket)\n self.sock = sock\n\n\nclass UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):\n def __init__(self, base_url, socket_path, timeout=60):\n urllib3.connectionpool.HTTPConnectionPool.__init__(\n self, 'localhost', timeout=timeout\n )\n self.base_url = base_url\n self.socket_path = socket_path\n self.timeout = timeout\n\n def _new_conn(self):\n return UnixHTTPConnection(self.base_url, self.socket_path,\n self.timeout)\n\n\nclass UnixAdapter(requests.adapters.HTTPAdapter):\n def __init__(self, socket_url, timeout=60):\n socket_path = socket_url.replace('http+unix://', '')\n if not socket_path.startswith('/'):\n socket_path = '/' + socket_path\n self.socket_path = socket_path\n self.timeout = timeout\n self.pools = RecentlyUsedContainer(10,\n dispose_func=lambda p: p.close())\n super(UnixAdapter, self).__init__()\n\n def get_connection(self, url, proxies=None):\n with self.pools.lock:\n pool = self.pools.get(url)\n if pool:\n return pool\n\n pool = UnixHTTPConnectionPool(url,\n self.socket_path,\n self.timeout)\n self.pools[url] = pool\n\n return pool\n\n def close(self):\n self.pools.clear()\n", "path": "docker/unixconn/unixconn.py"}]} | 1,326 | 237 |
gh_patches_debug_2925 | rasdani/github-patches | git_diff | spack__spack-20572 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
improve installation of Zoltan: imposing +int64 constraints on parmetis
<!--*Please add a concise summary of your suggestion here.*-->
### Rationale
The zoltan spec has a variant called `int64` which imposes the corresponding constraint on metis.
https://github.com/spack/spack/blob/6947951aaf9954b1dfd12ca7a9266d7335f07105/var/spack/repos/builtin/packages/zoltan/package.py#L37-L44
The same constraint must be applied to parmetis.
<!--*Is your feature request related to a problem? Please describe it!*-->
### Description
I guess a solution can be something like
```
depends_on('parmetis@4:', when='+parmetis')
depends_on('parmetis@4: +int64', when='+parmetis+int64')
```
<!--*Describe the solution you'd like and the alternatives you have considered.*-->
### Additional information
<!--*Add any other context about the feature request here.*-->
I guess this happens because the parmetis package has been recently updated and `int64` has been added. Because there was no such option in parmetis for a long time, people came up with a workaround by specifying `metis+int64` explicitly in their scripts. The parmetis update brings an inconsistency, however, because `int64` is off by default in parmetis while the "legacy" workaround imposes `int64` on metis.
My spack version is 0.16.0
### General information
- [x] I have run `spack --version` and reported the version of Spack
- [x] I have searched the issues of this repo and believe this is not a duplicate
<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
-->
</issue>
<code>
[start of var/spack/repos/builtin/packages/zoltan/package.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6
7 from spack import *
8 import re
9
10
11 class Zoltan(AutotoolsPackage):
12 """The Zoltan library is a toolkit of parallel combinatorial algorithms
13 for parallel, unstructured, and/or adaptive scientific
14 applications. Zoltan's largest component is a suite of dynamic
15 load-balancing and partitioning algorithms that increase
16 applications' parallel performance by reducing idle time. Zoltan
17 also has graph coloring and graph ordering algorithms, which are
18 useful in task schedulers and parallel preconditioners.
19
20 """
21
22 homepage = "http://www.cs.sandia.gov/zoltan"
23 url = "http://www.cs.sandia.gov/~kddevin/Zoltan_Distributions/zoltan_distrib_v3.83.tar.gz"
24
25 version('3.83', sha256='d0d78fdeab7a385c87d3666b8a8dc748994ff04d3fd846872a4845e12d79c1bb')
26 version('3.8', sha256='5bdd46548fb9c73b225bbcf3d206c558c318cb292f0b19645e536315d14aafb7')
27 version('3.6', sha256='d2cb41e5fb72ca564b24bc5f21d82d9f7992f2c977bc82b243a01a8a8ee4eb9c')
28 version('3.3', sha256='8a90585674ab1bbd011dab29f778b9816519712c78d0aab4cdde9c68f02b30dc')
29
30 patch('notparallel.patch', when='@3.8')
31
32 variant('debug', default=False, description='Builds a debug version of the library.')
33 variant('shared', default=True, description='Builds a shared version of the library.')
34
35 variant('fortran', default=True, description='Enable Fortran support.')
36 variant('mpi', default=True, description='Enable MPI support.')
37 variant('parmetis', default=False, description='Enable ParMETIS support.')
38 variant('int64', default=False, description='Enable 64bit indices.')
39
40 depends_on('mpi', when='+mpi')
41
42 depends_on('parmetis@4:', when='+parmetis')
43 depends_on('metis+int64', when='+parmetis+int64')
44 depends_on('metis', when='+parmetis')
45
46 depends_on('perl@:5.21', type='build', when='@:3.6')
47 depends_on('autoconf', type='build')
48 depends_on('automake', type='build')
49 depends_on('m4', type='build')
50
51 conflicts('+parmetis', when='~mpi')
52
53 build_directory = 'spack-build'
54
55 @property
56 def configure_directory(self):
57 spec = self.spec
58
59 # FIXME: The older Zoltan versions fail to compile the F90 MPI wrappers
60 # because of some complicated generic type problem.
61 if spec.satisfies('@:3.6+fortran+mpi'):
62 raise RuntimeError(('Cannot build Zoltan v{0} with +fortran and '
63 '+mpi; please disable one of these features '
64 'or upgrade versions.').format(self.version))
65 if spec.satisfies('@:3.6'):
66 zoltan_path = 'Zoltan_v{0}'.format(self.version)
67 return zoltan_path
68 return '.'
69
70 @property
71 def parallel(self):
72 # NOTE: Earlier versions of Zoltan cannot be built in parallel
73 # because they contain nested Makefile dependency bugs.
74 return not self.spec.satisfies('@:3.6+fortran')
75
76 def autoreconf(self, spec, prefix):
77 autoreconf = which('autoreconf')
78 with working_dir(self.configure_directory):
79 autoreconf('-ivf')
80
81 def configure_args(self):
82 spec = self.spec
83
84 config_args = [
85 self.get_config_flag('f90interface', 'fortran'),
86 self.get_config_flag('mpi', 'mpi'),
87 ]
88 config_cflags = [
89 '-O0' if '+debug' in spec else '-O3',
90 '-g' if '+debug' in spec else '',
91 ]
92
93 config_ldflags = []
94 # PGI runtime libraries
95 if '%pgi' in spec:
96 config_ldflags.append('-pgf90libs')
97 if '+shared' in spec:
98 config_args.extend([
99 'RANLIB=echo',
100 '--with-ar=$(CXX) -shared $(LDFLAGS) -o'
101 ])
102 config_cflags.append(self.compiler.cc_pic_flag)
103 if spec.satisfies('%gcc'):
104 config_args.append('--with-libs=-lgfortran')
105 if spec.satisfies('%intel'):
106 config_args.append('--with-libs=-lifcore')
107
108 if '+int64' in spec:
109 config_args.append('--with-id-type=ulong')
110
111 if '+parmetis' in spec:
112 parmetis_prefix = spec['parmetis'].prefix
113 config_args.extend([
114 '--with-parmetis',
115 '--with-parmetis-libdir={0}'.format(parmetis_prefix.lib),
116 '--with-parmetis-incdir={0}'.format(parmetis_prefix.include),
117 '--with-incdirs=-I{0}'.format(spec['metis'].prefix.include),
118 '--with-ldflags=-L{0}'.format(spec['metis'].prefix.lib)
119 ])
120 if '+int64' in spec['metis']:
121 config_args.append('--with-id-type=ulong')
122 else:
123 config_args.append('--with-id-type=uint')
124
125 if '+mpi' in spec:
126 config_args.extend([
127 'CC={0}'.format(spec['mpi'].mpicc),
128 'CXX={0}'.format(spec['mpi'].mpicxx),
129 'FC={0}'.format(spec['mpi'].mpifc),
130 '--with-mpi={0}'.format(spec['mpi'].prefix),
131
132 # NOTE: Zoltan assumes that it's linking against an MPI library
133 # that can be found with '-lmpi' which isn't the case for many
134 # MPI packages. We rely on the MPI-wrappers to automatically
135 # add what is required for linking and thus pass an empty
136 # list of libs
137 '--with-mpi-libs= '
138 ])
139
140 config_fcflags = config_cflags[:]
141 if spec.satisfies('%gcc@10:+fortran'):
142 config_fcflags.append('-fallow-argument-mismatch')
143 # NOTE: Early versions of Zoltan come packaged with a few embedded
144 # library packages (e.g. ParMETIS, Scotch), which messes with Spack's
145 # ability to descend directly into the package's source directory.
146 config_args.extend([
147 '--with-cflags={0}'.format(' '.join(config_cflags)),
148 '--with-cxxflags={0}'.format(' '.join(config_cflags)),
149 '--with-fcflags={0}'.format(' '.join(config_fcflags)),
150 '--with-ldflags={0}'.format(' '.join(config_ldflags))
151 ])
152 return config_args
153
154 # NOTE: Unfortunately, Zoltan doesn't provide any configuration
155 # options for the extension of the output library files, so this
156 # script must change these extensions as a post-processing step.
157 @run_after('install')
158 def solib_install(self):
159 if '+shared' in self.spec:
160 for lib_path in find(self.spec.prefix.lib, 'lib*.a'):
161 lib_shared_name = re.sub(r'\.a$', '.{0}'.format(dso_suffix),
162 lib_path)
163 move(lib_path, lib_shared_name)
164
165 def get_config_flag(self, flag_name, flag_variant):
166 flag_pre = 'en' if '+{0}'.format(flag_variant) in self.spec else 'dis'
167 return '--{0}able-{1}'.format(flag_pre, flag_name)
168
[end of var/spack/repos/builtin/packages/zoltan/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/zoltan/package.py b/var/spack/repos/builtin/packages/zoltan/package.py
--- a/var/spack/repos/builtin/packages/zoltan/package.py
+++ b/var/spack/repos/builtin/packages/zoltan/package.py
@@ -39,6 +39,7 @@
depends_on('mpi', when='+mpi')
+ depends_on('parmetis@4: +int64', when='+parmetis+int64')
depends_on('parmetis@4:', when='+parmetis')
depends_on('metis+int64', when='+parmetis+int64')
depends_on('metis', when='+parmetis')
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/zoltan/package.py b/var/spack/repos/builtin/packages/zoltan/package.py\n--- a/var/spack/repos/builtin/packages/zoltan/package.py\n+++ b/var/spack/repos/builtin/packages/zoltan/package.py\n@@ -39,6 +39,7 @@\n \n depends_on('mpi', when='+mpi')\n \n+ depends_on('parmetis@4: +int64', when='+parmetis+int64')\n depends_on('parmetis@4:', when='+parmetis')\n depends_on('metis+int64', when='+parmetis+int64')\n depends_on('metis', when='+parmetis')\n", "issue": "improve installation of Zoltran: imposing +int64 constrains on parmetis\n<!--*Please add a concise summary of your suggestion here.*-->\r\n\r\n### Rationale\r\nzoltan spec has a variant called `int64` which imposes the corresponding constrain on metis. \r\nhttps://github.com/spack/spack/blob/6947951aaf9954b1dfd12ca7a9266d7335f07105/var/spack/repos/builtin/packages/zoltan/package.py#L37-L44\r\n\r\nThe same constrain must be applied to parmetis. \r\n\r\n\r\n<!--*Is your feature request related to a problem? Please describe it!*-->\r\n\r\n### Description\r\nI guess a solution can be something like\r\n```\r\ndepends_on('parmetis@4:', when='+parmetis') \r\ndepends_on('parmetis@4: +int64', when='+parmetis+int64')\r\n```\r\n\r\n<!--*Describe the solution you'd like and the alternatives you have considered.*-->\r\n\r\n\r\n### Additional information\r\n<!--*Add any other context about the feature request here.*-->\r\nI guess this happens because parmetis package has been recently updated and `int64` has been added. Because there was no such an option in parmetis for a long time people came up with a workaround by specifying `metis+int64` explicitly in their script. The parametis update brings an inconsistency because `int64` is off by default in parmetis, however, and the ''legacy'' workaround imposes `int64` on metis.\r\nMy spack version is 0.16.0\r\n\r\n### General information\r\n\r\n- [x] I have run `spack --version` and reported the version of Spack\r\n- [x] I have searched the issues of this repo and believe this is not a duplicate\r\n\r\n\r\n\r\n<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.\r\n\r\nOther than that, thanks for taking the time to contribute to Spack!\r\n-->\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\n\nfrom spack import *\nimport re\n\n\nclass Zoltan(AutotoolsPackage):\n \"\"\"The Zoltan library is a toolkit of parallel combinatorial algorithms\n for parallel, unstructured, and/or adaptive scientific\n applications. Zoltan's largest component is a suite of dynamic\n load-balancing and partitioning algorithms that increase\n applications' parallel performance by reducing idle time. 
Zoltan\n also has graph coloring and graph ordering algorithms, which are\n useful in task schedulers and parallel preconditioners.\n\n \"\"\"\n\n homepage = \"http://www.cs.sandia.gov/zoltan\"\n url = \"http://www.cs.sandia.gov/~kddevin/Zoltan_Distributions/zoltan_distrib_v3.83.tar.gz\"\n\n version('3.83', sha256='d0d78fdeab7a385c87d3666b8a8dc748994ff04d3fd846872a4845e12d79c1bb')\n version('3.8', sha256='5bdd46548fb9c73b225bbcf3d206c558c318cb292f0b19645e536315d14aafb7')\n version('3.6', sha256='d2cb41e5fb72ca564b24bc5f21d82d9f7992f2c977bc82b243a01a8a8ee4eb9c')\n version('3.3', sha256='8a90585674ab1bbd011dab29f778b9816519712c78d0aab4cdde9c68f02b30dc')\n\n patch('notparallel.patch', when='@3.8')\n\n variant('debug', default=False, description='Builds a debug version of the library.')\n variant('shared', default=True, description='Builds a shared version of the library.')\n\n variant('fortran', default=True, description='Enable Fortran support.')\n variant('mpi', default=True, description='Enable MPI support.')\n variant('parmetis', default=False, description='Enable ParMETIS support.')\n variant('int64', default=False, description='Enable 64bit indices.')\n\n depends_on('mpi', when='+mpi')\n\n depends_on('parmetis@4:', when='+parmetis')\n depends_on('metis+int64', when='+parmetis+int64')\n depends_on('metis', when='+parmetis')\n\n depends_on('perl@:5.21', type='build', when='@:3.6')\n depends_on('autoconf', type='build')\n depends_on('automake', type='build')\n depends_on('m4', type='build')\n\n conflicts('+parmetis', when='~mpi')\n\n build_directory = 'spack-build'\n\n @property\n def configure_directory(self):\n spec = self.spec\n\n # FIXME: The older Zoltan versions fail to compile the F90 MPI wrappers\n # because of some complicated generic type problem.\n if spec.satisfies('@:3.6+fortran+mpi'):\n raise RuntimeError(('Cannot build Zoltan v{0} with +fortran and '\n '+mpi; please disable one of these features '\n 'or upgrade versions.').format(self.version))\n if spec.satisfies('@:3.6'):\n zoltan_path = 'Zoltan_v{0}'.format(self.version)\n return zoltan_path\n return '.'\n\n @property\n def parallel(self):\n # NOTE: Earlier versions of Zoltan cannot be built in parallel\n # because they contain nested Makefile dependency bugs.\n return not self.spec.satisfies('@:3.6+fortran')\n\n def autoreconf(self, spec, prefix):\n autoreconf = which('autoreconf')\n with working_dir(self.configure_directory):\n autoreconf('-ivf')\n\n def configure_args(self):\n spec = self.spec\n\n config_args = [\n self.get_config_flag('f90interface', 'fortran'),\n self.get_config_flag('mpi', 'mpi'),\n ]\n config_cflags = [\n '-O0' if '+debug' in spec else '-O3',\n '-g' if '+debug' in spec else '',\n ]\n\n config_ldflags = []\n # PGI runtime libraries\n if '%pgi' in spec:\n config_ldflags.append('-pgf90libs')\n if '+shared' in spec:\n config_args.extend([\n 'RANLIB=echo',\n '--with-ar=$(CXX) -shared $(LDFLAGS) -o'\n ])\n config_cflags.append(self.compiler.cc_pic_flag)\n if spec.satisfies('%gcc'):\n config_args.append('--with-libs=-lgfortran')\n if spec.satisfies('%intel'):\n config_args.append('--with-libs=-lifcore')\n\n if '+int64' in spec:\n config_args.append('--with-id-type=ulong')\n\n if '+parmetis' in spec:\n parmetis_prefix = spec['parmetis'].prefix\n config_args.extend([\n '--with-parmetis',\n '--with-parmetis-libdir={0}'.format(parmetis_prefix.lib),\n '--with-parmetis-incdir={0}'.format(parmetis_prefix.include),\n '--with-incdirs=-I{0}'.format(spec['metis'].prefix.include),\n 
'--with-ldflags=-L{0}'.format(spec['metis'].prefix.lib)\n ])\n if '+int64' in spec['metis']:\n config_args.append('--with-id-type=ulong')\n else:\n config_args.append('--with-id-type=uint')\n\n if '+mpi' in spec:\n config_args.extend([\n 'CC={0}'.format(spec['mpi'].mpicc),\n 'CXX={0}'.format(spec['mpi'].mpicxx),\n 'FC={0}'.format(spec['mpi'].mpifc),\n '--with-mpi={0}'.format(spec['mpi'].prefix),\n\n # NOTE: Zoltan assumes that it's linking against an MPI library\n # that can be found with '-lmpi' which isn't the case for many\n # MPI packages. We rely on the MPI-wrappers to automatically\n # add what is required for linking and thus pass an empty\n # list of libs\n '--with-mpi-libs= '\n ])\n\n config_fcflags = config_cflags[:]\n if spec.satisfies('%gcc@10:+fortran'):\n config_fcflags.append('-fallow-argument-mismatch')\n # NOTE: Early versions of Zoltan come packaged with a few embedded\n # library packages (e.g. ParMETIS, Scotch), which messes with Spack's\n # ability to descend directly into the package's source directory.\n config_args.extend([\n '--with-cflags={0}'.format(' '.join(config_cflags)),\n '--with-cxxflags={0}'.format(' '.join(config_cflags)),\n '--with-fcflags={0}'.format(' '.join(config_fcflags)),\n '--with-ldflags={0}'.format(' '.join(config_ldflags))\n ])\n return config_args\n\n # NOTE: Unfortunately, Zoltan doesn't provide any configuration\n # options for the extension of the output library files, so this\n # script must change these extensions as a post-processing step.\n @run_after('install')\n def solib_install(self):\n if '+shared' in self.spec:\n for lib_path in find(self.spec.prefix.lib, 'lib*.a'):\n lib_shared_name = re.sub(r'\\.a$', '.{0}'.format(dso_suffix),\n lib_path)\n move(lib_path, lib_shared_name)\n\n def get_config_flag(self, flag_name, flag_variant):\n flag_pre = 'en' if '+{0}'.format(flag_variant) in self.spec else 'dis'\n return '--{0}able-{1}'.format(flag_pre, flag_name)\n", "path": "var/spack/repos/builtin/packages/zoltan/package.py"}]} | 3,297 | 152 |
gh_patches_debug_16696 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEATURE] Upgrade PyTorch to 2.2.1
### Is your feature request related to a problem? Please describe.
PyTorch 2.2 has been available since 30th Jan, and it would be good to update the dependency to reflect this. Full release notes are [here](https://github.com/pytorch/pytorch/releases/tag/v2.2.0).
### Describe the solution you'd like
Update the requirements.
### Describe alternatives you've considered
N.A.
### Additional context
N.A.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import sys, re, os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 try:
13 with open("README.md") as readme_file:
14 readme = readme_file.read()
15 except Exception as error:
16 readme = "No README information found."
17 sys.stderr.write("Warning: Could not open '%s' due %s\n" % ("README.md", error))
18
19
20 class CustomInstallCommand(install):
21 def run(self):
22 install.run(self)
23
24
25 class CustomDevelopCommand(develop):
26 def run(self):
27 develop.run(self)
28
29
30 class CustomEggInfoCommand(egg_info):
31 def run(self):
32 egg_info.run(self)
33
34
35 try:
36 filepath = "GANDLF/version.py"
37 version_file = open(filepath)
38 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
39
40 except Exception as error:
41 __version__ = "0.0.1"
42 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
43
44 # Handle cases where specific files need to be bundled into the final package as installed via PyPI
45 dockerfiles = [
46 item
47 for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))
48 if (os.path.isfile(item) and item.startswith("Dockerfile-"))
49 ]
50 setup_files = ["setup.py", ".dockerignore", "pyproject.toml", "MANIFEST.in"]
51 all_extra_files = dockerfiles + setup_files
52 all_extra_files_pathcorrected = [os.path.join("../", item) for item in all_extra_files]
53 # find_packages should only ever find these as subpackages of gandlf, not as top-level packages
54 # generate this dynamically?
55 # GANDLF.GANDLF is needed to prevent recursion madness in deployments
56 toplevel_package_excludes = [
57 "GANDLF.GANDLF",
58 "anonymize",
59 "cli",
60 "compute",
61 "data",
62 "grad_clipping",
63 "losses",
64 "metrics",
65 "models",
66 "optimizers",
67 "schedulers",
68 "utils",
69 ]
70
71 # specifying version for `black` separately because it is also used to [check for lint](https://github.com/mlcommons/GaNDLF/blob/master/.github/workflows/black.yml)
72 black_version = "23.11.0"
73 requirements = [
74 "torch==2.1.2",
75 f"black=={black_version}",
76 "numpy==1.25.0",
77 "scipy",
78 "SimpleITK!=2.0.*",
79 "SimpleITK!=2.2.1", # https://github.com/mlcommons/GaNDLF/issues/536
80 "torchvision",
81 "tqdm",
82 "torchio==0.19.5",
83 "pandas>=2.0.0",
84 "scikit-learn>=0.23.2",
85 "scikit-image>=0.19.1",
86 "setuptools",
87 "seaborn",
88 "pyyaml",
89 "tiffslide",
90 "matplotlib",
91 "gdown==5.1.0",
92 "pytest",
93 "coverage",
94 "pytest-cov",
95 "psutil",
96 "medcam",
97 "opencv-python",
98 "torchmetrics==1.1.2",
99 "zarr==2.10.3",
100 "pydicom",
101 "onnx",
102 "torchinfo==1.7.0",
103 "segmentation-models-pytorch==0.3.3",
104 "ACSConv==0.1.1",
105 "docker",
106 "dicom-anonymizer==1.0.12",
107 "twine",
108 "zarr",
109 "keyring",
110 "monai==1.3.0",
111 "click>=8.0.0",
112 "deprecated",
113 "packaging==24.0",
114 "typer==0.9.0",
115 ]
116
117 if __name__ == "__main__":
118 setup(
119 name="GANDLF",
120 version=__version__,
121 author="MLCommons",
122 author_email="[email protected]",
123 python_requires=">3.8, <3.12",
124 packages=find_packages(
125 where=os.path.dirname(os.path.abspath(__file__)),
126 exclude=toplevel_package_excludes,
127 ),
128 cmdclass={
129 "install": CustomInstallCommand,
130 "develop": CustomDevelopCommand,
131 "egg_info": CustomEggInfoCommand,
132 },
133 entry_points={
134 "console_scripts": [
135 "gandlf=GANDLF.entrypoints.cli_tool:gandlf",
136 # old entrypoints
137 "gandlf_run=GANDLF.entrypoints.run:old_way",
138 "gandlf_constructCSV=GANDLF.entrypoints.construct_csv:old_way",
139 "gandlf_collectStats=GANDLF.entrypoints.collect_stats:old_way",
140 "gandlf_patchMiner=GANDLF.entrypoints.patch_miner:old_way",
141 "gandlf_preprocess=GANDLF.entrypoints.preprocess:old_way",
142 "gandlf_anonymizer=GANDLF.entrypoints.anonymizer:old_way",
143 "gandlf_configGenerator=GANDLF.entrypoints.config_generator:old_way",
144 "gandlf_verifyInstall=GANDLF.entrypoints.verify_install:old_way",
145 "gandlf_recoverConfig=GANDLF.entrypoints.recover_config:old_way",
146 "gandlf_deploy=GANDLF.entrypoints.deploy:old_way",
147 "gandlf_optimizeModel=GANDLF.entrypoints.optimize_model:old_way",
148 "gandlf_generateMetrics=GANDLF.entrypoints.generate_metrics:old_way",
149 "gandlf_debugInfo=GANDLF.entrypoints.debug_info:old_way",
150 "gandlf_splitCSV=GANDLF.entrypoints.split_csv:old_way",
151 ]
152 },
153 classifiers=[
154 "Development Status :: 3 - Alpha",
155 "Intended Audience :: Science/Research",
156 "License :: OSI Approved :: Apache Software License",
157 "Natural Language :: English",
158 "Operating System :: OS Independent",
159 "Programming Language :: Python :: 3.9",
160 "Programming Language :: Python :: 3.10",
161 "Programming Language :: Python :: 3.11",
162 "Topic :: Scientific/Engineering :: Medical Science Apps.",
163 ],
164 description=(
165 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
166 ),
167 install_requires=requirements,
168 license="Apache-2.0",
169 long_description=readme,
170 long_description_content_type="text/markdown",
171 include_package_data=True,
172 package_data={"GANDLF": all_extra_files_pathcorrected},
173 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch",
174 zip_safe=False,
175 )
176
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -71,7 +71,7 @@
# specifying version for `black` separately because it is also used to [check for lint](https://github.com/mlcommons/GaNDLF/blob/master/.github/workflows/black.yml)
black_version = "23.11.0"
requirements = [
- "torch==2.1.2",
+ "torch==2.2.1",
f"black=={black_version}",
"numpy==1.25.0",
"scipy",
@@ -79,7 +79,7 @@
"SimpleITK!=2.2.1", # https://github.com/mlcommons/GaNDLF/issues/536
"torchvision",
"tqdm",
- "torchio==0.19.5",
+ "torchio==0.19.6",
"pandas>=2.0.0",
"scikit-learn>=0.23.2",
"scikit-image>=0.19.1",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -71,7 +71,7 @@\n # specifying version for `black` separately because it is also used to [check for lint](https://github.com/mlcommons/GaNDLF/blob/master/.github/workflows/black.yml)\n black_version = \"23.11.0\"\n requirements = [\n- \"torch==2.1.2\",\n+ \"torch==2.2.1\",\n f\"black=={black_version}\",\n \"numpy==1.25.0\",\n \"scipy\",\n@@ -79,7 +79,7 @@\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n- \"torchio==0.19.5\",\n+ \"torchio==0.19.6\",\n \"pandas>=2.0.0\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n", "issue": "[FEATURE] Upgrade PyTorch to 2.2.1\n### Is your feature request related to a problem? Please describe.\r\nPyTorch 2.2 has been released since 30th Jan, and it would be good to update the dependency to reflect this. Full release notes are [here](https://github.com/pytorch/pytorch/releases/tag/v2.2.0).\r\n\r\n### Describe the solution you'd like\r\nUpdate the requirements.\r\n\r\n### Describe alternatives you've considered\r\nN.A.\r\n\r\n### Additional context\r\nN.A.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport sys, re, os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\ntry:\n with open(\"README.md\") as readme_file:\n readme = readme_file.read()\nexcept Exception as error:\n readme = \"No README information found.\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (\"README.md\", error))\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\n# Handle cases where specific files need to be bundled into the final package as installed via PyPI\ndockerfiles = [\n item\n for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))\n if (os.path.isfile(item) and item.startswith(\"Dockerfile-\"))\n]\nsetup_files = [\"setup.py\", \".dockerignore\", \"pyproject.toml\", \"MANIFEST.in\"]\nall_extra_files = dockerfiles + setup_files\nall_extra_files_pathcorrected = [os.path.join(\"../\", item) for item in all_extra_files]\n# find_packages should only ever find these as subpackages of gandlf, not as top-level packages\n# generate this dynamically?\n# GANDLF.GANDLF is needed to prevent recursion madness in deployments\ntoplevel_package_excludes = [\n \"GANDLF.GANDLF\",\n \"anonymize\",\n \"cli\",\n \"compute\",\n \"data\",\n \"grad_clipping\",\n \"losses\",\n \"metrics\",\n \"models\",\n \"optimizers\",\n \"schedulers\",\n \"utils\",\n]\n\n# specifying version for `black` separately because it is also used to [check for lint](https://github.com/mlcommons/GaNDLF/blob/master/.github/workflows/black.yml)\nblack_version = \"23.11.0\"\nrequirements = [\n \"torch==2.1.2\",\n f\"black=={black_version}\",\n \"numpy==1.25.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n 
\"torchvision\",\n \"tqdm\",\n \"torchio==0.19.5\",\n \"pandas>=2.0.0\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"gdown==5.1.0\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==1.1.2\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n \"segmentation-models-pytorch==0.3.3\",\n \"ACSConv==0.1.1\",\n \"docker\",\n \"dicom-anonymizer==1.0.12\",\n \"twine\",\n \"zarr\",\n \"keyring\",\n \"monai==1.3.0\",\n \"click>=8.0.0\",\n \"deprecated\",\n \"packaging==24.0\",\n \"typer==0.9.0\",\n]\n\nif __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">3.8, <3.12\",\n packages=find_packages(\n where=os.path.dirname(os.path.abspath(__file__)),\n exclude=toplevel_package_excludes,\n ),\n cmdclass={\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n entry_points={\n \"console_scripts\": [\n \"gandlf=GANDLF.entrypoints.cli_tool:gandlf\",\n # old entrypoints\n \"gandlf_run=GANDLF.entrypoints.run:old_way\",\n \"gandlf_constructCSV=GANDLF.entrypoints.construct_csv:old_way\",\n \"gandlf_collectStats=GANDLF.entrypoints.collect_stats:old_way\",\n \"gandlf_patchMiner=GANDLF.entrypoints.patch_miner:old_way\",\n \"gandlf_preprocess=GANDLF.entrypoints.preprocess:old_way\",\n \"gandlf_anonymizer=GANDLF.entrypoints.anonymizer:old_way\",\n \"gandlf_configGenerator=GANDLF.entrypoints.config_generator:old_way\",\n \"gandlf_verifyInstall=GANDLF.entrypoints.verify_install:old_way\",\n \"gandlf_recoverConfig=GANDLF.entrypoints.recover_config:old_way\",\n \"gandlf_deploy=GANDLF.entrypoints.deploy:old_way\",\n \"gandlf_optimizeModel=GANDLF.entrypoints.optimize_model:old_way\",\n \"gandlf_generateMetrics=GANDLF.entrypoints.generate_metrics:old_way\",\n \"gandlf_debugInfo=GANDLF.entrypoints.debug_info:old_way\",\n \"gandlf_splitCSV=GANDLF.entrypoints.split_csv:old_way\",\n ]\n },\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps.\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n package_data={\"GANDLF\": all_extra_files_pathcorrected},\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n )\n", "path": "setup.py"}]} | 2,584 | 249 |
gh_patches_debug_35797 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-691 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Table, Database, & Schema APIs should support sorting by ID & name
## Problem
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
The table, database, and schema APIs don't currently support sorting results.
## Proposed solution
<!-- A clear and concise description of your proposed solution or feature. -->
All three APIs should support sorting by ID and name.
## Additional context
<!-- Add any other context or screenshots about the feature request here.-->
We can use `django-filter` / `django-property-filter` for this.
</issue>
<code>
[start of mathesar/api/filters.py]
1 from django_filters import BooleanFilter, DateTimeFromToRangeFilter
2 from django_property_filter import PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter
3
4 from mathesar.database.types import MathesarTypeIdentifier
5 from mathesar.models import Schema, Table, Database
6
7 FILTER_OPTIONS_BY_TYPE_IDENTIFIER = {
8 MathesarTypeIdentifier.BOOLEAN.value:
9 {
10 "db_type": "BOOLEAN",
11 "options": [{
12 "op": "eq",
13 "value": {
14 "allowed_types": ["BOOLEAN"],
15 }
16 }, {
17 "op": "is_null",
18 "value": "null",
19 }]
20 }
21 }
22
23
24 class CharInFilter(PropertyBaseInFilter, PropertyCharFilter):
25 pass
26
27
28 class SchemaFilter(PropertyFilterSet):
29 database = CharInFilter(field_name='database__name', lookup_expr='in')
30 name = CharInFilter(field_name='name', lookup_expr='in')
31
32 class Meta:
33 model = Schema
34 fields = ['name']
35
36
37 class TableFilter(PropertyFilterSet):
38 name = CharInFilter(field_name='name', lookup_expr='in')
39 created = DateTimeFromToRangeFilter(field_name='created_at')
40 updated = DateTimeFromToRangeFilter(field_name='updated_at')
41 not_imported = BooleanFilter(lookup_expr="isnull", field_name='import_verified')
42
43 class Meta:
44 model = Table
45 fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']
46
47
48 class DatabaseFilter(PropertyFilterSet):
49 class Meta:
50 model = Database
51 fields = ['deleted']
52
[end of mathesar/api/filters.py]
[start of config/settings.py]
1 """
2 Django settings for config project.
3
4 Generated by 'django-admin startproject' using Django 3.1.7.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.1/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.1/ref/settings/
11 """
12
13 import os
14 from pathlib import Path
15
16 from decouple import Csv, config as decouple_config
17 from dj_database_url import parse as db_url
18
19
20 # We use a 'tuple' with pipes as delimiters as decople naively splits the global
21 # variables on commas when casting to Csv()
22 def pipe_delim(pipe_string):
23 # Remove opening and closing brackets
24 pipe_string = pipe_string[1:-1]
25 # Split on pipe delim
26 return pipe_string.split("|")
27
28
29 # Build paths inside the project like this: BASE_DIR / 'subdir'.
30 BASE_DIR = Path(__file__).resolve().parent.parent
31
32 # Application definition
33
34 INSTALLED_APPS = [
35 "django.contrib.admin",
36 "django.contrib.auth",
37 "django.contrib.contenttypes",
38 "django.contrib.sessions",
39 "django.contrib.messages",
40 "django.contrib.staticfiles",
41 "rest_framework",
42 "django_filters",
43 "django_property_filter",
44 "mathesar",
45 ]
46
47 MIDDLEWARE = [
48 "django.middleware.security.SecurityMiddleware",
49 "django.contrib.sessions.middleware.SessionMiddleware",
50 "django.middleware.common.CommonMiddleware",
51 "django.middleware.csrf.CsrfViewMiddleware",
52 "django.contrib.auth.middleware.AuthenticationMiddleware",
53 "django.contrib.messages.middleware.MessageMiddleware",
54 "django.middleware.clickjacking.XFrameOptionsMiddleware",
55 ]
56
57 ROOT_URLCONF = "config.urls"
58
59 TEMPLATES = [
60 {
61 "BACKEND": "django.template.backends.django.DjangoTemplates",
62 "DIRS": [],
63 "APP_DIRS": True,
64 "OPTIONS": {
65 "context_processors": [
66 "config.context_processors.frontend_settings",
67 "django.template.context_processors.debug",
68 "django.template.context_processors.request",
69 "django.contrib.auth.context_processors.auth",
70 "django.contrib.messages.context_processors.messages",
71 ],
72 },
73 },
74 ]
75
76 WSGI_APPLICATION = "config.wsgi.application"
77
78 # Database
79 # https://docs.djangoproject.com/en/3.1/ref/settings/#databases
80
81 # TODO: Add to documentation that database keys should not be than 128 characters.
82
83 # MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'
84 # See pipe_delim above for why we use pipes as delimiters
85 DATABASES = {
86 db_key: db_url(url_string)
87 for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))
88 }
89 DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)
90
91 for db_key, db_dict in DATABASES.items():
92 # Engine can be '.postgresql' or '.postgresql_psycopg2'
93 if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):
94 raise ValueError(
95 f"{db_key} is not a PostgreSQL database. "
96 f"{db_dict['ENGINE']} found for {db_key}'s engine."
97 )
98
99
100 # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
101 # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
102 if decouple_config('TEST', default=False, cast=bool):
103 for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):
104 DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}
105
106
107 # Quick-start development settings - unsuitable for production
108 # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
109
110 # SECURITY WARNING: keep the secret key used in production secret!
111 SECRET_KEY = decouple_config('SECRET_KEY')
112
113 # SECURITY WARNING: don't run with debug turned on in production!
114 DEBUG = decouple_config('DEBUG', default=False, cast=bool)
115
116 ALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())
117
118 # Password validation
119 # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
120
121 AUTH_PASSWORD_VALIDATORS = [
122 {
123 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
124 },
125 {
126 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
127 },
128 {
129 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
130 },
131 {
132 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
133 },
134 ]
135
136
137 # Internationalization
138 # https://docs.djangoproject.com/en/3.1/topics/i18n/
139
140 LANGUAGE_CODE = "en-us"
141
142 TIME_ZONE = "UTC"
143
144 USE_I18N = True
145
146 USE_L10N = True
147
148 USE_TZ = True
149
150
151 # Static files (CSS, JavaScript, Images)
152 # https://docs.djangoproject.com/en/3.1/howto/static-files/
153 # https://docs.djangoproject.com/en/3.1/ref/contrib/staticfiles/
154
155 STATIC_URL = "/static/"
156
157 # When running with DEBUG=False, the webserver needs to serve files from this location
158 # python manage.py collectstatic has to be run to collect all static files into this location
159 # The files need to served in brotli or gzip compressed format
160 STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
161
162 # Media files (uploaded by the user)
163
164 MEDIA_ROOT = os.path.join(BASE_DIR, '.media/')
165
166 MEDIA_URL = "/media/"
167
168 # Update Authentication classes, removed BasicAuthentication
169 # Defaults: https://www.django-rest-framework.org/api-guide/settings/
170 REST_FRAMEWORK = {
171 'DEFAULT_AUTHENTICATION_CLASSES': [
172 'rest_framework.authentication.SessionAuthentication'
173 ]
174 }
175
176 # Mathesar settings
177 MATHESAR_MODE = decouple_config('MODE', default='PRODUCTION')
178 MATHESAR_UI_BUILD_LOCATION = os.path.join(BASE_DIR, 'mathesar/static/mathesar/')
179 MATHESAR_MANIFEST_LOCATION = os.path.join(MATHESAR_UI_BUILD_LOCATION, 'manifest.json')
180 MATHESAR_CLIENT_DEV_URL = 'http://localhost:3000'
181
182
183 STATICFILES_DIRS = [MATHESAR_UI_BUILD_LOCATION]
184
[end of config/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/config/settings.py b/config/settings.py
--- a/config/settings.py
+++ b/config/settings.py
@@ -170,7 +170,11 @@
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': [
'rest_framework.authentication.SessionAuthentication'
- ]
+ ],
+ 'DEFAULT_FILTER_BACKENDS': (
+ 'django_filters.rest_framework.DjangoFilterBackend',
+ 'rest_framework.filters.OrderingFilter',
+ ),
}
# Mathesar settings
diff --git a/mathesar/api/filters.py b/mathesar/api/filters.py
--- a/mathesar/api/filters.py
+++ b/mathesar/api/filters.py
@@ -1,5 +1,5 @@
-from django_filters import BooleanFilter, DateTimeFromToRangeFilter
-from django_property_filter import PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter
+from django_filters import BooleanFilter, DateTimeFromToRangeFilter, OrderingFilter
+from django_property_filter import PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter, PropertyOrderingFilter
from mathesar.database.types import MathesarTypeIdentifier
from mathesar.models import Schema, Table, Database
@@ -29,6 +29,14 @@
database = CharInFilter(field_name='database__name', lookup_expr='in')
name = CharInFilter(field_name='name', lookup_expr='in')
+ sort_by = PropertyOrderingFilter(
+ fields=(
+ ('id', 'id'),
+ ('name', 'name'),
+ ),
+ label="Sort By",
+ )
+
class Meta:
model = Schema
fields = ['name']
@@ -40,12 +48,28 @@
updated = DateTimeFromToRangeFilter(field_name='updated_at')
not_imported = BooleanFilter(lookup_expr="isnull", field_name='import_verified')
+ sort_by = PropertyOrderingFilter(
+ fields=(
+ ('id', 'id'),
+ ('name', 'name'),
+ ),
+ label="Sort By",
+ )
+
class Meta:
model = Table
fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']
class DatabaseFilter(PropertyFilterSet):
+ sort_by = OrderingFilter(
+ fields=(
+ ('id', 'id'),
+ ('name', 'name'),
+ ),
+ label="Sort By",
+ )
+
class Meta:
model = Database
fields = ['deleted']
| {"golden_diff": "diff --git a/config/settings.py b/config/settings.py\n--- a/config/settings.py\n+++ b/config/settings.py\n@@ -170,7 +170,11 @@\n REST_FRAMEWORK = {\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework.authentication.SessionAuthentication'\n- ]\n+ ],\n+ 'DEFAULT_FILTER_BACKENDS': (\n+ 'django_filters.rest_framework.DjangoFilterBackend',\n+ 'rest_framework.filters.OrderingFilter',\n+ ),\n }\n \n # Mathesar settings\ndiff --git a/mathesar/api/filters.py b/mathesar/api/filters.py\n--- a/mathesar/api/filters.py\n+++ b/mathesar/api/filters.py\n@@ -1,5 +1,5 @@\n-from django_filters import BooleanFilter, DateTimeFromToRangeFilter\n-from django_property_filter import PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter\n+from django_filters import BooleanFilter, DateTimeFromToRangeFilter, OrderingFilter\n+from django_property_filter import PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter, PropertyOrderingFilter\n \n from mathesar.database.types import MathesarTypeIdentifier\n from mathesar.models import Schema, Table, Database\n@@ -29,6 +29,14 @@\n database = CharInFilter(field_name='database__name', lookup_expr='in')\n name = CharInFilter(field_name='name', lookup_expr='in')\n \n+ sort_by = PropertyOrderingFilter(\n+ fields=(\n+ ('id', 'id'),\n+ ('name', 'name'),\n+ ),\n+ label=\"Sort By\",\n+ )\n+\n class Meta:\n model = Schema\n fields = ['name']\n@@ -40,12 +48,28 @@\n updated = DateTimeFromToRangeFilter(field_name='updated_at')\n not_imported = BooleanFilter(lookup_expr=\"isnull\", field_name='import_verified')\n \n+ sort_by = PropertyOrderingFilter(\n+ fields=(\n+ ('id', 'id'),\n+ ('name', 'name'),\n+ ),\n+ label=\"Sort By\",\n+ )\n+\n class Meta:\n model = Table\n fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']\n \n \n class DatabaseFilter(PropertyFilterSet):\n+ sort_by = OrderingFilter(\n+ fields=(\n+ ('id', 'id'),\n+ ('name', 'name'),\n+ ),\n+ label=\"Sort By\",\n+ )\n+\n class Meta:\n model = Database\n fields = ['deleted']\n", "issue": "Table, Database, & Schema APIs should support sorting by ID & name\n## Problem\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nThe table, database, and schema APIs don't currently support sorting results.\r\n\r\n## Proposed solution\r\n<!-- A clear and concise description of your proposed solution or feature. 
-->\r\nAll three APIs should support sorting by ID and name.\r\n\r\n## Additional context\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nWe can use `django-filter` / `django-property-filter` for this.\n", "before_files": [{"content": "from django_filters import BooleanFilter, DateTimeFromToRangeFilter\nfrom django_property_filter import PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter\n\nfrom mathesar.database.types import MathesarTypeIdentifier\nfrom mathesar.models import Schema, Table, Database\n\nFILTER_OPTIONS_BY_TYPE_IDENTIFIER = {\n MathesarTypeIdentifier.BOOLEAN.value:\n {\n \"db_type\": \"BOOLEAN\",\n \"options\": [{\n \"op\": \"eq\",\n \"value\": {\n \"allowed_types\": [\"BOOLEAN\"],\n }\n }, {\n \"op\": \"is_null\",\n \"value\": \"null\",\n }]\n }\n}\n\n\nclass CharInFilter(PropertyBaseInFilter, PropertyCharFilter):\n pass\n\n\nclass SchemaFilter(PropertyFilterSet):\n database = CharInFilter(field_name='database__name', lookup_expr='in')\n name = CharInFilter(field_name='name', lookup_expr='in')\n\n class Meta:\n model = Schema\n fields = ['name']\n\n\nclass TableFilter(PropertyFilterSet):\n name = CharInFilter(field_name='name', lookup_expr='in')\n created = DateTimeFromToRangeFilter(field_name='created_at')\n updated = DateTimeFromToRangeFilter(field_name='updated_at')\n not_imported = BooleanFilter(lookup_expr=\"isnull\", field_name='import_verified')\n\n class Meta:\n model = Table\n fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']\n\n\nclass DatabaseFilter(PropertyFilterSet):\n class Meta:\n model = Database\n fields = ['deleted']\n", "path": "mathesar/api/filters.py"}, {"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n\n# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n# variables on commas when casting to Csv()\ndef pipe_delim(pipe_string):\n # Remove opening and closing brackets\n pipe_string = pipe_string[1:-1]\n # Split on pipe delim\n return pipe_string.split(\"|\")\n\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n 
\"config.context_processors.frontend_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\n\n# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n# See pipe_delim above for why we use pipes as delimiters\nDATABASES = {\n db_key: db_url(url_string)\n for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n}\nDATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n\nfor db_key, db_dict in DATABASES.items():\n # Engine can be '.postgresql' or '.postgresql_psycopg2'\n if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):\n raise ValueError(\n f\"{db_key} is not a PostgreSQL database. \"\n f\"{db_dict['ENGINE']} found for {db_key}'s engine.\"\n )\n\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n# https://docs.djangoproject.com/en/3.1/ref/contrib/staticfiles/\n\nSTATIC_URL = \"/static/\"\n\n# When running with DEBUG=False, the webserver needs to serve files from this location\n# python manage.py collectstatic has to be run to collect all static files into this location\n# The files need to served in brotli or gzip compressed format\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static/')\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n\n# Update Authentication classes, removed BasicAuthentication\n# Defaults: https://www.django-rest-framework.org/api-guide/settings/\nREST_FRAMEWORK = {\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 
'rest_framework.authentication.SessionAuthentication'\n ]\n}\n\n# Mathesar settings\nMATHESAR_MODE = decouple_config('MODE', default='PRODUCTION')\nMATHESAR_UI_BUILD_LOCATION = os.path.join(BASE_DIR, 'mathesar/static/mathesar/')\nMATHESAR_MANIFEST_LOCATION = os.path.join(MATHESAR_UI_BUILD_LOCATION, 'manifest.json')\nMATHESAR_CLIENT_DEV_URL = 'http://localhost:3000'\n\n\nSTATICFILES_DIRS = [MATHESAR_UI_BUILD_LOCATION]\n", "path": "config/settings.py"}]} | 2,919 | 542 |
gh_patches_debug_13689 | rasdani/github-patches | git_diff | conan-io__conan-2592 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conan search gives AttributeError: 'UserIO' object has no attribute 'warn'
Version: 1.1.1
OS: Linux Ubuntu 14.04
conda: v4.2.7
Repro steps:
* `conda create -n conan python=2.7`
* `source activate conan`
* `pip install conan`
* `conan search zlib/1.2.11@conan/stable -r=conan-center`
Gives the following python stack:
```
(conan) ~ $ conan search zlib/1.2.11@conan/stable -r=conan-center
Traceback (most recent call last):
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py", line 1131, in run
method(args[0][1:])
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py", line 814, in search
outdated=args.outdated)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py", line 64, in wrapper
return f(*args, **kwargs)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py", line 595, in search_packages
outdated=outdated)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/cmd/search.py", line 44, in search_packages
remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py", line 95, in remote
remotes, _ = self._load()
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py", line 65, in _load
self._output.warn("Remotes registry file missing, creating default one in %s"
AttributeError: 'UserIO' object has no attribute 'warn'
ERROR: 'UserIO' object has no attribute 'warn'
```
</issue>
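
The traceback above already narrows the problem down: `RemoteRegistry` expects an output object that implements `warn()`, yet `search_packages` in the module below constructs it with the whole `UserIO`, whereas `search_recipes` passes `self._user_io.out`. The following self-contained sketch (simplified stand-ins for the real classes, written for illustration only) reproduces the failure mode:
```
class Output:
    def warn(self, msg):
        print("WARN:", msg)


class UserIO:
    """Simplified stand-in: the real UserIO exposes its output stream as `.out`."""
    def __init__(self):
        self.out = Output()


class RemoteRegistry:
    """Simplified stand-in: only models the attribute access that blows up."""
    def __init__(self, filename, output):
        self._output = output

    def remote(self, name):
        self._output.warn("Remotes registry file missing, creating default one")
        return name


RemoteRegistry("registry.txt", UserIO().out).remote("conan-center")  # works
try:
    RemoteRegistry("registry.txt", UserIO()).remote("conan-center")
except AttributeError as exc:
    print(exc)  # 'UserIO' object has no attribute 'warn'
```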
<code>
[start of conans/client/cmd/search.py]
1 from conans.search.search import DiskSearchManager, filter_outdated
2 from collections import OrderedDict
3 from conans.client.remote_registry import RemoteRegistry
4
5
6 class Search(object):
7 def __init__(self, client_cache, remote_manager, user_io):
8 self._client_cache = client_cache
9 self._remote_manager = remote_manager
10 self._user_io = user_io
11
12 def search_recipes(self, pattern, remote=None, case_sensitive=False):
13 ignorecase = not case_sensitive
14 if not remote:
15 return DiskSearchManager(self._client_cache).search_recipes(pattern, ignorecase)
16
17 registry = RemoteRegistry(self._client_cache.registry, self._user_io.out)
18 if remote == 'all':
19 remotes = registry.remotes
20 # We have to check if there is a remote called "all"
21 # Deprecate: 2.0 can remove this check
22 if 'all' not in (r.name for r in remotes):
23 references = {}
24 for remote in remotes:
25 result = self._remote_manager.search_recipes(remote, pattern, ignorecase)
26 if result:
27 references[remote.name] = result
28 return references
29 # single remote
30 remote = registry.remote(remote)
31 return self._remote_manager.search_recipes(remote, pattern, ignorecase)
32
33 def search_packages(self, reference=None, remote=None, query=None, outdated=False):
34 """ Return the single information saved in conan.vars about all the packages
35 or the packages which match with a pattern
36
37 Attributes:
38 pattern = string to match packages
39 remote = search on another origin to get packages info
40 packages_pattern = String query with binary
41 packages properties: "arch=x86 AND os=Windows"
42 """
43 if remote:
44 remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)
45 packages_props = self._remote_manager.search_packages(remote, reference, query)
46 ordered_packages = OrderedDict(sorted(packages_props.items()))
47 manifest = self._remote_manager.get_conan_digest(reference, remote)
48 recipe_hash = manifest.summary_hash
49 else:
50 searcher = DiskSearchManager(self._client_cache)
51 packages_props = searcher.search_packages(reference, query)
52 ordered_packages = OrderedDict(sorted(packages_props.items()))
53 try:
54 recipe_hash = self._client_cache.load_manifest(reference).summary_hash
55 except IOError: # It could not exist in local
56 recipe_hash = None
57 if outdated and recipe_hash:
58 ordered_packages = filter_outdated(ordered_packages, recipe_hash)
59 return ordered_packages, reference, recipe_hash, query
60
[end of conans/client/cmd/search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/cmd/search.py b/conans/client/cmd/search.py
--- a/conans/client/cmd/search.py
+++ b/conans/client/cmd/search.py
@@ -41,7 +41,7 @@
packages properties: "arch=x86 AND os=Windows"
"""
if remote:
- remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)
+ remote = RemoteRegistry(self._client_cache.registry, self._user_io.out).remote(remote)
packages_props = self._remote_manager.search_packages(remote, reference, query)
ordered_packages = OrderedDict(sorted(packages_props.items()))
manifest = self._remote_manager.get_conan_digest(reference, remote)
| {"golden_diff": "diff --git a/conans/client/cmd/search.py b/conans/client/cmd/search.py\n--- a/conans/client/cmd/search.py\n+++ b/conans/client/cmd/search.py\n@@ -41,7 +41,7 @@\n packages properties: \"arch=x86 AND os=Windows\"\n \"\"\"\n if remote:\n- remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)\n+ remote = RemoteRegistry(self._client_cache.registry, self._user_io.out).remote(remote)\n packages_props = self._remote_manager.search_packages(remote, reference, query)\n ordered_packages = OrderedDict(sorted(packages_props.items()))\n manifest = self._remote_manager.get_conan_digest(reference, remote)\n", "issue": "conan search gives AttributeError: 'UserIO' object has no attribute 'warn'\nVersion: 1.1.1\r\nOS: Linux Ubuntu 14.04 \r\nconda: v4.2.7\r\n\r\nRepro steps:\r\n* `conda create -n conan python=2.7`\r\n* `source activate conan`\r\n* `pip install conan`\r\n* `conan search zlib/1.2.11@conan/stable -r=conan-center`\r\n\r\nGives the following python stack:\r\n\r\n```\r\n(conan) ~ $ conan search zlib/1.2.11@conan/stable -r=conan-center\r\nTraceback (most recent call last):\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py\", line 1131, in run\r\n method(args[0][1:])\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py\", line 814, in search\r\n outdated=args.outdated)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py\", line 64, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py\", line 595, in search_packages\r\n outdated=outdated)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/cmd/search.py\", line 44, in search_packages\r\n remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py\", line 95, in remote\r\n remotes, _ = self._load()\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py\", line 65, in _load\r\n self._output.warn(\"Remotes registry file missing, creating default one in %s\"\r\nAttributeError: 'UserIO' object has no attribute 'warn'\r\n\r\nERROR: 'UserIO' object has no attribute 'warn'\r\n```\r\n\n", "before_files": [{"content": "from conans.search.search import DiskSearchManager, filter_outdated\nfrom collections import OrderedDict\nfrom conans.client.remote_registry import RemoteRegistry\n\n\nclass Search(object):\n def __init__(self, client_cache, remote_manager, user_io):\n self._client_cache = client_cache\n self._remote_manager = remote_manager\n self._user_io = user_io\n\n def search_recipes(self, pattern, remote=None, case_sensitive=False):\n ignorecase = not case_sensitive\n if not remote:\n return DiskSearchManager(self._client_cache).search_recipes(pattern, ignorecase)\n\n registry = RemoteRegistry(self._client_cache.registry, self._user_io.out)\n if remote == 'all':\n remotes = registry.remotes\n # We have to check if there is a remote called \"all\"\n # Deprecate: 2.0 can remove this check\n if 'all' not in (r.name for r in remotes):\n references = {}\n for remote in remotes:\n result = self._remote_manager.search_recipes(remote, pattern, ignorecase)\n if result:\n references[remote.name] = result\n return references\n # single 
remote\n remote = registry.remote(remote)\n return self._remote_manager.search_recipes(remote, pattern, ignorecase)\n\n def search_packages(self, reference=None, remote=None, query=None, outdated=False):\n \"\"\" Return the single information saved in conan.vars about all the packages\n or the packages which match with a pattern\n\n Attributes:\n pattern = string to match packages\n remote = search on another origin to get packages info\n packages_pattern = String query with binary\n packages properties: \"arch=x86 AND os=Windows\"\n \"\"\"\n if remote:\n remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)\n packages_props = self._remote_manager.search_packages(remote, reference, query)\n ordered_packages = OrderedDict(sorted(packages_props.items()))\n manifest = self._remote_manager.get_conan_digest(reference, remote)\n recipe_hash = manifest.summary_hash\n else:\n searcher = DiskSearchManager(self._client_cache)\n packages_props = searcher.search_packages(reference, query)\n ordered_packages = OrderedDict(sorted(packages_props.items()))\n try:\n recipe_hash = self._client_cache.load_manifest(reference).summary_hash\n except IOError: # It could not exist in local\n recipe_hash = None\n if outdated and recipe_hash:\n ordered_packages = filter_outdated(ordered_packages, recipe_hash)\n return ordered_packages, reference, recipe_hash, query\n", "path": "conans/client/cmd/search.py"}]} | 1,730 | 151 |
gh_patches_debug_10399 | rasdani/github-patches | git_diff | horovod__horovod-3505 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
hvd.DistributedOptimizer gradient accumulation doesn't clean up infinite gradient correctly
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) Keras
2. Framework version: 2.4
3. Horovod version: 2.3
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
We were training in TensorFlow [FP16 mixed precision](https://www.tensorflow.org/guide/mixed_precision) with keras `model.fit()` and with gradient accumulation/aggregation (`backward_pass_per_step` in `hvd.DistributedOptimizer`) and noticed that the [GradientAggregationHelperEager](https://github.com/horovod/horovod/blob/master/horovod/tensorflow/gradient_aggregation_eager.py#L8) doesn't work correctly with FP16 when the loss goes infinite. Details:
It is kind of expected that at the very first 2-15 steps of the training, the gradient out of TF [LossScaleOptimizer](https://github.com/keras-team/keras/blob/v2.8.0/keras/mixed_precision/loss_scale_optimizer.py#L258-L844) is infinite (because the default initial loss scale factor is as large as `2**15`). Dynamic LSO can handle this gracefully, it just skips applying gradient of that step and divides the scale factor by half. However horovod GradientAggregationHelper will anyway add the infinite gradient up locally, and the infinite gradient will never be correctly cleaned up in [this way](https://github.com/horovod/horovod/blob/133ef0725253db83cfb82a4ed4003df76d189829/horovod/tensorflow/gradient_aggregation_eager.py#L119-L123):
```
def _clear_vars(self):
self.counter.assign(0)
for idx in self.locally_aggregated_grads.keys():
self.locally_aggregated_grads[idx].assign_add(
-1 * self.locally_aggregated_grads[idx])
```
since the result of adding an infinite value to its own negative is NaN.
</issue>
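
A tiny eager-mode sketch (an editorial illustration assuming TensorFlow 2.x, not taken from the report above) makes the arithmetic concrete: adding an infinite accumulator to its own negation yields NaN, whereas overwriting it with zeros resets it even when it already holds inf/NaN entries.
```
import tensorflow as tf

acc = tf.Variable([float("inf"), 1.0])

# The reset used by the helper today: acc += (-1 * acc). Because
# inf + (-inf) is NaN, the infinite entry is never cleared and it
# poisons every later aggregation step.
acc.assign_add(-1.0 * acc)
print(acc.numpy())  # -> [nan, 0.]

# Overwriting with zeros clears the accumulator regardless of inf/NaN.
acc.assign(tf.zeros_like(acc))
print(acc.numpy())  # -> [0., 0.]
```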
<code>
[start of horovod/tensorflow/gradient_aggregation_eager.py]
1 from distutils.version import LooseVersion
2
3 import tensorflow as tf
4
5 _POST_TF_2_4_0 = LooseVersion(tf.__version__) >= LooseVersion('2.4.0')
6
7
8 class LocalGradientAggregationHelperEager:
9 """
10 LocalGradientAggregationHelperEager aggregates gradient updates
11 locally, and communicates the updates across machines only once
12 every backward_passes_per_step. Only supports eager execution.
13 """
14
15 def __init__(
16 self,
17 backward_passes_per_step,
18 allreduce_func,
19 sparse_as_dense,
20 average_aggregated_gradients,
21 ):
22 self.allreduce_grads = allreduce_func
23 self.sparse_as_dense = sparse_as_dense
24
25 # backward_passes_per_step controls how often gradient updates are
26 # synchronized.
27 self.backward_passes_per_step = backward_passes_per_step
28 if self.backward_passes_per_step <= 0:
29 raise ValueError("backward_passes_per_step must be > 0")
30
31 # average_aggregated_gradients controls whether gradient updates that are
32 # aggregated, should be divided by `backward_passes_per_step`.
33 self.average_aggregated_gradients = average_aggregated_gradients
34
35 # This is going to be N data structure holding the aggregated gradient updates
36 # for parameter updates. N is the number of parameters.
37 self.locally_aggregated_grads = {}
38
39 # Used to know when to allreduce and apply gradients. We allreduce when `self.counter`
40 # is equal to `self.backward_passes_per_step`. We apply gradients when `self.counter`
41 # is equal to 0.
42 self.counter = tf.Variable(initial_value=0)
43
44 def compute_gradients(self, grads, vars):
45 # On steps where allreduce happens, resulting_grads returns the allreduced
46 # gradients, on other steps it returns the locally aggregated
47 # gradients.
48 resulting_grads = []
49
50 for idx, grad in enumerate(grads):
51 # Handle IndexedSlices.
52 if self.sparse_as_dense and isinstance(grad, tf.IndexedSlices):
53 grad = tf.convert_to_tensor(grad)
54 elif isinstance(grad, tf.IndexedSlices):
55 raise ValueError(
56 "IndexedSlices are not supported when "
57 "`backward_passes_per_step` > 1 and "
58 "`sparse_as_dense` is False."
59 )
60
61 # Create variables to store to aggregate gradients if they don't
62 # already exist. Skip variables that are None.
63 if idx not in self.locally_aggregated_grads.keys():
64 if grad is not None:
65 self.locally_aggregated_grads[idx] = tf.Variable(
66 initial_value=tf.zeros_like(grad),
67 trainable=False,
68 dtype=grad.dtype,
69 )
70
71 if grad is None:
72 resulting_grads.append(None)
73 else:
74 self.locally_aggregated_grads[idx].assign_add(grad)
75 resulting_grads.append(
76 self.locally_aggregated_grads[idx].read_value())
77 assert len(self.locally_aggregated_grads) == len(grads)
78
79 # Increment counter.
80 self.counter.assign_add(1)
81
82 def _all_reduce_and_clear_aggregated_variables(aggregated_gradients, vars):
83 # Performs allreduce. If `average_aggregated_gradients` is
84 # set to True divides result by `backward_passes_per_step`.
85 reduced_gradients = self._allreduce_helper(aggregated_gradients, vars)
86 assert len(reduced_gradients) == len(grads)
87
88 self._clear_vars()
89 return reduced_gradients
90
91 def _do_nothing(aggregated_gradients):
92 return aggregated_gradients
93
94 resulting_grads = tf.cond(
95 pred=tf.equal(self.counter, self.backward_passes_per_step),
96 true_fn=lambda: _all_reduce_and_clear_aggregated_variables(resulting_grads, vars),
97 false_fn=lambda: _do_nothing(resulting_grads),
98 )
99
100 return resulting_grads
101
102 def _allreduce_helper(self, grads, vars):
103 allreduced_grads = self.allreduce_grads(grads, vars)
104
105 if not self.average_aggregated_gradients:
106 return allreduced_grads
107
108 scaled_allreduced_grads = []
109 for grad in allreduced_grads:
110 if grad is None:
111 scaled_allreduced_grads.append(grad)
112 continue
113
114 scaled_allreduced_grads.append(
115 grad / self.backward_passes_per_step)
116
117 return scaled_allreduced_grads
118
119 def _clear_vars(self):
120 self.counter.assign(0)
121 for idx in self.locally_aggregated_grads.keys():
122 self.locally_aggregated_grads[idx].assign_add(
123 -1 * self.locally_aggregated_grads[idx])
124
125 def apply_gradients(self, apply_grads_closure, optimizer, *args, **kwargs):
126 def increment_optimizer_iteration():
127 if hasattr(optimizer, "_iterations") and optimizer._iterations is not None:
128 return optimizer._iterations.assign_add(1).op
129 return tf.no_op()
130
131 def non_aggregation_step():
132 if _POST_TF_2_4_0:
133 # In TF 2.4+ `_aggregate_gradients()` is called from inside of `apply_gradients()`.
134 # We account for this by calling `_aggregate_gradients()` for steps where we do
135 # not call `apply_gradients()`.
136 transformed_grads_and_vars = optimizer._transform_unaggregated_gradients(
137 args[0])
138 _ = optimizer._aggregate_gradients(transformed_grads_and_vars)
139
140 return increment_optimizer_iteration()
141
142 def is_aggregation_step():
143 return tf.equal(self.counter, 0)
144
145 return tf.cond(
146 pred=is_aggregation_step(),
147 true_fn=apply_grads_closure,
148 false_fn=non_aggregation_step,
149 )
150
[end of horovod/tensorflow/gradient_aggregation_eager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/horovod/tensorflow/gradient_aggregation_eager.py b/horovod/tensorflow/gradient_aggregation_eager.py
--- a/horovod/tensorflow/gradient_aggregation_eager.py
+++ b/horovod/tensorflow/gradient_aggregation_eager.py
@@ -119,8 +119,8 @@
def _clear_vars(self):
self.counter.assign(0)
for idx in self.locally_aggregated_grads.keys():
- self.locally_aggregated_grads[idx].assign_add(
- -1 * self.locally_aggregated_grads[idx])
+ self.locally_aggregated_grads[idx].assign(
+ tf.zeros_like(self.locally_aggregated_grads[idx]))
def apply_gradients(self, apply_grads_closure, optimizer, *args, **kwargs):
def increment_optimizer_iteration():
| {"golden_diff": "diff --git a/horovod/tensorflow/gradient_aggregation_eager.py b/horovod/tensorflow/gradient_aggregation_eager.py\n--- a/horovod/tensorflow/gradient_aggregation_eager.py\n+++ b/horovod/tensorflow/gradient_aggregation_eager.py\n@@ -119,8 +119,8 @@\n def _clear_vars(self):\n self.counter.assign(0)\n for idx in self.locally_aggregated_grads.keys():\n- self.locally_aggregated_grads[idx].assign_add(\n- -1 * self.locally_aggregated_grads[idx])\n+ self.locally_aggregated_grads[idx].assign(\n+ tf.zeros_like(self.locally_aggregated_grads[idx]))\n \n def apply_gradients(self, apply_grads_closure, optimizer, *args, **kwargs):\n def increment_optimizer_iteration():\n", "issue": "hvd.DistributedOptimizer gradient accumulation doesn't clean up infinite gradient correctly\n**Environment:**\r\n1. Framework: (TensorFlow, Keras, PyTorch, MXNet) Keras\r\n2. Framework version: 2.4\r\n3. Horovod version: 2.3\r\n4. MPI version: \r\n5. CUDA version: \r\n6. NCCL version:\r\n7. Python version:\r\n8. Spark / PySpark version:\r\n9. Ray version:\r\n10. OS and version:\r\n11. GCC version:\r\n12. CMake version:\r\n\r\n**Checklist:**\r\n1. Did you search issues to find if somebody asked this question before?\r\n2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?\r\n3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?\r\n4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?\r\n\r\n**Bug report:**\r\n\r\nWe were training in TensorFlow [FP16 mixed precision](https://www.tensorflow.org/guide/mixed_precision) with keras `model.fit()` and with gradient accumulation/aggregation (`backward_pass_per_step` in `hvd.DistributedOptimizer`) and noticed that the [GradientAggregationHelperEager](https://github.com/horovod/horovod/blob/master/horovod/tensorflow/gradient_aggregation_eager.py#L8) doesn't work correctly with FP16 when the loss goes infinite. Details:\r\n\r\nIt is kind of expected that at the very first 2-15 steps of the training, the gradient out of TF [LossScaleOptimizer](https://github.com/keras-team/keras/blob/v2.8.0/keras/mixed_precision/loss_scale_optimizer.py#L258-L844) is infinite (because the default initial loss scale factor is as large as `2**15`). Dynamic LSO can handle this gracefully, it just skips applying gradient of that step and divides the scale factor by half. However horovod GradientAggregationHelper will anyway add the infinite gradient up locally, and the infinite gradient will never be correctly cleaned up in [this way](https://github.com/horovod/horovod/blob/133ef0725253db83cfb82a4ed4003df76d189829/horovod/tensorflow/gradient_aggregation_eager.py#L119-L123):\r\n```\r\n def _clear_vars(self):\r\n self.counter.assign(0)\r\n for idx in self.locally_aggregated_grads.keys():\r\n self.locally_aggregated_grads[idx].assign_add(\r\n -1 * self.locally_aggregated_grads[idx])\r\n```\r\n\r\nas the result of adding inf value by its negative val will be NaN. \r\n\n", "before_files": [{"content": "from distutils.version import LooseVersion\n\nimport tensorflow as tf\n\n_POST_TF_2_4_0 = LooseVersion(tf.__version__) >= LooseVersion('2.4.0')\n\n\nclass LocalGradientAggregationHelperEager:\n \"\"\"\n LocalGradientAggregationHelperEager aggregates gradient updates\n locally, and communicates the updates across machines only once\n every backward_passes_per_step. 
Only supports eager execution.\n \"\"\"\n\n def __init__(\n self,\n backward_passes_per_step,\n allreduce_func,\n sparse_as_dense,\n average_aggregated_gradients,\n ):\n self.allreduce_grads = allreduce_func\n self.sparse_as_dense = sparse_as_dense\n\n # backward_passes_per_step controls how often gradient updates are\n # synchronized.\n self.backward_passes_per_step = backward_passes_per_step\n if self.backward_passes_per_step <= 0:\n raise ValueError(\"backward_passes_per_step must be > 0\")\n\n # average_aggregated_gradients controls whether gradient updates that are\n # aggregated, should be divided by `backward_passes_per_step`.\n self.average_aggregated_gradients = average_aggregated_gradients\n\n # This is going to be N data structure holding the aggregated gradient updates\n # for parameter updates. N is the number of parameters.\n self.locally_aggregated_grads = {}\n\n # Used to know when to allreduce and apply gradients. We allreduce when `self.counter`\n # is equal to `self.backward_passes_per_step`. We apply gradients when `self.counter`\n # is equal to 0.\n self.counter = tf.Variable(initial_value=0)\n\n def compute_gradients(self, grads, vars):\n # On steps where allreduce happens, resulting_grads returns the allreduced\n # gradients, on other steps it returns the locally aggregated\n # gradients.\n resulting_grads = []\n\n for idx, grad in enumerate(grads):\n # Handle IndexedSlices.\n if self.sparse_as_dense and isinstance(grad, tf.IndexedSlices):\n grad = tf.convert_to_tensor(grad)\n elif isinstance(grad, tf.IndexedSlices):\n raise ValueError(\n \"IndexedSlices are not supported when \"\n \"`backward_passes_per_step` > 1 and \"\n \"`sparse_as_dense` is False.\"\n )\n\n # Create variables to store to aggregate gradients if they don't\n # already exist. Skip variables that are None.\n if idx not in self.locally_aggregated_grads.keys():\n if grad is not None:\n self.locally_aggregated_grads[idx] = tf.Variable(\n initial_value=tf.zeros_like(grad),\n trainable=False,\n dtype=grad.dtype,\n )\n\n if grad is None:\n resulting_grads.append(None)\n else:\n self.locally_aggregated_grads[idx].assign_add(grad)\n resulting_grads.append(\n self.locally_aggregated_grads[idx].read_value())\n assert len(self.locally_aggregated_grads) == len(grads)\n\n # Increment counter.\n self.counter.assign_add(1)\n\n def _all_reduce_and_clear_aggregated_variables(aggregated_gradients, vars):\n # Performs allreduce. 
If `average_aggregated_gradients` is\n # set to True divides result by `backward_passes_per_step`.\n reduced_gradients = self._allreduce_helper(aggregated_gradients, vars)\n assert len(reduced_gradients) == len(grads)\n\n self._clear_vars()\n return reduced_gradients\n\n def _do_nothing(aggregated_gradients):\n return aggregated_gradients\n\n resulting_grads = tf.cond(\n pred=tf.equal(self.counter, self.backward_passes_per_step),\n true_fn=lambda: _all_reduce_and_clear_aggregated_variables(resulting_grads, vars),\n false_fn=lambda: _do_nothing(resulting_grads),\n )\n\n return resulting_grads\n\n def _allreduce_helper(self, grads, vars):\n allreduced_grads = self.allreduce_grads(grads, vars)\n\n if not self.average_aggregated_gradients:\n return allreduced_grads\n\n scaled_allreduced_grads = []\n for grad in allreduced_grads:\n if grad is None:\n scaled_allreduced_grads.append(grad)\n continue\n\n scaled_allreduced_grads.append(\n grad / self.backward_passes_per_step)\n\n return scaled_allreduced_grads\n\n def _clear_vars(self):\n self.counter.assign(0)\n for idx in self.locally_aggregated_grads.keys():\n self.locally_aggregated_grads[idx].assign_add(\n -1 * self.locally_aggregated_grads[idx])\n\n def apply_gradients(self, apply_grads_closure, optimizer, *args, **kwargs):\n def increment_optimizer_iteration():\n if hasattr(optimizer, \"_iterations\") and optimizer._iterations is not None:\n return optimizer._iterations.assign_add(1).op\n return tf.no_op()\n\n def non_aggregation_step():\n if _POST_TF_2_4_0:\n # In TF 2.4+ `_aggregate_gradients()` is called from inside of `apply_gradients()`.\n # We account for this by calling `_aggregate_gradients()` for steps where we do\n # not call `apply_gradients()`.\n transformed_grads_and_vars = optimizer._transform_unaggregated_gradients(\n args[0])\n _ = optimizer._aggregate_gradients(transformed_grads_and_vars)\n\n return increment_optimizer_iteration()\n\n def is_aggregation_step():\n return tf.equal(self.counter, 0)\n\n return tf.cond(\n pred=is_aggregation_step(),\n true_fn=apply_grads_closure,\n false_fn=non_aggregation_step,\n )\n", "path": "horovod/tensorflow/gradient_aggregation_eager.py"}]} | 2,771 | 190 |
gh_patches_debug_35497 | rasdani/github-patches | git_diff | iterative__dvc-2765 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
lock: improve error message when .dvc/lock is taken
https://github.com/iterative/dvc/pull/2519#discussion_r326844119
</issue>
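
The linked review thread is not reproduced here, but the module below hard-codes the same terse message in both its flufl.lock (py3) and zc.lockfile (py2) branches, so one natural improvement is to hoist the wording into a single constant that both branches raise. A sketch of that idea (the wording is illustrative, and `LockError` is a local stand-in for DVC's exception class):
```
class LockError(Exception):
    """Local stand-in for dvc.lock.LockError."""


FAILED_TO_LOCK_MESSAGE = (
    "cannot perform the command because another DVC process seems to be "
    "running on this project. If that is not the case, manually remove "
    "`.dvc/lock` and try again."
)

try:
    raise LockError(FAILED_TO_LOCK_MESSAGE)
except LockError as exc:
    print(exc)
```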
<code>
[start of dvc/lock.py]
1 """Manages dvc lock file."""
2 from __future__ import unicode_literals
3
4 import hashlib
5 import os
6 import time
7 from datetime import timedelta
8
9 from funcy.py3 import lkeep
10
11 from dvc.exceptions import DvcException
12 from dvc.utils import makedirs
13 from dvc.utils.compat import is_py3
14
15
16 DEFAULT_TIMEOUT = 5
17
18
19 class LockError(DvcException):
20 """Thrown when unable to acquire the lock for dvc repo."""
21
22
23 if is_py3:
24 import flufl.lock
25
26 class Lock(flufl.lock.Lock):
27 """Class for dvc repo lock.
28
29 Args:
30 lockfile (str): the lock filename
31 in.
32 tmp_dir (str): a directory to store claim files.
33 """
34
35 def __init__(self, lockfile, tmp_dir=None):
36 import socket
37
38 self._tmp_dir = tmp_dir
39 if self._tmp_dir is not None:
40 makedirs(self._tmp_dir, exist_ok=True)
41
42 # NOTE: this is basically Lock.__init__ copy-paste, except that
43 # instead of using `socket.getfqdn()` we use `socket.gethostname()`
44 # to speed this up. We've seen [1] `getfqdn()` take ~5sec to return
45 # anything, which is way too slow. `gethostname()` is actually a
46 # fallback for `getfqdn()` when it is not able to resolve a
47 # canonical hostname through network. The claimfile that uses
48 # `self._hostname` is still usable, as it uses `pid` and random
49 # number to generate the resulting lock file name, which is unique
50 # enough for our application.
51 #
52 # [1] https://github.com/iterative/dvc/issues/2582
53 self._hostname = socket.gethostname()
54
55 self._lockfile = lockfile
56 self._lifetime = timedelta(days=365) # Lock for good by default
57 self._separator = flufl.lock.SEP
58 self._set_claimfile()
59 self._owned = True
60 self._retry_errnos = []
61
62 @property
63 def lockfile(self):
64 return self._lockfile
65
66 @property
67 def files(self):
68 return lkeep([self._lockfile, self._tmp_dir])
69
70 def lock(self):
71 try:
72 super(Lock, self).lock(timedelta(seconds=DEFAULT_TIMEOUT))
73 except flufl.lock.TimeOutError:
74 raise LockError(
75 "cannot perform the cmd since DVC is busy and "
76 "locked. Please retry the cmd later."
77 )
78
79 def _set_claimfile(self, pid=None):
80 super(Lock, self)._set_claimfile(pid)
81
82 if self._tmp_dir is not None:
83 # Under Windows file path length is limited so we hash it
84 filename = hashlib.md5(self._claimfile.encode()).hexdigest()
85 self._claimfile = os.path.join(
86 self._tmp_dir, filename + ".lock"
87 )
88
89 # Fix for __del__ bug in flufl.lock [1] which is causing errors on
90 # Python shutdown [2].
91 # [1] https://gitlab.com/warsaw/flufl.lock/issues/7
92 # [2] https://github.com/iterative/dvc/issues/2573
93 def __del__(self):
94 try:
95 if self._owned:
96 self.finalize()
97 except ImportError:
98 pass
99
100
101 else:
102 import zc.lockfile
103
104 class Lock(object):
105 """Class for dvc repo lock.
106
107 Uses zc.lockfile as backend.
108 """
109
110 def __init__(self, lockfile, tmp_dir=None):
111 self.lockfile = lockfile
112 self._lock = None
113
114 @property
115 def files(self):
116 return [self.lockfile]
117
118 def _do_lock(self):
119 try:
120 self._lock = zc.lockfile.LockFile(self.lockfile)
121 except zc.lockfile.LockError:
122 raise LockError(
123 "cannot perform the cmd since DVC is busy and "
124 "locked. Please retry the cmd later."
125 )
126
127 def lock(self):
128 try:
129 self._do_lock()
130 return
131 except LockError:
132 time.sleep(DEFAULT_TIMEOUT)
133
134 self._do_lock()
135
136 def unlock(self):
137 self._lock.close()
138 self._lock = None
139
140 def __enter__(self):
141 self.lock()
142
143 def __exit__(self, typ, value, tbck):
144 self.unlock()
145
[end of dvc/lock.py]
[start of dvc/main.py]
1 """Main entry point for dvc CLI."""
2 from __future__ import unicode_literals
3
4 import logging
5
6 from dvc.analytics import Analytics
7 from dvc.cli import parse_args
8 from dvc.config import ConfigError
9 from dvc.exceptions import DvcParserError
10 from dvc.exceptions import NotDvcRepoError
11 from dvc.external_repo import clean_repos
12 from dvc.lock import LockError
13 from dvc.logger import FOOTER
14 from dvc.remote.pool import close_pools
15 from dvc.utils.compat import is_py2
16
17
18 # Workaround for CPython bug. See [1] and [2] for more info.
19 # [1] https://github.com/aws/aws-cli/blob/1.16.277/awscli/clidriver.py#L55
20 # [2] https://bugs.python.org/issue29288
21 "".encode("idna")
22
23
24 logger = logging.getLogger("dvc")
25
26
27 def main(argv=None):
28 """Run dvc CLI command.
29
30 Args:
31 argv: optional list of arguments to parse. sys.argv is used by default.
32
33 Returns:
34 int: command's return code.
35 """
36 args = None
37 cmd = None
38
39 outerLogLevel = logger.level
40 try:
41 args = parse_args(argv)
42
43 if args.quiet:
44 logger.setLevel(logging.CRITICAL)
45
46 elif args.verbose:
47 logger.setLevel(logging.DEBUG)
48
49 cmd = args.func(args)
50 ret = cmd.run()
51 except LockError:
52 logger.exception("failed to lock before running a command")
53 ret = 250
54 except ConfigError:
55 logger.exception("configuration error")
56 ret = 251
57 except KeyboardInterrupt:
58 logger.exception("interrupted by the user")
59 ret = 252
60 except NotDvcRepoError:
61 logger.exception("")
62 ret = 253
63 except DvcParserError:
64 ret = 254
65 except Exception as exc: # pylint: disable=broad-except
66 if isinstance(exc, UnicodeError) and is_py2:
67 logger.exception(
68 "unicode is not supported in DVC for Python 2 "
69 "(end-of-life January 1, 2020), please upgrade to Python 3"
70 )
71 else:
72 logger.exception("unexpected error")
73 ret = 255
74 finally:
75 logger.setLevel(outerLogLevel)
76
77 # Python 2 fails to close these clean occasionally and users see
78 # weird error messages, so we do it manually
79 close_pools()
80
81 # Remove cached repos in the end of the call, these are anonymous
82 # so won't be reused by any other subsequent run anyway.
83 clean_repos()
84
85 if ret != 0:
86 logger.info(FOOTER)
87
88 Analytics().send_cmd(cmd, args, ret)
89
90 return ret
91
[end of dvc/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/lock.py b/dvc/lock.py
--- a/dvc/lock.py
+++ b/dvc/lock.py
@@ -15,6 +15,12 @@
DEFAULT_TIMEOUT = 5
+FAILED_TO_LOCK_MESSAGE = (
+ "cannot perform the command because another DVC process seems to be "
+ "running on this project. If that is not the case, manually remove "
+ "`.dvc/lock` and try again."
+)
+
class LockError(DvcException):
"""Thrown when unable to acquire the lock for dvc repo."""
@@ -71,10 +77,7 @@
try:
super(Lock, self).lock(timedelta(seconds=DEFAULT_TIMEOUT))
except flufl.lock.TimeOutError:
- raise LockError(
- "cannot perform the cmd since DVC is busy and "
- "locked. Please retry the cmd later."
- )
+ raise LockError(FAILED_TO_LOCK_MESSAGE)
def _set_claimfile(self, pid=None):
super(Lock, self)._set_claimfile(pid)
@@ -119,10 +122,7 @@
try:
self._lock = zc.lockfile.LockFile(self.lockfile)
except zc.lockfile.LockError:
- raise LockError(
- "cannot perform the cmd since DVC is busy and "
- "locked. Please retry the cmd later."
- )
+ raise LockError(FAILED_TO_LOCK_MESSAGE)
def lock(self):
try:
diff --git a/dvc/main.py b/dvc/main.py
--- a/dvc/main.py
+++ b/dvc/main.py
@@ -9,7 +9,6 @@
from dvc.exceptions import DvcParserError
from dvc.exceptions import NotDvcRepoError
from dvc.external_repo import clean_repos
-from dvc.lock import LockError
from dvc.logger import FOOTER
from dvc.remote.pool import close_pools
from dvc.utils.compat import is_py2
@@ -48,9 +47,6 @@
cmd = args.func(args)
ret = cmd.run()
- except LockError:
- logger.exception("failed to lock before running a command")
- ret = 250
except ConfigError:
logger.exception("configuration error")
ret = 251
| {"golden_diff": "diff --git a/dvc/lock.py b/dvc/lock.py\n--- a/dvc/lock.py\n+++ b/dvc/lock.py\n@@ -15,6 +15,12 @@\n \n DEFAULT_TIMEOUT = 5\n \n+FAILED_TO_LOCK_MESSAGE = (\n+ \"cannot perform the command because another DVC process seems to be \"\n+ \"running on this project. If that is not the case, manually remove \"\n+ \"`.dvc/lock` and try again.\"\n+)\n+\n \n class LockError(DvcException):\n \"\"\"Thrown when unable to acquire the lock for dvc repo.\"\"\"\n@@ -71,10 +77,7 @@\n try:\n super(Lock, self).lock(timedelta(seconds=DEFAULT_TIMEOUT))\n except flufl.lock.TimeOutError:\n- raise LockError(\n- \"cannot perform the cmd since DVC is busy and \"\n- \"locked. Please retry the cmd later.\"\n- )\n+ raise LockError(FAILED_TO_LOCK_MESSAGE)\n \n def _set_claimfile(self, pid=None):\n super(Lock, self)._set_claimfile(pid)\n@@ -119,10 +122,7 @@\n try:\n self._lock = zc.lockfile.LockFile(self.lockfile)\n except zc.lockfile.LockError:\n- raise LockError(\n- \"cannot perform the cmd since DVC is busy and \"\n- \"locked. Please retry the cmd later.\"\n- )\n+ raise LockError(FAILED_TO_LOCK_MESSAGE)\n \n def lock(self):\n try:\ndiff --git a/dvc/main.py b/dvc/main.py\n--- a/dvc/main.py\n+++ b/dvc/main.py\n@@ -9,7 +9,6 @@\n from dvc.exceptions import DvcParserError\n from dvc.exceptions import NotDvcRepoError\n from dvc.external_repo import clean_repos\n-from dvc.lock import LockError\n from dvc.logger import FOOTER\n from dvc.remote.pool import close_pools\n from dvc.utils.compat import is_py2\n@@ -48,9 +47,6 @@\n \n cmd = args.func(args)\n ret = cmd.run()\n- except LockError:\n- logger.exception(\"failed to lock before running a command\")\n- ret = 250\n except ConfigError:\n logger.exception(\"configuration error\")\n ret = 251\n", "issue": "lock: improve error message when .dvc/lock is taken\nhttps://github.com/iterative/dvc/pull/2519#discussion_r326844119\n", "before_files": [{"content": "\"\"\"Manages dvc lock file.\"\"\"\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport os\nimport time\nfrom datetime import timedelta\n\nfrom funcy.py3 import lkeep\n\nfrom dvc.exceptions import DvcException\nfrom dvc.utils import makedirs\nfrom dvc.utils.compat import is_py3\n\n\nDEFAULT_TIMEOUT = 5\n\n\nclass LockError(DvcException):\n \"\"\"Thrown when unable to acquire the lock for dvc repo.\"\"\"\n\n\nif is_py3:\n import flufl.lock\n\n class Lock(flufl.lock.Lock):\n \"\"\"Class for dvc repo lock.\n\n Args:\n lockfile (str): the lock filename\n in.\n tmp_dir (str): a directory to store claim files.\n \"\"\"\n\n def __init__(self, lockfile, tmp_dir=None):\n import socket\n\n self._tmp_dir = tmp_dir\n if self._tmp_dir is not None:\n makedirs(self._tmp_dir, exist_ok=True)\n\n # NOTE: this is basically Lock.__init__ copy-paste, except that\n # instead of using `socket.getfqdn()` we use `socket.gethostname()`\n # to speed this up. We've seen [1] `getfqdn()` take ~5sec to return\n # anything, which is way too slow. `gethostname()` is actually a\n # fallback for `getfqdn()` when it is not able to resolve a\n # canonical hostname through network. 
The claimfile that uses\n # `self._hostname` is still usable, as it uses `pid` and random\n # number to generate the resulting lock file name, which is unique\n # enough for our application.\n #\n # [1] https://github.com/iterative/dvc/issues/2582\n self._hostname = socket.gethostname()\n\n self._lockfile = lockfile\n self._lifetime = timedelta(days=365) # Lock for good by default\n self._separator = flufl.lock.SEP\n self._set_claimfile()\n self._owned = True\n self._retry_errnos = []\n\n @property\n def lockfile(self):\n return self._lockfile\n\n @property\n def files(self):\n return lkeep([self._lockfile, self._tmp_dir])\n\n def lock(self):\n try:\n super(Lock, self).lock(timedelta(seconds=DEFAULT_TIMEOUT))\n except flufl.lock.TimeOutError:\n raise LockError(\n \"cannot perform the cmd since DVC is busy and \"\n \"locked. Please retry the cmd later.\"\n )\n\n def _set_claimfile(self, pid=None):\n super(Lock, self)._set_claimfile(pid)\n\n if self._tmp_dir is not None:\n # Under Windows file path length is limited so we hash it\n filename = hashlib.md5(self._claimfile.encode()).hexdigest()\n self._claimfile = os.path.join(\n self._tmp_dir, filename + \".lock\"\n )\n\n # Fix for __del__ bug in flufl.lock [1] which is causing errors on\n # Python shutdown [2].\n # [1] https://gitlab.com/warsaw/flufl.lock/issues/7\n # [2] https://github.com/iterative/dvc/issues/2573\n def __del__(self):\n try:\n if self._owned:\n self.finalize()\n except ImportError:\n pass\n\n\nelse:\n import zc.lockfile\n\n class Lock(object):\n \"\"\"Class for dvc repo lock.\n\n Uses zc.lockfile as backend.\n \"\"\"\n\n def __init__(self, lockfile, tmp_dir=None):\n self.lockfile = lockfile\n self._lock = None\n\n @property\n def files(self):\n return [self.lockfile]\n\n def _do_lock(self):\n try:\n self._lock = zc.lockfile.LockFile(self.lockfile)\n except zc.lockfile.LockError:\n raise LockError(\n \"cannot perform the cmd since DVC is busy and \"\n \"locked. Please retry the cmd later.\"\n )\n\n def lock(self):\n try:\n self._do_lock()\n return\n except LockError:\n time.sleep(DEFAULT_TIMEOUT)\n\n self._do_lock()\n\n def unlock(self):\n self._lock.close()\n self._lock = None\n\n def __enter__(self):\n self.lock()\n\n def __exit__(self, typ, value, tbck):\n self.unlock()\n", "path": "dvc/lock.py"}, {"content": "\"\"\"Main entry point for dvc CLI.\"\"\"\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom dvc.analytics import Analytics\nfrom dvc.cli import parse_args\nfrom dvc.config import ConfigError\nfrom dvc.exceptions import DvcParserError\nfrom dvc.exceptions import NotDvcRepoError\nfrom dvc.external_repo import clean_repos\nfrom dvc.lock import LockError\nfrom dvc.logger import FOOTER\nfrom dvc.remote.pool import close_pools\nfrom dvc.utils.compat import is_py2\n\n\n# Workaround for CPython bug. See [1] and [2] for more info.\n# [1] https://github.com/aws/aws-cli/blob/1.16.277/awscli/clidriver.py#L55\n# [2] https://bugs.python.org/issue29288\n\"\".encode(\"idna\")\n\n\nlogger = logging.getLogger(\"dvc\")\n\n\ndef main(argv=None):\n \"\"\"Run dvc CLI command.\n\n Args:\n argv: optional list of arguments to parse. 
sys.argv is used by default.\n\n Returns:\n int: command's return code.\n \"\"\"\n args = None\n cmd = None\n\n outerLogLevel = logger.level\n try:\n args = parse_args(argv)\n\n if args.quiet:\n logger.setLevel(logging.CRITICAL)\n\n elif args.verbose:\n logger.setLevel(logging.DEBUG)\n\n cmd = args.func(args)\n ret = cmd.run()\n except LockError:\n logger.exception(\"failed to lock before running a command\")\n ret = 250\n except ConfigError:\n logger.exception(\"configuration error\")\n ret = 251\n except KeyboardInterrupt:\n logger.exception(\"interrupted by the user\")\n ret = 252\n except NotDvcRepoError:\n logger.exception(\"\")\n ret = 253\n except DvcParserError:\n ret = 254\n except Exception as exc: # pylint: disable=broad-except\n if isinstance(exc, UnicodeError) and is_py2:\n logger.exception(\n \"unicode is not supported in DVC for Python 2 \"\n \"(end-of-life January 1, 2020), please upgrade to Python 3\"\n )\n else:\n logger.exception(\"unexpected error\")\n ret = 255\n finally:\n logger.setLevel(outerLogLevel)\n\n # Python 2 fails to close these clean occasionally and users see\n # weird error messages, so we do it manually\n close_pools()\n\n # Remove cached repos in the end of the call, these are anonymous\n # so won't be reused by any other subsequent run anyway.\n clean_repos()\n\n if ret != 0:\n logger.info(FOOTER)\n\n Analytics().send_cmd(cmd, args, ret)\n\n return ret\n", "path": "dvc/main.py"}]} | 2,705 | 516 |
gh_patches_debug_38054 | rasdani/github-patches | git_diff | translate__pootle-4613 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TP creation email should use BCC
Let's BCC by default when Pootle is communicating to lists of people. When you create a TP all relevant people are listed in the To field.
</issue>
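
For reference, Django's `EmailMultiAlternatives` (the class used by the `send_mail` wrapper in `pootle/core/mail.py` below) already accepts `cc`/`bcc` lists, so switching to BCC is mostly a matter of plumbing the argument through. A minimal sketch of the intended call, assuming a configured Django project and a made-up recipient list:
```
from django.core.mail import EmailMultiAlternatives

recipients = ["admin-a@example.com", "admin-b@example.com"]  # hypothetical list

mail = EmailMultiAlternatives(
    subject="Translation project created",
    body="...",
    from_email=None,   # falls back to DEFAULT_FROM_EMAIL
    to=[],             # keep the visible To header empty
    bcc=recipients,    # recipients cannot see each other's addresses
)
mail.send(fail_silently=True)
```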
<code>
[start of pootle/core/mail.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.core.mail import EmailMultiAlternatives, get_connection
10
11
12 def send_mail(subject, message, from_email, recipient_list,
13 fail_silently=False, auth_user=None, auth_password=None,
14 connection=None, html_message=None, headers=None):
15 """Override django send_mail function to allow use of custom email headers.
16 """
17
18 connection = connection or get_connection(username=auth_user,
19 password=auth_password,
20 fail_silently=fail_silently)
21
22 mail = EmailMultiAlternatives(subject, message,
23 from_email, recipient_list,
24 connection=connection, headers=headers)
25
26 if html_message:
27 mail.attach_alternative(html_message, 'text/html')
28
29 return mail.send()
30
[end of pootle/core/mail.py]
[start of pootle/apps/pootle_translationproject/receivers.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 from django.contrib.auth import get_user_model
11 from django.core.mail import send_mail
12 from django.db.models import Q
13 from django.dispatch import receiver
14 from django.template.loader import render_to_string
15 from django.utils.translation import ugettext_lazy as _
16
17 from pootle.core.url_helpers import urljoin
18
19 from .models import TranslationProject
20 from .signals import tp_init_failed_async, tp_inited_async
21
22
23 def get_recipients(project):
24 User = get_user_model()
25 return list(set(User.objects.filter(
26 Q(permissionset__positive_permissions__codename="administrate",
27 permissionset__directory__pootle_path=project.pootle_path) |
28 Q(is_superuser=True)).values_list("email", flat=True)))
29
30
31 @receiver(tp_inited_async, sender=TranslationProject)
32 def tp_inited_async(instance, response_url, **kwargs):
33 ctx = {"tp": instance,
34 "url": urljoin(response_url, instance.get_absolute_url())}
35 message = render_to_string(
36 'projects/admin/email/translation_project_created.txt', ctx)
37 subject = _(u"Translation project (%s) created" % instance)
38 recipients = get_recipients(instance.project)
39 send_mail(subject, message, from_email=None,
40 recipient_list=recipients, fail_silently=True)
41
42
43 @receiver(tp_init_failed_async, sender=TranslationProject)
44 def tp_init_failed_async(instance, **kwargs):
45 ctx = {"tp": instance}
46 message = render_to_string(
47 'projects/admin/email/translation_project_creation_failed.txt', ctx)
48 subject = _(u"Translation project (%s) creation failed" % instance)
49 recipients = get_recipients(instance.project)
50 send_mail(subject, message, from_email=None,
51 recipient_list=recipients, fail_silently=True)
52
[end of pootle/apps/pootle_translationproject/receivers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/apps/pootle_translationproject/receivers.py b/pootle/apps/pootle_translationproject/receivers.py
--- a/pootle/apps/pootle_translationproject/receivers.py
+++ b/pootle/apps/pootle_translationproject/receivers.py
@@ -8,12 +8,12 @@
# AUTHORS file for copyright and authorship information.
from django.contrib.auth import get_user_model
-from django.core.mail import send_mail
from django.db.models import Q
from django.dispatch import receiver
from django.template.loader import render_to_string
from django.utils.translation import ugettext_lazy as _
+from pootle.core.mail import send_mail
from pootle.core.url_helpers import urljoin
from .models import TranslationProject
@@ -37,7 +37,7 @@
subject = _(u"Translation project (%s) created" % instance)
recipients = get_recipients(instance.project)
send_mail(subject, message, from_email=None,
- recipient_list=recipients, fail_silently=True)
+ recipient_list=[], fail_silently=True, bcc=recipients)
@receiver(tp_init_failed_async, sender=TranslationProject)
@@ -48,4 +48,4 @@
subject = _(u"Translation project (%s) creation failed" % instance)
recipients = get_recipients(instance.project)
send_mail(subject, message, from_email=None,
- recipient_list=recipients, fail_silently=True)
+ recipient_list=[], fail_silently=True, bcc=recipients)
diff --git a/pootle/core/mail.py b/pootle/core/mail.py
--- a/pootle/core/mail.py
+++ b/pootle/core/mail.py
@@ -11,7 +11,8 @@
def send_mail(subject, message, from_email, recipient_list,
fail_silently=False, auth_user=None, auth_password=None,
- connection=None, html_message=None, headers=None):
+ connection=None, html_message=None, headers=None,
+ cc=None, bcc=None):
"""Override django send_mail function to allow use of custom email headers.
"""
@@ -21,7 +22,8 @@
mail = EmailMultiAlternatives(subject, message,
from_email, recipient_list,
- connection=connection, headers=headers)
+ connection=connection, headers=headers,
+ cc=cc, bcc=bcc)
if html_message:
mail.attach_alternative(html_message, 'text/html')
| {"golden_diff": "diff --git a/pootle/apps/pootle_translationproject/receivers.py b/pootle/apps/pootle_translationproject/receivers.py\n--- a/pootle/apps/pootle_translationproject/receivers.py\n+++ b/pootle/apps/pootle_translationproject/receivers.py\n@@ -8,12 +8,12 @@\n # AUTHORS file for copyright and authorship information.\n \n from django.contrib.auth import get_user_model\n-from django.core.mail import send_mail\n from django.db.models import Q\n from django.dispatch import receiver\n from django.template.loader import render_to_string\n from django.utils.translation import ugettext_lazy as _\n \n+from pootle.core.mail import send_mail\n from pootle.core.url_helpers import urljoin\n \n from .models import TranslationProject\n@@ -37,7 +37,7 @@\n subject = _(u\"Translation project (%s) created\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n- recipient_list=recipients, fail_silently=True)\n+ recipient_list=[], fail_silently=True, bcc=recipients)\n \n \n @receiver(tp_init_failed_async, sender=TranslationProject)\n@@ -48,4 +48,4 @@\n subject = _(u\"Translation project (%s) creation failed\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n- recipient_list=recipients, fail_silently=True)\n+ recipient_list=[], fail_silently=True, bcc=recipients)\ndiff --git a/pootle/core/mail.py b/pootle/core/mail.py\n--- a/pootle/core/mail.py\n+++ b/pootle/core/mail.py\n@@ -11,7 +11,8 @@\n \n def send_mail(subject, message, from_email, recipient_list,\n fail_silently=False, auth_user=None, auth_password=None,\n- connection=None, html_message=None, headers=None):\n+ connection=None, html_message=None, headers=None,\n+ cc=None, bcc=None):\n \"\"\"Override django send_mail function to allow use of custom email headers.\n \"\"\"\n \n@@ -21,7 +22,8 @@\n \n mail = EmailMultiAlternatives(subject, message,\n from_email, recipient_list,\n- connection=connection, headers=headers)\n+ connection=connection, headers=headers,\n+ cc=cc, bcc=bcc)\n \n if html_message:\n mail.attach_alternative(html_message, 'text/html')\n", "issue": "TP creation email should use BCC\nLet's BCC by default when Pootle is communicating to lists of people. When you create a TP all relevant people are listed in the To field.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.core.mail import EmailMultiAlternatives, get_connection\n\n\ndef send_mail(subject, message, from_email, recipient_list,\n fail_silently=False, auth_user=None, auth_password=None,\n connection=None, html_message=None, headers=None):\n \"\"\"Override django send_mail function to allow use of custom email headers.\n \"\"\"\n\n connection = connection or get_connection(username=auth_user,\n password=auth_password,\n fail_silently=fail_silently)\n\n mail = EmailMultiAlternatives(subject, message,\n from_email, recipient_list,\n connection=connection, headers=headers)\n\n if html_message:\n mail.attach_alternative(html_message, 'text/html')\n\n return mail.send()\n", "path": "pootle/core/mail.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. 
It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.contrib.auth import get_user_model\nfrom django.core.mail import send_mail\nfrom django.db.models import Q\nfrom django.dispatch import receiver\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pootle.core.url_helpers import urljoin\n\nfrom .models import TranslationProject\nfrom .signals import tp_init_failed_async, tp_inited_async\n\n\ndef get_recipients(project):\n User = get_user_model()\n return list(set(User.objects.filter(\n Q(permissionset__positive_permissions__codename=\"administrate\",\n permissionset__directory__pootle_path=project.pootle_path) |\n Q(is_superuser=True)).values_list(\"email\", flat=True)))\n\n\n@receiver(tp_inited_async, sender=TranslationProject)\ndef tp_inited_async(instance, response_url, **kwargs):\n ctx = {\"tp\": instance,\n \"url\": urljoin(response_url, instance.get_absolute_url())}\n message = render_to_string(\n 'projects/admin/email/translation_project_created.txt', ctx)\n subject = _(u\"Translation project (%s) created\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n recipient_list=recipients, fail_silently=True)\n\n\n@receiver(tp_init_failed_async, sender=TranslationProject)\ndef tp_init_failed_async(instance, **kwargs):\n ctx = {\"tp\": instance}\n message = render_to_string(\n 'projects/admin/email/translation_project_creation_failed.txt', ctx)\n subject = _(u\"Translation project (%s) creation failed\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n recipient_list=recipients, fail_silently=True)\n", "path": "pootle/apps/pootle_translationproject/receivers.py"}]} | 1,430 | 544 |
gh_patches_debug_8480 | rasdani/github-patches | git_diff | elastic__apm-agent-python-580 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DB interactions not traced when using context manager with psycopg2 connections or cursors
When using a context manager with psycopg2 connections or cursors, db interactions are not captured in spans.
The code below generates a span for `psycopg2.connect`, but not the query:
```
with psycopg2.connect(DSN) as conn:
with conn.cursor() as curs:
curs.execute("SELECT * FROM data.portfolio;")
portfolios = curs.fetchall()
```
whereas the following captures both spans as expected:
```
conn = psycopg2.connect(DSN)
curs = conn.cursor()
curs.execute("SELECT * FROM data.portfolio;")
portfolios = curs.fetchall()
curs.close()
conn.close()
```
</issue>
<code>
[start of elasticapm/instrumentation/packages/psycopg2.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm.instrumentation.packages.dbapi2 import (
32 ConnectionProxy,
33 CursorProxy,
34 DbApi2Instrumentation,
35 extract_signature,
36 )
37 from elasticapm.traces import capture_span
38 from elasticapm.utils import default_ports
39
40
41 class PGCursorProxy(CursorProxy):
42 provider_name = "postgresql"
43
44 def _bake_sql(self, sql):
45 # if this is a Composable object, use its `as_string` method
46 # see http://initd.org/psycopg/docs/sql.html
47 if hasattr(sql, "as_string"):
48 return sql.as_string(self.__wrapped__)
49 return sql
50
51 def extract_signature(self, sql):
52 return extract_signature(sql)
53
54
55 class PGConnectionProxy(ConnectionProxy):
56 cursor_proxy = PGCursorProxy
57
58
59 class Psycopg2Instrumentation(DbApi2Instrumentation):
60 name = "psycopg2"
61
62 instrument_list = [("psycopg2", "connect")]
63
64 def call(self, module, method, wrapped, instance, args, kwargs):
65 signature = "psycopg2.connect"
66
67 host = kwargs.get("host")
68 if host:
69 signature += " " + str(host)
70
71 port = kwargs.get("port")
72 if port:
73 port = str(port)
74 if int(port) != default_ports.get("postgresql"):
75 signature += ":" + port
76 else:
77 # Parse connection string and extract host/port
78 pass
79
80 with capture_span(signature, span_type="db", span_subtype="postgresql", span_action="connect"):
81 return PGConnectionProxy(wrapped(*args, **kwargs))
82
83
84 class Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):
85 name = "psycopg2-register-type"
86
87 instrument_list = [
88 ("psycopg2.extensions", "register_type"),
89 # specifically instrument `register_json` as it bypasses `register_type`
90 ("psycopg2._json", "register_json"),
91 ]
92
93 def call(self, module, method, wrapped, instance, args, kwargs):
94 if "conn_or_curs" in kwargs and hasattr(kwargs["conn_or_curs"], "__wrapped__"):
95 kwargs["conn_or_curs"] = kwargs["conn_or_curs"].__wrapped__
96 # register_type takes the connection as second argument
97 elif len(args) == 2 and hasattr(args[1], "__wrapped__"):
98 args = (args[0], args[1].__wrapped__)
99 # register_json takes the connection as first argument, and can have
100 # several more arguments
101 elif method == "register_json":
102 if args and hasattr(args[0], "__wrapped__"):
103 args = (args[0].__wrapped__,) + args[1:]
104
105 return wrapped(*args, **kwargs)
106
[end of elasticapm/instrumentation/packages/psycopg2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/psycopg2.py b/elasticapm/instrumentation/packages/psycopg2.py
--- a/elasticapm/instrumentation/packages/psycopg2.py
+++ b/elasticapm/instrumentation/packages/psycopg2.py
@@ -51,10 +51,16 @@
def extract_signature(self, sql):
return extract_signature(sql)
+ def __enter__(self):
+ return PGCursorProxy(self.__wrapped__.__enter__())
+
class PGConnectionProxy(ConnectionProxy):
cursor_proxy = PGCursorProxy
+ def __enter__(self):
+ return PGConnectionProxy(self.__wrapped__.__enter__())
+
class Psycopg2Instrumentation(DbApi2Instrumentation):
name = "psycopg2"
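The proxies delegate to the wrapped psycopg2 objects, and psycopg2's own `__enter__` returns the raw connection/cursor, so under a `with` block the instrumented wrapper silently drops out of the call chain; the patch closes that gap by re-wrapping whatever `__enter__` returns. A minimal, simplified sketch of the failure mode (illustrative names, not the library's real classes):

```python
class Proxy:
    """Hypothetical stand-in for the instrumentation proxies."""

    def __init__(self, wrapped):
        self.__wrapped__ = wrapped

    def __enter__(self):
        # pre-patch behaviour: the *unwrapped* object escapes the proxy,
        # so cursors and queries used inside the with-block are not traced
        return self.__wrapped__.__enter__()

    def __exit__(self, *exc_info):
        return self.__wrapped__.__exit__(*exc_info)
```

With the patched `PGConnectionProxy.__enter__` and `PGCursorProxy.__enter__` returning freshly wrapped objects, the context-manager example from the issue should emit the same query spans as the explicit `connect()`/`cursor()` form.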
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/psycopg2.py b/elasticapm/instrumentation/packages/psycopg2.py\n--- a/elasticapm/instrumentation/packages/psycopg2.py\n+++ b/elasticapm/instrumentation/packages/psycopg2.py\n@@ -51,10 +51,16 @@\n def extract_signature(self, sql):\n return extract_signature(sql)\n \n+ def __enter__(self):\n+ return PGCursorProxy(self.__wrapped__.__enter__())\n+\n \n class PGConnectionProxy(ConnectionProxy):\n cursor_proxy = PGCursorProxy\n \n+ def __enter__(self):\n+ return PGConnectionProxy(self.__wrapped__.__enter__())\n+\n \n class Psycopg2Instrumentation(DbApi2Instrumentation):\n name = \"psycopg2\"\n", "issue": "DB interactions not traced when using context manager with psycopg2 connections or cursors\nWhen using a context manager with psycopg2 connections or cursors, db interactions are not captured in spans.\r\n\r\nThe code below generates a span for `psycopg2.connect`, but not the query:\r\n```\r\nwith psycopg2.connect(DSN) as conn:\r\n with conn.cursor() as curs:\r\n curs.execute(\"SELECT * FROM data.portfolio;\")\r\n portfolios = curs.fetchall()\r\n```\r\n\r\nwhereas the following captures both spans as expected:\r\n```\r\nconn = psycopg2.connect(DSN)\r\ncurs = conn.cursor()\r\ncurs.execute(\"SELECT * FROM data.portfolio;\")\r\nportfolios = curs.fetchall()\r\ncurs.close()\r\nconn.close()\r\n```\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm.instrumentation.packages.dbapi2 import (\n ConnectionProxy,\n CursorProxy,\n DbApi2Instrumentation,\n extract_signature,\n)\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils import default_ports\n\n\nclass PGCursorProxy(CursorProxy):\n provider_name = \"postgresql\"\n\n def _bake_sql(self, sql):\n # if this is a Composable object, use its `as_string` method\n # see http://initd.org/psycopg/docs/sql.html\n if hasattr(sql, \"as_string\"):\n return sql.as_string(self.__wrapped__)\n return sql\n\n def extract_signature(self, sql):\n return extract_signature(sql)\n\n\nclass PGConnectionProxy(ConnectionProxy):\n cursor_proxy = PGCursorProxy\n\n\nclass Psycopg2Instrumentation(DbApi2Instrumentation):\n name = \"psycopg2\"\n\n instrument_list = [(\"psycopg2\", \"connect\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n signature = \"psycopg2.connect\"\n\n host = kwargs.get(\"host\")\n if host:\n signature += \" \" + str(host)\n\n port = kwargs.get(\"port\")\n if port:\n port = str(port)\n if int(port) != default_ports.get(\"postgresql\"):\n signature += \":\" + port\n else:\n # Parse connection string and extract host/port\n pass\n\n with capture_span(signature, span_type=\"db\", span_subtype=\"postgresql\", span_action=\"connect\"):\n return PGConnectionProxy(wrapped(*args, **kwargs))\n\n\nclass Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):\n name = \"psycopg2-register-type\"\n\n instrument_list = [\n (\"psycopg2.extensions\", \"register_type\"),\n # specifically instrument `register_json` as it bypasses `register_type`\n (\"psycopg2._json\", \"register_json\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"conn_or_curs\" in kwargs and hasattr(kwargs[\"conn_or_curs\"], \"__wrapped__\"):\n kwargs[\"conn_or_curs\"] = kwargs[\"conn_or_curs\"].__wrapped__\n # register_type takes the connection as second argument\n elif len(args) == 2 and hasattr(args[1], \"__wrapped__\"):\n args = (args[0], args[1].__wrapped__)\n # register_json takes the connection as first argument, and can have\n # several more arguments\n elif method == \"register_json\":\n if args and hasattr(args[0], \"__wrapped__\"):\n args = (args[0].__wrapped__,) + args[1:]\n\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/psycopg2.py"}]} | 1,832 | 174 |
gh_patches_debug_7901 | rasdani/github-patches | git_diff | searxng__searxng-687 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Disabling all engines in a category makes a bang search for that category fall back to the general category
**How To Reproduce**
1. Disable all engines in one category in the user preferences (e.g. in the files category).
2. Search in the category using the bang syntax (e.g. `!files test`).
**Expected behavior**
No results (and maybe a message that all engines in this category are disabled).
**Observed behavior**
The search is performed in the general category.
</issue>
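The relevant decision point is in `get_search_query_from_webapp` (shown in the listing below): a bang query marks `raw_text_query.specific`, but its `enginerefs` list is built against `disabled_engines`, so with every engine of the category disabled the list is presumably empty and the pre-fix condition falls through to the generic path. Condensed from the listing, with the inferred behaviour as comments:

```python
# condensed from get_search_query_from_webapp() below (pre-fix logic)
if not is_locked('categories') and raw_text_query.enginerefs and raw_text_query.specific:
    query_engineref_list = raw_text_query.enginerefs          # bang honoured
else:
    # with an empty enginerefs list this branch runs instead, and
    # parse_generic() re-resolves the search from the preference
    # categories, which is why '!files test' ends up in 'general'
    query_engineref_list = parse_generic(preferences, form, disabled_engines)
```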
<code>
[start of searx/webadapter.py]
1 from collections import defaultdict
2 from typing import Dict, List, Optional, Tuple
3 from searx.exceptions import SearxParameterException
4 from searx.webutils import VALID_LANGUAGE_CODE
5 from searx.query import RawTextQuery
6 from searx.engines import categories, engines
7 from searx.search import SearchQuery, EngineRef
8 from searx.preferences import Preferences, is_locked
9
10
11 # remove duplicate queries.
12 # FIXME: does not fix "!music !soundcloud", because the categories are 'none' and 'music'
13 def deduplicate_engineref_list(engineref_list: List[EngineRef]) -> List[EngineRef]:
14 engineref_dict = {q.category + '|' + q.name: q for q in engineref_list}
15 return list(engineref_dict.values())
16
17
18 def validate_engineref_list(
19 engineref_list: List[EngineRef], preferences: Preferences
20 ) -> Tuple[List[EngineRef], List[EngineRef], List[EngineRef]]:
21 """Validate query_engines according to the preferences
22
23 Returns:
24 List[EngineRef]: list of existing engines with a validated token
25 List[EngineRef]: list of unknown engine
26 List[EngineRef]: list of engine with invalid token according to the preferences
27 """
28 valid = []
29 unknown = []
30 no_token = []
31 for engineref in engineref_list:
32 if engineref.name not in engines:
33 unknown.append(engineref)
34 continue
35
36 engine = engines[engineref.name]
37 if not preferences.validate_token(engine):
38 no_token.append(engineref)
39 continue
40
41 valid.append(engineref)
42 return valid, unknown, no_token
43
44
45 def parse_pageno(form: Dict[str, str]) -> int:
46 pageno_param = form.get('pageno', '1')
47 if not pageno_param.isdigit() or int(pageno_param) < 1:
48 raise SearxParameterException('pageno', pageno_param)
49 return int(pageno_param)
50
51
52 def parse_lang(preferences: Preferences, form: Dict[str, str], raw_text_query: RawTextQuery) -> str:
53 if is_locked('language'):
54 return preferences.get_value('language')
55 # get language
56 # set specific language if set on request, query or preferences
57 # TODO support search with multible languages
58 if len(raw_text_query.languages):
59 query_lang = raw_text_query.languages[-1]
60 elif 'language' in form:
61 query_lang = form.get('language')
62 else:
63 query_lang = preferences.get_value('language')
64
65 # check language
66 if not VALID_LANGUAGE_CODE.match(query_lang):
67 raise SearxParameterException('language', query_lang)
68
69 return query_lang
70
71
72 def parse_safesearch(preferences: Preferences, form: Dict[str, str]) -> int:
73 if is_locked('safesearch'):
74 return preferences.get_value('safesearch')
75
76 if 'safesearch' in form:
77 query_safesearch = form.get('safesearch')
78 # first check safesearch
79 if not query_safesearch.isdigit():
80 raise SearxParameterException('safesearch', query_safesearch)
81 query_safesearch = int(query_safesearch)
82 else:
83 query_safesearch = preferences.get_value('safesearch')
84
85 # safesearch : second check
86 if query_safesearch < 0 or query_safesearch > 2:
87 raise SearxParameterException('safesearch', query_safesearch)
88
89 return query_safesearch
90
91
92 def parse_time_range(form: Dict[str, str]) -> Optional[str]:
93 query_time_range = form.get('time_range')
94 # check time_range
95 query_time_range = None if query_time_range in ('', 'None') else query_time_range
96 if query_time_range not in (None, 'day', 'week', 'month', 'year'):
97 raise SearxParameterException('time_range', query_time_range)
98 return query_time_range
99
100
101 def parse_timeout(form: Dict[str, str], raw_text_query: RawTextQuery) -> Optional[float]:
102 timeout_limit = raw_text_query.timeout_limit
103 if timeout_limit is None:
104 timeout_limit = form.get('timeout_limit')
105
106 if timeout_limit is None or timeout_limit in ['None', '']:
107 return None
108 try:
109 return float(timeout_limit)
110 except ValueError as e:
111 raise SearxParameterException('timeout_limit', timeout_limit) from e
112
113
114 def parse_category_form(query_categories: List[str], name: str, value: str) -> None:
115 if name == 'categories':
116 query_categories.extend(categ for categ in map(str.strip, value.split(',')) if categ in categories)
117 elif name.startswith('category_'):
118 category = name[9:]
119
120 # if category is not found in list, skip
121 if category not in categories:
122 return
123
124 if value != 'off':
125 # add category to list
126 query_categories.append(category)
127 elif category in query_categories:
128 # remove category from list if property is set to 'off'
129 query_categories.remove(category)
130
131
132 def get_selected_categories(preferences: Preferences, form: Optional[Dict[str, str]]) -> List[str]:
133 selected_categories = []
134
135 if not is_locked('categories') and form is not None:
136 for name, value in form.items():
137 parse_category_form(selected_categories, name, value)
138
139 # if no category is specified for this search,
140 # using user-defined default-configuration which
141 # (is stored in cookie)
142 if not selected_categories:
143 cookie_categories = preferences.get_value('categories')
144 for ccateg in cookie_categories:
145 selected_categories.append(ccateg)
146
147 # if still no category is specified, using general
148 # as default-category
149 if not selected_categories:
150 selected_categories = ['general']
151
152 return selected_categories
153
154
155 def get_engineref_from_category_list(category_list: List[str], disabled_engines: List[str]) -> List[EngineRef]:
156 result = []
157 for categ in category_list:
158 result.extend(
159 EngineRef(engine.name, categ)
160 for engine in categories[categ]
161 if (engine.name, categ) not in disabled_engines
162 )
163 return result
164
165
166 def parse_generic(preferences: Preferences, form: Dict[str, str], disabled_engines: List[str]) -> List[EngineRef]:
167 query_engineref_list = []
168 query_categories = []
169
170 # set categories/engines
171 explicit_engine_list = False
172 if not is_locked('categories'):
173 # parse the form only if the categories are not locked
174 for pd_name, pd in form.items():
175 if pd_name == 'engines':
176 pd_engines = [
177 EngineRef(engine_name, engines[engine_name].categories[0])
178 for engine_name in map(str.strip, pd.split(','))
179 if engine_name in engines
180 ]
181 if pd_engines:
182 query_engineref_list.extend(pd_engines)
183 explicit_engine_list = True
184 else:
185 parse_category_form(query_categories, pd_name, pd)
186
187 if explicit_engine_list:
188 # explicit list of engines with the "engines" parameter in the form
189 if query_categories:
190 # add engines from referenced by the "categories" parameter and the "category_*"" parameters
191 query_engineref_list.extend(get_engineref_from_category_list(query_categories, disabled_engines))
192 else:
193 # no "engines" parameters in the form
194 if not query_categories:
195 # and neither "categories" parameter nor "category_*"" parameters in the form
196 # -> get the categories from the preferences (the cookies or the settings)
197 query_categories = get_selected_categories(preferences, None)
198
199 # using all engines for that search, which are
200 # declared under the specific categories
201 query_engineref_list.extend(get_engineref_from_category_list(query_categories, disabled_engines))
202
203 return query_engineref_list
204
205
206 def parse_engine_data(form):
207 engine_data = defaultdict(dict)
208 for k, v in form.items():
209 if k.startswith("engine_data"):
210 _, engine, key = k.split('-')
211 engine_data[engine][key] = v
212 return engine_data
213
214
215 def get_search_query_from_webapp(
216 preferences: Preferences, form: Dict[str, str]
217 ) -> Tuple[SearchQuery, RawTextQuery, List[EngineRef], List[EngineRef]]:
218 # no text for the query ?
219 if not form.get('q'):
220 raise SearxParameterException('q', '')
221
222 # set blocked engines
223 disabled_engines = preferences.engines.get_disabled()
224
225 # parse query, if tags are set, which change
226 # the serch engine or search-language
227 raw_text_query = RawTextQuery(form['q'], disabled_engines)
228
229 # set query
230 query = raw_text_query.getQuery()
231 query_pageno = parse_pageno(form)
232 query_lang = parse_lang(preferences, form, raw_text_query)
233 query_safesearch = parse_safesearch(preferences, form)
234 query_time_range = parse_time_range(form)
235 query_timeout = parse_timeout(form, raw_text_query)
236 external_bang = raw_text_query.external_bang
237 engine_data = parse_engine_data(form)
238
239 if not is_locked('categories') and raw_text_query.enginerefs and raw_text_query.specific:
240 # if engines are calculated from query,
241 # set categories by using that informations
242 query_engineref_list = raw_text_query.enginerefs
243 else:
244 # otherwise, using defined categories to
245 # calculate which engines should be used
246 query_engineref_list = parse_generic(preferences, form, disabled_engines)
247
248 query_engineref_list = deduplicate_engineref_list(query_engineref_list)
249 query_engineref_list, query_engineref_list_unknown, query_engineref_list_notoken = validate_engineref_list(
250 query_engineref_list, preferences
251 )
252
253 return (
254 SearchQuery(
255 query,
256 query_engineref_list,
257 query_lang,
258 query_safesearch,
259 query_pageno,
260 query_time_range,
261 query_timeout,
262 external_bang=external_bang,
263 engine_data=engine_data,
264 ),
265 raw_text_query,
266 query_engineref_list_unknown,
267 query_engineref_list_notoken,
268 )
269
[end of searx/webadapter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/webadapter.py b/searx/webadapter.py
--- a/searx/webadapter.py
+++ b/searx/webadapter.py
@@ -236,7 +236,7 @@
external_bang = raw_text_query.external_bang
engine_data = parse_engine_data(form)
- if not is_locked('categories') and raw_text_query.enginerefs and raw_text_query.specific:
+ if not is_locked('categories') and raw_text_query.specific:
# if engines are calculated from query,
# set categories by using that informations
query_engineref_list = raw_text_query.enginerefs
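The one-line change drops the `raw_text_query.enginerefs` check, so a bang query keeps its (possibly empty) engine list instead of being rerouted. A sketch of the post-patch outcome for the reported scenario, under the assumption that an empty engine list simply yields no results:

```python
# '!files test' with every files engine disabled, after the patch:
query_engineref_list = raw_text_query.enginerefs   # [] since nothing is enabled
# the SearchQuery is still built for the files bang, it just queries no
# engines, so the user gets an empty result page rather than a 'general'
# search; the "all engines disabled" notice suggested in the issue would
# be a separate UI addition
```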
| {"golden_diff": "diff --git a/searx/webadapter.py b/searx/webadapter.py\n--- a/searx/webadapter.py\n+++ b/searx/webadapter.py\n@@ -236,7 +236,7 @@\n external_bang = raw_text_query.external_bang\n engine_data = parse_engine_data(form)\n \n- if not is_locked('categories') and raw_text_query.enginerefs and raw_text_query.specific:\n+ if not is_locked('categories') and raw_text_query.specific:\n # if engines are calculated from query,\n # set categories by using that informations\n query_engineref_list = raw_text_query.enginerefs\n", "issue": "Disabling all engines in a category makes the bang search in general\n**How To Reproduce**\r\n1. Disable all engines in one category in the user preferences (e.g. in the files category).\r\n2. Search in the category using the bang syntax (e.g. `!files test`).\r\n\r\n**Expected behavior**\r\nNo results (and maybe a message that all engines in this category are disabled).\r\n\r\n**Observed behavior**\r\nThe search is performed in the general category.\n", "before_files": [{"content": "from collections import defaultdict\nfrom typing import Dict, List, Optional, Tuple\nfrom searx.exceptions import SearxParameterException\nfrom searx.webutils import VALID_LANGUAGE_CODE\nfrom searx.query import RawTextQuery\nfrom searx.engines import categories, engines\nfrom searx.search import SearchQuery, EngineRef\nfrom searx.preferences import Preferences, is_locked\n\n\n# remove duplicate queries.\n# FIXME: does not fix \"!music !soundcloud\", because the categories are 'none' and 'music'\ndef deduplicate_engineref_list(engineref_list: List[EngineRef]) -> List[EngineRef]:\n engineref_dict = {q.category + '|' + q.name: q for q in engineref_list}\n return list(engineref_dict.values())\n\n\ndef validate_engineref_list(\n engineref_list: List[EngineRef], preferences: Preferences\n) -> Tuple[List[EngineRef], List[EngineRef], List[EngineRef]]:\n \"\"\"Validate query_engines according to the preferences\n\n Returns:\n List[EngineRef]: list of existing engines with a validated token\n List[EngineRef]: list of unknown engine\n List[EngineRef]: list of engine with invalid token according to the preferences\n \"\"\"\n valid = []\n unknown = []\n no_token = []\n for engineref in engineref_list:\n if engineref.name not in engines:\n unknown.append(engineref)\n continue\n\n engine = engines[engineref.name]\n if not preferences.validate_token(engine):\n no_token.append(engineref)\n continue\n\n valid.append(engineref)\n return valid, unknown, no_token\n\n\ndef parse_pageno(form: Dict[str, str]) -> int:\n pageno_param = form.get('pageno', '1')\n if not pageno_param.isdigit() or int(pageno_param) < 1:\n raise SearxParameterException('pageno', pageno_param)\n return int(pageno_param)\n\n\ndef parse_lang(preferences: Preferences, form: Dict[str, str], raw_text_query: RawTextQuery) -> str:\n if is_locked('language'):\n return preferences.get_value('language')\n # get language\n # set specific language if set on request, query or preferences\n # TODO support search with multible languages\n if len(raw_text_query.languages):\n query_lang = raw_text_query.languages[-1]\n elif 'language' in form:\n query_lang = form.get('language')\n else:\n query_lang = preferences.get_value('language')\n\n # check language\n if not VALID_LANGUAGE_CODE.match(query_lang):\n raise SearxParameterException('language', query_lang)\n\n return query_lang\n\n\ndef parse_safesearch(preferences: Preferences, form: Dict[str, str]) -> int:\n if is_locked('safesearch'):\n return preferences.get_value('safesearch')\n\n 
if 'safesearch' in form:\n query_safesearch = form.get('safesearch')\n # first check safesearch\n if not query_safesearch.isdigit():\n raise SearxParameterException('safesearch', query_safesearch)\n query_safesearch = int(query_safesearch)\n else:\n query_safesearch = preferences.get_value('safesearch')\n\n # safesearch : second check\n if query_safesearch < 0 or query_safesearch > 2:\n raise SearxParameterException('safesearch', query_safesearch)\n\n return query_safesearch\n\n\ndef parse_time_range(form: Dict[str, str]) -> Optional[str]:\n query_time_range = form.get('time_range')\n # check time_range\n query_time_range = None if query_time_range in ('', 'None') else query_time_range\n if query_time_range not in (None, 'day', 'week', 'month', 'year'):\n raise SearxParameterException('time_range', query_time_range)\n return query_time_range\n\n\ndef parse_timeout(form: Dict[str, str], raw_text_query: RawTextQuery) -> Optional[float]:\n timeout_limit = raw_text_query.timeout_limit\n if timeout_limit is None:\n timeout_limit = form.get('timeout_limit')\n\n if timeout_limit is None or timeout_limit in ['None', '']:\n return None\n try:\n return float(timeout_limit)\n except ValueError as e:\n raise SearxParameterException('timeout_limit', timeout_limit) from e\n\n\ndef parse_category_form(query_categories: List[str], name: str, value: str) -> None:\n if name == 'categories':\n query_categories.extend(categ for categ in map(str.strip, value.split(',')) if categ in categories)\n elif name.startswith('category_'):\n category = name[9:]\n\n # if category is not found in list, skip\n if category not in categories:\n return\n\n if value != 'off':\n # add category to list\n query_categories.append(category)\n elif category in query_categories:\n # remove category from list if property is set to 'off'\n query_categories.remove(category)\n\n\ndef get_selected_categories(preferences: Preferences, form: Optional[Dict[str, str]]) -> List[str]:\n selected_categories = []\n\n if not is_locked('categories') and form is not None:\n for name, value in form.items():\n parse_category_form(selected_categories, name, value)\n\n # if no category is specified for this search,\n # using user-defined default-configuration which\n # (is stored in cookie)\n if not selected_categories:\n cookie_categories = preferences.get_value('categories')\n for ccateg in cookie_categories:\n selected_categories.append(ccateg)\n\n # if still no category is specified, using general\n # as default-category\n if not selected_categories:\n selected_categories = ['general']\n\n return selected_categories\n\n\ndef get_engineref_from_category_list(category_list: List[str], disabled_engines: List[str]) -> List[EngineRef]:\n result = []\n for categ in category_list:\n result.extend(\n EngineRef(engine.name, categ)\n for engine in categories[categ]\n if (engine.name, categ) not in disabled_engines\n )\n return result\n\n\ndef parse_generic(preferences: Preferences, form: Dict[str, str], disabled_engines: List[str]) -> List[EngineRef]:\n query_engineref_list = []\n query_categories = []\n\n # set categories/engines\n explicit_engine_list = False\n if not is_locked('categories'):\n # parse the form only if the categories are not locked\n for pd_name, pd in form.items():\n if pd_name == 'engines':\n pd_engines = [\n EngineRef(engine_name, engines[engine_name].categories[0])\n for engine_name in map(str.strip, pd.split(','))\n if engine_name in engines\n ]\n if pd_engines:\n query_engineref_list.extend(pd_engines)\n explicit_engine_list = True\n 
else:\n parse_category_form(query_categories, pd_name, pd)\n\n if explicit_engine_list:\n # explicit list of engines with the \"engines\" parameter in the form\n if query_categories:\n # add engines from referenced by the \"categories\" parameter and the \"category_*\"\" parameters\n query_engineref_list.extend(get_engineref_from_category_list(query_categories, disabled_engines))\n else:\n # no \"engines\" parameters in the form\n if not query_categories:\n # and neither \"categories\" parameter nor \"category_*\"\" parameters in the form\n # -> get the categories from the preferences (the cookies or the settings)\n query_categories = get_selected_categories(preferences, None)\n\n # using all engines for that search, which are\n # declared under the specific categories\n query_engineref_list.extend(get_engineref_from_category_list(query_categories, disabled_engines))\n\n return query_engineref_list\n\n\ndef parse_engine_data(form):\n engine_data = defaultdict(dict)\n for k, v in form.items():\n if k.startswith(\"engine_data\"):\n _, engine, key = k.split('-')\n engine_data[engine][key] = v\n return engine_data\n\n\ndef get_search_query_from_webapp(\n preferences: Preferences, form: Dict[str, str]\n) -> Tuple[SearchQuery, RawTextQuery, List[EngineRef], List[EngineRef]]:\n # no text for the query ?\n if not form.get('q'):\n raise SearxParameterException('q', '')\n\n # set blocked engines\n disabled_engines = preferences.engines.get_disabled()\n\n # parse query, if tags are set, which change\n # the serch engine or search-language\n raw_text_query = RawTextQuery(form['q'], disabled_engines)\n\n # set query\n query = raw_text_query.getQuery()\n query_pageno = parse_pageno(form)\n query_lang = parse_lang(preferences, form, raw_text_query)\n query_safesearch = parse_safesearch(preferences, form)\n query_time_range = parse_time_range(form)\n query_timeout = parse_timeout(form, raw_text_query)\n external_bang = raw_text_query.external_bang\n engine_data = parse_engine_data(form)\n\n if not is_locked('categories') and raw_text_query.enginerefs and raw_text_query.specific:\n # if engines are calculated from query,\n # set categories by using that informations\n query_engineref_list = raw_text_query.enginerefs\n else:\n # otherwise, using defined categories to\n # calculate which engines should be used\n query_engineref_list = parse_generic(preferences, form, disabled_engines)\n\n query_engineref_list = deduplicate_engineref_list(query_engineref_list)\n query_engineref_list, query_engineref_list_unknown, query_engineref_list_notoken = validate_engineref_list(\n query_engineref_list, preferences\n )\n\n return (\n SearchQuery(\n query,\n query_engineref_list,\n query_lang,\n query_safesearch,\n query_pageno,\n query_time_range,\n query_timeout,\n external_bang=external_bang,\n engine_data=engine_data,\n ),\n raw_text_query,\n query_engineref_list_unknown,\n query_engineref_list_notoken,\n )\n", "path": "searx/webadapter.py"}]} | 3,615 | 145 |
gh_patches_debug_15463 | rasdani/github-patches | git_diff | interactions-py__interactions.py-1643 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Recorder Memory Usage
### Library Version
5.11.0
### Describe the Bug
Leaving and rejoining a voice channel with a bot recording causes enormous RAM usage and the bot recorder to stop functioning properly or the bot to crash completely (from out-of-memory). This is because the calculation for how many silence frames to insert is wrong (multiplies by sample rate twice).
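A rough back-of-the-envelope illustrates the scale, assuming the decoder's usual 48 kHz rate (the exact rate is an assumption here):

```python
sample_rate = 48_000            # assumed Opus decoder rate
gap = 5                         # seconds spent outside the channel
intended = gap * sample_rate                    # 240_000 silence samples
buggy = (gap * sample_rate) * sample_rate       # 11_520_000_000 samples
# each sample is a 2-byte short and the packing expression doubles the
# buffer again ("* 2"), so the buggy count asks for roughly 46 GB of
# zeroed PCM in a single allocation, matching the reported spikes
```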
### Steps to Reproduce
Use whatever you prefer and start monitoring your system memory usage. Join a voice channel and have a bot join it to record. Talk and then disconnect from the voice channel. Wait 3-5 seconds and rejoin the voice channel. Memory usage will spike into the 10-20 GB range or the bot will crash.
The exact outcome and how long to wait between rejoining the voice channel really depends on the amount of memory available on your system. If you wait too long and don't have enough memory, you likely will see "Error while recording: " in the logs because Python was smart enough to not let you try to allocate that much memory. I have 64 GB of RAM, of which 32 GB is available to the bot, and it will spike up to 25 GB consistently or just crash. When it crashes, you will see "Killed" in the terminal and the exit code will be 137, which is an out-of-memory exit code.
### Expected Results
The memory will not have a substantial spike and the bot will not crash.
### Minimal Reproducible Code
_No response_
### Traceback
_No response_
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
- [X] I have attempted to debug this myself, and I believe this issue is with the library
### Additional Information
_No response_
</issue>
<code>
[start of interactions/api/voice/recorder.py]
1 import asyncio
2 import io
3 import logging
4 import os
5 import shutil
6 import struct
7 import threading
8 import time
9 from asyncio import AbstractEventLoop
10 from collections import defaultdict
11 from typing import TYPE_CHECKING
12
13 import select
14
15 from interactions.api.voice.audio import RawInputAudio
16 from interactions.api.voice.audio_writer import AudioWriter
17 from interactions.api.voice.encryption import Decryption
18 from interactions.api.voice.opus import Decoder
19 from interactions.client.const import logger_name, Missing
20 from interactions.client.utils.input_utils import unpack_helper
21 from interactions.models.discord.snowflake import Snowflake_Type, to_snowflake_list
22
23 if TYPE_CHECKING:
24 from interactions.models.internal.active_voice_state import ActiveVoiceState
25
26 __all__ = ("Recorder",)
27
28 log = logging.getLogger(logger_name)
29
30
31 class Recorder(threading.Thread):
32 def __init__(self, v_state, loop, *, output_dir: str | None = None) -> None:
33 super().__init__()
34 self.daemon = True
35
36 self.state: "ActiveVoiceState" = v_state
37 self.loop: AbstractEventLoop = loop
38 self.decrypter: Decryption = Decryption(self.state.ws.secret)
39 self._decoders: dict[str, Decoder] = defaultdict(Decoder)
40
41 # check if output_dir is a folder not a file
42 if output_dir and not os.path.isdir(output_dir):
43 raise ValueError("output_dir must be a directory")
44
45 self.output_dir = output_dir
46 self.audio: AudioWriter | None = None
47 self.encoding = "mp3"
48 self.recording = False
49 self.used = False
50
51 self.start_time = 0
52 self.user_timestamps = {}
53 self.recording_whitelist: list[Snowflake_Type] = []
54
55 if not shutil.which("ffmpeg"):
56 raise RuntimeError(
57 "Unable to start recorder. FFmpeg was not found. Please add it to your project directory or PATH. (https://ffmpeg.org/)"
58 )
59
60 async def __aenter__(self) -> "Recorder":
61 return self
62
63 async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
64 await self.stop_recording()
65
66 async def start_recording(self, *user_id: Snowflake_Type, output_dir: str | Missing = Missing) -> None:
67 """
68 Start recording audio from the current channel.
69
70 Args:
71 *user_id: The user_id(s) to record, if not specified everyone will be recorded.
72 output_dir: The directory to save the audio to (overrides the constructor output_dir if specified)
73
74 """
75 if self.used:
76 raise RuntimeError("Cannot reuse a recorder.")
77 self.used = True
78
79 if user_id:
80 self.recording_whitelist = to_snowflake_list(unpack_helper(user_id))
81
82 if output_dir is not Missing:
83 self.output_dir = output_dir
84
85 self.recording = True
86 self.audio = AudioWriter(self, self.state.channel.id)
87 self.start()
88 self.start_time = time.monotonic()
89
90 async def stop_recording(self) -> None:
91 """Stop recording audio from the current channel."""
92 self.recording = False
93
94 def wait() -> None:
95 self.audio.cleanup()
96 self.audio.encode_audio(self.encoding)
97
98 await asyncio.to_thread(wait)
99
100 def decrypt(self, header: bytes, data: bytes) -> bytes:
101 """
102 An alias to call the decryption methods.
103
104 Args:
105 header: The payload header
106 data: The payload data
107 Returns:
108 The decrypted payload
109
110 """
111 # a shorter alias to call
112 return self.decrypter.decrypt(self.state.ws.selected_mode, header, data)
113
114 def get_decoder(self, ssrc) -> Decoder:
115 return self._decoders[ssrc]
116
117 def get_user(self, ssrc: str) -> Snowflake_Type:
118 """
119 Get the corresponding user from a ssrc.
120
121 Args:
122 ssrc: The source to retrieve the user from
123 Returns:
124 A snowflake representing the user
125
126 """
127 return self.state.ws.user_ssrc_map.get(ssrc)["user_id"]
128
129 def get_ssrc(self, user_id: Snowflake_Type) -> str:
130 """
131 Get the corresponding ssrc from a user.
132
133 Args:
134 user_id: The user to retrieve the ssrc from
135 Returns:
136 A string representing the ssrc
137
138 """
139 return next((ssrc for ssrc, user in self.state.ws.user_ssrc_map.items() if user["user_id"] == user_id), None)
140
141 def __enter__(self) -> "Recorder":
142 return self
143
144 @property
145 def output(self) -> dict[int, io.BytesIO | str]:
146 """
147 The output of the recorder.
148
149 Returns:
150 A dictionary of the user_id and the output file.
151 Output file can be a BytesIO or a string (if output_dir is specified)
152
153 """
154 return self.audio.files if self.audio.finished.is_set() else {}
155
156 @property
157 def elapsed_time(self) -> float:
158 return time.monotonic() - self.start_time
159
160 def filter(self, *user_id: Snowflake_Type) -> None:
161 """
162 Filter the users that are being recorded.
163
164 Args:
165 *user_id: The user_id(s) to record
166
167 """
168 if not user_id:
169 self.recording_whitelist = []
170 self.recording_whitelist = to_snowflake_list(unpack_helper(user_id))
171
172 def run(self) -> None:
173 """The recording loop itself."""
174 sock = self.state.ws.socket
175
176 # purge any data that is already in the socket
177 readable, _, _ = select.select([sock], [], [], 0)
178 log.debug("Purging socket buffer")
179 while readable and sock.recv(4096):
180 readable, _, _ = select.select([sock], [], [], 0)
181 log.debug("Socket buffer purged, starting recording")
182
183 with self.audio:
184 while self.recording:
185 ready, _, err = select.select([sock], [], [sock], 0.01)
186 if not ready:
187 if err:
188 log.error("Error while recording: %s", err)
189 continue
190
191 data = sock.recv(4096)
192
193 if 200 <= data[1] <= 204:
194 continue
195
196 try:
197 raw_audio = RawInputAudio(self, data)
198 self.process_data(raw_audio)
199 except Exception as ex:
200 log.error("Error while recording: %s", ex)
201
202 def process_data(self, raw_audio: RawInputAudio) -> None:
203 """
204 Processes incoming audio data and writes it to the corresponding buffer.
205
206 Args:
207 raw_audio: The raw audio that has been received
208
209 """
210 if raw_audio.user_id is None:
211 return # usually the first frame when a user rejoins
212
213 if self.recording_whitelist and raw_audio.user_id not in self.recording_whitelist:
214 return
215
216 decoder = self.get_decoder(raw_audio.ssrc)
217
218 if raw_audio.ssrc not in self.user_timestamps:
219 if last_timestamp := self.audio.last_timestamps.get(raw_audio.user_id, None):
220 diff = raw_audio.timestamp - last_timestamp
221 silence = int(diff * decoder.sample_rate)
222 log.debug(
223 f"{self.state.channel.id}::{raw_audio.user_id} - User rejoined, adding {silence} silence frames ({diff} seconds)"
224 )
225 else:
226 silence = 0
227
228 self.user_timestamps.update({raw_audio.ssrc: raw_audio.timestamp})
229 else:
230 silence = raw_audio.timestamp - self.user_timestamps[raw_audio.ssrc]
231 if silence < 0.1:
232 silence = 0
233 self.user_timestamps[raw_audio.ssrc] = raw_audio.timestamp
234
235 raw_audio.pcm = struct.pack("<h", 0) * int(silence * decoder.sample_rate) * 2 + raw_audio.decoded
236
237 self.audio.write(raw_audio, raw_audio.user_id)
238
[end of interactions/api/voice/recorder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/interactions/api/voice/recorder.py b/interactions/api/voice/recorder.py
--- a/interactions/api/voice/recorder.py
+++ b/interactions/api/voice/recorder.py
@@ -217,10 +217,10 @@
if raw_audio.ssrc not in self.user_timestamps:
if last_timestamp := self.audio.last_timestamps.get(raw_audio.user_id, None):
- diff = raw_audio.timestamp - last_timestamp
- silence = int(diff * decoder.sample_rate)
+ silence = raw_audio.timestamp - last_timestamp
+ frames = int(silence * decoder.sample_rate)
log.debug(
- f"{self.state.channel.id}::{raw_audio.user_id} - User rejoined, adding {silence} silence frames ({diff} seconds)"
+ f"{self.state.channel.id}::{raw_audio.user_id} - User rejoined, adding {frames} silence frames ({silence} seconds)"
)
else:
silence = 0
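The patch keeps `silence` in seconds inside the rejoin branch (the frame count is now computed only for the log message, as `frames`), so the conversion to samples happens exactly once, in the later packing step, the same way the non-rejoin branch already worked. Roughly, for a 5-second gap:

```python
silence = raw_audio.timestamp - last_timestamp           # ~5.0 seconds
frames = int(silence * decoder.sample_rate)               # ~240_000, log only
pcm = struct.pack("<h", 0) * int(silence * decoder.sample_rate) * 2
# about 1 MB of silence padding instead of tens of gigabytes
```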
| {"golden_diff": "diff --git a/interactions/api/voice/recorder.py b/interactions/api/voice/recorder.py\n--- a/interactions/api/voice/recorder.py\n+++ b/interactions/api/voice/recorder.py\n@@ -217,10 +217,10 @@\n \n if raw_audio.ssrc not in self.user_timestamps:\n if last_timestamp := self.audio.last_timestamps.get(raw_audio.user_id, None):\n- diff = raw_audio.timestamp - last_timestamp\n- silence = int(diff * decoder.sample_rate)\n+ silence = raw_audio.timestamp - last_timestamp\n+ frames = int(silence * decoder.sample_rate)\n log.debug(\n- f\"{self.state.channel.id}::{raw_audio.user_id} - User rejoined, adding {silence} silence frames ({diff} seconds)\"\n+ f\"{self.state.channel.id}::{raw_audio.user_id} - User rejoined, adding {frames} silence frames ({silence} seconds)\"\n )\n else:\n silence = 0\n", "issue": "[BUG] Recorder Memory Usage\n### Library Version\n\n5.11.0\n\n### Describe the Bug\n\nLeaving and rejoining a voice channel with a bot recording causes enormous RAM usage and the bot recorder to stop functioning properly or the bot to crash completely (from out-of-memory). This is because the calculation for how many silence frames to insert is wrong (multiplies by sample rate twice).\n\n### Steps to Reproduce\n\nUse whatever you prefer and start monitoring your system memory usage. Join a voice channel and have a bot join it to record. Talk and then disconnect from the voice channel. Wait 3-5 seconds and rejoin the voice channel. Memory usage will spike into the 10-20 GB range or the bot will crash.\r\n\r\nThe exact outcome and how long to wait between rejoining the voice channel really depends on the amount of memory available on your system. If you wait too long and don't have enough memory, you likely will see \"Error while recording: \" in the logs because Python was smart enough to not let you try to allocate that much memory. I have 64 GB of RAM, of which 32 GB is available to the bot, and it will spike up to 25 GB consistently or just crash. 
When it crashes, you will see \"Killed\" in the terminal and the exit code will be 137, which is an out-of-memory exit code.\n\n### Expected Results\n\nThe memory will not have a substantial spike and the bot will not crash.\n\n### Minimal Reproducible Code\n\n_No response_\n\n### Traceback\n\n_No response_\n\n### Checklist\n\n- [X] I have searched the open issues for duplicates.\n- [X] I have shown the entire traceback, if possible.\n- [X] I have removed my token from display, if visible.\n- [X] I have attempted to debug this myself, and I believe this issue is with the library\n\n### Additional Information\n\n_No response_\n", "before_files": [{"content": "import asyncio\nimport io\nimport logging\nimport os\nimport shutil\nimport struct\nimport threading\nimport time\nfrom asyncio import AbstractEventLoop\nfrom collections import defaultdict\nfrom typing import TYPE_CHECKING\n\nimport select\n\nfrom interactions.api.voice.audio import RawInputAudio\nfrom interactions.api.voice.audio_writer import AudioWriter\nfrom interactions.api.voice.encryption import Decryption\nfrom interactions.api.voice.opus import Decoder\nfrom interactions.client.const import logger_name, Missing\nfrom interactions.client.utils.input_utils import unpack_helper\nfrom interactions.models.discord.snowflake import Snowflake_Type, to_snowflake_list\n\nif TYPE_CHECKING:\n from interactions.models.internal.active_voice_state import ActiveVoiceState\n\n__all__ = (\"Recorder\",)\n\nlog = logging.getLogger(logger_name)\n\n\nclass Recorder(threading.Thread):\n def __init__(self, v_state, loop, *, output_dir: str | None = None) -> None:\n super().__init__()\n self.daemon = True\n\n self.state: \"ActiveVoiceState\" = v_state\n self.loop: AbstractEventLoop = loop\n self.decrypter: Decryption = Decryption(self.state.ws.secret)\n self._decoders: dict[str, Decoder] = defaultdict(Decoder)\n\n # check if output_dir is a folder not a file\n if output_dir and not os.path.isdir(output_dir):\n raise ValueError(\"output_dir must be a directory\")\n\n self.output_dir = output_dir\n self.audio: AudioWriter | None = None\n self.encoding = \"mp3\"\n self.recording = False\n self.used = False\n\n self.start_time = 0\n self.user_timestamps = {}\n self.recording_whitelist: list[Snowflake_Type] = []\n\n if not shutil.which(\"ffmpeg\"):\n raise RuntimeError(\n \"Unable to start recorder. FFmpeg was not found. Please add it to your project directory or PATH. 
(https://ffmpeg.org/)\"\n )\n\n async def __aenter__(self) -> \"Recorder\":\n return self\n\n async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:\n await self.stop_recording()\n\n async def start_recording(self, *user_id: Snowflake_Type, output_dir: str | Missing = Missing) -> None:\n \"\"\"\n Start recording audio from the current channel.\n\n Args:\n *user_id: The user_id(s) to record, if not specified everyone will be recorded.\n output_dir: The directory to save the audio to (overrides the constructor output_dir if specified)\n\n \"\"\"\n if self.used:\n raise RuntimeError(\"Cannot reuse a recorder.\")\n self.used = True\n\n if user_id:\n self.recording_whitelist = to_snowflake_list(unpack_helper(user_id))\n\n if output_dir is not Missing:\n self.output_dir = output_dir\n\n self.recording = True\n self.audio = AudioWriter(self, self.state.channel.id)\n self.start()\n self.start_time = time.monotonic()\n\n async def stop_recording(self) -> None:\n \"\"\"Stop recording audio from the current channel.\"\"\"\n self.recording = False\n\n def wait() -> None:\n self.audio.cleanup()\n self.audio.encode_audio(self.encoding)\n\n await asyncio.to_thread(wait)\n\n def decrypt(self, header: bytes, data: bytes) -> bytes:\n \"\"\"\n An alias to call the decryption methods.\n\n Args:\n header: The payload header\n data: The payload data\n Returns:\n The decrypted payload\n\n \"\"\"\n # a shorter alias to call\n return self.decrypter.decrypt(self.state.ws.selected_mode, header, data)\n\n def get_decoder(self, ssrc) -> Decoder:\n return self._decoders[ssrc]\n\n def get_user(self, ssrc: str) -> Snowflake_Type:\n \"\"\"\n Get the corresponding user from a ssrc.\n\n Args:\n ssrc: The source to retrieve the user from\n Returns:\n A snowflake representing the user\n\n \"\"\"\n return self.state.ws.user_ssrc_map.get(ssrc)[\"user_id\"]\n\n def get_ssrc(self, user_id: Snowflake_Type) -> str:\n \"\"\"\n Get the corresponding ssrc from a user.\n\n Args:\n user_id: The user to retrieve the ssrc from\n Returns:\n A string representing the ssrc\n\n \"\"\"\n return next((ssrc for ssrc, user in self.state.ws.user_ssrc_map.items() if user[\"user_id\"] == user_id), None)\n\n def __enter__(self) -> \"Recorder\":\n return self\n\n @property\n def output(self) -> dict[int, io.BytesIO | str]:\n \"\"\"\n The output of the recorder.\n\n Returns:\n A dictionary of the user_id and the output file.\n Output file can be a BytesIO or a string (if output_dir is specified)\n\n \"\"\"\n return self.audio.files if self.audio.finished.is_set() else {}\n\n @property\n def elapsed_time(self) -> float:\n return time.monotonic() - self.start_time\n\n def filter(self, *user_id: Snowflake_Type) -> None:\n \"\"\"\n Filter the users that are being recorded.\n\n Args:\n *user_id: The user_id(s) to record\n\n \"\"\"\n if not user_id:\n self.recording_whitelist = []\n self.recording_whitelist = to_snowflake_list(unpack_helper(user_id))\n\n def run(self) -> None:\n \"\"\"The recording loop itself.\"\"\"\n sock = self.state.ws.socket\n\n # purge any data that is already in the socket\n readable, _, _ = select.select([sock], [], [], 0)\n log.debug(\"Purging socket buffer\")\n while readable and sock.recv(4096):\n readable, _, _ = select.select([sock], [], [], 0)\n log.debug(\"Socket buffer purged, starting recording\")\n\n with self.audio:\n while self.recording:\n ready, _, err = select.select([sock], [], [sock], 0.01)\n if not ready:\n if err:\n log.error(\"Error while recording: %s\", err)\n continue\n\n data = sock.recv(4096)\n\n if 
200 <= data[1] <= 204:\n continue\n\n try:\n raw_audio = RawInputAudio(self, data)\n self.process_data(raw_audio)\n except Exception as ex:\n log.error(\"Error while recording: %s\", ex)\n\n def process_data(self, raw_audio: RawInputAudio) -> None:\n \"\"\"\n Processes incoming audio data and writes it to the corresponding buffer.\n\n Args:\n raw_audio: The raw audio that has been received\n\n \"\"\"\n if raw_audio.user_id is None:\n return # usually the first frame when a user rejoins\n\n if self.recording_whitelist and raw_audio.user_id not in self.recording_whitelist:\n return\n\n decoder = self.get_decoder(raw_audio.ssrc)\n\n if raw_audio.ssrc not in self.user_timestamps:\n if last_timestamp := self.audio.last_timestamps.get(raw_audio.user_id, None):\n diff = raw_audio.timestamp - last_timestamp\n silence = int(diff * decoder.sample_rate)\n log.debug(\n f\"{self.state.channel.id}::{raw_audio.user_id} - User rejoined, adding {silence} silence frames ({diff} seconds)\"\n )\n else:\n silence = 0\n\n self.user_timestamps.update({raw_audio.ssrc: raw_audio.timestamp})\n else:\n silence = raw_audio.timestamp - self.user_timestamps[raw_audio.ssrc]\n if silence < 0.1:\n silence = 0\n self.user_timestamps[raw_audio.ssrc] = raw_audio.timestamp\n\n raw_audio.pcm = struct.pack(\"<h\", 0) * int(silence * decoder.sample_rate) * 2 + raw_audio.decoded\n\n self.audio.write(raw_audio, raw_audio.user_id)\n", "path": "interactions/api/voice/recorder.py"}]} | 3,286 | 221 |
gh_patches_debug_64454 | rasdani/github-patches | git_diff | bokeh__bokeh-1923 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
app_reveal fails importing old plotting stuff
```
(py34devel)[damian@damian-S400CA][slideshow](master)$ python app_reveal.py
Traceback (most recent call last):
File "app_reveal.py", line 19, in <module>
from bokeh.plotting import (annular_wedge, cursession, figure, hold, legend,
ImportError: cannot import name 'annular_wedge'
```
</issue>
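The failing line imports module-level glyph functions that were removed from `bokeh.plotting`; the script body below already draws through methods on a `figure` object (`p.quad`, `p.line`, `p.annular_wedge`) and attributes such as `p.legend`, `p.xgrid` and `p.ygrid`, so the import presumably only needs the session helpers that still existed at the time. A hedged sketch of what line 19 could become (the surviving names depend on the installed Bokeh version):

```python
# hypothetical replacement for the failing import in app_reveal.py
from bokeh.plotting import cursession, figure, output_server, push
```

`hold`, `legend`, `annular_wedge`, `line`, `quad`, `xgrid` and `ygrid` belonged to the old implicit plotting interface, which is what the `ImportError` on `annular_wedge` (the first such name in the tuple) is pointing at.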
<code>
[start of examples/embed/slideshow/app_reveal.py]
1 # -*- coding: utf-8 -*-
2 """
3 In this example, we want to show you how you can take isolated blocks of code
4 (featuring different kinds of Bokeh visualizations) and rearrange them in a
5 bigger (encompassing) flask-based application without losing the independence
6 of each example. This is the reason of some weirdness through the code.
7 We are using this "building blocks" approach here because we believe it has some
8 conceptual advantages for people trying to quickly understand, and more
9 importantly, use the embed API, in a more complex way than just a simple script.
10 """
11 import time
12 from threading import Thread
13
14 import numpy as np
15 import scipy.special
16
17 from bokeh.embed import autoload_server
18 from bokeh.models import GlyphRenderer
19 from bokeh.plotting import (annular_wedge, cursession, figure, hold, legend,
20 line, output_server, push, quad, xgrid, ygrid)
21
22 from flask import Flask, render_template
23 app = Flask(__name__)
24
25 @app.route('/')
26 def render_plot():
27 """
28 Get the script tags from each plot object and "insert" them into the template.
29
30 This also starts different threads for each update function, so you can have
31 a non-blocking update.
32 """
33 dist_plot, dist_session = distribution()
34 dist_tag = autoload_server(dist_plot, dist_session)
35
36 anim_plot, anim_session = animated()
37 anim_tag = autoload_server(anim_plot, anim_session)
38 # for update_animation as target we need to pass the anim_plot and anim_session as args
39 thread = Thread(target=update_animation, args=(anim_plot, anim_session))
40 thread.start()
41
42 pop = Population()
43 pop_tag = autoload_server(pop.layout, pop.session)
44 # for update_population as target we need to pass the pop instance as args
45 thread = Thread(target=update_population, args=(pop,))
46 thread.start()
47
48 return render_template('app_plot.html', tag1=dist_tag, tag2=anim_tag, tag3=pop_tag)
49
50
51 def distribution():
52
53 mu, sigma = 0, 0.5
54
55 measured = np.random.normal(mu, sigma, 1000)
56 hist, edges = np.histogram(measured, density=True, bins=20)
57
58 x = np.linspace(-2, 2, 1000)
59 pdf = 1 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
60 cdf = (1 + scipy.special.erf((x - mu) / np.sqrt(2 * sigma ** 2))) / 2
61
62 output_server("distribution_reveal")
63
64 p = figure(title="Interactive plots",
65 background_fill="#E5E5E5")
66 p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:],
67 fill_color="#333333", line_color="#E5E5E5", line_width=3)
68
69 # Use `line` renderers to display the PDF and CDF
70 p.line(x, pdf, line_color="#348abd", line_width=8, alpha=0.7, legend="PDF")
71 p.line(x, cdf, line_color="#7a68a6", line_width=8, alpha=0.7, legend="CDF")
72
73 p.legend.orientation = "top_left"
74
75 p.xaxis.axis_label = 'x'
76 p.xgrid[0].grid_line_color = "white"
77 p.xgrid[0].grid_line_width = 3
78
79 p.yaxis.axis_label = 'Pr(x)'
80 p.ygrid[0].grid_line_color = "white"
81 p.ygrid[0].grid_line_width = 3
82
83 push()
84
85 return p, cursession()
86
87
88 def animated():
89
90 from numpy import pi, cos, sin, linspace
91
92 N = 50 + 1
93 r_base = 8
94 theta = linspace(0, 2 * pi, N)
95 r_x = linspace(0, 6 * pi, N - 1)
96 rmin = r_base - cos(r_x) - 1
97 rmax = r_base + sin(r_x) + 1
98
99 colors = ["FFFFCC", "#C7E9B4", "#7FCDBB", "#41B6C4", "#2C7FB8",
100 "#253494", "#2C7FB8", "#41B6C4", "#7FCDBB", "#C7E9B4"] * 5
101
102 output_server("animated_reveal")
103
104 p = figure(title="Animations", x_range=[-11, 11], y_range=[-11, 11])
105
106 p.annular_wedge(
107 0, 0, rmin, rmax, theta[:-1], theta[1:],
108 inner_radius_units="data",
109 outer_radius_units="data",
110 fill_color=colors,
111 line_color="black",
112 )
113
114 push()
115
116 return p, cursession()
117
118
119 def update_animation(plot, session):
120
121 from numpy import roll
122
123 renderer = plot.select(dict(type=GlyphRenderer))
124 ds = renderer[0].data_source
125
126 while True:
127
128 rmin = ds.data["inner_radius"]
129 rmin = roll(rmin, 1)
130 ds.data["inner_radius"] = rmin
131
132 rmax = ds.data["outer_radius"]
133 rmax = roll(rmax, -1)
134 ds.data["outer_radius"] = rmax
135
136 cursession().store_objects(ds)
137 time.sleep(0.1)
138
139
140 class Population(object):
141
142 year = 2010
143 location = "World"
144
145 def __init__(self):
146 from bokeh.models import ColumnDataSource
147 from bokeh.document import Document
148 from bokeh.session import Session
149 from bokeh.sampledata.population import load_population
150
151 self.document = Document()
152 self.session = Session()
153 self.session.use_doc('population_reveal')
154 self.session.load_document(self.document)
155
156 self.df = load_population()
157 self.source_pyramid = ColumnDataSource(data=dict())
158
159 # just render at the initialization
160 self._render()
161
162 def _render(self):
163 self.pyramid_plot()
164 self.create_layout()
165 self.document.add(self.layout)
166 self.update_pyramid()
167
168 def pyramid_plot(self):
169 from bokeh.models import (Plot, DataRange1d, LinearAxis, Grid,
170 Legend, SingleIntervalTicker)
171 from bokeh.models.glyphs import Quad
172
173 xdr = DataRange1d(sources=[self.source_pyramid.columns("male"),
174 self.source_pyramid.columns("female")])
175 ydr = DataRange1d(sources=[self.source_pyramid.columns("groups")])
176
177 self.plot = Plot(title="Widgets", x_range=xdr, y_range=ydr,
178 plot_width=600, plot_height=600)
179
180 xaxis = LinearAxis()
181 self.plot.add_layout(xaxis, 'below')
182 yaxis = LinearAxis(ticker=SingleIntervalTicker(interval=5))
183 self.plot.add_layout(yaxis, 'left')
184
185 self.plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))
186 self.plot.add_layout(Grid(dimension=1, ticker=yaxis.ticker))
187
188 male_quad = Quad(left="male", right=0, bottom="groups", top="shifted",
189 fill_color="#3B8686")
190 male_quad_glyph = self.plot.add_glyph(self.source_pyramid, male_quad)
191
192 female_quad = Quad(left=0, right="female", bottom="groups", top="shifted",
193 fill_color="#CFF09E")
194 female_quad_glyph = self.plot.add_glyph(self.source_pyramid, female_quad)
195
196 self.plot.add_layout(Legend(legends=dict(Male=[male_quad_glyph],
197 Female=[female_quad_glyph])))
198
199 def on_year_change(self, obj, attr, old, new):
200 self.year = int(new)
201 self.update_pyramid()
202
203 def on_location_change(self, obj, attr, old, new):
204 self.location = new
205 self.update_pyramid()
206
207 def create_layout(self):
208 from bokeh.models.widgets import Select, HBox, VBox
209
210 years = list(map(str, sorted(self.df.Year.unique())))
211 locations = sorted(self.df.Location.unique())
212
213 year_select = Select(title="Year:", value="2010", options=years)
214 location_select = Select(title="Location:", value="World", options=locations)
215
216 year_select.on_change('value', self.on_year_change)
217 location_select.on_change('value', self.on_location_change)
218
219 controls = HBox(year_select, location_select)
220 self.layout = VBox(controls, self.plot)
221
222 def update_pyramid(self):
223 pyramid = self.df[(self.df.Location == self.location) & (self.df.Year == self.year)]
224
225 male = pyramid[pyramid.Sex == "Male"]
226 female = pyramid[pyramid.Sex == "Female"]
227
228 total = male.Value.sum() + female.Value.sum()
229
230 male_percent = -male.Value / total
231 female_percent = female.Value / total
232
233 groups = male.AgeGrpStart.tolist()
234 shifted = groups[1:] + [groups[-1] + 5]
235
236 self.source_pyramid.data = dict(
237 groups=groups,
238 shifted=shifted,
239 male=male_percent,
240 female=female_percent,
241 )
242 self.session.store_document(self.document)
243
244
245 def update_population(plot):
246 while True:
247 plot.session.load_document(plot.document)
248 time.sleep(0.1)
249
250 if __name__ == '__main__':
251 app.run(debug=True)
252
[end of examples/embed/slideshow/app_reveal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/embed/slideshow/app_reveal.py b/examples/embed/slideshow/app_reveal.py
--- a/examples/embed/slideshow/app_reveal.py
+++ b/examples/embed/slideshow/app_reveal.py
@@ -16,8 +16,7 @@
from bokeh.embed import autoload_server
from bokeh.models import GlyphRenderer
-from bokeh.plotting import (annular_wedge, cursession, figure, hold, legend,
- line, output_server, push, quad, xgrid, ygrid)
+from bokeh.plotting import cursession, figure, output_server, push
from flask import Flask, render_template
app = Flask(__name__)
| {"golden_diff": "diff --git a/examples/embed/slideshow/app_reveal.py b/examples/embed/slideshow/app_reveal.py\n--- a/examples/embed/slideshow/app_reveal.py\n+++ b/examples/embed/slideshow/app_reveal.py\n@@ -16,8 +16,7 @@\n \n from bokeh.embed import autoload_server\n from bokeh.models import GlyphRenderer\n-from bokeh.plotting import (annular_wedge, cursession, figure, hold, legend,\n- line, output_server, push, quad, xgrid, ygrid)\n+from bokeh.plotting import cursession, figure, output_server, push\n \n from flask import Flask, render_template\n app = Flask(__name__)\n", "issue": "app_reveal fails importing old plotting stuff\n```\n(py34devel)[damian@damian-S400CA][slideshow](master)$ python app_reveal.py \nTraceback (most recent call last):\n File \"app_reveal.py\", line 19, in <module>\n from bokeh.plotting import (annular_wedge, cursession, figure, hold, legend,\nImportError: cannot import name 'annular_wedge'\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nIn this example, we want to show you how you can take isolated blocks of code\n(featuring different kinds of Bokeh visualizations) and rearrange them in a\nbigger (encompassing) flask-based application without losing the independence\nof each example. This is the reason of some weirdness through the code.\nWe are using this \"building blocks\" approach here because we believe it has some\nconceptual advantages for people trying to quickly understand, and more\nimportantly, use the embed API, in a more complex way than just a simple script.\n\"\"\"\nimport time\nfrom threading import Thread\n\nimport numpy as np\nimport scipy.special\n\nfrom bokeh.embed import autoload_server\nfrom bokeh.models import GlyphRenderer\nfrom bokeh.plotting import (annular_wedge, cursession, figure, hold, legend,\n line, output_server, push, quad, xgrid, ygrid)\n\nfrom flask import Flask, render_template\napp = Flask(__name__)\n\[email protected]('/')\ndef render_plot():\n \"\"\"\n Get the script tags from each plot object and \"insert\" them into the template.\n\n This also starts different threads for each update function, so you can have\n a non-blocking update.\n \"\"\"\n dist_plot, dist_session = distribution()\n dist_tag = autoload_server(dist_plot, dist_session)\n\n anim_plot, anim_session = animated()\n anim_tag = autoload_server(anim_plot, anim_session)\n # for update_animation as target we need to pass the anim_plot and anim_session as args\n thread = Thread(target=update_animation, args=(anim_plot, anim_session))\n thread.start()\n\n pop = Population()\n pop_tag = autoload_server(pop.layout, pop.session)\n # for update_population as target we need to pass the pop instance as args\n thread = Thread(target=update_population, args=(pop,))\n thread.start()\n\n return render_template('app_plot.html', tag1=dist_tag, tag2=anim_tag, tag3=pop_tag)\n\n\ndef distribution():\n\n mu, sigma = 0, 0.5\n\n measured = np.random.normal(mu, sigma, 1000)\n hist, edges = np.histogram(measured, density=True, bins=20)\n\n x = np.linspace(-2, 2, 1000)\n pdf = 1 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))\n cdf = (1 + scipy.special.erf((x - mu) / np.sqrt(2 * sigma ** 2))) / 2\n\n output_server(\"distribution_reveal\")\n\n p = figure(title=\"Interactive plots\",\n background_fill=\"#E5E5E5\")\n p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:],\n fill_color=\"#333333\", line_color=\"#E5E5E5\", line_width=3)\n\n # Use `line` renderers to display the PDF and CDF\n p.line(x, pdf, line_color=\"#348abd\", 
line_width=8, alpha=0.7, legend=\"PDF\")\n p.line(x, cdf, line_color=\"#7a68a6\", line_width=8, alpha=0.7, legend=\"CDF\")\n\n p.legend.orientation = \"top_left\"\n\n p.xaxis.axis_label = 'x'\n p.xgrid[0].grid_line_color = \"white\"\n p.xgrid[0].grid_line_width = 3\n\n p.yaxis.axis_label = 'Pr(x)'\n p.ygrid[0].grid_line_color = \"white\"\n p.ygrid[0].grid_line_width = 3\n\n push()\n\n return p, cursession()\n\n\ndef animated():\n\n from numpy import pi, cos, sin, linspace\n\n N = 50 + 1\n r_base = 8\n theta = linspace(0, 2 * pi, N)\n r_x = linspace(0, 6 * pi, N - 1)\n rmin = r_base - cos(r_x) - 1\n rmax = r_base + sin(r_x) + 1\n\n colors = [\"FFFFCC\", \"#C7E9B4\", \"#7FCDBB\", \"#41B6C4\", \"#2C7FB8\",\n \"#253494\", \"#2C7FB8\", \"#41B6C4\", \"#7FCDBB\", \"#C7E9B4\"] * 5\n\n output_server(\"animated_reveal\")\n\n p = figure(title=\"Animations\", x_range=[-11, 11], y_range=[-11, 11])\n\n p.annular_wedge(\n 0, 0, rmin, rmax, theta[:-1], theta[1:],\n inner_radius_units=\"data\",\n outer_radius_units=\"data\",\n fill_color=colors,\n line_color=\"black\",\n )\n\n push()\n\n return p, cursession()\n\n\ndef update_animation(plot, session):\n\n from numpy import roll\n\n renderer = plot.select(dict(type=GlyphRenderer))\n ds = renderer[0].data_source\n\n while True:\n\n rmin = ds.data[\"inner_radius\"]\n rmin = roll(rmin, 1)\n ds.data[\"inner_radius\"] = rmin\n\n rmax = ds.data[\"outer_radius\"]\n rmax = roll(rmax, -1)\n ds.data[\"outer_radius\"] = rmax\n\n cursession().store_objects(ds)\n time.sleep(0.1)\n\n\nclass Population(object):\n\n year = 2010\n location = \"World\"\n\n def __init__(self):\n from bokeh.models import ColumnDataSource\n from bokeh.document import Document\n from bokeh.session import Session\n from bokeh.sampledata.population import load_population\n\n self.document = Document()\n self.session = Session()\n self.session.use_doc('population_reveal')\n self.session.load_document(self.document)\n\n self.df = load_population()\n self.source_pyramid = ColumnDataSource(data=dict())\n\n # just render at the initialization\n self._render()\n\n def _render(self):\n self.pyramid_plot()\n self.create_layout()\n self.document.add(self.layout)\n self.update_pyramid()\n\n def pyramid_plot(self):\n from bokeh.models import (Plot, DataRange1d, LinearAxis, Grid,\n Legend, SingleIntervalTicker)\n from bokeh.models.glyphs import Quad\n\n xdr = DataRange1d(sources=[self.source_pyramid.columns(\"male\"),\n self.source_pyramid.columns(\"female\")])\n ydr = DataRange1d(sources=[self.source_pyramid.columns(\"groups\")])\n\n self.plot = Plot(title=\"Widgets\", x_range=xdr, y_range=ydr,\n plot_width=600, plot_height=600)\n\n xaxis = LinearAxis()\n self.plot.add_layout(xaxis, 'below')\n yaxis = LinearAxis(ticker=SingleIntervalTicker(interval=5))\n self.plot.add_layout(yaxis, 'left')\n\n self.plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))\n self.plot.add_layout(Grid(dimension=1, ticker=yaxis.ticker))\n\n male_quad = Quad(left=\"male\", right=0, bottom=\"groups\", top=\"shifted\",\n fill_color=\"#3B8686\")\n male_quad_glyph = self.plot.add_glyph(self.source_pyramid, male_quad)\n\n female_quad = Quad(left=0, right=\"female\", bottom=\"groups\", top=\"shifted\",\n fill_color=\"#CFF09E\")\n female_quad_glyph = self.plot.add_glyph(self.source_pyramid, female_quad)\n\n self.plot.add_layout(Legend(legends=dict(Male=[male_quad_glyph],\n Female=[female_quad_glyph])))\n\n def on_year_change(self, obj, attr, old, new):\n self.year = int(new)\n self.update_pyramid()\n\n def on_location_change(self, obj, 
attr, old, new):\n self.location = new\n self.update_pyramid()\n\n def create_layout(self):\n from bokeh.models.widgets import Select, HBox, VBox\n\n years = list(map(str, sorted(self.df.Year.unique())))\n locations = sorted(self.df.Location.unique())\n\n year_select = Select(title=\"Year:\", value=\"2010\", options=years)\n location_select = Select(title=\"Location:\", value=\"World\", options=locations)\n\n year_select.on_change('value', self.on_year_change)\n location_select.on_change('value', self.on_location_change)\n\n controls = HBox(year_select, location_select)\n self.layout = VBox(controls, self.plot)\n\n def update_pyramid(self):\n pyramid = self.df[(self.df.Location == self.location) & (self.df.Year == self.year)]\n\n male = pyramid[pyramid.Sex == \"Male\"]\n female = pyramid[pyramid.Sex == \"Female\"]\n\n total = male.Value.sum() + female.Value.sum()\n\n male_percent = -male.Value / total\n female_percent = female.Value / total\n\n groups = male.AgeGrpStart.tolist()\n shifted = groups[1:] + [groups[-1] + 5]\n\n self.source_pyramid.data = dict(\n groups=groups,\n shifted=shifted,\n male=male_percent,\n female=female_percent,\n )\n self.session.store_document(self.document)\n\n\ndef update_population(plot):\n while True:\n plot.session.load_document(plot.document)\n time.sleep(0.1)\n\nif __name__ == '__main__':\n app.run(debug=True)\n", "path": "examples/embed/slideshow/app_reveal.py"}]} | 3,423 | 142 |
gh_patches_debug_11096 | rasdani/github-patches | git_diff | azavea__raster-vision-1958 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Multi-temporal raster source visualizer fails when batch size is 1
https://github.com/azavea/raster-vision/blob/e4e10ad04313bbe5355693ef96f3854f7963f2b1/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py#L122-L127
This code fails when the batch size is 1 because the created `subfigs` doesn't have a `flat` property if there's only one row and one column. Not sure whether this should be fixed upstream in `matplotlib`...
Matplotlib version 3.7.1, rastervision version 0.21.2
```python
import matplotlib.pyplot as plt
fig = plt.figure()
subfigs = fig.subfigures(nrows=2, ncols=1, hspace=0)
subfigs.flat
#> <numpy.flatiter object at 0x5575c63e68f0>
subfigs = fig.subfigures(nrows=1, ncols=1, hspace=0)
subfigs.flat
#> Traceback (most recent call last):
#> File "<string>", line 1, in <module>
#> AttributeError: 'SubFigure' object has no attribute 'flat'
```
<sup>Created at 2023-10-11 17:12:31 CDT by [reprexlite](https://github.com/jayqi/reprexlite) v0.5.0</sup>
</issue>
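A minimal sketch (not part of the original issue) of the workaround the project adopts in the diff further down: passing `squeeze=False` to `Figure.subfigures` so that a 2-D array comes back even for a 1x1 grid, which keeps `.flat` usable when the batch size is 1. It assumes a Matplotlib release whose `subfigures` accepts `squeeze`, consistent with the 3.7.1 reported above:

```python
import matplotlib.pyplot as plt

fig = plt.figure()
# With default squeezing, a 1x1 grid collapses to a bare SubFigure,
# which is exactly what breaks `.flat` in the traceback above.
single = fig.subfigures(nrows=1, ncols=1)

fig2 = plt.figure()
# squeeze=False always yields a 2-D array of SubFigures, so iteration
# works for any batch size, including 1.
grid = fig2.subfigures(nrows=1, ncols=1, squeeze=False)
for subfig in grid.flat:
    subfig.subplots(nrows=2, ncols=3)
```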
<code>
[start of rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py]
1 from typing import (TYPE_CHECKING, Sequence, Optional, List, Dict, Union,
2 Tuple, Any)
3 from abc import ABC, abstractmethod
4
5 import numpy as np
6 import torch
7 from torch import Tensor
8 import albumentations as A
9 from torch.utils.data import DataLoader
10 import matplotlib.pyplot as plt
11
12 from rastervision.pipeline.file_system import make_dir
13 from rastervision.pytorch_learner.utils import (
14 deserialize_albumentation_transform, validate_albumentation_transform,
15 MinMaxNormalize)
16 from rastervision.pytorch_learner.learner_config import (
17 RGBTuple,
18 ChannelInds,
19 ensure_class_colors,
20 validate_channel_display_groups,
21 get_default_channel_display_groups,
22 )
23
24 if TYPE_CHECKING:
25 from torch.utils.data import Dataset
26 from matplotlib.figure import Figure
27
28
29 class Visualizer(ABC):
30 """Base class for plotting samples from computer vision PyTorch Datasets."""
31
32 scale: float = 3.
33
34 def __init__(self,
35 class_names: List[str],
36 class_colors: Optional[List[Union[str, RGBTuple]]] = None,
37 transform: Optional[Dict] = A.to_dict(MinMaxNormalize()),
38 channel_display_groups: Optional[Union[Dict[
39 str, ChannelInds], Sequence[ChannelInds]]] = None):
40 """Constructor.
41
42 Args:
43 class_names: names of classes
44 class_colors: Colors used to display classes. Can be color 3-tuples
45 in list form.
46 transform: An Albumentations transform serialized as a dict that
47 will be applied to each image before it is plotted. Mainly useful
48 for undoing any data transformation that you do not want included in
49 the plot, such as normalization. The default value will shift and scale
50 the image so the values range from 0.0 to 1.0 which is the expected range
51 for the plotting function. This default is useful for cases where the
52 values after normalization are close to zero which makes the plot
53 difficult to see.
54 channel_display_groups: Groups of image channels to display together as a
55 subplot when plotting the data and predictions.
56 Can be a list or tuple of groups (e.g. [(0, 1, 2), (3,)]) or a
57 dict containing title-to-group mappings
58 (e.g. {"RGB": [0, 1, 2], "IR": [3]}),
59 where each group is a list or tuple of channel indices and
60 title is a string that will be used as the title of the subplot
61 for that group.
62 """
63 self.class_names = class_names
64 self.class_colors = ensure_class_colors(self.class_names, class_colors)
65 self.transform = validate_albumentation_transform(transform)
66 self._channel_display_groups = validate_channel_display_groups(
67 channel_display_groups)
68
69 @abstractmethod
70 def plot_xyz(self,
71 axs,
72 x: Tensor,
73 y: Sequence,
74 z: Optional[Sequence] = None,
75 plot_title: bool = True):
76 """Plot image, ground truth labels, and predicted labels.
77
78 Args:
79 axs: matplotlib axes on which to plot
80 x: image
81 y: ground truth labels
82 z: optional predicted labels
83 """
84 pass
85
86 def plot_batch(self,
87 x: Tensor,
88 y: Sequence,
89 output_path: Optional[str] = None,
90 z: Optional[Sequence] = None,
91 batch_limit: Optional[int] = None,
92 show: bool = False):
93 """Plot a whole batch in a grid using plot_xyz.
94
95 Args:
96 x: batch of images
97 y: ground truth labels
98 output_path: local path where to save plot image
99 z: optional predicted labels
100 batch_limit: optional limit on (rendered) batch size
101 """
102 params = self.get_plot_params(
103 x=x, y=y, z=z, output_path=output_path, batch_limit=batch_limit)
104 if params['subplot_args']['nrows'] == 0:
105 return
106
107 if x.ndim == 4:
108 fig, axs = plt.subplots(**params['fig_args'],
109 **params['subplot_args'])
110 plot_xyz_args = params['plot_xyz_args']
111 self._plot_batch(fig, axs, plot_xyz_args, x, y=y, z=z)
112 elif x.ndim == 5:
113 # If a temporal dimension is present, we divide the figure into
114 # multiple subfigures--one for each batch. Then, in each subfigure,
115 # we plot all timesteps as if they were a single batch. To
116 # delineate the boundary b/w batch items, we adopt the convention
117 # of only displaying subplot titles once per batch (above the first
118 # row in each batch).
119 batch_sz, T, *_ = x.shape
120 params['fig_args']['figsize'][1] *= T
121 fig = plt.figure(**params['fig_args'])
122 subfigs = fig.subfigures(nrows=batch_sz, ncols=1, hspace=0.0)
123 subfig_axs = [
124 subfig.subplots(
125 nrows=T, ncols=params['subplot_args']['ncols'])
126 for subfig in subfigs.flat
127 ]
128 for i, axs in enumerate(subfig_axs):
129 plot_xyz_args = [
130 dict(params['plot_xyz_args'][i]) for _ in range(T)
131 ]
132 plot_xyz_args[0]['plot_title'] = True
133 for args in plot_xyz_args[1:]:
134 args['plot_title'] = False
135 _x = x[i]
136 _y = [y[i]] * T
137 _z = None if z is None else [z[i]] * T
138 self._plot_batch(fig, axs, plot_xyz_args, _x, y=_y, z=_z)
139 else:
140 raise ValueError('Expected x to have 4 or 5 dims, but found '
141 f'x.shape: {x.shape}')
142
143 if show:
144 plt.show()
145 if output_path is not None:
146 make_dir(output_path, use_dirname=True)
147 fig.savefig(output_path, bbox_inches='tight', pad_inches=0.2)
148
149 plt.close(fig)
150
151 def _plot_batch(
152 self,
153 fig: 'Figure',
154 axs: Sequence,
155 plot_xyz_args: List[dict],
156 x: Tensor,
157 y: Optional[Sequence] = None,
158 z: Optional[Sequence] = None,
159 ):
160 # (N, c, h, w) --> (N, h, w, c)
161 x = x.permute(0, 2, 3, 1)
162
163 # apply transform, if given
164 if self.transform is not None:
165 tf = deserialize_albumentation_transform(self.transform)
166 imgs = [tf(image=img)['image'] for img in x.numpy()]
167 x = torch.from_numpy(np.stack(imgs))
168
169 for i, row_axs in enumerate(axs):
170 _z = None if z is None else z[i]
171 self.plot_xyz(row_axs, x[i], y[i], z=_z, **plot_xyz_args[i])
172
173 def get_channel_display_groups(
174 self, nb_img_channels: int
175 ) -> Union[Dict[str, ChannelInds], Sequence[ChannelInds]]:
176 # The default channel_display_groups object depends on the number of
177 # channels in the image. This number is not known when the Visualizer
178 # is constructed which is why it needs to be created later.
179 if self._channel_display_groups is not None:
180 return self._channel_display_groups
181 return get_default_channel_display_groups(nb_img_channels)
182
183 def get_collate_fn(self) -> Optional[callable]:
184 """Returns a custom collate_fn to use in DataLoader.
185
186 None is returned if default collate_fn should be used.
187
188 See https://pytorch.org/docs/stable/data.html#working-with-collate-fn
189 """
190 return None
191
192 def get_batch(self, dataset: 'Dataset', batch_sz: int = 4,
193 **kwargs) -> Tuple[Tensor, Any]:
194 """Generate a batch from a dataset.
195
196 This is a convenience method for generating a batch of data to plot.
197
198 Args:
199 dataset (Dataset): A Pytorch Datset.
200 batch_sz (int): Batch size. Defaults to 4.
201 **kwargs: Extra args for :class:`~torch.utils.data.DataLoader`.
202
203 Returns:
204 Tuple[Tensor, Any]: (x, y) tuple where x is images and y is labels.
205 """
206 collate_fn = self.get_collate_fn()
207 dl = DataLoader(dataset, batch_sz, collate_fn=collate_fn, **kwargs)
208 try:
209 x, y = next(iter(dl))
210 except StopIteration:
211 raise ValueError('dataset did not return a batch')
212
213 return x, y
214
215 def get_plot_nrows(self, **kwargs) -> int:
216 x = kwargs['x']
217 batch_limit = kwargs.get('batch_limit')
218 batch_sz = x.shape[0]
219 nrows = min(batch_sz,
220 batch_limit) if batch_limit is not None else batch_sz
221 return nrows
222
223 def get_plot_ncols(self, **kwargs) -> int:
224 x = kwargs['x']
225 nb_img_channels = x.shape[1]
226 ncols = len(self.get_channel_display_groups(nb_img_channels))
227 return ncols
228
229 def get_plot_params(self, **kwargs) -> dict:
230 nrows = self.get_plot_nrows(**kwargs)
231 ncols = self.get_plot_ncols(**kwargs)
232 params = {
233 'fig_args': {
234 'constrained_layout': True,
235 'figsize': np.array((self.scale * ncols, self.scale * nrows)),
236 },
237 'subplot_args': {
238 'nrows': nrows,
239 'ncols': ncols,
240 'squeeze': False
241 },
242 'plot_xyz_args': [{} for _ in range(nrows)]
243 }
244 return params
245
[end of rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py b/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py
--- a/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py
+++ b/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py
@@ -119,7 +119,8 @@
batch_sz, T, *_ = x.shape
params['fig_args']['figsize'][1] *= T
fig = plt.figure(**params['fig_args'])
- subfigs = fig.subfigures(nrows=batch_sz, ncols=1, hspace=0.0)
+ subfigs = fig.subfigures(
+ nrows=batch_sz, ncols=1, hspace=0.0, squeeze=False)
subfig_axs = [
subfig.subplots(
nrows=T, ncols=params['subplot_args']['ncols'])
| {"golden_diff": "diff --git a/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py b/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py\n--- a/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py\n+++ b/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py\n@@ -119,7 +119,8 @@\n batch_sz, T, *_ = x.shape\n params['fig_args']['figsize'][1] *= T\n fig = plt.figure(**params['fig_args'])\n- subfigs = fig.subfigures(nrows=batch_sz, ncols=1, hspace=0.0)\n+ subfigs = fig.subfigures(\n+ nrows=batch_sz, ncols=1, hspace=0.0, squeeze=False)\n subfig_axs = [\n subfig.subplots(\n nrows=T, ncols=params['subplot_args']['ncols'])\n", "issue": "Multi-temporal raster source visualizer fails when batch size is 1\nhttps://github.com/azavea/raster-vision/blob/e4e10ad04313bbe5355693ef96f3854f7963f2b1/rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py#L122-L127\r\n\r\nThis code fails when the batch size is 1 because the created `subfigs` doesn't have a `flat` property if there's only one row and one column. Not sure whether this should be fixed upstream in `matplotlib`...\r\n\r\nMatplotlib version 3.7.1, rastervision version 0.21.2\r\n\r\n```python\r\nimport matplotlib.pyplot as plt\r\nfig = plt.figure()\r\nsubfigs = fig.subfigures(nrows=2, ncols=1, hspace=0)\r\nsubfigs.flat\r\n#> <numpy.flatiter object at 0x5575c63e68f0>\r\n\r\nsubfigs = fig.subfigures(nrows=1, ncols=1, hspace=0)\r\nsubfigs.flat\r\n#> Traceback (most recent call last):\r\n#> File \"<string>\", line 1, in <module>\r\n#> AttributeError: 'SubFigure' object has no attribute 'flat'\r\n```\r\n\r\n<sup>Created at 2023-10-11 17:12:31 CDT by [reprexlite](https://github.com/jayqi/reprexlite) v0.5.0</sup>\r\n\n", "before_files": [{"content": "from typing import (TYPE_CHECKING, Sequence, Optional, List, Dict, Union,\n Tuple, Any)\nfrom abc import ABC, abstractmethod\n\nimport numpy as np\nimport torch\nfrom torch import Tensor\nimport albumentations as A\nfrom torch.utils.data import DataLoader\nimport matplotlib.pyplot as plt\n\nfrom rastervision.pipeline.file_system import make_dir\nfrom rastervision.pytorch_learner.utils import (\n deserialize_albumentation_transform, validate_albumentation_transform,\n MinMaxNormalize)\nfrom rastervision.pytorch_learner.learner_config import (\n RGBTuple,\n ChannelInds,\n ensure_class_colors,\n validate_channel_display_groups,\n get_default_channel_display_groups,\n)\n\nif TYPE_CHECKING:\n from torch.utils.data import Dataset\n from matplotlib.figure import Figure\n\n\nclass Visualizer(ABC):\n \"\"\"Base class for plotting samples from computer vision PyTorch Datasets.\"\"\"\n\n scale: float = 3.\n\n def __init__(self,\n class_names: List[str],\n class_colors: Optional[List[Union[str, RGBTuple]]] = None,\n transform: Optional[Dict] = A.to_dict(MinMaxNormalize()),\n channel_display_groups: Optional[Union[Dict[\n str, ChannelInds], Sequence[ChannelInds]]] = None):\n \"\"\"Constructor.\n\n Args:\n class_names: names of classes\n class_colors: Colors used to display classes. Can be color 3-tuples\n in list form.\n transform: An Albumentations transform serialized as a dict that\n will be applied to each image before it is plotted. Mainly useful\n for undoing any data transformation that you do not want included in\n the plot, such as normalization. 
The default value will shift and scale\n the image so the values range from 0.0 to 1.0 which is the expected range\n for the plotting function. This default is useful for cases where the\n values after normalization are close to zero which makes the plot\n difficult to see.\n channel_display_groups: Groups of image channels to display together as a\n subplot when plotting the data and predictions.\n Can be a list or tuple of groups (e.g. [(0, 1, 2), (3,)]) or a\n dict containing title-to-group mappings\n (e.g. {\"RGB\": [0, 1, 2], \"IR\": [3]}),\n where each group is a list or tuple of channel indices and\n title is a string that will be used as the title of the subplot\n for that group.\n \"\"\"\n self.class_names = class_names\n self.class_colors = ensure_class_colors(self.class_names, class_colors)\n self.transform = validate_albumentation_transform(transform)\n self._channel_display_groups = validate_channel_display_groups(\n channel_display_groups)\n\n @abstractmethod\n def plot_xyz(self,\n axs,\n x: Tensor,\n y: Sequence,\n z: Optional[Sequence] = None,\n plot_title: bool = True):\n \"\"\"Plot image, ground truth labels, and predicted labels.\n\n Args:\n axs: matplotlib axes on which to plot\n x: image\n y: ground truth labels\n z: optional predicted labels\n \"\"\"\n pass\n\n def plot_batch(self,\n x: Tensor,\n y: Sequence,\n output_path: Optional[str] = None,\n z: Optional[Sequence] = None,\n batch_limit: Optional[int] = None,\n show: bool = False):\n \"\"\"Plot a whole batch in a grid using plot_xyz.\n\n Args:\n x: batch of images\n y: ground truth labels\n output_path: local path where to save plot image\n z: optional predicted labels\n batch_limit: optional limit on (rendered) batch size\n \"\"\"\n params = self.get_plot_params(\n x=x, y=y, z=z, output_path=output_path, batch_limit=batch_limit)\n if params['subplot_args']['nrows'] == 0:\n return\n\n if x.ndim == 4:\n fig, axs = plt.subplots(**params['fig_args'],\n **params['subplot_args'])\n plot_xyz_args = params['plot_xyz_args']\n self._plot_batch(fig, axs, plot_xyz_args, x, y=y, z=z)\n elif x.ndim == 5:\n # If a temporal dimension is present, we divide the figure into\n # multiple subfigures--one for each batch. Then, in each subfigure,\n # we plot all timesteps as if they were a single batch. 
To\n # delineate the boundary b/w batch items, we adopt the convention\n # of only displaying subplot titles once per batch (above the first\n # row in each batch).\n batch_sz, T, *_ = x.shape\n params['fig_args']['figsize'][1] *= T\n fig = plt.figure(**params['fig_args'])\n subfigs = fig.subfigures(nrows=batch_sz, ncols=1, hspace=0.0)\n subfig_axs = [\n subfig.subplots(\n nrows=T, ncols=params['subplot_args']['ncols'])\n for subfig in subfigs.flat\n ]\n for i, axs in enumerate(subfig_axs):\n plot_xyz_args = [\n dict(params['plot_xyz_args'][i]) for _ in range(T)\n ]\n plot_xyz_args[0]['plot_title'] = True\n for args in plot_xyz_args[1:]:\n args['plot_title'] = False\n _x = x[i]\n _y = [y[i]] * T\n _z = None if z is None else [z[i]] * T\n self._plot_batch(fig, axs, plot_xyz_args, _x, y=_y, z=_z)\n else:\n raise ValueError('Expected x to have 4 or 5 dims, but found '\n f'x.shape: {x.shape}')\n\n if show:\n plt.show()\n if output_path is not None:\n make_dir(output_path, use_dirname=True)\n fig.savefig(output_path, bbox_inches='tight', pad_inches=0.2)\n\n plt.close(fig)\n\n def _plot_batch(\n self,\n fig: 'Figure',\n axs: Sequence,\n plot_xyz_args: List[dict],\n x: Tensor,\n y: Optional[Sequence] = None,\n z: Optional[Sequence] = None,\n ):\n # (N, c, h, w) --> (N, h, w, c)\n x = x.permute(0, 2, 3, 1)\n\n # apply transform, if given\n if self.transform is not None:\n tf = deserialize_albumentation_transform(self.transform)\n imgs = [tf(image=img)['image'] for img in x.numpy()]\n x = torch.from_numpy(np.stack(imgs))\n\n for i, row_axs in enumerate(axs):\n _z = None if z is None else z[i]\n self.plot_xyz(row_axs, x[i], y[i], z=_z, **plot_xyz_args[i])\n\n def get_channel_display_groups(\n self, nb_img_channels: int\n ) -> Union[Dict[str, ChannelInds], Sequence[ChannelInds]]:\n # The default channel_display_groups object depends on the number of\n # channels in the image. This number is not known when the Visualizer\n # is constructed which is why it needs to be created later.\n if self._channel_display_groups is not None:\n return self._channel_display_groups\n return get_default_channel_display_groups(nb_img_channels)\n\n def get_collate_fn(self) -> Optional[callable]:\n \"\"\"Returns a custom collate_fn to use in DataLoader.\n\n None is returned if default collate_fn should be used.\n\n See https://pytorch.org/docs/stable/data.html#working-with-collate-fn\n \"\"\"\n return None\n\n def get_batch(self, dataset: 'Dataset', batch_sz: int = 4,\n **kwargs) -> Tuple[Tensor, Any]:\n \"\"\"Generate a batch from a dataset.\n\n This is a convenience method for generating a batch of data to plot.\n\n Args:\n dataset (Dataset): A Pytorch Datset.\n batch_sz (int): Batch size. 
Defaults to 4.\n **kwargs: Extra args for :class:`~torch.utils.data.DataLoader`.\n\n Returns:\n Tuple[Tensor, Any]: (x, y) tuple where x is images and y is labels.\n \"\"\"\n collate_fn = self.get_collate_fn()\n dl = DataLoader(dataset, batch_sz, collate_fn=collate_fn, **kwargs)\n try:\n x, y = next(iter(dl))\n except StopIteration:\n raise ValueError('dataset did not return a batch')\n\n return x, y\n\n def get_plot_nrows(self, **kwargs) -> int:\n x = kwargs['x']\n batch_limit = kwargs.get('batch_limit')\n batch_sz = x.shape[0]\n nrows = min(batch_sz,\n batch_limit) if batch_limit is not None else batch_sz\n return nrows\n\n def get_plot_ncols(self, **kwargs) -> int:\n x = kwargs['x']\n nb_img_channels = x.shape[1]\n ncols = len(self.get_channel_display_groups(nb_img_channels))\n return ncols\n\n def get_plot_params(self, **kwargs) -> dict:\n nrows = self.get_plot_nrows(**kwargs)\n ncols = self.get_plot_ncols(**kwargs)\n params = {\n 'fig_args': {\n 'constrained_layout': True,\n 'figsize': np.array((self.scale * ncols, self.scale * nrows)),\n },\n 'subplot_args': {\n 'nrows': nrows,\n 'ncols': ncols,\n 'squeeze': False\n },\n 'plot_xyz_args': [{} for _ in range(nrows)]\n }\n return params\n", "path": "rastervision_pytorch_learner/rastervision/pytorch_learner/dataset/visualizer/visualizer.py"}]} | 3,711 | 242 |
gh_patches_debug_143 | rasdani/github-patches | git_diff | ManimCommunity__manim-126 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove argparse from setup.py
https://github.com/ManimCommunity/manim/blob/cf8c5b9938abafba9f6c2c1aeff9e15c8edbfdd1/setup.py#L17
Remove `argparse` from setup.py as it is part of the Python standard library and therefore need not be listed in `requirements.txt` or `setup.py`.
</issue>
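As a short illustration (not part of the original issue), the change amounts to dropping the entry from `install_requires`, since `argparse` has shipped with the standard library on every Python version manim supports; a sketch of the trimmed call, with unrelated `setup()` arguments omitted for brevity, looks like:

```python
from setuptools import setup, find_namespace_packages

setup(
    name="manimlib",
    version="0.2.0",
    packages=find_namespace_packages(),
    # other setup() arguments unchanged and omitted here for brevity
    install_requires=[  # "argparse" removed; it is part of the standard library
        "colour",
        "numpy",
        "Pillow",
        "progressbar",
        "scipy",
        "tqdm",
        "pycairo",
        "pydub",
        "pygments",
        "pyreadline; sys_platform == 'win32'",
        "rich",
    ],
)
```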
<code>
[start of setup.py]
1 from setuptools import setup, find_namespace_packages
2
3 setup(
4 name="manimlib",
5 version="0.2.0",
6 description="Animation engine for explanatory math videos",
7 license="MIT",
8 packages=find_namespace_packages(),
9 package_data={ "manim": ["*.tex"] },
10 entry_points={
11 "console_scripts": [
12 "manim=manim.__main__:main",
13 "manimcm=manim.__main__:main",
14 ]
15 },
16 install_requires=[
17 "argparse",
18 "colour",
19 "numpy",
20 "Pillow",
21 "progressbar",
22 "scipy",
23 "tqdm",
24 "pycairo",
25 "pydub",
26 "pygments",
27 "pyreadline; sys_platform == 'win32'",
28 "rich",
29 ],
30 )
31
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,7 +14,6 @@
]
},
install_requires=[
- "argparse",
"colour",
"numpy",
"Pillow",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,7 +14,6 @@\n ]\n },\n install_requires=[\n- \"argparse\",\n \"colour\",\n \"numpy\",\n \"Pillow\",\n", "issue": "Remove argparse from setup.py\nhttps://github.com/ManimCommunity/manim/blob/cf8c5b9938abafba9f6c2c1aeff9e15c8edbfdd1/setup.py#L17\r\nRemove `argparse` from setup.py as it is a default library and need not be mentioned in `requirements.txt` and `setup.py`.\n", "before_files": [{"content": "from setuptools import setup, find_namespace_packages\n\nsetup(\n name=\"manimlib\",\n version=\"0.2.0\",\n description=\"Animation engine for explanatory math videos\",\n license=\"MIT\",\n packages=find_namespace_packages(),\n package_data={ \"manim\": [\"*.tex\"] },\n entry_points={\n \"console_scripts\": [\n \"manim=manim.__main__:main\",\n \"manimcm=manim.__main__:main\",\n ]\n },\n install_requires=[\n \"argparse\",\n \"colour\",\n \"numpy\",\n \"Pillow\",\n \"progressbar\",\n \"scipy\",\n \"tqdm\",\n \"pycairo\",\n \"pydub\",\n \"pygments\",\n \"pyreadline; sys_platform == 'win32'\",\n \"rich\",\n ],\n)\n", "path": "setup.py"}]} | 842 | 59 |
gh_patches_debug_9582 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-596 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error with Service Provider Stadtreinigung Leipzig / stadtreinigung-leipzig.de
Hi everyone,
Since 03.01.2023 (which is when I first noticed it), WCS can no longer retrieve data from Stadtreinigung Leipzig.
The following error is displayed:
```
fetch failed for source Stadtreinigung Leipzig: Traceback (most recent call last):
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch
    entries = self._source.fetch()
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py", line 34, in fetch
    raise Exception(f"street not found: {self._street}")
Exception: street not found: Pflugkstraße
```
My configuration.yaml:
```yaml
waste_collection_schedule:
  sources:
    - name: stadtreinigung_leipzig_de
      args:
        street: Pflugkstraße
        house_number: 1
      calendar_title: Abfallkalender
```
I've been trying to fix this for a few days, but I can't find a solution to the problem. Is it possible that the API has been changed or is broken?
Thanks for your help...
</issue>
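A standalone sketch (not part of the original issue) of the street lookup after the fix shown further down: the source switches to the provider's `rest/Navision/Streets` endpoint with `old_format=1` and a `search` parameter. The endpoint and parameters are taken from that diff; the street value and the exact response shape are assumptions for illustration only:

```python
import requests

params = {"old_format": 1, "search": "Pflugkstraße"}  # example street from the report
r = requests.get(
    "https://stadtreinigung-leipzig.de/rest/Navision/Streets",
    params=params,
    timeout=30,
)
r.raise_for_status()
data = r.json()
# The source then reads data["results"][street][house_number] to obtain the
# position id that is passed on to the iCal download.
print(data.get("results"))
```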
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py]
1 import json
2 import logging
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6 from waste_collection_schedule.service.ICS import ICS
7
8 _LOGGER = logging.getLogger(__name__)
9
10 TITLE = "Stadtreinigung Leipzig"
11 DESCRIPTION = "Source for Stadtreinigung Leipzig."
12 URL = "https://stadtreinigung-leipzig.de"
13 TEST_CASES = {"Bahnhofsallee": {"street": "Bahnhofsallee", "house_number": 7}}
14
15
16 class Source:
17 def __init__(self, street, house_number):
18 self._street = street
19 self._house_number = house_number
20 self._ics = ICS()
21
22 def fetch(self):
23 params = {
24 "name": self._street,
25 }
26
27 # get list of streets and house numbers
28 r = requests.get(
29 "https://stadtreinigung-leipzig.de/rest/wastecalendarstreets", params=params
30 )
31
32 data = json.loads(r.text)
33 if len(data["results"]) == 0:
34 raise Exception(f"street not found: {self._street}")
35 street_entry = data["results"].get(self._street)
36 if street_entry is None:
37 raise Exception(f"street not found: {self._street}")
38
39 id = street_entry.get(str(self._house_number))
40 if id is None:
41 raise Exception(f"house_number not found: {self._house_number}")
42
43 # get ics file
44 params = {
45 "position_nos": id,
46 }
47 r = requests.get(
48 "https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics",
49 params=params,
50 )
51 dates = self._ics.convert(r.text)
52
53 entries = []
54 for d in dates:
55 entries.append(Collection(d[0], d[1].removesuffix(", ")))
56 return entries
57
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py
@@ -21,12 +21,13 @@
def fetch(self):
params = {
- "name": self._street,
+ "old_format": 1,
+ "search": self._street,
}
# get list of streets and house numbers
r = requests.get(
- "https://stadtreinigung-leipzig.de/rest/wastecalendarstreets", params=params
+ "https://stadtreinigung-leipzig.de/rest/Navision/Streets", params=params
)
data = json.loads(r.text)
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\n@@ -21,12 +21,13 @@\n \n def fetch(self):\n params = {\n- \"name\": self._street,\n+ \"old_format\": 1,\n+ \"search\": self._street,\n }\n \n # get list of streets and house numbers\n r = requests.get(\n- \"https://stadtreinigung-leipzig.de/rest/wastecalendarstreets\", params=params\n+ \"https://stadtreinigung-leipzig.de/rest/Navision/Streets\", params=params\n )\n \n data = json.loads(r.text)\n", "issue": "Error with Service Provider Stadtreinigung Leipzig / stadtreinigung-leipzig.de\nHi everyone,\r\nSince 03.01.2023 (this is where I noticed it), WCS can no longer retrieve data from Stadtwerke Leipzig.\r\nThe following error is displayed:\r\n\r\nfetch failed for source Stadtreinigung Leipzig: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\", line 34, in fetch raise Exception(f\"street not found: {self._street}\") Exception: street not found: Pflugkstra\u00dfe\r\n\r\nMy configuration.yaml:\r\nwaste_collection_schedule:\r\n sources:\r\n - name: stadtreinigung_leipzig_de\r\n args:\r\n street: Pflugkstra\u00dfe\r\n house_number: 1\r\n calendar_title: Abfallkalender\r\n\r\nI've been trying around here for a few days, but I can't find a solution to the problem. 
Is it possible that the API has been changed/defective?\r\nThanks for your help...\n", "before_files": [{"content": "import json\nimport logging\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\n_LOGGER = logging.getLogger(__name__)\n\nTITLE = \"Stadtreinigung Leipzig\"\nDESCRIPTION = \"Source for Stadtreinigung Leipzig.\"\nURL = \"https://stadtreinigung-leipzig.de\"\nTEST_CASES = {\"Bahnhofsallee\": {\"street\": \"Bahnhofsallee\", \"house_number\": 7}}\n\n\nclass Source:\n def __init__(self, street, house_number):\n self._street = street\n self._house_number = house_number\n self._ics = ICS()\n\n def fetch(self):\n params = {\n \"name\": self._street,\n }\n\n # get list of streets and house numbers\n r = requests.get(\n \"https://stadtreinigung-leipzig.de/rest/wastecalendarstreets\", params=params\n )\n\n data = json.loads(r.text)\n if len(data[\"results\"]) == 0:\n raise Exception(f\"street not found: {self._street}\")\n street_entry = data[\"results\"].get(self._street)\n if street_entry is None:\n raise Exception(f\"street not found: {self._street}\")\n\n id = street_entry.get(str(self._house_number))\n if id is None:\n raise Exception(f\"house_number not found: {self._house_number}\")\n\n # get ics file\n params = {\n \"position_nos\": id,\n }\n r = requests.get(\n \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics\",\n params=params,\n )\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1].removesuffix(\", \")))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py"}]} | 1,347 | 219 |
gh_patches_debug_37670 | rasdani/github-patches | git_diff | biolab__orange3-3842 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replicability in Neural networks and Random forests
Follow-up from #3715: Neural networks and Random forests should have a checkbox `Replicable training` or something similar, which would decide whether the random seed is fixed (to 0) or left random.
In Neural networks: add the check box.
In Random forest: remove the spin box.
</issue>
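A minimal sketch (not part of the original issue) of the seed handling the fix below applies to Random Forest: the GUI gains a plain `gui.checkBox` bound to `use_random_state` labeled `Replicable training`, and the learner receives `random_state=0` only when that box is ticked. The helper below is a stripped-down stand-in for the widget's `create_learner` and assumes Orange3 is installed:

```python
from Orange.modelling import RandomForestLearner  # assumes Orange3 is available

def make_learner(replicable: bool, n_estimators: int = 10) -> RandomForestLearner:
    kwargs = {"n_estimators": n_estimators}
    if replicable:
        kwargs["random_state"] = 0  # fixed seed, so training is replicable run to run
    # with no random_state, scikit-learn draws a fresh seed on every fit
    return RandomForestLearner(**kwargs)

replicable_rf = make_learner(replicable=True)
free_running_rf = make_learner(replicable=False)
```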
<code>
[start of Orange/widgets/model/owrandomforest.py]
1 from AnyQt.QtCore import Qt
2
3 from Orange.data import Table
4 from Orange.modelling import RandomForestLearner
5 from Orange.widgets import settings, gui
6 from Orange.widgets.utils.owlearnerwidget import OWBaseLearner
7 from Orange.widgets.utils.widgetpreview import WidgetPreview
8 from Orange.widgets.widget import Msg
9
10
11 class OWRandomForest(OWBaseLearner):
12 name = "Random Forest"
13 description = "Predict using an ensemble of decision trees."
14 icon = "icons/RandomForest.svg"
15 replaces = [
16 "Orange.widgets.classify.owrandomforest.OWRandomForest",
17 "Orange.widgets.regression.owrandomforestregression.OWRandomForestRegression",
18 ]
19 priority = 40
20 keywords = []
21
22 LEARNER = RandomForestLearner
23
24 n_estimators = settings.Setting(10)
25 max_features = settings.Setting(5)
26 use_max_features = settings.Setting(False)
27 random_state = settings.Setting(0)
28 use_random_state = settings.Setting(False)
29 max_depth = settings.Setting(3)
30 use_max_depth = settings.Setting(False)
31 min_samples_split = settings.Setting(5)
32 use_min_samples_split = settings.Setting(True)
33 index_output = settings.Setting(0)
34
35 class Error(OWBaseLearner.Error):
36 not_enough_features = Msg("Insufficient number of attributes ({})")
37
38 def add_main_layout(self):
39 box = gui.vBox(self.controlArea, 'Basic Properties')
40 self.n_estimators_spin = gui.spin(
41 box, self, "n_estimators", minv=1, maxv=10000, controlWidth=80,
42 alignment=Qt.AlignRight, label="Number of trees: ",
43 callback=self.settings_changed)
44 self.max_features_spin = gui.spin(
45 box, self, "max_features", 2, 50, controlWidth=80,
46 label="Number of attributes considered at each split: ",
47 callback=self.settings_changed, checked="use_max_features",
48 checkCallback=self.settings_changed, alignment=Qt.AlignRight,)
49 self.random_state_spin = gui.spin(
50 box, self, "random_state", 0, 2 ** 31 - 1, controlWidth=80,
51 label="Fixed seed for random generator: ", alignment=Qt.AlignRight,
52 callback=self.settings_changed, checked="use_random_state",
53 checkCallback=self.settings_changed)
54
55 box = gui.vBox(self.controlArea, "Growth Control")
56 self.max_depth_spin = gui.spin(
57 box, self, "max_depth", 1, 50, controlWidth=80,
58 label="Limit depth of individual trees: ", alignment=Qt.AlignRight,
59 callback=self.settings_changed, checked="use_max_depth",
60 checkCallback=self.settings_changed)
61 self.min_samples_split_spin = gui.spin(
62 box, self, "min_samples_split", 2, 1000, controlWidth=80,
63 label="Do not split subsets smaller than: ",
64 callback=self.settings_changed, checked="use_min_samples_split",
65 checkCallback=self.settings_changed, alignment=Qt.AlignRight)
66
67 def create_learner(self):
68 common_args = {"n_estimators": self.n_estimators}
69 if self.use_max_features:
70 common_args["max_features"] = self.max_features
71 if self.use_random_state:
72 common_args["random_state"] = self.random_state
73 if self.use_max_depth:
74 common_args["max_depth"] = self.max_depth
75 if self.use_min_samples_split:
76 common_args["min_samples_split"] = self.min_samples_split
77
78 return self.LEARNER(preprocessors=self.preprocessors, **common_args)
79
80 def check_data(self):
81 self.Error.not_enough_features.clear()
82 if super().check_data():
83 n_features = len(self.data.domain.attributes)
84 if self.use_max_features and self.max_features > n_features:
85 self.Error.not_enough_features(n_features)
86 self.valid_data = False
87 return self.valid_data
88
89 def get_learner_parameters(self):
90 """Called by send report to list the parameters of the learner."""
91 return (
92 ("Number of trees", self.n_estimators),
93 ("Maximal number of considered features",
94 self.max_features if self.use_max_features else "unlimited"),
95 ("Fixed random seed", self.use_random_state and self.random_state),
96 ("Maximal tree depth",
97 self.max_depth if self.use_max_depth else "unlimited"),
98 ("Stop splitting nodes with maximum instances",
99 self.min_samples_split if self.use_min_samples_split else "unlimited")
100 )
101
102
103 if __name__ == "__main__": # pragma: no cover
104 WidgetPreview(OWRandomForest).run(Table("iris"))
105
[end of Orange/widgets/model/owrandomforest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Orange/widgets/model/owrandomforest.py b/Orange/widgets/model/owrandomforest.py
--- a/Orange/widgets/model/owrandomforest.py
+++ b/Orange/widgets/model/owrandomforest.py
@@ -24,7 +24,6 @@
n_estimators = settings.Setting(10)
max_features = settings.Setting(5)
use_max_features = settings.Setting(False)
- random_state = settings.Setting(0)
use_random_state = settings.Setting(False)
max_depth = settings.Setting(3)
use_max_depth = settings.Setting(False)
@@ -46,11 +45,9 @@
label="Number of attributes considered at each split: ",
callback=self.settings_changed, checked="use_max_features",
checkCallback=self.settings_changed, alignment=Qt.AlignRight,)
- self.random_state_spin = gui.spin(
- box, self, "random_state", 0, 2 ** 31 - 1, controlWidth=80,
- label="Fixed seed for random generator: ", alignment=Qt.AlignRight,
- callback=self.settings_changed, checked="use_random_state",
- checkCallback=self.settings_changed)
+ self.random_state = gui.checkBox(
+ box, self, "use_random_state", label="Replicable training",
+ callback=self.settings_changed)
box = gui.vBox(self.controlArea, "Growth Control")
self.max_depth_spin = gui.spin(
@@ -69,7 +66,7 @@
if self.use_max_features:
common_args["max_features"] = self.max_features
if self.use_random_state:
- common_args["random_state"] = self.random_state
+ common_args["random_state"] = 0
if self.use_max_depth:
common_args["max_depth"] = self.max_depth
if self.use_min_samples_split:
@@ -92,7 +89,7 @@
("Number of trees", self.n_estimators),
("Maximal number of considered features",
self.max_features if self.use_max_features else "unlimited"),
- ("Fixed random seed", self.use_random_state and self.random_state),
+ ("Replicable training", ["No", "Yes"][self.use_random_state]),
("Maximal tree depth",
self.max_depth if self.use_max_depth else "unlimited"),
("Stop splitting nodes with maximum instances",
| {"golden_diff": "diff --git a/Orange/widgets/model/owrandomforest.py b/Orange/widgets/model/owrandomforest.py\n--- a/Orange/widgets/model/owrandomforest.py\n+++ b/Orange/widgets/model/owrandomforest.py\n@@ -24,7 +24,6 @@\n n_estimators = settings.Setting(10)\n max_features = settings.Setting(5)\n use_max_features = settings.Setting(False)\n- random_state = settings.Setting(0)\n use_random_state = settings.Setting(False)\n max_depth = settings.Setting(3)\n use_max_depth = settings.Setting(False)\n@@ -46,11 +45,9 @@\n label=\"Number of attributes considered at each split: \",\n callback=self.settings_changed, checked=\"use_max_features\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight,)\n- self.random_state_spin = gui.spin(\n- box, self, \"random_state\", 0, 2 ** 31 - 1, controlWidth=80,\n- label=\"Fixed seed for random generator: \", alignment=Qt.AlignRight,\n- callback=self.settings_changed, checked=\"use_random_state\",\n- checkCallback=self.settings_changed)\n+ self.random_state = gui.checkBox(\n+ box, self, \"use_random_state\", label=\"Replicable training\",\n+ callback=self.settings_changed)\n \n box = gui.vBox(self.controlArea, \"Growth Control\")\n self.max_depth_spin = gui.spin(\n@@ -69,7 +66,7 @@\n if self.use_max_features:\n common_args[\"max_features\"] = self.max_features\n if self.use_random_state:\n- common_args[\"random_state\"] = self.random_state\n+ common_args[\"random_state\"] = 0\n if self.use_max_depth:\n common_args[\"max_depth\"] = self.max_depth\n if self.use_min_samples_split:\n@@ -92,7 +89,7 @@\n (\"Number of trees\", self.n_estimators),\n (\"Maximal number of considered features\",\n self.max_features if self.use_max_features else \"unlimited\"),\n- (\"Fixed random seed\", self.use_random_state and self.random_state),\n+ (\"Replicable training\", [\"No\", \"Yes\"][self.use_random_state]),\n (\"Maximal tree depth\",\n self.max_depth if self.use_max_depth else \"unlimited\"),\n (\"Stop splitting nodes with maximum instances\",\n", "issue": "Replicability in Neural networks and Random forests\nFollow up from #3715: Neural networks and Random forests should have a checkbox `Replicable training` or something like this, which would decide whether random seed is fixed (to 0) or \"random\".\r\n\r\nIn Neural networks: add the check box.\r\n\r\nIn Random forest: remove the spin box.\n", "before_files": [{"content": "from AnyQt.QtCore import Qt\n\nfrom Orange.data import Table\nfrom Orange.modelling import RandomForestLearner\nfrom Orange.widgets import settings, gui\nfrom Orange.widgets.utils.owlearnerwidget import OWBaseLearner\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import Msg\n\n\nclass OWRandomForest(OWBaseLearner):\n name = \"Random Forest\"\n description = \"Predict using an ensemble of decision trees.\"\n icon = \"icons/RandomForest.svg\"\n replaces = [\n \"Orange.widgets.classify.owrandomforest.OWRandomForest\",\n \"Orange.widgets.regression.owrandomforestregression.OWRandomForestRegression\",\n ]\n priority = 40\n keywords = []\n\n LEARNER = RandomForestLearner\n\n n_estimators = settings.Setting(10)\n max_features = settings.Setting(5)\n use_max_features = settings.Setting(False)\n random_state = settings.Setting(0)\n use_random_state = settings.Setting(False)\n max_depth = settings.Setting(3)\n use_max_depth = settings.Setting(False)\n min_samples_split = settings.Setting(5)\n use_min_samples_split = settings.Setting(True)\n index_output = settings.Setting(0)\n\n class 
Error(OWBaseLearner.Error):\n not_enough_features = Msg(\"Insufficient number of attributes ({})\")\n\n def add_main_layout(self):\n box = gui.vBox(self.controlArea, 'Basic Properties')\n self.n_estimators_spin = gui.spin(\n box, self, \"n_estimators\", minv=1, maxv=10000, controlWidth=80,\n alignment=Qt.AlignRight, label=\"Number of trees: \",\n callback=self.settings_changed)\n self.max_features_spin = gui.spin(\n box, self, \"max_features\", 2, 50, controlWidth=80,\n label=\"Number of attributes considered at each split: \",\n callback=self.settings_changed, checked=\"use_max_features\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight,)\n self.random_state_spin = gui.spin(\n box, self, \"random_state\", 0, 2 ** 31 - 1, controlWidth=80,\n label=\"Fixed seed for random generator: \", alignment=Qt.AlignRight,\n callback=self.settings_changed, checked=\"use_random_state\",\n checkCallback=self.settings_changed)\n\n box = gui.vBox(self.controlArea, \"Growth Control\")\n self.max_depth_spin = gui.spin(\n box, self, \"max_depth\", 1, 50, controlWidth=80,\n label=\"Limit depth of individual trees: \", alignment=Qt.AlignRight,\n callback=self.settings_changed, checked=\"use_max_depth\",\n checkCallback=self.settings_changed)\n self.min_samples_split_spin = gui.spin(\n box, self, \"min_samples_split\", 2, 1000, controlWidth=80,\n label=\"Do not split subsets smaller than: \",\n callback=self.settings_changed, checked=\"use_min_samples_split\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight)\n\n def create_learner(self):\n common_args = {\"n_estimators\": self.n_estimators}\n if self.use_max_features:\n common_args[\"max_features\"] = self.max_features\n if self.use_random_state:\n common_args[\"random_state\"] = self.random_state\n if self.use_max_depth:\n common_args[\"max_depth\"] = self.max_depth\n if self.use_min_samples_split:\n common_args[\"min_samples_split\"] = self.min_samples_split\n\n return self.LEARNER(preprocessors=self.preprocessors, **common_args)\n\n def check_data(self):\n self.Error.not_enough_features.clear()\n if super().check_data():\n n_features = len(self.data.domain.attributes)\n if self.use_max_features and self.max_features > n_features:\n self.Error.not_enough_features(n_features)\n self.valid_data = False\n return self.valid_data\n\n def get_learner_parameters(self):\n \"\"\"Called by send report to list the parameters of the learner.\"\"\"\n return (\n (\"Number of trees\", self.n_estimators),\n (\"Maximal number of considered features\",\n self.max_features if self.use_max_features else \"unlimited\"),\n (\"Fixed random seed\", self.use_random_state and self.random_state),\n (\"Maximal tree depth\",\n self.max_depth if self.use_max_depth else \"unlimited\"),\n (\"Stop splitting nodes with maximum instances\",\n self.min_samples_split if self.use_min_samples_split else \"unlimited\")\n )\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWRandomForest).run(Table(\"iris\"))\n", "path": "Orange/widgets/model/owrandomforest.py"}]} | 1,835 | 516 |
gh_patches_debug_31272 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-453 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
min_max log_gpu_memory option bug
**Describe the bug**
Setting `log_gpu_memory='min_max'` in `Trainer` leads to the following bug.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 347, in fit
self.single_gpu_train(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/dp_mixin.py", line 79, in single_gpu_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 467, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/train_loop_mixin.py", line 60, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/train_loop_mixin.py", line 126, in run_training_epoch
self.log_metrics(batch_step_metrics, grad_norm_dic)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/logging_mixin.py", line 20, in log_metrics
mem_map = memory.get_memory_profile(self.log_gpu_memory)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/root_module/memory.py", line 205, in get_memory_profile
for k, v in memory_map:
ValueError: too many values to unpack (expected 2)
```
**To Reproduce**
On current master, execute the following.
```
trainer = Trainer(
...
log_gpu_memory='min_max',
...
)
trainer.fit(model)
```
**Expected behavior**
Log the min/max utilization of GPU memory, as documented for the `min_max` option.
**Desktop (please complete the following information):**
- OS: Ubuntu 18.04
- Version: Current master
I am working on this issue. Will submit a PR soon.
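For reference, the traceback follows from iterating the memory-map dictionary directly: `get_gpu_memory_map()` returns keys such as `'gpu_0'`, so `for k, v in memory_map:` tries to unpack each key string and fails. A minimal illustrative sketch of the intended `min_max` aggregation (not the actual patch; the GPU values below are made up):
```python
# Illustrative sketch only: work over .items() so keys and values are paired.
memory_map = {'gpu_0': 1200, 'gpu_1': 3400, 'gpu_2': 800}  # made-up MB values

min_k, min_mem = min(memory_map.items(), key=lambda item: item[1])
max_k, max_mem = max(memory_map.items(), key=lambda item: item[1])

print({min_k: min_mem, max_k: max_mem})  # -> {'gpu_2': 800, 'gpu_1': 3400}
```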
</issue>
<code>
[start of pytorch_lightning/root_module/memory.py]
1 '''
2 Generates a summary of a model's layers and dimensionality
3 '''
4
5 import gc
6 import subprocess
7
8 import numpy as np
9 import pandas as pd
10 import torch
11
12
13 class ModelSummary(object):
14
15 def __init__(self, model, mode='full'):
16 '''
17 Generates summaries of model layers and dimensions.
18 '''
19 self.model = model
20 self.mode = mode
21 self.in_sizes = []
22 self.out_sizes = []
23
24 self.summarize()
25
26 def __str__(self):
27 return self.summary.__str__()
28
29 def __repr__(self):
30 return self.summary.__str__()
31
32 def named_modules(self):
33 if self.mode == 'full':
34 mods = self.model.named_modules()
35 mods = list(mods)[1:] # do not include root module (LightningModule)
36 elif self.mode == 'top':
37 # the children are the top-level modules
38 mods = self.model.named_children()
39 else:
40 mods = []
41 return list(mods)
42
43 def get_variable_sizes(self):
44 '''Run sample input through each layer to get output sizes'''
45 mods = self.named_modules()
46 in_sizes = []
47 out_sizes = []
48 input_ = self.model.example_input_array
49
50 if self.model.on_gpu:
51 input_ = input_.cuda(0)
52
53 if self.model.trainer.use_amp:
54 input_ = input_.half()
55
56 with torch.no_grad():
57
58 for _, m in mods:
59 if type(input_) is list or type(input_) is tuple: # pragma: no cover
60 out = m(*input_)
61 else:
62 out = m(input_)
63
64 if type(input_) is tuple or type(input_) is list: # pragma: no cover
65 in_size = []
66 for x in input_:
67 if type(x) is list:
68 in_size.append(len(x))
69 else:
70 in_size.append(x.size())
71 else:
72 in_size = np.array(input_.size())
73
74 in_sizes.append(in_size)
75
76 if type(out) is tuple or type(out) is list: # pragma: no cover
77 out_size = np.asarray([x.size() for x in out])
78 else:
79 out_size = np.array(out.size())
80
81 out_sizes.append(out_size)
82 input_ = out
83
84 self.in_sizes = in_sizes
85 self.out_sizes = out_sizes
86 assert len(in_sizes) == len(out_sizes)
87 return
88
89 def get_layer_names(self):
90 '''Collect Layer Names'''
91 mods = self.named_modules()
92 names = []
93 layers = []
94 for name, m in mods:
95 names += [name]
96 layers += [str(m.__class__)]
97
98 layer_types = [x.split('.')[-1][:-2] for x in layers]
99
100 self.layer_names = names
101 self.layer_types = layer_types
102 return
103
104 def get_parameter_sizes(self):
105 '''Get sizes of all parameters in `model`'''
106 mods = self.named_modules()
107 sizes = []
108 for _, m in mods:
109 p = list(m.parameters())
110 modsz = []
111 for j in range(len(p)):
112 modsz.append(np.array(p[j].size()))
113 sizes.append(modsz)
114
115 self.param_sizes = sizes
116 return
117
118 def get_parameter_nums(self):
119 '''Get number of parameters in each layer'''
120 param_nums = []
121 for mod in self.param_sizes:
122 all_params = 0
123 for p in mod:
124 all_params += np.prod(p)
125 param_nums.append(all_params)
126 self.param_nums = param_nums
127 return
128
129 def make_summary(self):
130 '''
131 Makes a summary listing with:
132
133 Layer Name, Layer Type, Input Size, Output Size, Number of Parameters
134 '''
135
136 cols = ['Name', 'Type', 'Params']
137 if self.model.example_input_array is not None:
138 cols.extend(['In_sizes', 'Out_sizes'])
139
140 df = pd.DataFrame(np.zeros((len(self.layer_names), len(cols))))
141 df.columns = cols
142
143 df['Name'] = self.layer_names
144 df['Type'] = self.layer_types
145 df['Params'] = self.param_nums
146 df['Params'] = df['Params'].map(get_human_readable_count)
147
148 if self.model.example_input_array is not None:
149 df['In_sizes'] = self.in_sizes
150 df['Out_sizes'] = self.out_sizes
151
152 self.summary = df
153 return
154
155 def summarize(self):
156 self.get_layer_names()
157 self.get_parameter_sizes()
158 self.get_parameter_nums()
159
160 if self.model.example_input_array is not None:
161 self.get_variable_sizes()
162 self.make_summary()
163
164
165 def print_mem_stack(): # pragma: no cover
166 for obj in gc.get_objects():
167 try:
168 if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
169 print(type(obj), obj.size())
170 except Exception:
171 pass
172
173
174 def count_mem_items(): # pragma: no cover
175 nb_params = 0
176 nb_tensors = 0
177 for obj in gc.get_objects():
178 try:
179 if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
180 obj_type = str(type(obj))
181 if 'parameter' in obj_type:
182 nb_params += 1
183 else:
184 nb_tensors += 1
185 except Exception:
186 pass
187
188 return nb_params, nb_tensors
189
190
191 def get_memory_profile(mode):
192 """
193 'all' means return memory for all gpus
194 'min_max' means return memory for max and min
195 :param mode:
196 :return:
197 """
198 memory_map = get_gpu_memory_map()
199
200 if mode == 'min_max':
201 min_mem = 1000000
202 min_k = None
203 max_mem = 0
204 max_k = None
205 for k, v in memory_map:
206 if v > max_mem:
207 max_mem = v
208 max_k = k
209 if v < min_mem:
210 min_mem = v
211 min_k = k
212
213 memory_map = {min_k: min_mem, max_k: max_mem}
214
215 return memory_map
216
217
218 def get_gpu_memory_map():
219 """Get the current gpu usage.
220
221 Returns
222 -------
223 usage: dict
224 Keys are device ids as integers.
225 Values are memory usage as integers in MB.
226 """
227 result = subprocess.check_output(
228 [
229 'nvidia-smi', '--query-gpu=memory.used',
230 '--format=csv,nounits,noheader'
231 ], encoding='utf-8')
232 # Convert lines into a dictionary
233 gpu_memory = [int(x) for x in result.strip().split('\n')]
234 gpu_memory_map = {}
235 for k, v in zip(range(len(gpu_memory)), gpu_memory):
236 k = f'gpu_{k}'
237 gpu_memory_map[k] = v
238 return gpu_memory_map
239
240
241 def get_human_readable_count(number):
242 """
243 Abbreviates an integer number with K, M, B, T for thousands, millions,
244 billions and trillions, respectively.
245 Examples:
246 123 -> 123
247 1234 -> 1 K (one thousand)
248 2e6 -> 2 M (two million)
249 3e9 -> 3 B (three billion)
250 4e12 -> 4 T (four trillion)
251 5e15 -> 5,000 T
252 :param number: a positive integer number
253 :returns a string formatted according to the pattern described above.
254 """
255 assert number >= 0
256 labels = [' ', 'K', 'M', 'B', 'T']
257 num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)
258 num_groups = int(np.ceil(num_digits / 3))
259 num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions
260 shift = -3 * (num_groups - 1)
261 number = number * (10 ** shift)
262 index = num_groups - 1
263 return f'{int(number):,d} {labels[index]}'
264
[end of pytorch_lightning/root_module/memory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_lightning/root_module/memory.py b/pytorch_lightning/root_module/memory.py
--- a/pytorch_lightning/root_module/memory.py
+++ b/pytorch_lightning/root_module/memory.py
@@ -3,6 +3,7 @@
'''
import gc
+import os
import subprocess
import numpy as np
@@ -198,19 +199,10 @@
memory_map = get_gpu_memory_map()
if mode == 'min_max':
- min_mem = 1000000
- min_k = None
- max_mem = 0
- max_k = None
- for k, v in memory_map:
- if v > max_mem:
- max_mem = v
- max_k = k
- if v < min_mem:
- min_mem = v
- min_k = k
-
- memory_map = {min_k: min_mem, max_k: max_mem}
+ min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])
+ max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])
+
+ memory_map = {min_index: min_memory, max_index: max_memory}
return memory_map
@@ -224,17 +216,18 @@
Keys are device ids as integers.
Values are memory usage as integers in MB.
"""
- result = subprocess.check_output(
+ result = subprocess.run(
[
- 'nvidia-smi', '--query-gpu=memory.used',
- '--format=csv,nounits,noheader'
- ], encoding='utf-8')
+ 'nvidia-smi',
+ '--query-gpu=memory.used',
+ '--format=csv,nounits,noheader',
+ ],
+ encoding='utf-8',
+ capture_output=True,
+ check=True)
# Convert lines into a dictionary
- gpu_memory = [int(x) for x in result.strip().split('\n')]
- gpu_memory_map = {}
- for k, v in zip(range(len(gpu_memory)), gpu_memory):
- k = f'gpu_{k}'
- gpu_memory_map[k] = v
+ gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]
+ gpu_memory_map = {f'gpu_{index}': memory for index, memory in enumerate(gpu_memory)}
return gpu_memory_map
| {"golden_diff": "diff --git a/pytorch_lightning/root_module/memory.py b/pytorch_lightning/root_module/memory.py\n--- a/pytorch_lightning/root_module/memory.py\n+++ b/pytorch_lightning/root_module/memory.py\n@@ -3,6 +3,7 @@\n '''\n \n import gc\n+import os\n import subprocess\n \n import numpy as np\n@@ -198,19 +199,10 @@\n memory_map = get_gpu_memory_map()\n \n if mode == 'min_max':\n- min_mem = 1000000\n- min_k = None\n- max_mem = 0\n- max_k = None\n- for k, v in memory_map:\n- if v > max_mem:\n- max_mem = v\n- max_k = k\n- if v < min_mem:\n- min_mem = v\n- min_k = k\n-\n- memory_map = {min_k: min_mem, max_k: max_mem}\n+ min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])\n+ max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])\n+\n+ memory_map = {min_index: min_memory, max_index: max_memory}\n \n return memory_map\n \n@@ -224,17 +216,18 @@\n Keys are device ids as integers.\n Values are memory usage as integers in MB.\n \"\"\"\n- result = subprocess.check_output(\n+ result = subprocess.run(\n [\n- 'nvidia-smi', '--query-gpu=memory.used',\n- '--format=csv,nounits,noheader'\n- ], encoding='utf-8')\n+ 'nvidia-smi',\n+ '--query-gpu=memory.used',\n+ '--format=csv,nounits,noheader',\n+ ],\n+ encoding='utf-8',\n+ capture_output=True,\n+ check=True)\n # Convert lines into a dictionary\n- gpu_memory = [int(x) for x in result.strip().split('\\n')]\n- gpu_memory_map = {}\n- for k, v in zip(range(len(gpu_memory)), gpu_memory):\n- k = f'gpu_{k}'\n- gpu_memory_map[k] = v\n+ gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]\n+ gpu_memory_map = {f'gpu_{index}': memory for index, memory in enumerate(gpu_memory)}\n return gpu_memory_map\n", "issue": "min_max log_gpu_memory option bug\n**Describe the bug**\r\nSetting `log_gpu_memory='min_max'` in `Trainer` leads to the following bug.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 347, in fit\r\n self.single_gpu_train(model)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/dp_mixin.py\", line 79, in single_gpu_train\r\n self.run_pretrain_routine(model)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 467, in run_pretrain_routine\r\n self.train()\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/train_loop_mixin.py\", line 60, in train\r\n self.run_training_epoch()\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/train_loop_mixin.py\", line 126, in run_training_epoch\r\n self.log_metrics(batch_step_metrics, grad_norm_dic)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/logging_mixin.py\", line 20, in log_metrics\r\n mem_map = memory.get_memory_profile(self.log_gpu_memory)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/root_module/memory.py\", line 205, in get_memory_profile\r\n for k, v in memory_map:\r\nValueError: too many values to unpack (expected 2)\r\n```\r\n\r\n\r\n**To Reproduce**\r\nOn current master, execute the following.\r\n```\r\n trainer = Trainer(\r\n ...\r\n log_gpu_memory='min_max',\r\n ...\r\n )\r\n trainer.fit(model)\r\n```\r\n\r\n**Expected behavior**\r\nLog the min/max utilization of gpu memory, as `min_max` option is documented.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Ubuntu 18.04 \r\n - Version: Current master\r\n\r\nI am working on this issue. 
Will submit a PR soon.\n", "before_files": [{"content": "'''\nGenerates a summary of a model's layers and dimensionality\n'''\n\nimport gc\nimport subprocess\n\nimport numpy as np\nimport pandas as pd\nimport torch\n\n\nclass ModelSummary(object):\n\n def __init__(self, model, mode='full'):\n '''\n Generates summaries of model layers and dimensions.\n '''\n self.model = model\n self.mode = mode\n self.in_sizes = []\n self.out_sizes = []\n\n self.summarize()\n\n def __str__(self):\n return self.summary.__str__()\n\n def __repr__(self):\n return self.summary.__str__()\n\n def named_modules(self):\n if self.mode == 'full':\n mods = self.model.named_modules()\n mods = list(mods)[1:] # do not include root module (LightningModule)\n elif self.mode == 'top':\n # the children are the top-level modules\n mods = self.model.named_children()\n else:\n mods = []\n return list(mods)\n\n def get_variable_sizes(self):\n '''Run sample input through each layer to get output sizes'''\n mods = self.named_modules()\n in_sizes = []\n out_sizes = []\n input_ = self.model.example_input_array\n\n if self.model.on_gpu:\n input_ = input_.cuda(0)\n\n if self.model.trainer.use_amp:\n input_ = input_.half()\n\n with torch.no_grad():\n\n for _, m in mods:\n if type(input_) is list or type(input_) is tuple: # pragma: no cover\n out = m(*input_)\n else:\n out = m(input_)\n\n if type(input_) is tuple or type(input_) is list: # pragma: no cover\n in_size = []\n for x in input_:\n if type(x) is list:\n in_size.append(len(x))\n else:\n in_size.append(x.size())\n else:\n in_size = np.array(input_.size())\n\n in_sizes.append(in_size)\n\n if type(out) is tuple or type(out) is list: # pragma: no cover\n out_size = np.asarray([x.size() for x in out])\n else:\n out_size = np.array(out.size())\n\n out_sizes.append(out_size)\n input_ = out\n\n self.in_sizes = in_sizes\n self.out_sizes = out_sizes\n assert len(in_sizes) == len(out_sizes)\n return\n\n def get_layer_names(self):\n '''Collect Layer Names'''\n mods = self.named_modules()\n names = []\n layers = []\n for name, m in mods:\n names += [name]\n layers += [str(m.__class__)]\n\n layer_types = [x.split('.')[-1][:-2] for x in layers]\n\n self.layer_names = names\n self.layer_types = layer_types\n return\n\n def get_parameter_sizes(self):\n '''Get sizes of all parameters in `model`'''\n mods = self.named_modules()\n sizes = []\n for _, m in mods:\n p = list(m.parameters())\n modsz = []\n for j in range(len(p)):\n modsz.append(np.array(p[j].size()))\n sizes.append(modsz)\n\n self.param_sizes = sizes\n return\n\n def get_parameter_nums(self):\n '''Get number of parameters in each layer'''\n param_nums = []\n for mod in self.param_sizes:\n all_params = 0\n for p in mod:\n all_params += np.prod(p)\n param_nums.append(all_params)\n self.param_nums = param_nums\n return\n\n def make_summary(self):\n '''\n Makes a summary listing with:\n\n Layer Name, Layer Type, Input Size, Output Size, Number of Parameters\n '''\n\n cols = ['Name', 'Type', 'Params']\n if self.model.example_input_array is not None:\n cols.extend(['In_sizes', 'Out_sizes'])\n\n df = pd.DataFrame(np.zeros((len(self.layer_names), len(cols))))\n df.columns = cols\n\n df['Name'] = self.layer_names\n df['Type'] = self.layer_types\n df['Params'] = self.param_nums\n df['Params'] = df['Params'].map(get_human_readable_count)\n\n if self.model.example_input_array is not None:\n df['In_sizes'] = self.in_sizes\n df['Out_sizes'] = self.out_sizes\n\n self.summary = df\n return\n\n def summarize(self):\n self.get_layer_names()\n 
self.get_parameter_sizes()\n self.get_parameter_nums()\n\n if self.model.example_input_array is not None:\n self.get_variable_sizes()\n self.make_summary()\n\n\ndef print_mem_stack(): # pragma: no cover\n for obj in gc.get_objects():\n try:\n if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):\n print(type(obj), obj.size())\n except Exception:\n pass\n\n\ndef count_mem_items(): # pragma: no cover\n nb_params = 0\n nb_tensors = 0\n for obj in gc.get_objects():\n try:\n if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):\n obj_type = str(type(obj))\n if 'parameter' in obj_type:\n nb_params += 1\n else:\n nb_tensors += 1\n except Exception:\n pass\n\n return nb_params, nb_tensors\n\n\ndef get_memory_profile(mode):\n \"\"\"\n 'all' means return memory for all gpus\n 'min_max' means return memory for max and min\n :param mode:\n :return:\n \"\"\"\n memory_map = get_gpu_memory_map()\n\n if mode == 'min_max':\n min_mem = 1000000\n min_k = None\n max_mem = 0\n max_k = None\n for k, v in memory_map:\n if v > max_mem:\n max_mem = v\n max_k = k\n if v < min_mem:\n min_mem = v\n min_k = k\n\n memory_map = {min_k: min_mem, max_k: max_mem}\n\n return memory_map\n\n\ndef get_gpu_memory_map():\n \"\"\"Get the current gpu usage.\n\n Returns\n -------\n usage: dict\n Keys are device ids as integers.\n Values are memory usage as integers in MB.\n \"\"\"\n result = subprocess.check_output(\n [\n 'nvidia-smi', '--query-gpu=memory.used',\n '--format=csv,nounits,noheader'\n ], encoding='utf-8')\n # Convert lines into a dictionary\n gpu_memory = [int(x) for x in result.strip().split('\\n')]\n gpu_memory_map = {}\n for k, v in zip(range(len(gpu_memory)), gpu_memory):\n k = f'gpu_{k}'\n gpu_memory_map[k] = v\n return gpu_memory_map\n\n\ndef get_human_readable_count(number):\n \"\"\"\n Abbreviates an integer number with K, M, B, T for thousands, millions,\n billions and trillions, respectively.\n Examples:\n 123 -> 123\n 1234 -> 1 K (one thousand)\n 2e6 -> 2 M (two million)\n 3e9 -> 3 B (three billion)\n 4e12 -> 4 T (four trillion)\n 5e15 -> 5,000 T\n :param number: a positive integer number\n :returns a string formatted according to the pattern described above.\n \"\"\"\n assert number >= 0\n labels = [' ', 'K', 'M', 'B', 'T']\n num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)\n num_groups = int(np.ceil(num_digits / 3))\n num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions\n shift = -3 * (num_groups - 1)\n number = number * (10 ** shift)\n index = num_groups - 1\n return f'{int(number):,d} {labels[index]}'\n", "path": "pytorch_lightning/root_module/memory.py"}]} | 3,480 | 546 |
gh_patches_debug_15202 | rasdani/github-patches | git_diff | vega__altair-1265 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
line_percent.py does not work offline
We need all examples to work offline. Currently ``line_percent.py`` uses ``pd.read_json`` from a URL.
The example should probably use a URL plus a filter.
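For illustration, one possible shape of the "URL plus a filter" approach (column names taken from the existing example; this is a sketch, not necessarily the final example):
```python
import altair as alt
from vega_datasets import data

# Pass the dataset URL straight to Altair and filter inside the chart spec,
# instead of downloading the data with pandas when the example script runs.
source = data.jobs.url

alt.Chart(source).mark_line().encode(
    alt.X('year:O'),
    alt.Y('perc:Q', axis=alt.Axis(format='%')),
    color='sex:N'
).transform_filter(
    alt.datum.job == 'Welder'
)
```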
</issue>
<code>
[start of altair/examples/boxplot_max_min.py]
1 """
2 Box Plot with Min/Max Whiskers
3 ------------------------------
4 This example shows how to make a basic box plot using US Population data from 2000.
5 """
6 # category: other charts
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.population()
11
12 base = alt.Chart(source)
13
14 # Define aggregate fields
15 lower_box = 'q1(people):Q'
16 lower_whisker = 'min(people):Q'
17 upper_box = 'q3(people):Q'
18 upper_whisker = 'max(people):Q'
19
20 # Compose each layer individually
21 lower_plot = base.mark_rule().encode(
22 y=alt.Y(lower_whisker, title="population"),
23 y2=lower_box,
24 x='age:O'
25 )
26
27 middle_plot = base.mark_bar(size=5.0).encode(
28 y=lower_box,
29 y2=upper_box,
30 x='age:O'
31 )
32
33 upper_plot = base.mark_rule().encode(
34 y=upper_whisker,
35 y2=upper_box,
36 x='age:O'
37 )
38
39 middle_tick = base.mark_tick(
40 color='white',
41 size=5.0
42 ).encode(
43 y='median(people):Q',
44 x='age:O',
45 )
46
47 lower_plot + middle_plot + upper_plot + middle_tick
48
[end of altair/examples/boxplot_max_min.py]
[start of altair/examples/line_percent.py]
1 """
2 Line Chart with Percent axis
3 ----------------------------
4 This example shows how to format the tick labels of the y-axis of a chart as percentages.
5 """
6 # category: line charts
7 import altair as alt
8 import pandas as pd
9 from vega_datasets import data
10
11 source = pd.read_json(data.jobs.url)
12 welders = source[source.job == 'Welder']
13
14 alt.Chart(welders).mark_line().encode(
15 alt.X('year:O'),
16 alt.Y('perc:Q', axis=alt.Axis(format='%')),
17 color='sex:N'
18 )
19
[end of altair/examples/line_percent.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/examples/boxplot_max_min.py b/altair/examples/boxplot_max_min.py
--- a/altair/examples/boxplot_max_min.py
+++ b/altair/examples/boxplot_max_min.py
@@ -7,7 +7,7 @@
import altair as alt
from vega_datasets import data
-source = data.population()
+source = data.population.url
base = alt.Chart(source)
diff --git a/altair/examples/line_percent.py b/altair/examples/line_percent.py
--- a/altair/examples/line_percent.py
+++ b/altair/examples/line_percent.py
@@ -5,14 +5,14 @@
"""
# category: line charts
import altair as alt
-import pandas as pd
from vega_datasets import data
-source = pd.read_json(data.jobs.url)
-welders = source[source.job == 'Welder']
+source = data.jobs.url
-alt.Chart(welders).mark_line().encode(
+alt.Chart(source).mark_line().encode(
alt.X('year:O'),
alt.Y('perc:Q', axis=alt.Axis(format='%')),
color='sex:N'
+).transform_filter(
+ alt.datum.job == 'Welder'
)
| {"golden_diff": "diff --git a/altair/examples/boxplot_max_min.py b/altair/examples/boxplot_max_min.py\n--- a/altair/examples/boxplot_max_min.py\n+++ b/altair/examples/boxplot_max_min.py\n@@ -7,7 +7,7 @@\n import altair as alt\n from vega_datasets import data\n \n-source = data.population()\n+source = data.population.url\n \n base = alt.Chart(source)\n \ndiff --git a/altair/examples/line_percent.py b/altair/examples/line_percent.py\n--- a/altair/examples/line_percent.py\n+++ b/altair/examples/line_percent.py\n@@ -5,14 +5,14 @@\n \"\"\"\n # category: line charts\n import altair as alt\n-import pandas as pd\n from vega_datasets import data\n \n-source = pd.read_json(data.jobs.url)\n-welders = source[source.job == 'Welder']\n+source = data.jobs.url\n \n-alt.Chart(welders).mark_line().encode(\n+alt.Chart(source).mark_line().encode(\n alt.X('year:O'),\n alt.Y('perc:Q', axis=alt.Axis(format='%')),\n color='sex:N'\n+).transform_filter(\n+ alt.datum.job == 'Welder'\n )\n", "issue": "line_percent.py does not work offline\nWe need all examples to work offline. Currently ``line_percent.py`` uses ``pd.read_json`` from a URL.\r\n\r\nThe example should probably use a URL plus a filter.\n", "before_files": [{"content": "\"\"\"\nBox Plot with Min/Max Whiskers\n------------------------------\nThis example shows how to make a basic box plot using US Population data from 2000.\n\"\"\"\n# category: other charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.population()\n\nbase = alt.Chart(source)\n\n# Define aggregate fields\nlower_box = 'q1(people):Q'\nlower_whisker = 'min(people):Q'\nupper_box = 'q3(people):Q'\nupper_whisker = 'max(people):Q'\n\n# Compose each layer individually\nlower_plot = base.mark_rule().encode(\n y=alt.Y(lower_whisker, title=\"population\"),\n y2=lower_box,\n x='age:O'\n)\n\nmiddle_plot = base.mark_bar(size=5.0).encode(\n y=lower_box,\n y2=upper_box,\n x='age:O'\n)\n\nupper_plot = base.mark_rule().encode(\n y=upper_whisker,\n y2=upper_box,\n x='age:O'\n)\n\nmiddle_tick = base.mark_tick(\n color='white',\n size=5.0\n).encode(\n y='median(people):Q',\n x='age:O',\n)\n\nlower_plot + middle_plot + upper_plot + middle_tick\n", "path": "altair/examples/boxplot_max_min.py"}, {"content": "\"\"\"\nLine Chart with Percent axis\n----------------------------\nThis example shows how to format the tick labels of the y-axis of a chart as percentages.\n\"\"\"\n# category: line charts\nimport altair as alt\nimport pandas as pd\nfrom vega_datasets import data\n\nsource = pd.read_json(data.jobs.url)\nwelders = source[source.job == 'Welder']\n\nalt.Chart(welders).mark_line().encode(\n alt.X('year:O'),\n alt.Y('perc:Q', axis=alt.Axis(format='%')),\n color='sex:N'\n)\n", "path": "altair/examples/line_percent.py"}]} | 1,126 | 270 |
gh_patches_debug_14641 | rasdani/github-patches | git_diff | scrapy__scrapy-5993 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Backward compatibility in utils.conf.build_component_list
There is some code from 2015 in `scrapy.utils.conf.build_component_list` marked as "Backward compatibility for old (base, custom) call signature", which was added in #1586. I couldn't understand after a quick glance why it is "backward compatibility", but if it's something deprecated we should deprecate it properly with a message, and if it's a properly supported code path we should remove the comments.
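For illustration, if the `(base, custom)` path is treated as deprecated, the warning could be wired up roughly like this (simplified sketch with a stand-in warning class and a stripped-down function body, not Scrapy's actual code):
```python
import warnings


class ScrapyDeprecationWarning(Warning):
    """Stand-in for scrapy.exceptions.ScrapyDeprecationWarning in this sketch."""


def build_component_list(compdict, custom=None):
    """Stripped-down stand-in: only the deprecation plumbing is shown."""
    if custom is not None:
        warnings.warn(
            "The 'custom' argument of build_component_list() is deprecated; "
            "merge its value into 'compdict' instead.",
            category=ScrapyDeprecationWarning,
            stacklevel=2,
        )
        compdict.update(custom)
    return [name for name, _ in sorted(compdict.items(), key=lambda kv: kv[1])]
```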
</issue>
<code>
[start of scrapy/utils/conf.py]
1 import numbers
2 import os
3 import sys
4 import warnings
5 from configparser import ConfigParser
6 from operator import itemgetter
7 from pathlib import Path
8 from typing import Any, Dict, List, Optional, Union
9
10 from scrapy.exceptions import ScrapyDeprecationWarning, UsageError
11 from scrapy.settings import BaseSettings
12 from scrapy.utils.deprecate import update_classpath
13 from scrapy.utils.python import without_none_values
14
15
16 def build_component_list(compdict, custom=None, convert=update_classpath):
17 """Compose a component list from a { class: order } dictionary."""
18
19 def _check_components(complist):
20 if len({convert(c) for c in complist}) != len(complist):
21 raise ValueError(
22 f"Some paths in {complist!r} convert to the same object, "
23 "please update your settings"
24 )
25
26 def _map_keys(compdict):
27 if isinstance(compdict, BaseSettings):
28 compbs = BaseSettings()
29 for k, v in compdict.items():
30 prio = compdict.getpriority(k)
31 assert prio is not None
32 if compbs.getpriority(convert(k)) == prio:
33 raise ValueError(
34 f"Some paths in {list(compdict.keys())!r} "
35 "convert to the same "
36 "object, please update your settings"
37 )
38 else:
39 compbs.set(convert(k), v, priority=prio)
40 return compbs
41 _check_components(compdict)
42 return {convert(k): v for k, v in compdict.items()}
43
44 def _validate_values(compdict):
45 """Fail if a value in the components dict is not a real number or None."""
46 for name, value in compdict.items():
47 if value is not None and not isinstance(value, numbers.Real):
48 raise ValueError(
49 f"Invalid value {value} for component {name}, "
50 "please provide a real number or None instead"
51 )
52
53 if isinstance(custom, (list, tuple)):
54 _check_components(custom)
55 return type(custom)(convert(c) for c in custom)
56
57 if custom is not None:
58 compdict.update(custom)
59
60 _validate_values(compdict)
61 compdict = without_none_values(_map_keys(compdict))
62 return [k for k, v in sorted(compdict.items(), key=itemgetter(1))]
63
64
65 def arglist_to_dict(arglist):
66 """Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] to a
67 dict
68 """
69 return dict(x.split("=", 1) for x in arglist)
70
71
72 def closest_scrapy_cfg(
73 path: Union[str, os.PathLike] = ".",
74 prevpath: Optional[Union[str, os.PathLike]] = None,
75 ) -> str:
76 """Return the path to the closest scrapy.cfg file by traversing the current
77 directory and its parents
78 """
79 if prevpath is not None and str(path) == str(prevpath):
80 return ""
81 path = Path(path).resolve()
82 cfgfile = path / "scrapy.cfg"
83 if cfgfile.exists():
84 return str(cfgfile)
85 return closest_scrapy_cfg(path.parent, path)
86
87
88 def init_env(project="default", set_syspath=True):
89 """Initialize environment to use command-line tool from inside a project
90 dir. This sets the Scrapy settings module and modifies the Python path to
91 be able to locate the project module.
92 """
93 cfg = get_config()
94 if cfg.has_option("settings", project):
95 os.environ["SCRAPY_SETTINGS_MODULE"] = cfg.get("settings", project)
96 closest = closest_scrapy_cfg()
97 if closest:
98 projdir = str(Path(closest).parent)
99 if set_syspath and projdir not in sys.path:
100 sys.path.append(projdir)
101
102
103 def get_config(use_closest=True):
104 """Get Scrapy config file as a ConfigParser"""
105 sources = get_sources(use_closest)
106 cfg = ConfigParser()
107 cfg.read(sources)
108 return cfg
109
110
111 def get_sources(use_closest=True) -> List[str]:
112 xdg_config_home = (
113 os.environ.get("XDG_CONFIG_HOME") or Path("~/.config").expanduser()
114 )
115 sources = [
116 "/etc/scrapy.cfg",
117 r"c:\scrapy\scrapy.cfg",
118 str(Path(xdg_config_home) / "scrapy.cfg"),
119 str(Path("~/.scrapy.cfg").expanduser()),
120 ]
121 if use_closest:
122 sources.append(closest_scrapy_cfg())
123 return sources
124
125
126 def feed_complete_default_values_from_settings(feed, settings):
127 out = feed.copy()
128 out.setdefault("batch_item_count", settings.getint("FEED_EXPORT_BATCH_ITEM_COUNT"))
129 out.setdefault("encoding", settings["FEED_EXPORT_ENCODING"])
130 out.setdefault("fields", settings.getdictorlist("FEED_EXPORT_FIELDS") or None)
131 out.setdefault("store_empty", settings.getbool("FEED_STORE_EMPTY"))
132 out.setdefault("uri_params", settings["FEED_URI_PARAMS"])
133 out.setdefault("item_export_kwargs", {})
134 if settings["FEED_EXPORT_INDENT"] is None:
135 out.setdefault("indent", None)
136 else:
137 out.setdefault("indent", settings.getint("FEED_EXPORT_INDENT"))
138 return out
139
140
141 def feed_process_params_from_cli(
142 settings,
143 output: List[str],
144 output_format=None,
145 overwrite_output: Optional[List[str]] = None,
146 ):
147 """
148 Receives feed export params (from the 'crawl' or 'runspider' commands),
149 checks for inconsistencies in their quantities and returns a dictionary
150 suitable to be used as the FEEDS setting.
151 """
152 valid_output_formats = without_none_values(
153 settings.getwithbase("FEED_EXPORTERS")
154 ).keys()
155
156 def check_valid_format(output_format):
157 if output_format not in valid_output_formats:
158 raise UsageError(
159 f"Unrecognized output format '{output_format}'. "
160 f"Set a supported one ({tuple(valid_output_formats)}) "
161 "after a colon at the end of the output URI (i.e. -o/-O "
162 "<URI>:<FORMAT>) or as a file extension."
163 )
164
165 overwrite = False
166 if overwrite_output:
167 if output:
168 raise UsageError(
169 "Please use only one of -o/--output and -O/--overwrite-output"
170 )
171 if output_format:
172 raise UsageError(
173 "-t/--output-format is a deprecated command line option"
174 " and does not work in combination with -O/--overwrite-output."
175 " To specify a format please specify it after a colon at the end of the"
176 " output URI (i.e. -O <URI>:<FORMAT>)."
177 " Example working in the tutorial: "
178 "scrapy crawl quotes -O quotes.json:json"
179 )
180 output = overwrite_output
181 overwrite = True
182
183 if output_format:
184 if len(output) == 1:
185 check_valid_format(output_format)
186 message = (
187 "The -t/--output-format command line option is deprecated in favor of "
188 "specifying the output format within the output URI using the -o/--output or the"
189 " -O/--overwrite-output option (i.e. -o/-O <URI>:<FORMAT>). See the documentation"
190 " of the -o or -O option or the following examples for more information. "
191 "Examples working in the tutorial: "
192 "scrapy crawl quotes -o quotes.csv:csv or "
193 "scrapy crawl quotes -O quotes.json:json"
194 )
195 warnings.warn(message, ScrapyDeprecationWarning, stacklevel=2)
196 return {output[0]: {"format": output_format}}
197 raise UsageError(
198 "The -t command-line option cannot be used if multiple output "
199 "URIs are specified"
200 )
201
202 result: Dict[str, Dict[str, Any]] = {}
203 for element in output:
204 try:
205 feed_uri, feed_format = element.rsplit(":", 1)
206 check_valid_format(feed_format)
207 except (ValueError, UsageError):
208 feed_uri = element
209 feed_format = Path(element).suffix.replace(".", "")
210 else:
211 if feed_uri == "-":
212 feed_uri = "stdout:"
213 check_valid_format(feed_format)
214 result[feed_uri] = {"format": feed_format}
215 if overwrite:
216 result[feed_uri]["overwrite"] = True
217
218 # FEEDS setting should take precedence over the matching CLI options
219 result.update(settings.getdict("FEEDS"))
220
221 return result
222
[end of scrapy/utils/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/utils/conf.py b/scrapy/utils/conf.py
--- a/scrapy/utils/conf.py
+++ b/scrapy/utils/conf.py
@@ -50,11 +50,17 @@
"please provide a real number or None instead"
)
- if isinstance(custom, (list, tuple)):
- _check_components(custom)
- return type(custom)(convert(c) for c in custom)
-
if custom is not None:
+ warnings.warn(
+ "The 'custom' attribute of build_component_list() is deprecated. "
+ "Please merge its value into 'compdict' manually or change your "
+ "code to use Settings.getwithbase().",
+ category=ScrapyDeprecationWarning,
+ stacklevel=2,
+ )
+ if isinstance(custom, (list, tuple)):
+ _check_components(custom)
+ return type(custom)(convert(c) for c in custom)
compdict.update(custom)
_validate_values(compdict)
| {"golden_diff": "diff --git a/scrapy/utils/conf.py b/scrapy/utils/conf.py\n--- a/scrapy/utils/conf.py\n+++ b/scrapy/utils/conf.py\n@@ -50,11 +50,17 @@\n \"please provide a real number or None instead\"\n )\n \n- if isinstance(custom, (list, tuple)):\n- _check_components(custom)\n- return type(custom)(convert(c) for c in custom)\n-\n if custom is not None:\n+ warnings.warn(\n+ \"The 'custom' attribute of build_component_list() is deprecated. \"\n+ \"Please merge its value into 'compdict' manually or change your \"\n+ \"code to use Settings.getwithbase().\",\n+ category=ScrapyDeprecationWarning,\n+ stacklevel=2,\n+ )\n+ if isinstance(custom, (list, tuple)):\n+ _check_components(custom)\n+ return type(custom)(convert(c) for c in custom)\n compdict.update(custom)\n \n _validate_values(compdict)\n", "issue": "Backward compatibility in utils.conf.build_component_list\nThere is some code from 2015 in `scrapy.utils.conf.build_component_list` marked as \"Backward compatibility for old (base, custom) call signature\", which was added in #1586. I couldn't understand after a quick glance why is it \"backward compatibility\" but if it's something deprecated we should deprecare it properly with a message, and if it's a properly supported code path we should remove the comments.\n", "before_files": [{"content": "import numbers\nimport os\nimport sys\nimport warnings\nfrom configparser import ConfigParser\nfrom operator import itemgetter\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union\n\nfrom scrapy.exceptions import ScrapyDeprecationWarning, UsageError\nfrom scrapy.settings import BaseSettings\nfrom scrapy.utils.deprecate import update_classpath\nfrom scrapy.utils.python import without_none_values\n\n\ndef build_component_list(compdict, custom=None, convert=update_classpath):\n \"\"\"Compose a component list from a { class: order } dictionary.\"\"\"\n\n def _check_components(complist):\n if len({convert(c) for c in complist}) != len(complist):\n raise ValueError(\n f\"Some paths in {complist!r} convert to the same object, \"\n \"please update your settings\"\n )\n\n def _map_keys(compdict):\n if isinstance(compdict, BaseSettings):\n compbs = BaseSettings()\n for k, v in compdict.items():\n prio = compdict.getpriority(k)\n assert prio is not None\n if compbs.getpriority(convert(k)) == prio:\n raise ValueError(\n f\"Some paths in {list(compdict.keys())!r} \"\n \"convert to the same \"\n \"object, please update your settings\"\n )\n else:\n compbs.set(convert(k), v, priority=prio)\n return compbs\n _check_components(compdict)\n return {convert(k): v for k, v in compdict.items()}\n\n def _validate_values(compdict):\n \"\"\"Fail if a value in the components dict is not a real number or None.\"\"\"\n for name, value in compdict.items():\n if value is not None and not isinstance(value, numbers.Real):\n raise ValueError(\n f\"Invalid value {value} for component {name}, \"\n \"please provide a real number or None instead\"\n )\n\n if isinstance(custom, (list, tuple)):\n _check_components(custom)\n return type(custom)(convert(c) for c in custom)\n\n if custom is not None:\n compdict.update(custom)\n\n _validate_values(compdict)\n compdict = without_none_values(_map_keys(compdict))\n return [k for k, v in sorted(compdict.items(), key=itemgetter(1))]\n\n\ndef arglist_to_dict(arglist):\n \"\"\"Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] 
to a\n dict\n \"\"\"\n return dict(x.split(\"=\", 1) for x in arglist)\n\n\ndef closest_scrapy_cfg(\n path: Union[str, os.PathLike] = \".\",\n prevpath: Optional[Union[str, os.PathLike]] = None,\n) -> str:\n \"\"\"Return the path to the closest scrapy.cfg file by traversing the current\n directory and its parents\n \"\"\"\n if prevpath is not None and str(path) == str(prevpath):\n return \"\"\n path = Path(path).resolve()\n cfgfile = path / \"scrapy.cfg\"\n if cfgfile.exists():\n return str(cfgfile)\n return closest_scrapy_cfg(path.parent, path)\n\n\ndef init_env(project=\"default\", set_syspath=True):\n \"\"\"Initialize environment to use command-line tool from inside a project\n dir. This sets the Scrapy settings module and modifies the Python path to\n be able to locate the project module.\n \"\"\"\n cfg = get_config()\n if cfg.has_option(\"settings\", project):\n os.environ[\"SCRAPY_SETTINGS_MODULE\"] = cfg.get(\"settings\", project)\n closest = closest_scrapy_cfg()\n if closest:\n projdir = str(Path(closest).parent)\n if set_syspath and projdir not in sys.path:\n sys.path.append(projdir)\n\n\ndef get_config(use_closest=True):\n \"\"\"Get Scrapy config file as a ConfigParser\"\"\"\n sources = get_sources(use_closest)\n cfg = ConfigParser()\n cfg.read(sources)\n return cfg\n\n\ndef get_sources(use_closest=True) -> List[str]:\n xdg_config_home = (\n os.environ.get(\"XDG_CONFIG_HOME\") or Path(\"~/.config\").expanduser()\n )\n sources = [\n \"/etc/scrapy.cfg\",\n r\"c:\\scrapy\\scrapy.cfg\",\n str(Path(xdg_config_home) / \"scrapy.cfg\"),\n str(Path(\"~/.scrapy.cfg\").expanduser()),\n ]\n if use_closest:\n sources.append(closest_scrapy_cfg())\n return sources\n\n\ndef feed_complete_default_values_from_settings(feed, settings):\n out = feed.copy()\n out.setdefault(\"batch_item_count\", settings.getint(\"FEED_EXPORT_BATCH_ITEM_COUNT\"))\n out.setdefault(\"encoding\", settings[\"FEED_EXPORT_ENCODING\"])\n out.setdefault(\"fields\", settings.getdictorlist(\"FEED_EXPORT_FIELDS\") or None)\n out.setdefault(\"store_empty\", settings.getbool(\"FEED_STORE_EMPTY\"))\n out.setdefault(\"uri_params\", settings[\"FEED_URI_PARAMS\"])\n out.setdefault(\"item_export_kwargs\", {})\n if settings[\"FEED_EXPORT_INDENT\"] is None:\n out.setdefault(\"indent\", None)\n else:\n out.setdefault(\"indent\", settings.getint(\"FEED_EXPORT_INDENT\"))\n return out\n\n\ndef feed_process_params_from_cli(\n settings,\n output: List[str],\n output_format=None,\n overwrite_output: Optional[List[str]] = None,\n):\n \"\"\"\n Receives feed export params (from the 'crawl' or 'runspider' commands),\n checks for inconsistencies in their quantities and returns a dictionary\n suitable to be used as the FEEDS setting.\n \"\"\"\n valid_output_formats = without_none_values(\n settings.getwithbase(\"FEED_EXPORTERS\")\n ).keys()\n\n def check_valid_format(output_format):\n if output_format not in valid_output_formats:\n raise UsageError(\n f\"Unrecognized output format '{output_format}'. \"\n f\"Set a supported one ({tuple(valid_output_formats)}) \"\n \"after a colon at the end of the output URI (i.e. 
-o/-O \"\n \"<URI>:<FORMAT>) or as a file extension.\"\n )\n\n overwrite = False\n if overwrite_output:\n if output:\n raise UsageError(\n \"Please use only one of -o/--output and -O/--overwrite-output\"\n )\n if output_format:\n raise UsageError(\n \"-t/--output-format is a deprecated command line option\"\n \" and does not work in combination with -O/--overwrite-output.\"\n \" To specify a format please specify it after a colon at the end of the\"\n \" output URI (i.e. -O <URI>:<FORMAT>).\"\n \" Example working in the tutorial: \"\n \"scrapy crawl quotes -O quotes.json:json\"\n )\n output = overwrite_output\n overwrite = True\n\n if output_format:\n if len(output) == 1:\n check_valid_format(output_format)\n message = (\n \"The -t/--output-format command line option is deprecated in favor of \"\n \"specifying the output format within the output URI using the -o/--output or the\"\n \" -O/--overwrite-output option (i.e. -o/-O <URI>:<FORMAT>). See the documentation\"\n \" of the -o or -O option or the following examples for more information. \"\n \"Examples working in the tutorial: \"\n \"scrapy crawl quotes -o quotes.csv:csv or \"\n \"scrapy crawl quotes -O quotes.json:json\"\n )\n warnings.warn(message, ScrapyDeprecationWarning, stacklevel=2)\n return {output[0]: {\"format\": output_format}}\n raise UsageError(\n \"The -t command-line option cannot be used if multiple output \"\n \"URIs are specified\"\n )\n\n result: Dict[str, Dict[str, Any]] = {}\n for element in output:\n try:\n feed_uri, feed_format = element.rsplit(\":\", 1)\n check_valid_format(feed_format)\n except (ValueError, UsageError):\n feed_uri = element\n feed_format = Path(element).suffix.replace(\".\", \"\")\n else:\n if feed_uri == \"-\":\n feed_uri = \"stdout:\"\n check_valid_format(feed_format)\n result[feed_uri] = {\"format\": feed_format}\n if overwrite:\n result[feed_uri][\"overwrite\"] = True\n\n # FEEDS setting should take precedence over the matching CLI options\n result.update(settings.getdict(\"FEEDS\"))\n\n return result\n", "path": "scrapy/utils/conf.py"}]} | 3,011 | 218 |
gh_patches_debug_11084 | rasdani/github-patches | git_diff | python-discord__bot-852 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Disallow editing duration of expired infractions
Currently the command assumes that an infraction is still active if a duration is being edited. It tries to cancel the previous infraction, but this fails with a warning if the infraction has already expired. Relevant code can be found here:
https://github.com/python-discord/bot/blob/582ddbb1ca8bab2cb883781911f5f35962330995/bot/cogs/moderation/management.py#L130-L142
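For illustration, one possible guard (a sketch only: the helper name is hypothetical and the `active` field is assumed to exist on the infraction payload returned by the API):
```python
def can_edit_duration(old_infraction: dict, new_duration) -> bool:
    """Return True if changing the expiry of this infraction should be allowed.

    Sketch only: assumes the API payload carries an 'active' flag.
    """
    if new_duration is None:  # reason-only edits stay allowed
        return True
    return bool(old_infraction.get("active"))
```
`infraction_edit` could call something like this before building `request_data` and reply with an error instead of trying to cancel a task that no longer exists.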
</issue>
<code>
[start of bot/cogs/moderation/management.py]
1 import logging
2 import textwrap
3 import typing as t
4 from datetime import datetime
5
6 import discord
7 from discord.ext import commands
8 from discord.ext.commands import Context
9
10 from bot import constants
11 from bot.bot import Bot
12 from bot.converters import Expiry, InfractionSearchQuery, allowed_strings, proxy_user
13 from bot.pagination import LinePaginator
14 from bot.utils import time
15 from bot.utils.checks import in_channel_check, with_role_check
16 from . import utils
17 from .infractions import Infractions
18 from .modlog import ModLog
19
20 log = logging.getLogger(__name__)
21
22
23 class ModManagement(commands.Cog):
24 """Management of infractions."""
25
26 category = "Moderation"
27
28 def __init__(self, bot: Bot):
29 self.bot = bot
30
31 @property
32 def mod_log(self) -> ModLog:
33 """Get currently loaded ModLog cog instance."""
34 return self.bot.get_cog("ModLog")
35
36 @property
37 def infractions_cog(self) -> Infractions:
38 """Get currently loaded Infractions cog instance."""
39 return self.bot.get_cog("Infractions")
40
41 # region: Edit infraction commands
42
43 @commands.group(name='infraction', aliases=('infr', 'infractions', 'inf'), invoke_without_command=True)
44 async def infraction_group(self, ctx: Context) -> None:
45 """Infraction manipulation commands."""
46 await ctx.invoke(self.bot.get_command("help"), "infraction")
47
48 @infraction_group.command(name='edit')
49 async def infraction_edit(
50 self,
51 ctx: Context,
52 infraction_id: t.Union[int, allowed_strings("l", "last", "recent")],
53 duration: t.Union[Expiry, allowed_strings("p", "permanent"), None],
54 *,
55 reason: str = None
56 ) -> None:
57 """
58 Edit the duration and/or the reason of an infraction.
59
60 Durations are relative to the time of updating and should be appended with a unit of time.
61 Units (∗case-sensitive):
62 \u2003`y` - years
63 \u2003`m` - months∗
64 \u2003`w` - weeks
65 \u2003`d` - days
66 \u2003`h` - hours
67 \u2003`M` - minutes∗
68 \u2003`s` - seconds
69
70 Use "l", "last", or "recent" as the infraction ID to specify that the most recent infraction
71 authored by the command invoker should be edited.
72
73 Use "p" or "permanent" to mark the infraction as permanent. Alternatively, an ISO 8601
74 timestamp can be provided for the duration.
75 """
76 if duration is None and reason is None:
77 # Unlike UserInputError, the error handler will show a specified message for BadArgument
78 raise commands.BadArgument("Neither a new expiry nor a new reason was specified.")
79
80 # Retrieve the previous infraction for its information.
81 if isinstance(infraction_id, str):
82 params = {
83 "actor__id": ctx.author.id,
84 "ordering": "-inserted_at"
85 }
86 infractions = await self.bot.api_client.get(f"bot/infractions", params=params)
87
88 if infractions:
89 old_infraction = infractions[0]
90 infraction_id = old_infraction["id"]
91 else:
92 await ctx.send(
93 f":x: Couldn't find most recent infraction; you have never given an infraction."
94 )
95 return
96 else:
97 old_infraction = await self.bot.api_client.get(f"bot/infractions/{infraction_id}")
98
99 request_data = {}
100 confirm_messages = []
101 log_text = ""
102
103 if isinstance(duration, str):
104 request_data['expires_at'] = None
105 confirm_messages.append("marked as permanent")
106 elif duration is not None:
107 request_data['expires_at'] = duration.isoformat()
108 expiry = time.format_infraction_with_duration(request_data['expires_at'])
109 confirm_messages.append(f"set to expire on {expiry}")
110 else:
111 confirm_messages.append("expiry unchanged")
112
113 if reason:
114 request_data['reason'] = reason
115 confirm_messages.append("set a new reason")
116 log_text += f"""
117 Previous reason: {old_infraction['reason']}
118 New reason: {reason}
119 """.rstrip()
120 else:
121 confirm_messages.append("reason unchanged")
122
123 # Update the infraction
124 new_infraction = await self.bot.api_client.patch(
125 f'bot/infractions/{infraction_id}',
126 json=request_data,
127 )
128
129 # Re-schedule infraction if the expiration has been updated
130 if 'expires_at' in request_data:
131 # A scheduled task should only exist if the old infraction wasn't permanent
132 if old_infraction['expires_at']:
133 self.infractions_cog.cancel_task(new_infraction['id'])
134
135 # If the infraction was not marked as permanent, schedule a new expiration task
136 if request_data['expires_at']:
137 self.infractions_cog.schedule_task(new_infraction['id'], new_infraction)
138
139 log_text += f"""
140 Previous expiry: {old_infraction['expires_at'] or "Permanent"}
141 New expiry: {new_infraction['expires_at'] or "Permanent"}
142 """.rstrip()
143
144 changes = ' & '.join(confirm_messages)
145 await ctx.send(f":ok_hand: Updated infraction #{infraction_id}: {changes}")
146
147 # Get information about the infraction's user
148 user_id = new_infraction['user']
149 user = ctx.guild.get_member(user_id)
150
151 if user:
152 user_text = f"{user.mention} (`{user.id}`)"
153 thumbnail = user.avatar_url_as(static_format="png")
154 else:
155 user_text = f"`{user_id}`"
156 thumbnail = None
157
158 # The infraction's actor
159 actor_id = new_infraction['actor']
160 actor = ctx.guild.get_member(actor_id) or f"`{actor_id}`"
161
162 await self.mod_log.send_log_message(
163 icon_url=constants.Icons.pencil,
164 colour=discord.Colour.blurple(),
165 title="Infraction edited",
166 thumbnail=thumbnail,
167 text=textwrap.dedent(f"""
168 Member: {user_text}
169 Actor: {actor}
170 Edited by: {ctx.message.author}{log_text}
171 """)
172 )
173
174 # endregion
175 # region: Search infractions
176
177 @infraction_group.group(name="search", invoke_without_command=True)
178 async def infraction_search_group(self, ctx: Context, query: InfractionSearchQuery) -> None:
179 """Searches for infractions in the database."""
180 if isinstance(query, discord.User):
181 await ctx.invoke(self.search_user, query)
182 else:
183 await ctx.invoke(self.search_reason, query)
184
185 @infraction_search_group.command(name="user", aliases=("member", "id"))
186 async def search_user(self, ctx: Context, user: t.Union[discord.User, proxy_user]) -> None:
187 """Search for infractions by member."""
188 infraction_list = await self.bot.api_client.get(
189 'bot/infractions',
190 params={'user__id': str(user.id)}
191 )
192 embed = discord.Embed(
193 title=f"Infractions for {user} ({len(infraction_list)} total)",
194 colour=discord.Colour.orange()
195 )
196 await self.send_infraction_list(ctx, embed, infraction_list)
197
198 @infraction_search_group.command(name="reason", aliases=("match", "regex", "re"))
199 async def search_reason(self, ctx: Context, reason: str) -> None:
200 """Search for infractions by their reason. Use Re2 for matching."""
201 infraction_list = await self.bot.api_client.get(
202 'bot/infractions',
203 params={'search': reason}
204 )
205 embed = discord.Embed(
206 title=f"Infractions matching `{reason}` ({len(infraction_list)} total)",
207 colour=discord.Colour.orange()
208 )
209 await self.send_infraction_list(ctx, embed, infraction_list)
210
211 # endregion
212 # region: Utility functions
213
214 async def send_infraction_list(
215 self,
216 ctx: Context,
217 embed: discord.Embed,
218 infractions: t.Iterable[utils.Infraction]
219 ) -> None:
220 """Send a paginated embed of infractions for the specified user."""
221 if not infractions:
222 await ctx.send(f":warning: No infractions could be found for that query.")
223 return
224
225 lines = tuple(
226 self.infraction_to_string(infraction)
227 for infraction in infractions
228 )
229
230 await LinePaginator.paginate(
231 lines,
232 ctx=ctx,
233 embed=embed,
234 empty=True,
235 max_lines=3,
236 max_size=1000
237 )
238
239 def infraction_to_string(self, infraction: utils.Infraction) -> str:
240 """Convert the infraction object to a string representation."""
241 actor_id = infraction["actor"]
242 guild = self.bot.get_guild(constants.Guild.id)
243 actor = guild.get_member(actor_id)
244 active = infraction["active"]
245 user_id = infraction["user"]
246 hidden = infraction["hidden"]
247 created = time.format_infraction(infraction["inserted_at"])
248
249 if active:
250 remaining = time.until_expiration(infraction["expires_at"]) or "Expired"
251 else:
252 remaining = "Inactive"
253
254 if infraction["expires_at"] is None:
255 expires = "*Permanent*"
256 else:
257 date_from = datetime.strptime(created, time.INFRACTION_FORMAT)
258 expires = time.format_infraction_with_duration(infraction["expires_at"], date_from)
259
260 lines = textwrap.dedent(f"""
261 {"**===============**" if active else "==============="}
262 Status: {"__**Active**__" if active else "Inactive"}
263 User: {self.bot.get_user(user_id)} (`{user_id}`)
264 Type: **{infraction["type"]}**
265 Shadow: {hidden}
266 Reason: {infraction["reason"] or "*None*"}
267 Created: {created}
268 Expires: {expires}
269 Remaining: {remaining}
270 Actor: {actor.mention if actor else actor_id}
271 ID: `{infraction["id"]}`
272 {"**===============**" if active else "==============="}
273 """)
274
275 return lines.strip()
276
277 # endregion
278
279 # This cannot be static (must have a __func__ attribute).
280 def cog_check(self, ctx: Context) -> bool:
281 """Only allow moderators from moderator channels to invoke the commands in this cog."""
282 checks = [
283 with_role_check(ctx, *constants.MODERATION_ROLES),
284 in_channel_check(ctx, *constants.MODERATION_CHANNELS)
285 ]
286 return all(checks)
287
288 # This cannot be static (must have a __func__ attribute).
289 async def cog_command_error(self, ctx: Context, error: Exception) -> None:
290 """Send a notification to the invoking context on a Union failure."""
291 if isinstance(error, commands.BadUnionArgument):
292 if discord.User in error.converters:
293 await ctx.send(str(error.errors[0]))
294 error.handled = True
295
[end of bot/cogs/moderation/management.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bot/cogs/moderation/management.py b/bot/cogs/moderation/management.py
--- a/bot/cogs/moderation/management.py
+++ b/bot/cogs/moderation/management.py
@@ -100,7 +100,12 @@
confirm_messages = []
log_text = ""
- if isinstance(duration, str):
+ if duration is not None and not old_infraction['active']:
+ if reason is None:
+ await ctx.send(":x: Cannot edit the expiration of an expired infraction.")
+ return
+ confirm_messages.append("expiry unchanged (infraction already expired)")
+ elif isinstance(duration, str):
request_data['expires_at'] = None
confirm_messages.append("marked as permanent")
elif duration is not None:
| {"golden_diff": "diff --git a/bot/cogs/moderation/management.py b/bot/cogs/moderation/management.py\n--- a/bot/cogs/moderation/management.py\n+++ b/bot/cogs/moderation/management.py\n@@ -100,7 +100,12 @@\n confirm_messages = []\n log_text = \"\"\n \n- if isinstance(duration, str):\n+ if duration is not None and not old_infraction['active']:\n+ if reason is None:\n+ await ctx.send(\":x: Cannot edit the expiration of an expired infraction.\")\n+ return\n+ confirm_messages.append(\"expiry unchanged (infraction already expired)\")\n+ elif isinstance(duration, str):\n request_data['expires_at'] = None\n confirm_messages.append(\"marked as permanent\")\n elif duration is not None:\n", "issue": "Disallow editing duration of expired infractions\nCurrently the command assumes that an infraction is still active if a duration is being edited. It tries to cancel the previous infraction but it will fail with a warning if the infraction already expired. Relevant code can be found here:\r\nhttps://github.com/python-discord/bot/blob/582ddbb1ca8bab2cb883781911f5f35962330995/bot/cogs/moderation/management.py#L130-L142\n", "before_files": [{"content": "import logging\nimport textwrap\nimport typing as t\nfrom datetime import datetime\n\nimport discord\nfrom discord.ext import commands\nfrom discord.ext.commands import Context\n\nfrom bot import constants\nfrom bot.bot import Bot\nfrom bot.converters import Expiry, InfractionSearchQuery, allowed_strings, proxy_user\nfrom bot.pagination import LinePaginator\nfrom bot.utils import time\nfrom bot.utils.checks import in_channel_check, with_role_check\nfrom . import utils\nfrom .infractions import Infractions\nfrom .modlog import ModLog\n\nlog = logging.getLogger(__name__)\n\n\nclass ModManagement(commands.Cog):\n \"\"\"Management of infractions.\"\"\"\n\n category = \"Moderation\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n @property\n def mod_log(self) -> ModLog:\n \"\"\"Get currently loaded ModLog cog instance.\"\"\"\n return self.bot.get_cog(\"ModLog\")\n\n @property\n def infractions_cog(self) -> Infractions:\n \"\"\"Get currently loaded Infractions cog instance.\"\"\"\n return self.bot.get_cog(\"Infractions\")\n\n # region: Edit infraction commands\n\n @commands.group(name='infraction', aliases=('infr', 'infractions', 'inf'), invoke_without_command=True)\n async def infraction_group(self, ctx: Context) -> None:\n \"\"\"Infraction manipulation commands.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"infraction\")\n\n @infraction_group.command(name='edit')\n async def infraction_edit(\n self,\n ctx: Context,\n infraction_id: t.Union[int, allowed_strings(\"l\", \"last\", \"recent\")],\n duration: t.Union[Expiry, allowed_strings(\"p\", \"permanent\"), None],\n *,\n reason: str = None\n ) -> None:\n \"\"\"\n Edit the duration and/or the reason of an infraction.\n\n Durations are relative to the time of updating and should be appended with a unit of time.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n Use \"l\", \"last\", or \"recent\" as the infraction ID to specify that the most recent infraction\n authored by the command invoker should be edited.\n\n Use \"p\" or \"permanent\" to mark the infraction as permanent. 
Alternatively, an ISO 8601\n timestamp can be provided for the duration.\n \"\"\"\n if duration is None and reason is None:\n # Unlike UserInputError, the error handler will show a specified message for BadArgument\n raise commands.BadArgument(\"Neither a new expiry nor a new reason was specified.\")\n\n # Retrieve the previous infraction for its information.\n if isinstance(infraction_id, str):\n params = {\n \"actor__id\": ctx.author.id,\n \"ordering\": \"-inserted_at\"\n }\n infractions = await self.bot.api_client.get(f\"bot/infractions\", params=params)\n\n if infractions:\n old_infraction = infractions[0]\n infraction_id = old_infraction[\"id\"]\n else:\n await ctx.send(\n f\":x: Couldn't find most recent infraction; you have never given an infraction.\"\n )\n return\n else:\n old_infraction = await self.bot.api_client.get(f\"bot/infractions/{infraction_id}\")\n\n request_data = {}\n confirm_messages = []\n log_text = \"\"\n\n if isinstance(duration, str):\n request_data['expires_at'] = None\n confirm_messages.append(\"marked as permanent\")\n elif duration is not None:\n request_data['expires_at'] = duration.isoformat()\n expiry = time.format_infraction_with_duration(request_data['expires_at'])\n confirm_messages.append(f\"set to expire on {expiry}\")\n else:\n confirm_messages.append(\"expiry unchanged\")\n\n if reason:\n request_data['reason'] = reason\n confirm_messages.append(\"set a new reason\")\n log_text += f\"\"\"\n Previous reason: {old_infraction['reason']}\n New reason: {reason}\n \"\"\".rstrip()\n else:\n confirm_messages.append(\"reason unchanged\")\n\n # Update the infraction\n new_infraction = await self.bot.api_client.patch(\n f'bot/infractions/{infraction_id}',\n json=request_data,\n )\n\n # Re-schedule infraction if the expiration has been updated\n if 'expires_at' in request_data:\n # A scheduled task should only exist if the old infraction wasn't permanent\n if old_infraction['expires_at']:\n self.infractions_cog.cancel_task(new_infraction['id'])\n\n # If the infraction was not marked as permanent, schedule a new expiration task\n if request_data['expires_at']:\n self.infractions_cog.schedule_task(new_infraction['id'], new_infraction)\n\n log_text += f\"\"\"\n Previous expiry: {old_infraction['expires_at'] or \"Permanent\"}\n New expiry: {new_infraction['expires_at'] or \"Permanent\"}\n \"\"\".rstrip()\n\n changes = ' & '.join(confirm_messages)\n await ctx.send(f\":ok_hand: Updated infraction #{infraction_id}: {changes}\")\n\n # Get information about the infraction's user\n user_id = new_infraction['user']\n user = ctx.guild.get_member(user_id)\n\n if user:\n user_text = f\"{user.mention} (`{user.id}`)\"\n thumbnail = user.avatar_url_as(static_format=\"png\")\n else:\n user_text = f\"`{user_id}`\"\n thumbnail = None\n\n # The infraction's actor\n actor_id = new_infraction['actor']\n actor = ctx.guild.get_member(actor_id) or f\"`{actor_id}`\"\n\n await self.mod_log.send_log_message(\n icon_url=constants.Icons.pencil,\n colour=discord.Colour.blurple(),\n title=\"Infraction edited\",\n thumbnail=thumbnail,\n text=textwrap.dedent(f\"\"\"\n Member: {user_text}\n Actor: {actor}\n Edited by: {ctx.message.author}{log_text}\n \"\"\")\n )\n\n # endregion\n # region: Search infractions\n\n @infraction_group.group(name=\"search\", invoke_without_command=True)\n async def infraction_search_group(self, ctx: Context, query: InfractionSearchQuery) -> None:\n \"\"\"Searches for infractions in the database.\"\"\"\n if isinstance(query, discord.User):\n await 
ctx.invoke(self.search_user, query)\n else:\n await ctx.invoke(self.search_reason, query)\n\n @infraction_search_group.command(name=\"user\", aliases=(\"member\", \"id\"))\n async def search_user(self, ctx: Context, user: t.Union[discord.User, proxy_user]) -> None:\n \"\"\"Search for infractions by member.\"\"\"\n infraction_list = await self.bot.api_client.get(\n 'bot/infractions',\n params={'user__id': str(user.id)}\n )\n embed = discord.Embed(\n title=f\"Infractions for {user} ({len(infraction_list)} total)\",\n colour=discord.Colour.orange()\n )\n await self.send_infraction_list(ctx, embed, infraction_list)\n\n @infraction_search_group.command(name=\"reason\", aliases=(\"match\", \"regex\", \"re\"))\n async def search_reason(self, ctx: Context, reason: str) -> None:\n \"\"\"Search for infractions by their reason. Use Re2 for matching.\"\"\"\n infraction_list = await self.bot.api_client.get(\n 'bot/infractions',\n params={'search': reason}\n )\n embed = discord.Embed(\n title=f\"Infractions matching `{reason}` ({len(infraction_list)} total)\",\n colour=discord.Colour.orange()\n )\n await self.send_infraction_list(ctx, embed, infraction_list)\n\n # endregion\n # region: Utility functions\n\n async def send_infraction_list(\n self,\n ctx: Context,\n embed: discord.Embed,\n infractions: t.Iterable[utils.Infraction]\n ) -> None:\n \"\"\"Send a paginated embed of infractions for the specified user.\"\"\"\n if not infractions:\n await ctx.send(f\":warning: No infractions could be found for that query.\")\n return\n\n lines = tuple(\n self.infraction_to_string(infraction)\n for infraction in infractions\n )\n\n await LinePaginator.paginate(\n lines,\n ctx=ctx,\n embed=embed,\n empty=True,\n max_lines=3,\n max_size=1000\n )\n\n def infraction_to_string(self, infraction: utils.Infraction) -> str:\n \"\"\"Convert the infraction object to a string representation.\"\"\"\n actor_id = infraction[\"actor\"]\n guild = self.bot.get_guild(constants.Guild.id)\n actor = guild.get_member(actor_id)\n active = infraction[\"active\"]\n user_id = infraction[\"user\"]\n hidden = infraction[\"hidden\"]\n created = time.format_infraction(infraction[\"inserted_at\"])\n\n if active:\n remaining = time.until_expiration(infraction[\"expires_at\"]) or \"Expired\"\n else:\n remaining = \"Inactive\"\n\n if infraction[\"expires_at\"] is None:\n expires = \"*Permanent*\"\n else:\n date_from = datetime.strptime(created, time.INFRACTION_FORMAT)\n expires = time.format_infraction_with_duration(infraction[\"expires_at\"], date_from)\n\n lines = textwrap.dedent(f\"\"\"\n {\"**===============**\" if active else \"===============\"}\n Status: {\"__**Active**__\" if active else \"Inactive\"}\n User: {self.bot.get_user(user_id)} (`{user_id}`)\n Type: **{infraction[\"type\"]}**\n Shadow: {hidden}\n Reason: {infraction[\"reason\"] or \"*None*\"}\n Created: {created}\n Expires: {expires}\n Remaining: {remaining}\n Actor: {actor.mention if actor else actor_id}\n ID: `{infraction[\"id\"]}`\n {\"**===============**\" if active else \"===============\"}\n \"\"\")\n\n return lines.strip()\n\n # endregion\n\n # This cannot be static (must have a __func__ attribute).\n def cog_check(self, ctx: Context) -> bool:\n \"\"\"Only allow moderators from moderator channels to invoke the commands in this cog.\"\"\"\n checks = [\n with_role_check(ctx, *constants.MODERATION_ROLES),\n in_channel_check(ctx, *constants.MODERATION_CHANNELS)\n ]\n return all(checks)\n\n # This cannot be static (must have a __func__ attribute).\n async def 
cog_command_error(self, ctx: Context, error: Exception) -> None:\n \"\"\"Send a notification to the invoking context on a Union failure.\"\"\"\n if isinstance(error, commands.BadUnionArgument):\n if discord.User in error.converters:\n await ctx.send(str(error.errors[0]))\n error.handled = True\n", "path": "bot/cogs/moderation/management.py"}]} | 3,870 | 174 |
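
A minimal, self-contained sketch of the branching that the golden diff above introduces for expired infractions; the helper name and return labels are illustrative and are not part of the bot's code.

```python
from datetime import datetime

def expiry_edit_outcome(duration, reason, infraction_active):
    """Sketch of the decision order added by the patch above."""
    if duration is not None and not infraction_active:
        if reason is None:
            return "reject"                      # cannot edit the expiry of an expired infraction
        return "reason-only (expiry unchanged)"  # keep the reason edit, skip the expiry edit
    if isinstance(duration, str):                # the "p"/"permanent" sentinel
        return "mark-permanent"
    if duration is not None:
        return "reschedule-expiry"
    return "reason-only"

some_expiry = datetime(2030, 1, 1)
assert expiry_edit_outcome(some_expiry, None, infraction_active=False) == "reject"
assert expiry_edit_outcome(some_expiry, "rule 1", infraction_active=False) == "reason-only (expiry unchanged)"
assert expiry_edit_outcome(some_expiry, None, infraction_active=True) == "reschedule-expiry"
assert expiry_edit_outcome("permanent", "spam", infraction_active=True) == "mark-permanent"
```
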
gh_patches_debug_5955 | rasdani/github-patches | git_diff | vispy__vispy-1383 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong install location for docs
From `setup.py`:
```
setup(
[...]
package_data={
'vispy': [op.join('io', '_data', '*'),
op.join('html', 'static', 'js', '*'),
op.join('app', 'tests', 'qt-designer.ui'),
op.join('..', 'doc', '*'),
],
```
This line `op.join('..', 'doc', '*')` is wrong for a system-wide install. It leads to the documentation being installed under `dist-packages` or `site-packages`, which is definitely non-standard. IMO, the best option would be to not install the docs yourself at all, and let the package build system (conda or Debian) handle it.
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """ Vispy setup script.
6
7 Steps to do a new release:
8
9 Preparations:
10 * Test on Windows, Linux, Mac
11 * Make release notes
12 * Update API documentation and other docs that need updating.
13 * Install 'twine' package for uploading to PyPI
14
15 Define the version:
16 * update __version__ in __init__.py
17 * tag the tip changeset as version x.x.x; `git tag -a 'vX.Y.Z'`
18
19 Test installation:
20 * clear the build and dist dir (if they exist)
21 * python setup.py sdist
22 * twine register --repository-url https://test.pypi.org/legacy/ dist/*
23 * twine upload --repository-url https://test.pypi.org/legacy/ dist/*
24 * pip install -i https://testpypi.python.org/pypi vispy
25
26 Generate and upload package
27 * python setup.py sdist
28 * twine register dist/*
29 * twine upload dist/*
30
31 Announcing:
32 * It can be worth waiting a day for eager users to report critical bugs
33 * Announce in scipy-user, vispy mailing list, G+
34
35 """
36
37 import os
38 from os import path as op
39 from warnings import warn
40
41 try:
42 # use setuptools namespace, allows for "develop"
43 import setuptools # noqa, analysis:ignore
44 except ImportError:
45 warn("unable to load setuptools. 'setup.py develop' will not work")
46 pass # it's not essential for installation
47 from distutils.core import setup
48
49 name = 'vispy'
50 description = 'Interactive visualization in Python'
51
52
53 # Get version and docstring
54 __version__ = None
55 __doc__ = ''
56 docStatus = 0 # Not started, in progress, done
57 initFile = os.path.join(os.path.dirname(__file__), 'vispy', '__init__.py')
58 for line in open(initFile).readlines():
59 if (line.startswith('version_info') or line.startswith('__version__')):
60 exec(line.strip())
61 elif line.startswith('"""'):
62 if docStatus == 0:
63 docStatus = 1
64 line = line.lstrip('"')
65 elif docStatus == 1:
66 docStatus = 2
67 if docStatus == 1:
68 __doc__ += line
69
70
71 def package_tree(pkgroot):
72 path = os.path.dirname(__file__)
73 subdirs = [os.path.relpath(i[0], path).replace(os.path.sep, '.')
74 for i in os.walk(os.path.join(path, pkgroot))
75 if '__init__.py' in i[2]]
76 return subdirs
77
78
79 setup(
80 name=name,
81 version=__version__,
82 author='Vispy contributors',
83 author_email='[email protected]',
84 license='(new) BSD',
85 url='http://vispy.org',
86 download_url='https://pypi.python.org/pypi/vispy',
87 keywords="visualization OpenGl ES medical imaging 3D plotting "
88 "numpy bigdata",
89 description=description,
90 long_description=__doc__,
91 platforms='any',
92 provides=['vispy'],
93 install_requires=['numpy'],
94 extras_require={
95 'ipython-static': ['ipython'],
96 'ipython-vnc': ['ipython>=2'],
97 'ipython-webgl': ['ipython>=2', 'tornado'],
98 'pyglet': ['pyglet>=1.2'],
99 # 'pyqt4': [], # Why is this on PyPI, but without downloads?
100 # 'pyqt5': [], # Ditto.
101 'pyside': ['PySide'],
102 'sdl2': ['PySDL2'],
103 'wx': ['wxPython'],
104 },
105 packages=package_tree('vispy'),
106 package_dir={
107 'vispy': 'vispy'},
108 package_data={
109 'vispy': [op.join('io', '_data', '*'),
110 op.join('html', 'static', 'js', '*'),
111 op.join('app', 'tests', 'qt-designer.ui'),
112 op.join('..', 'doc', '*'),
113 ],
114
115 'vispy.glsl': ['*.vert','*.frag', "*.glsl"],
116 'vispy.glsl.antialias': ['*.vert','*.frag', "*.glsl"],
117 'vispy.glsl.arrowheads': ['*.vert','*.frag', "*.glsl"],
118 'vispy.glsl.arrows': ['*.vert','*.frag', "*.glsl"],
119 'vispy.glsl.collections': ['*.vert','*.frag', "*.glsl"],
120 'vispy.glsl.colormaps': ['*.vert','*.frag', "*.glsl"],
121 'vispy.glsl.lines': ['*.vert','*.frag', "*.glsl"],
122 'vispy.glsl.markers': ['*.vert','*.frag', "*.glsl"],
123 'vispy.glsl.math': ['*.vert','*.frag', "*.glsl"],
124 'vispy.glsl.misc': ['*.vert','*.frag', "*.glsl"],
125 'vispy.glsl.transforms': ['*.vert','*.frag', "*.glsl"],
126
127 },
128 zip_safe=False,
129 classifiers=[
130 'Development Status :: 3 - Alpha',
131 'Intended Audience :: Science/Research',
132 'Intended Audience :: Education',
133 'Intended Audience :: Developers',
134 'Topic :: Scientific/Engineering :: Visualization',
135 'License :: OSI Approved :: BSD License',
136 'Operating System :: MacOS :: MacOS X',
137 'Operating System :: Microsoft :: Windows',
138 'Operating System :: POSIX',
139 'Programming Language :: Python',
140 'Programming Language :: Python :: 2.7',
141 'Programming Language :: Python :: 3.3',
142 'Programming Language :: Python :: 3.4',
143 'Programming Language :: Python :: 3.5',
144 'Programming Language :: Python :: 3.6',
145 'Framework :: IPython'
146 ],
147 )
148
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -109,7 +109,6 @@
'vispy': [op.join('io', '_data', '*'),
op.join('html', 'static', 'js', '*'),
op.join('app', 'tests', 'qt-designer.ui'),
- op.join('..', 'doc', '*'),
],
'vispy.glsl': ['*.vert','*.frag', "*.glsl"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -109,7 +109,6 @@\n 'vispy': [op.join('io', '_data', '*'),\n op.join('html', 'static', 'js', '*'),\n op.join('app', 'tests', 'qt-designer.ui'),\n- op.join('..', 'doc', '*'),\n ],\n \n 'vispy.glsl': ['*.vert','*.frag', \"*.glsl\"],\n", "issue": "Wrong install location for docs\nFrom `setup.py`:\r\n```\r\nsetup(\r\n [...]\r\n package_data={\r\n 'vispy': [op.join('io', '_data', '*'),\r\n op.join('html', 'static', 'js', '*'),\r\n op.join('app', 'tests', 'qt-designer.ui'),\r\n op.join('..', 'doc', '*'),\r\n ],\r\n```\r\nThis line `op.join('..', 'doc', '*')` is wrong for a system-wide install. It leads to the documentation being install under `dist-packages` or `site-packages`, which is definitely non-standard. IMO, the best would be to just not install the docs yourself, and let the package build system (conda or Debian) handle it.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\" Vispy setup script.\n\nSteps to do a new release:\n\nPreparations:\n * Test on Windows, Linux, Mac\n * Make release notes\n * Update API documentation and other docs that need updating.\n * Install 'twine' package for uploading to PyPI\n\nDefine the version:\n * update __version__ in __init__.py\n * tag the tip changeset as version x.x.x; `git tag -a 'vX.Y.Z'`\n\nTest installation:\n * clear the build and dist dir (if they exist)\n * python setup.py sdist\n * twine register --repository-url https://test.pypi.org/legacy/ dist/*\n * twine upload --repository-url https://test.pypi.org/legacy/ dist/*\n * pip install -i https://testpypi.python.org/pypi vispy\n\nGenerate and upload package\n * python setup.py sdist\n * twine register dist/*\n * twine upload dist/*\n\nAnnouncing:\n * It can be worth waiting a day for eager users to report critical bugs\n * Announce in scipy-user, vispy mailing list, G+\n\n\"\"\"\n\nimport os\nfrom os import path as op\nfrom warnings import warn\n\ntry:\n # use setuptools namespace, allows for \"develop\"\n import setuptools # noqa, analysis:ignore\nexcept ImportError:\n warn(\"unable to load setuptools. 
'setup.py develop' will not work\")\n pass # it's not essential for installation\nfrom distutils.core import setup\n\nname = 'vispy'\ndescription = 'Interactive visualization in Python'\n\n\n# Get version and docstring\n__version__ = None\n__doc__ = ''\ndocStatus = 0 # Not started, in progress, done\ninitFile = os.path.join(os.path.dirname(__file__), 'vispy', '__init__.py')\nfor line in open(initFile).readlines():\n if (line.startswith('version_info') or line.startswith('__version__')):\n exec(line.strip())\n elif line.startswith('\"\"\"'):\n if docStatus == 0:\n docStatus = 1\n line = line.lstrip('\"')\n elif docStatus == 1:\n docStatus = 2\n if docStatus == 1:\n __doc__ += line\n\n\ndef package_tree(pkgroot):\n path = os.path.dirname(__file__)\n subdirs = [os.path.relpath(i[0], path).replace(os.path.sep, '.')\n for i in os.walk(os.path.join(path, pkgroot))\n if '__init__.py' in i[2]]\n return subdirs\n\n\nsetup(\n name=name,\n version=__version__,\n author='Vispy contributors',\n author_email='[email protected]',\n license='(new) BSD',\n url='http://vispy.org',\n download_url='https://pypi.python.org/pypi/vispy',\n keywords=\"visualization OpenGl ES medical imaging 3D plotting \"\n \"numpy bigdata\",\n description=description,\n long_description=__doc__,\n platforms='any',\n provides=['vispy'],\n install_requires=['numpy'],\n extras_require={\n 'ipython-static': ['ipython'],\n 'ipython-vnc': ['ipython>=2'],\n 'ipython-webgl': ['ipython>=2', 'tornado'],\n 'pyglet': ['pyglet>=1.2'],\n # 'pyqt4': [], # Why is this on PyPI, but without downloads?\n # 'pyqt5': [], # Ditto.\n 'pyside': ['PySide'],\n 'sdl2': ['PySDL2'],\n 'wx': ['wxPython'],\n },\n packages=package_tree('vispy'),\n package_dir={\n 'vispy': 'vispy'},\n package_data={\n 'vispy': [op.join('io', '_data', '*'),\n op.join('html', 'static', 'js', '*'),\n op.join('app', 'tests', 'qt-designer.ui'),\n op.join('..', 'doc', '*'),\n ],\n\n 'vispy.glsl': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.antialias': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.arrowheads': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.arrows': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.collections': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.colormaps': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.lines': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.markers': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.math': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.misc': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.transforms': ['*.vert','*.frag', \"*.glsl\"],\n\n },\n zip_safe=False,\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Education',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Framework :: IPython'\n ],\n)\n", "path": "setup.py"}]} | 2,329 | 112 |
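
A hedged sketch of the `package_data` block after the change in the diff above; paths are copied from the record, and the point is simply that `package_data` entries must stay inside the package tree.

```python
from os import path as op

# Only true in-package resources remain; shipping ../doc/* here is what pushed
# the documentation under site-packages/dist-packages in the first place.
package_data = {
    'vispy': [
        op.join('io', '_data', '*'),
        op.join('html', 'static', 'js', '*'),
        op.join('app', 'tests', 'qt-designer.ui'),
        # op.join('..', 'doc', '*'),  # removed: let conda/Debian packaging ship the docs
    ],
}
print(package_data['vispy'])
```
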
gh_patches_debug_12654 | rasdani/github-patches | git_diff | ocadotechnology__aimmo-499 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Game Creator RC initialised with wrong game API URL
The `REPLACE_ME` change in one of the latest PRs has broken the game at the minikube level in `minikube.py`. The URL is incorrect, so minikube does not work, which prevents testing.
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 from setuptools import find_packages, setup
3
4 import versioneer
5
6 setup(
7 name='aimmo',
8 cmdclass=versioneer.get_cmdclass(),
9 packages=find_packages(),
10 include_package_data=True,
11 install_requires=[
12 'django >= 1.8.3, < 1.9.0',
13 'django-autoconfig >= 0.3.6, < 1.0.0',
14 'django-forms-bootstrap',
15 'django-js-reverse',
16 'eventlet',
17 'flask',
18 'flask-socketio',
19 'requests',
20 'six',
21 'pykube',
22 'hypothesis',
23 'flask-cors >= 3.0, < 3.1',
24 'psutil >= 5.4, < 5.5',
25 ],
26 tests_require=[
27 'django-setuptest',
28 'httmock',
29 ],
30 test_suite='setuptest.setuptest.SetupTestSuite',
31 version=versioneer.get_version(),
32 zip_safe=False,
33 )
34
[end of setup.py]
[start of aimmo_runner/shell_api.py]
1 import subprocess
2 import sys
3 import os
4 import stat
5 import errno
6 import platform
7 from subprocess import CalledProcessError
8 from urllib import urlretrieve, urlopen
9
10 BASE_DIR = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
11 TEST_BIN = os.path.join(BASE_DIR, 'test-bin')
12 OS = platform.system().lower()
13 FILE_SUFFIX = '.exe' if OS == 'windows' else ''
14 KUBECTL = os.path.join(TEST_BIN, 'kubectl%s' % FILE_SUFFIX)
15 MINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)
16 FNULL = open(os.devnull, 'w')
17
18 def log(message):
19 sys.stderr.write(message + "\n")
20
21
22 def run_command(args, capture_output=False):
23 try:
24 if capture_output:
25 return subprocess.check_output(args)
26 else:
27 subprocess.check_call(args)
28 except CalledProcessError as e:
29 log('Command failed with exit status %d: %s' % (e.returncode, ' '.join(args)))
30 raise
31
32
33 def run_command_async(args, capture_output=False):
34 if capture_output is True:
35 p = subprocess.Popen(args, stdout=FNULL, stderr=subprocess.STDOUT)
36 else:
37 p = subprocess.Popen(args)
38 return p
39
40
41 def create_test_bin():
42 try:
43 os.makedirs(TEST_BIN)
44 except OSError as err:
45 if err.errno != errno.EEXIST:
46 raise
47
48
49 def binary_exists(filename):
50 # Check if binary is callable on our path
51 try:
52 run_command([filename], True)
53 return True
54 except OSError:
55 return False
56
57
58 def download_exec(url, dest):
59 dest = urlretrieve(url, dest)[0]
60 make_exec(dest)
61
62
63 def make_exec(file):
64 current_stat = os.stat(file)
65 os.chmod(file, current_stat.st_mode | stat.S_IEXEC)
66
67
68 def get_latest_github_version(repo):
69 result = urlopen('https://github.com/%s/releases/latest' % repo)
70 return result.geturl().split('/')[-1]
71
72
[end of aimmo_runner/shell_api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/aimmo_runner/shell_api.py b/aimmo_runner/shell_api.py
--- a/aimmo_runner/shell_api.py
+++ b/aimmo_runner/shell_api.py
@@ -15,6 +15,7 @@
MINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)
FNULL = open(os.devnull, 'w')
+
def log(message):
sys.stderr.write(message + "\n")
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,6 +26,10 @@
tests_require=[
'django-setuptest',
'httmock',
+ 'mock == 2.0.0',
+ 'docker == 2.7.0',
+ 'kubernetes == 4.0.0',
+ 'PyYAML == 3.12',
],
test_suite='setuptest.setuptest.SetupTestSuite',
version=versioneer.get_version(),
| {"golden_diff": "diff --git a/aimmo_runner/shell_api.py b/aimmo_runner/shell_api.py\n--- a/aimmo_runner/shell_api.py\n+++ b/aimmo_runner/shell_api.py\n@@ -15,6 +15,7 @@\n MINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)\n FNULL = open(os.devnull, 'w')\n \n+\n def log(message):\n sys.stderr.write(message + \"\\n\")\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,6 +26,10 @@\n tests_require=[\n 'django-setuptest',\n 'httmock',\n+ 'mock == 2.0.0',\n+ 'docker == 2.7.0',\n+ 'kubernetes == 4.0.0',\n+ 'PyYAML == 3.12',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n version=versioneer.get_version(),\n", "issue": "Game Creator RC initialised with wrong game API URL\nThe `REPLACE_ME` change in one of the latest PR's has broken the game on minikube level in `minikube.py`. The URL is incorrect so minikube does not work and prohibits testing. \r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nsetup(\n name='aimmo',\n cmdclass=versioneer.get_cmdclass(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django >= 1.8.3, < 1.9.0',\n 'django-autoconfig >= 0.3.6, < 1.0.0',\n 'django-forms-bootstrap',\n 'django-js-reverse',\n 'eventlet',\n 'flask',\n 'flask-socketio',\n 'requests',\n 'six',\n 'pykube',\n 'hypothesis',\n 'flask-cors >= 3.0, < 3.1',\n 'psutil >= 5.4, < 5.5',\n ],\n tests_require=[\n 'django-setuptest',\n 'httmock',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n version=versioneer.get_version(),\n zip_safe=False,\n)\n", "path": "setup.py"}, {"content": "import subprocess\nimport sys\nimport os\nimport stat\nimport errno\nimport platform\nfrom subprocess import CalledProcessError\nfrom urllib import urlretrieve, urlopen\n\nBASE_DIR = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))\nTEST_BIN = os.path.join(BASE_DIR, 'test-bin')\nOS = platform.system().lower()\nFILE_SUFFIX = '.exe' if OS == 'windows' else ''\nKUBECTL = os.path.join(TEST_BIN, 'kubectl%s' % FILE_SUFFIX)\nMINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)\nFNULL = open(os.devnull, 'w')\n\ndef log(message):\n sys.stderr.write(message + \"\\n\")\n\n\ndef run_command(args, capture_output=False):\n try:\n if capture_output:\n return subprocess.check_output(args)\n else:\n subprocess.check_call(args)\n except CalledProcessError as e:\n log('Command failed with exit status %d: %s' % (e.returncode, ' '.join(args)))\n raise\n\n\ndef run_command_async(args, capture_output=False):\n if capture_output is True:\n p = subprocess.Popen(args, stdout=FNULL, stderr=subprocess.STDOUT)\n else:\n p = subprocess.Popen(args)\n return p\n\n\ndef create_test_bin():\n try:\n os.makedirs(TEST_BIN)\n except OSError as err:\n if err.errno != errno.EEXIST:\n raise\n\n\ndef binary_exists(filename):\n # Check if binary is callable on our path\n try:\n run_command([filename], True)\n return True\n except OSError:\n return False\n\n\ndef download_exec(url, dest):\n dest = urlretrieve(url, dest)[0]\n make_exec(dest)\n\n\ndef make_exec(file):\n current_stat = os.stat(file)\n os.chmod(file, current_stat.st_mode | stat.S_IEXEC)\n\n\ndef get_latest_github_version(repo):\n result = urlopen('https://github.com/%s/releases/latest' % repo)\n return result.geturl().split('/')[-1]\n\n", "path": "aimmo_runner/shell_api.py"}]} | 1,473 | 227 |
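
For quick scanning, a sketch of the pinned test dependencies introduced by the diff above, pulled out as a standalone list (pins copied from the diff; whether these exact versions remain appropriate is outside this record).

```python
# Pins taken verbatim from the golden diff above.
TESTS_REQUIRE = [
    'django-setuptest',
    'httmock',
    'mock == 2.0.0',
    'docker == 2.7.0',
    'kubernetes == 4.0.0',
    'PyYAML == 3.12',
]
print('\n'.join(TESTS_REQUIRE))
```
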
gh_patches_debug_2784 | rasdani/github-patches | git_diff | archlinux__archinstall-1954 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[master] awesome (desktops in general?) don't install packages?
It appears when choosing awesome, install is called: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/desktops/awesome.py#L23-L24
And super being `XorgProfile`: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/xorg.py#L1-L21
That class does not have an install so it calls `Profile.install()` which contains: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/profile.py#L101-L104
Which is just a placeholder?

I haven't run through all the profiles yet, but have we overlooked something here?
What happened to all the packages per profile when we moved them to the dataclass structure? :)
I obviously missed something in a PR somewhere, hehe.
</issue>
<code>
[start of archinstall/default_profiles/desktops/awesome.py]
1 from typing import List, Optional, Any, TYPE_CHECKING
2
3 from archinstall.default_profiles.profile import ProfileType
4 from archinstall.default_profiles.xorg import XorgProfile
5
6 if TYPE_CHECKING:
7 from archinstall.lib.installer import Installer
8 _: Any
9
10
11 class AwesomeProfile(XorgProfile):
12 def __init__(self):
13 super().__init__('Awesome', ProfileType.WindowMgr, description='')
14
15 @property
16 def packages(self) -> List[str]:
17 return ['alacritty']
18
19 def preview_text(self) -> Optional[str]:
20 text = str(_('Environment type: {}')).format(self.profile_type.value)
21 return text + '\n' + self.packages_text()
22
23 def install(self, install_session: 'Installer'):
24 super().install(install_session)
25
26 # TODO: Copy a full configuration to ~/.config/awesome/rc.lua instead.
27 with open(f"{install_session.target}/etc/xdg/awesome/rc.lua", 'r') as fh:
28 awesome_lua = fh.read()
29
30 # Replace xterm with alacritty for a smoother experience.
31 awesome_lua = awesome_lua.replace('"xterm"', '"alacritty"')
32
33 with open(f"{install_session.target}/etc/xdg/awesome/rc.lua", 'w') as fh:
34 fh.write(awesome_lua)
35
36 # TODO: Configure the right-click-menu to contain the above packages that were installed. (as a user config)
37
[end of archinstall/default_profiles/desktops/awesome.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/archinstall/default_profiles/desktops/awesome.py b/archinstall/default_profiles/desktops/awesome.py
--- a/archinstall/default_profiles/desktops/awesome.py
+++ b/archinstall/default_profiles/desktops/awesome.py
@@ -14,7 +14,10 @@
@property
def packages(self) -> List[str]:
- return ['alacritty']
+ return [
+ 'awesome',
+ 'alacritty'
+ ]
def preview_text(self) -> Optional[str]:
text = str(_('Environment type: {}')).format(self.profile_type.value)
| {"golden_diff": "diff --git a/archinstall/default_profiles/desktops/awesome.py b/archinstall/default_profiles/desktops/awesome.py\n--- a/archinstall/default_profiles/desktops/awesome.py\n+++ b/archinstall/default_profiles/desktops/awesome.py\n@@ -14,7 +14,10 @@\n \n \t@property\n \tdef packages(self) -> List[str]:\n-\t\treturn ['alacritty']\n+\t\treturn [\n+\t\t\t'awesome',\n+\t\t\t'alacritty'\n+\t\t]\n \n \tdef preview_text(self) -> Optional[str]:\n \t\ttext = str(_('Environment type: {}')).format(self.profile_type.value)\n", "issue": "[master] awesome (desktops in general?) don't install packages?\nIt appears when choosing awesome, install is called: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/desktops/awesome.py#L23-L24\r\n\r\nAnd super being `XorgProfile`: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/xorg.py#L1-L21\r\n\r\nThat class does not have an install so it calls `Profile.install()` which contains: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/profile.py#L101-L104\r\nWhich is just a placeholder?\r\n\r\n\r\n\r\nI haven't ran through all the profiles yet, but have we overlooked something here?\r\nWhat happened to all the packages per profile when we moved them to the dataclass structure? :)\r\n\r\nI obviously missed something in a PR some where hehe\n", "before_files": [{"content": "from typing import List, Optional, Any, TYPE_CHECKING\n\nfrom archinstall.default_profiles.profile import ProfileType\nfrom archinstall.default_profiles.xorg import XorgProfile\n\nif TYPE_CHECKING:\n\tfrom archinstall.lib.installer import Installer\n\t_: Any\n\n\nclass AwesomeProfile(XorgProfile):\n\tdef __init__(self):\n\t\tsuper().__init__('Awesome', ProfileType.WindowMgr, description='')\n\n\t@property\n\tdef packages(self) -> List[str]:\n\t\treturn ['alacritty']\n\n\tdef preview_text(self) -> Optional[str]:\n\t\ttext = str(_('Environment type: {}')).format(self.profile_type.value)\n\t\treturn text + '\\n' + self.packages_text()\n\n\tdef install(self, install_session: 'Installer'):\n\t\tsuper().install(install_session)\n\n\t\t# TODO: Copy a full configuration to ~/.config/awesome/rc.lua instead.\n\t\twith open(f\"{install_session.target}/etc/xdg/awesome/rc.lua\", 'r') as fh:\n\t\t\tawesome_lua = fh.read()\n\n\t\t# Replace xterm with alacritty for a smoother experience.\n\t\tawesome_lua = awesome_lua.replace('\"xterm\"', '\"alacritty\"')\n\n\t\twith open(f\"{install_session.target}/etc/xdg/awesome/rc.lua\", 'w') as fh:\n\t\t\tfh.write(awesome_lua)\n\n\t\t# TODO: Configure the right-click-menu to contain the above packages that were installed. (as a user config)\n", "path": "archinstall/default_profiles/desktops/awesome.py"}]} | 1,264 | 133 |
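
A self-contained sketch of the corrected `packages` property from the diff above; the class name is illustrative, and the real profile additionally inherits the Xorg profile's packages.

```python
from typing import List

class AwesomeProfileSketch:
    """Illustrative stand-in for the patched profile: the window manager itself is listed."""

    @property
    def packages(self) -> List[str]:
        return [
            'awesome',    # the package that was missing, which is why nothing got installed
            'alacritty',  # terminal substituted into the default rc.lua
        ]

print(AwesomeProfileSketch().packages)
```
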
gh_patches_debug_8701 | rasdani/github-patches | git_diff | sublimelsp__LSP-658 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
hover regression
After the latest commits, hover popups produce errors; I suspect c9a73907f00972a415e8e979d5dcc96c11ec3f09 is responsible for this.
* OS: Windows 10
* language server: pyls
* How you installed LSP: git
* Minimal reproduction steps
1. open a .py file
2. hover over an function name
* Log
```
LSP: --> textDocument/hover
LSP: {'contents': "is_supported_syntax(syntax: str, configs: 'List[ClientConfig]') -> bool"}
Parse Error: #x27;List[ClientConfig]') -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character
Parse Error: 27;List[ClientConfig]') -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character
Parse Error: 7;List[ClientConfig]') -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character
Parse Error: #x27; code: Unknown entity
Parse Error: #x27;) -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character
Parse Error: 27;) -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character
Parse Error: 7;) -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character
Parse Error: #x27; code: Unknown entity
```
</issue>
<code>
[start of plugin/hover.py]
1 import mdpopups
2 import sublime
3 import sublime_plugin
4 import webbrowser
5 from html import escape
6 try:
7 from typing import List, Optional, Any, Dict
8 assert List and Optional and Any and Dict
9 except ImportError:
10 pass
11
12 from .core.configurations import is_supported_syntax
13 from .diagnostics import get_point_diagnostics
14 from .core.registry import session_for_view, LspTextCommand
15 from .core.protocol import Request, DiagnosticSeverity
16 from .core.documents import get_document_position
17 from .core.popups import popup_css, popup_class
18 from .core.settings import client_configs
19
20 SUBLIME_WORD_MASK = 515
21 NO_HOVER_SCOPES = 'comment, string'
22
23
24 class HoverHandler(sublime_plugin.ViewEventListener):
25 def __init__(self, view):
26 self.view = view
27
28 @classmethod
29 def is_applicable(cls, settings):
30 syntax = settings.get('syntax')
31 return syntax and is_supported_syntax(syntax, client_configs.all)
32
33 def on_hover(self, point, hover_zone):
34 if hover_zone != sublime.HOVER_TEXT or self.view.is_popup_visible():
35 return
36 self.view.run_command("lsp_hover", {"point": point})
37
38
39 _test_contents = [] # type: List[str]
40
41
42 class_for_severity = {
43 DiagnosticSeverity.Error: 'errors',
44 DiagnosticSeverity.Warning: 'warnings',
45 DiagnosticSeverity.Information: 'info',
46 DiagnosticSeverity.Hint: 'hints'
47 }
48
49
50 class LspHoverCommand(LspTextCommand):
51 def __init__(self, view):
52 super().__init__(view)
53
54 def is_likely_at_symbol(self, point):
55 word_at_sel = self.view.classify(point)
56 return word_at_sel & SUBLIME_WORD_MASK and not self.view.match_selector(point, NO_HOVER_SCOPES)
57
58 def run(self, edit, point=None):
59 if point is None:
60 point = self.view.sel()[0].begin()
61 if self.is_likely_at_symbol(point):
62 self.request_symbol_hover(point)
63 point_diagnostics = get_point_diagnostics(self.view, point)
64 if point_diagnostics:
65 self.show_hover(point, self.diagnostics_content(point_diagnostics))
66
67 def request_symbol_hover(self, point) -> None:
68 session = session_for_view(self.view, point)
69 if session:
70 if session.has_capability('hoverProvider'):
71 document_position = get_document_position(self.view, point)
72 if document_position:
73 if session.client:
74 session.client.send_request(
75 Request.hover(document_position),
76 lambda response: self.handle_response(response, point))
77
78 def handle_response(self, response: 'Optional[Any]', point) -> None:
79 all_content = ""
80
81 point_diagnostics = get_point_diagnostics(self.view, point)
82 if point_diagnostics:
83 all_content += self.diagnostics_content(point_diagnostics)
84
85 all_content += self.hover_content(point, response)
86 all_content += self.symbol_actions_content()
87
88 _test_contents.clear()
89 _test_contents.append(all_content) # for testing only
90 self.show_hover(point, all_content)
91
92 def symbol_actions_content(self):
93 actions = []
94 if self.has_client_with_capability('definitionProvider'):
95 actions.append("<a href='{}'>{}</a>".format('definition', 'Definition'))
96
97 if self.has_client_with_capability('referencesProvider'):
98 actions.append("<a href='{}'>{}</a>".format('references', 'References'))
99
100 if self.has_client_with_capability('renameProvider'):
101 actions.append("<a href='{}'>{}</a>".format('rename', 'Rename'))
102
103 return "<p>" + " | ".join(actions) + "</p>"
104
105 def format_diagnostic(self, diagnostic):
106 if diagnostic.source:
107 return "<pre>[{}] {}</pre>".format(diagnostic.source, escape(diagnostic.message, False))
108 else:
109 return "<pre>{}</pre>".format(escape(diagnostic.message, False))
110
111 def diagnostics_content(self, diagnostics):
112 by_severity = {} # type: Dict[int, List[str]]
113 for diagnostic in diagnostics:
114 by_severity.setdefault(diagnostic.severity, []).append(self.format_diagnostic(diagnostic))
115 formatted = []
116 for severity, items in by_severity.items():
117 formatted.append("<div class='{}'>".format(class_for_severity[severity]))
118 formatted.extend(items)
119 formatted.append("<a href='{}'>{}</a>".format('code-actions',
120 'Code Actions'))
121 formatted.append("</div>")
122
123 return "".join(formatted)
124
125 def hover_content(self, point, response: 'Optional[Any]') -> str:
126 contents = ["No description available."]
127 if isinstance(response, dict):
128 # Flow returns None sometimes
129 # See: https://github.com/flowtype/flow-language-server/issues/51
130 response_content = response.get('contents')
131 if response_content:
132 if isinstance(response_content, list):
133 contents = response_content
134 else:
135 contents = [response_content]
136
137 formatted = []
138 for item in contents:
139 value = ""
140 language = None
141 if isinstance(item, str):
142 value = escape(item)
143 else:
144 value = escape(item.get("value"))
145 language = item.get("language")
146 if language:
147 formatted.append("```{}\n{}\n```\n".format(language, value))
148 else:
149 formatted.append(value)
150
151 return mdpopups.md2html(self.view, "\n".join(formatted))
152
153 def show_hover(self, point, contents):
154 mdpopups.show_popup(
155 self.view,
156 contents,
157 css=popup_css,
158 md=False,
159 flags=sublime.HIDE_ON_MOUSE_MOVE_AWAY,
160 location=point,
161 wrapper_class=popup_class,
162 max_width=800,
163 on_navigate=lambda href: self.on_hover_navigate(href, point))
164
165 def on_hover_navigate(self, href, point):
166 if href == 'definition':
167 self.run_command_from_point(point, "lsp_symbol_definition")
168 elif href == 'references':
169 self.run_command_from_point(point, "lsp_symbol_references")
170 elif href == 'rename':
171 self.run_command_from_point(point, "lsp_symbol_rename")
172 elif href == 'code-actions':
173 self.run_command_from_point(point, "lsp_code_actions")
174 else:
175 webbrowser.open_new_tab(href)
176
177 def run_command_from_point(self, point, command_name):
178 sel = self.view.sel()
179 sel.clear()
180 sel.add(sublime.Region(point, point))
181 self.view.run_command(command_name)
182
[end of plugin/hover.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/hover.py b/plugin/hover.py
--- a/plugin/hover.py
+++ b/plugin/hover.py
@@ -139,9 +139,9 @@
value = ""
language = None
if isinstance(item, str):
- value = escape(item)
+ value = item
else:
- value = escape(item.get("value"))
+ value = item.get("value")
language = item.get("language")
if language:
formatted.append("```{}\n{}\n```\n".format(language, value))
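A minimal sketch of the patched `hover_content` loop after the diff above is applied (the diff itself is authoritative; `render_hover_items` is a hypothetical name used only for illustration). The point of the change is that raw Markdown values are handed to `mdpopups.md2html` unescaped, so fenced code blocks render instead of showing HTML entities:

```python
def render_hover_items(contents):
    """Sketch: format hover items the way the patched hover_content does."""
    formatted = []
    for item in contents:
        language = None
        if isinstance(item, str):
            value = item                    # keep raw Markdown; no escape()
        else:
            value = item.get("value")       # no escape() on the dict form either
            language = item.get("language")
        if language:
            formatted.append("```{}\n{}\n```\n".format(language, value))
        else:
            formatted.append(value)
    return "\n".join(formatted)
```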
| {"golden_diff": "diff --git a/plugin/hover.py b/plugin/hover.py\n--- a/plugin/hover.py\n+++ b/plugin/hover.py\n@@ -139,9 +139,9 @@\n value = \"\"\n language = None\n if isinstance(item, str):\n- value = escape(item)\n+ value = item\n else:\n- value = escape(item.get(\"value\"))\n+ value = item.get(\"value\")\n language = item.get(\"language\")\n if language:\n formatted.append(\"```{}\\n{}\\n```\\n\".format(language, value))\n", "issue": "hover regression\nAfter the latest commits hover popups produce errors, I suspect c9a73907f00972a415e8e979d5dcc96c11ec3f09 is responsible for this\r\n* OS: Windows 10\r\n* language server: pyls\r\n* How you installed LSP: git\r\n* Minimal reproduction steps\r\n 1. open a .py file\r\n 2. hover over an function name\r\n* Log\r\n```\r\nLSP: --> textDocument/hover\r\nLSP: {'contents': \"is_supported_syntax(syntax: str, configs: 'List[ClientConfig]') -> bool\"}\r\nParse Error: #x27;List[ClientConfig]') -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character\r\nParse Error: 27;List[ClientConfig]') -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character\r\nParse Error: 7;List[ClientConfig]') -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character\r\nParse Error: #x27; code: Unknown entity\r\nParse Error: #x27;) -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character\r\nParse Error: 27;) -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character\r\nParse Error: 7;) -> bool</p><p><a href='definition'>Definition</a> | <a href='references'>References</a> | <a href='rename'>Rename</a></p></div></div> code: Unexpected character\r\nParse Error: #x27; code: Unknown entity\r\n```\n", "before_files": [{"content": "import mdpopups\nimport sublime\nimport sublime_plugin\nimport webbrowser\nfrom html import escape\ntry:\n from typing import List, Optional, Any, Dict\n assert List and Optional and Any and Dict\nexcept ImportError:\n pass\n\nfrom .core.configurations import is_supported_syntax\nfrom .diagnostics import get_point_diagnostics\nfrom .core.registry import session_for_view, LspTextCommand\nfrom .core.protocol import Request, DiagnosticSeverity\nfrom .core.documents import get_document_position\nfrom .core.popups import popup_css, popup_class\nfrom .core.settings import client_configs\n\nSUBLIME_WORD_MASK = 515\nNO_HOVER_SCOPES = 'comment, string'\n\n\nclass HoverHandler(sublime_plugin.ViewEventListener):\n def __init__(self, view):\n self.view = view\n\n @classmethod\n def is_applicable(cls, settings):\n syntax = settings.get('syntax')\n return syntax and is_supported_syntax(syntax, client_configs.all)\n\n def on_hover(self, point, hover_zone):\n if hover_zone != sublime.HOVER_TEXT or self.view.is_popup_visible():\n return\n self.view.run_command(\"lsp_hover\", {\"point\": point})\n\n\n_test_contents = [] # type: List[str]\n\n\nclass_for_severity = {\n DiagnosticSeverity.Error: 'errors',\n DiagnosticSeverity.Warning: 'warnings',\n DiagnosticSeverity.Information: 'info',\n DiagnosticSeverity.Hint: 'hints'\n}\n\n\nclass 
LspHoverCommand(LspTextCommand):\n def __init__(self, view):\n super().__init__(view)\n\n def is_likely_at_symbol(self, point):\n word_at_sel = self.view.classify(point)\n return word_at_sel & SUBLIME_WORD_MASK and not self.view.match_selector(point, NO_HOVER_SCOPES)\n\n def run(self, edit, point=None):\n if point is None:\n point = self.view.sel()[0].begin()\n if self.is_likely_at_symbol(point):\n self.request_symbol_hover(point)\n point_diagnostics = get_point_diagnostics(self.view, point)\n if point_diagnostics:\n self.show_hover(point, self.diagnostics_content(point_diagnostics))\n\n def request_symbol_hover(self, point) -> None:\n session = session_for_view(self.view, point)\n if session:\n if session.has_capability('hoverProvider'):\n document_position = get_document_position(self.view, point)\n if document_position:\n if session.client:\n session.client.send_request(\n Request.hover(document_position),\n lambda response: self.handle_response(response, point))\n\n def handle_response(self, response: 'Optional[Any]', point) -> None:\n all_content = \"\"\n\n point_diagnostics = get_point_diagnostics(self.view, point)\n if point_diagnostics:\n all_content += self.diagnostics_content(point_diagnostics)\n\n all_content += self.hover_content(point, response)\n all_content += self.symbol_actions_content()\n\n _test_contents.clear()\n _test_contents.append(all_content) # for testing only\n self.show_hover(point, all_content)\n\n def symbol_actions_content(self):\n actions = []\n if self.has_client_with_capability('definitionProvider'):\n actions.append(\"<a href='{}'>{}</a>\".format('definition', 'Definition'))\n\n if self.has_client_with_capability('referencesProvider'):\n actions.append(\"<a href='{}'>{}</a>\".format('references', 'References'))\n\n if self.has_client_with_capability('renameProvider'):\n actions.append(\"<a href='{}'>{}</a>\".format('rename', 'Rename'))\n\n return \"<p>\" + \" | \".join(actions) + \"</p>\"\n\n def format_diagnostic(self, diagnostic):\n if diagnostic.source:\n return \"<pre>[{}] {}</pre>\".format(diagnostic.source, escape(diagnostic.message, False))\n else:\n return \"<pre>{}</pre>\".format(escape(diagnostic.message, False))\n\n def diagnostics_content(self, diagnostics):\n by_severity = {} # type: Dict[int, List[str]]\n for diagnostic in diagnostics:\n by_severity.setdefault(diagnostic.severity, []).append(self.format_diagnostic(diagnostic))\n formatted = []\n for severity, items in by_severity.items():\n formatted.append(\"<div class='{}'>\".format(class_for_severity[severity]))\n formatted.extend(items)\n formatted.append(\"<a href='{}'>{}</a>\".format('code-actions',\n 'Code Actions'))\n formatted.append(\"</div>\")\n\n return \"\".join(formatted)\n\n def hover_content(self, point, response: 'Optional[Any]') -> str:\n contents = [\"No description available.\"]\n if isinstance(response, dict):\n # Flow returns None sometimes\n # See: https://github.com/flowtype/flow-language-server/issues/51\n response_content = response.get('contents')\n if response_content:\n if isinstance(response_content, list):\n contents = response_content\n else:\n contents = [response_content]\n\n formatted = []\n for item in contents:\n value = \"\"\n language = None\n if isinstance(item, str):\n value = escape(item)\n else:\n value = escape(item.get(\"value\"))\n language = item.get(\"language\")\n if language:\n formatted.append(\"```{}\\n{}\\n```\\n\".format(language, value))\n else:\n formatted.append(value)\n\n return mdpopups.md2html(self.view, \"\\n\".join(formatted))\n\n 
def show_hover(self, point, contents):\n mdpopups.show_popup(\n self.view,\n contents,\n css=popup_css,\n md=False,\n flags=sublime.HIDE_ON_MOUSE_MOVE_AWAY,\n location=point,\n wrapper_class=popup_class,\n max_width=800,\n on_navigate=lambda href: self.on_hover_navigate(href, point))\n\n def on_hover_navigate(self, href, point):\n if href == 'definition':\n self.run_command_from_point(point, \"lsp_symbol_definition\")\n elif href == 'references':\n self.run_command_from_point(point, \"lsp_symbol_references\")\n elif href == 'rename':\n self.run_command_from_point(point, \"lsp_symbol_rename\")\n elif href == 'code-actions':\n self.run_command_from_point(point, \"lsp_code_actions\")\n else:\n webbrowser.open_new_tab(href)\n\n def run_command_from_point(self, point, command_name):\n sel = self.view.sel()\n sel.clear()\n sel.add(sublime.Region(point, point))\n self.view.run_command(command_name)\n", "path": "plugin/hover.py"}]} | 2,929 | 123 |
gh_patches_debug_24635 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1438 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plone-upgrade to 5.0.3 shows plain text as result
This is caused by https://github.com/plone/plone.app.upgrade/pull/67 by @vangheem, though this change looks fine to me.
With the above change, when running `@@plone-upgrade`, even with dry-run selected, the result page is shown as text: you see plain html. Very strange. Reported here: https://community.plone.org/t/plone-5-0-3-soft-released/1699/4
When I empty the registry.xml, keeping only the main `registry` tags for safety, it all works fine. Keeping one of the two changed records, it again shows as text.
To check it:
- Use current coredev 5.0
- Create a Plone Site.
- Simulate a Plone 5.0.2 site: in portal_setup, Upgrades, select Products.CMFPlone:plone, and run the to502 upgrade profile.
- Go to @@plone-upgrade, optionally select dry-run, and run the upgrade.
Result: it will show as plain text.
BTW, afterwards, all is fine: the migration has succeeded and it looks like all pages show up fine.
Any idea?
</issue>
<code>
[start of Products/CMFPlone/resources/exportimport/bundles.py]
1 from plone.registry.interfaces import IRegistry
2 from zope.component import queryUtility
3
4 from ..browser.combine import combine_bundles
5
6
7 def combine(context):
8
9 logger = context.getLogger('bundles')
10 registry = queryUtility(IRegistry)
11
12 if registry is None:
13 logger.info("Cannot find registry")
14 return
15
16 body = context.readDataFile('registry.xml')
17 if body and "IBundleRegistry" in body:
18 site = context.getSite()
19 combine_bundles(site)
20
[end of Products/CMFPlone/resources/exportimport/bundles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/resources/exportimport/bundles.py b/Products/CMFPlone/resources/exportimport/bundles.py
--- a/Products/CMFPlone/resources/exportimport/bundles.py
+++ b/Products/CMFPlone/resources/exportimport/bundles.py
@@ -1,5 +1,6 @@
from plone.registry.interfaces import IRegistry
from zope.component import queryUtility
+from zope.globalrequest import getRequest
from ..browser.combine import combine_bundles
@@ -16,4 +17,20 @@
body = context.readDataFile('registry.xml')
if body and "IBundleRegistry" in body:
site = context.getSite()
+ # Calling combine_bundles will have as side effect that the
+ # Content-Type header of the response is set to application/javascript,
+ # which we do not want. So we reset it to the original at the end.
+ site = context.getSite()
+ request = getattr(site, 'REQUEST', getRequest())
+ if request is not None:
+ # Easily happens in tests.
+ orig_header = request.response.getHeader('Content-Type')
combine_bundles(site)
+ if request is not None:
+ new_header = request.response.getHeader('Content-Type')
+ if new_header != orig_header:
+ if orig_header is None:
+ # Setting it to None would result in the string 'None'.
+ # So pick a saner one.
+ orig_header = 'text/html'
+ request.response.setHeader('Content-Type', orig_header)
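Reading the patch above: the bundle-combining import step remembers the response's original Content-Type before calling `combine_bundles` and restores it afterwards, falling back to `text/html` when no header was set. A generic sketch of that pattern is below; the helper name is hypothetical and not part of the patch:

```python
def call_preserving_content_type(request, func, *args, **kwargs):
    """Run func(), then restore the response Content-Type it may have changed."""
    orig = request.response.getHeader('Content-Type') if request is not None else None
    try:
        return func(*args, **kwargs)
    finally:
        if request is not None:
            # Avoid writing the literal string 'None' into the header.
            request.response.setHeader('Content-Type', orig or 'text/html')
```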
| {"golden_diff": "diff --git a/Products/CMFPlone/resources/exportimport/bundles.py b/Products/CMFPlone/resources/exportimport/bundles.py\n--- a/Products/CMFPlone/resources/exportimport/bundles.py\n+++ b/Products/CMFPlone/resources/exportimport/bundles.py\n@@ -1,5 +1,6 @@\n from plone.registry.interfaces import IRegistry\n from zope.component import queryUtility\n+from zope.globalrequest import getRequest\n \n from ..browser.combine import combine_bundles\n \n@@ -16,4 +17,20 @@\n body = context.readDataFile('registry.xml')\n if body and \"IBundleRegistry\" in body:\n site = context.getSite()\n+ # Calling combine_bundles will have as side effect that the\n+ # Content-Type header of the response is set to application/javascript,\n+ # which we do not want. So we reset it to the original at the end.\n+ site = context.getSite()\n+ request = getattr(site, 'REQUEST', getRequest())\n+ if request is not None:\n+ # Easily happens in tests.\n+ orig_header = request.response.getHeader('Content-Type')\n combine_bundles(site)\n+ if request is not None:\n+ new_header = request.response.getHeader('Content-Type')\n+ if new_header != orig_header:\n+ if orig_header is None:\n+ # Setting it to None would result in the string 'None'.\n+ # So pick a saner one.\n+ orig_header = 'text/html'\n+ request.response.setHeader('Content-Type', orig_header)\n", "issue": "plone-upgrade to 5.0.3 shows plain text as result\nThis is caused by https://github.com/plone/plone.app.upgrade/pull/67 by @vangheem, though this change looks fine to me.\n\nWith the above change, when running `@@plone-upgrade`, even with dry-run selected, the result page is shown as text: you see plain html. Very strange. Reported here: https://community.plone.org/t/plone-5-0-3-soft-released/1699/4\nWhen I empty the registry.xml, keeping only the main `registry` tags for safety, it all works fine. Keeping one of the two changed records, it again shows as text.\n\nTo check it:\n- Use current coredev 5.0\n- Create a Plone Site.\n- Simulate a Plone 5.0.2 site: in portal_setup, Upgrades, select Products.CMFPlone:plone, and run the to502 upgrade profile.\n- Go to @@plone-upgrade, optionally select dry-run, and run the upgrade.\n\nResult: it will show as plain text.\nBTW, afterwards, all is fine: the migration has succeeded and it looks like all pages show up fine.\n\nAny idea?\n\n", "before_files": [{"content": "from plone.registry.interfaces import IRegistry\nfrom zope.component import queryUtility\n\nfrom ..browser.combine import combine_bundles\n\n\ndef combine(context):\n\n logger = context.getLogger('bundles')\n registry = queryUtility(IRegistry)\n\n if registry is None:\n logger.info(\"Cannot find registry\")\n return\n\n body = context.readDataFile('registry.xml')\n if body and \"IBundleRegistry\" in body:\n site = context.getSite()\n combine_bundles(site)\n", "path": "Products/CMFPlone/resources/exportimport/bundles.py"}]} | 955 | 340 |
gh_patches_debug_19129 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-3799 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Starlette/Fastapi: endpoint duration includes the duration of background tasks
### Which version of dd-trace-py are you using?
ddtrace==0.55.4
### Which version of pip are you using?
21.2.4
### Which version of the libraries are you using?
fastapi==0.68.2
starlette==0.14.2
### How can we reproduce your problem?
This is a minimal proof of concept `app.py`, run with `ddtrace-run uvicorn app:app`:
```
import asyncio
from ddtrace import tracer
from fastapi import FastAPI, BackgroundTasks
app = FastAPI()
async def some_background_task():
with tracer.start_span("some_background_task", activate=True):
tracer.context_provider.activate(None)
await asyncio.sleep(10)
@app.get("/")
async def main(background_tasks: BackgroundTasks):
background_tasks.add_task(some_background_task)
return "ok"
```
### What is the result that you get?
The duration of `/` is reported to be 10s, while the browser immediately receives the response.
`some_background_task` is also reported with a duration of 10s.
### What is the result that you expected?
I would expect that the reported endpoint duration matches the time it took to get the response, and that the background task is reported separately. Please don't mind that `tracer.context_provider.activate(None)` might be redundant here; I added it to show what I have tried.
FastAPI's `add_task` actually comes from starlette https://www.starlette.io/background/
I can understand why the endpoint duration includes the background task; this is the definition of starlette's `Response.__call__`:
https://github.com/encode/starlette/blob/ada99beee530e7b841ce320bc6e66f6dbd9ad781/starlette/responses.py#L159
```
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
await send(
{
"type": "http.response.start",
"status": self.status_code,
"headers": self.raw_headers,
}
)
await send({"type": "http.response.body", "body": self.body})
if self.background is not None:
await self.background()
```
The response headers and body are sent, but the function itself does not finish until all background tasks have been processed.
I believe that this is not what users of ddtrace would expect: background tasks are used to return a response early without waiting for background operations to finish; the reported endpoint duration should correspond to when the body was sent.
</issue>
<code>
[start of ddtrace/contrib/asgi/middleware.py]
1 import sys
2 from typing import TYPE_CHECKING
3
4 import ddtrace
5 from ddtrace import config
6 from ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY
7 from ddtrace.ext import SpanTypes
8 from ddtrace.ext import http
9
10 from .. import trace_utils
11 from ...internal.compat import reraise
12 from ...internal.logger import get_logger
13 from .utils import guarantee_single_callable
14
15
16 if TYPE_CHECKING:
17 from typing import Any
18 from typing import Mapping
19 from typing import Optional
20
21 from ddtrace import Span
22
23
24 log = get_logger(__name__)
25
26 config._add(
27 "asgi",
28 dict(service_name=config._get_service(default="asgi"), request_span_name="asgi.request", distributed_tracing=True),
29 )
30
31 ASGI_VERSION = "asgi.version"
32 ASGI_SPEC_VERSION = "asgi.spec_version"
33
34
35 def bytes_to_str(str_or_bytes):
36 return str_or_bytes.decode() if isinstance(str_or_bytes, bytes) else str_or_bytes
37
38
39 def _extract_versions_from_scope(scope, integration_config):
40 tags = {}
41
42 http_version = scope.get("http_version")
43 if http_version:
44 tags[http.VERSION] = http_version
45
46 scope_asgi = scope.get("asgi")
47
48 if scope_asgi and "version" in scope_asgi:
49 tags[ASGI_VERSION] = scope_asgi["version"]
50
51 if scope_asgi and "spec_version" in scope_asgi:
52 tags[ASGI_SPEC_VERSION] = scope_asgi["spec_version"]
53
54 return tags
55
56
57 def _extract_headers(scope):
58 headers = scope.get("headers")
59 if headers:
60 # headers: (Iterable[[byte string, byte string]])
61 return dict((bytes_to_str(k), bytes_to_str(v)) for (k, v) in headers)
62 return {}
63
64
65 def _default_handle_exception_span(exc, span):
66 """Default handler for exception for span"""
67 span.set_tag(http.STATUS_CODE, 500)
68
69
70 def span_from_scope(scope):
71 # type: (Mapping[str, Any]) -> Optional[Span]
72 """Retrieve the top-level ASGI span from the scope."""
73 return scope.get("datadog", {}).get("request_spans", [None])[0]
74
75
76 class TraceMiddleware:
77 """
78 ASGI application middleware that traces the requests.
79 Args:
80 app: The ASGI application.
81 tracer: Custom tracer. Defaults to the global tracer.
82 """
83
84 def __init__(
85 self,
86 app,
87 tracer=None,
88 integration_config=config.asgi,
89 handle_exception_span=_default_handle_exception_span,
90 span_modifier=None,
91 ):
92 self.app = guarantee_single_callable(app)
93 self.tracer = tracer or ddtrace.tracer
94 self.integration_config = integration_config
95 self.handle_exception_span = handle_exception_span
96 self.span_modifier = span_modifier
97
98 async def __call__(self, scope, receive, send):
99 if scope["type"] != "http":
100 return await self.app(scope, receive, send)
101
102 try:
103 headers = _extract_headers(scope)
104 except Exception:
105 log.warning("failed to decode headers for distributed tracing", exc_info=True)
106 headers = {}
107 else:
108 trace_utils.activate_distributed_headers(
109 self.tracer, int_config=self.integration_config, request_headers=headers
110 )
111
112 resource = "{} {}".format(scope["method"], scope["path"])
113
114 span = self.tracer.trace(
115 name=self.integration_config.get("request_span_name", "asgi.request"),
116 service=trace_utils.int_service(None, self.integration_config),
117 resource=resource,
118 span_type=SpanTypes.WEB,
119 )
120
121 if "datadog" not in scope:
122 scope["datadog"] = {"request_spans": [span]}
123 else:
124 scope["datadog"]["request_spans"].append(span)
125
126 if self.span_modifier:
127 self.span_modifier(span, scope)
128
129 sample_rate = self.integration_config.get_analytics_sample_rate(use_global_config=True)
130 if sample_rate is not None:
131 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)
132
133 method = scope.get("method")
134 server = scope.get("server")
135 if server and len(server) == 2:
136 port = server[1]
137 server_host = server[0] + (":" + str(port) if port is not None and port != 80 else "")
138 full_path = scope.get("root_path", "") + scope.get("path", "")
139 url = scope.get("scheme", "http") + "://" + server_host + full_path
140 else:
141 url = None
142
143 if self.integration_config.trace_query_string:
144 query_string = scope.get("query_string")
145 if len(query_string) > 0:
146 query_string = bytes_to_str(query_string)
147 else:
148 query_string = None
149
150 trace_utils.set_http_meta(
151 span, self.integration_config, method=method, url=url, query=query_string, request_headers=headers
152 )
153
154 tags = _extract_versions_from_scope(scope, self.integration_config)
155 span.set_tags(tags)
156
157 async def wrapped_send(message):
158 if span and message.get("type") == "http.response.start" and "status" in message:
159 status_code = message["status"]
160 else:
161 status_code = None
162
163 if "headers" in message:
164 response_headers = message["headers"]
165 else:
166 response_headers = None
167
168 trace_utils.set_http_meta(
169 span, self.integration_config, status_code=status_code, response_headers=response_headers
170 )
171
172 return await send(message)
173
174 try:
175 return await self.app(scope, receive, wrapped_send)
176 except Exception as exc:
177 (exc_type, exc_val, exc_tb) = sys.exc_info()
178 span.set_exc_info(exc_type, exc_val, exc_tb)
179 self.handle_exception_span(exc, span)
180 reraise(exc_type, exc_val, exc_tb)
181 finally:
182 try:
183 del scope["datadog"]["request_span"]
184 except KeyError:
185 pass
186 span.finish()
187
[end of ddtrace/contrib/asgi/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/asgi/middleware.py b/ddtrace/contrib/asgi/middleware.py
--- a/ddtrace/contrib/asgi/middleware.py
+++ b/ddtrace/contrib/asgi/middleware.py
@@ -169,7 +169,13 @@
span, self.integration_config, status_code=status_code, response_headers=response_headers
)
- return await send(message)
+ try:
+ return await send(message)
+ finally:
+ # Per asgi spec, "more_body" is used if there is still data to send
+ # Close the span if "http.response.body" has no more data left to send in the response.
+ if message.get("type") == "http.response.body" and not message.get("more_body", False):
+ span.finish()
try:
return await self.app(scope, receive, wrapped_send)
@@ -183,4 +189,5 @@
del scope["datadog"]["request_span"]
except KeyError:
pass
+
span.finish()
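In rough terms, the patch above finishes the request span as soon as the final `http.response.body` ASGI message is sent (no `more_body` left), so Starlette/FastAPI background tasks that run after the response no longer inflate the reported endpoint duration. A small sketch of that check, with illustrative names only:

```python
def is_final_body_message(message: dict) -> bool:
    """True when an ASGI send() message carries the last chunk of the response body."""
    return (
        message.get("type") == "http.response.body"
        and not message.get("more_body", False)
    )

# Inside a wrapped send(), per the patch:
#   try:
#       return await send(message)
#   finally:
#       if is_final_body_message(message):
#           span.finish()
```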
| {"golden_diff": "diff --git a/ddtrace/contrib/asgi/middleware.py b/ddtrace/contrib/asgi/middleware.py\n--- a/ddtrace/contrib/asgi/middleware.py\n+++ b/ddtrace/contrib/asgi/middleware.py\n@@ -169,7 +169,13 @@\n span, self.integration_config, status_code=status_code, response_headers=response_headers\n )\n \n- return await send(message)\n+ try:\n+ return await send(message)\n+ finally:\n+ # Per asgi spec, \"more_body\" is used if there is still data to send\n+ # Close the span if \"http.response.body\" has no more data left to send in the response.\n+ if message.get(\"type\") == \"http.response.body\" and not message.get(\"more_body\", False):\n+ span.finish()\n \n try:\n return await self.app(scope, receive, wrapped_send)\n@@ -183,4 +189,5 @@\n del scope[\"datadog\"][\"request_span\"]\n except KeyError:\n pass\n+\n span.finish()\n", "issue": "Starlette/Fastapi: endpoint duration includes the duration of background tasks\n### Which version of dd-trace-py are you using?\r\n\r\nddtrace==0.55.4\r\n\r\n### Which version of pip are you using?\r\n\r\n21.2.4\r\n\r\n\r\n### Which version of the libraries are you using?\r\n\r\nfastapi==0.68.2\r\nstarlette==0.14.2\r\n\r\n### How can we reproduce your problem?\r\n\r\nthis would be a minimal proof of concept `app.py`, running through `ddtrace-run uvicorn app:app`\r\n\r\n```\r\nimport asyncio\r\n\r\nfrom ddtrace import tracer\r\nfrom fastapi import FastAPI, BackgroundTasks\r\n\r\napp = FastAPI()\r\n\r\n\r\nasync def some_background_task():\r\n with tracer.start_span(\"some_background_task\", activate=True):\r\n tracer.context_provider.activate(None)\r\n await asyncio.sleep(10)\r\n\r\n\r\[email protected](\"/\")\r\nasync def main(background_tasks: BackgroundTasks):\r\n background_tasks.add_task(some_background_task)\r\n return \"ok\"\r\n\r\n```\r\n\r\n### What is the result that you get?\r\n\r\nThe duration of `/` is reported to be 10s, while the browser immediately receives the response.\r\n`some_background_task` is also reported with a duration of 10s.\r\n\r\n### What is the result that you expected?\r\n\r\nI would expect that the reported endpoint duration matches the time it took to get the response, and that the background task is reported separately. 
Please don't mind that `tracer.context_provider.activate(None)` might be redundant here, adding it here to show what I have tried.\r\n\r\nFastAPI's `add_task` actually comes from starlette https://www.starlette.io/background/\r\n\r\nI can understand why the endpoint duration includes the background task, this is the definition of starlette's `Response.__call__`:\r\n\r\nhttps://github.com/encode/starlette/blob/ada99beee530e7b841ce320bc6e66f6dbd9ad781/starlette/responses.py#L159\r\n```\r\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\r\n await send(\r\n {\r\n \"type\": \"http.response.start\",\r\n \"status\": self.status_code,\r\n \"headers\": self.raw_headers,\r\n }\r\n )\r\n await send({\"type\": \"http.response.body\", \"body\": self.body})\r\n\r\n if self.background is not None:\r\n await self.background()\r\n```\r\n\r\nThe response header and body is sent, but the function itself is not finished until all background tasks have been processed.\r\n\r\nI believe that this is not what users of ddtrace would expect: the background tasks are used to return a response early without waiting for background operations to finish ; the reported endpoint duration should correspond to when the body was sent\r\n\n", "before_files": [{"content": "import sys\nfrom typing import TYPE_CHECKING\n\nimport ddtrace\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.ext import http\n\nfrom .. import trace_utils\nfrom ...internal.compat import reraise\nfrom ...internal.logger import get_logger\nfrom .utils import guarantee_single_callable\n\n\nif TYPE_CHECKING:\n from typing import Any\n from typing import Mapping\n from typing import Optional\n\n from ddtrace import Span\n\n\nlog = get_logger(__name__)\n\nconfig._add(\n \"asgi\",\n dict(service_name=config._get_service(default=\"asgi\"), request_span_name=\"asgi.request\", distributed_tracing=True),\n)\n\nASGI_VERSION = \"asgi.version\"\nASGI_SPEC_VERSION = \"asgi.spec_version\"\n\n\ndef bytes_to_str(str_or_bytes):\n return str_or_bytes.decode() if isinstance(str_or_bytes, bytes) else str_or_bytes\n\n\ndef _extract_versions_from_scope(scope, integration_config):\n tags = {}\n\n http_version = scope.get(\"http_version\")\n if http_version:\n tags[http.VERSION] = http_version\n\n scope_asgi = scope.get(\"asgi\")\n\n if scope_asgi and \"version\" in scope_asgi:\n tags[ASGI_VERSION] = scope_asgi[\"version\"]\n\n if scope_asgi and \"spec_version\" in scope_asgi:\n tags[ASGI_SPEC_VERSION] = scope_asgi[\"spec_version\"]\n\n return tags\n\n\ndef _extract_headers(scope):\n headers = scope.get(\"headers\")\n if headers:\n # headers: (Iterable[[byte string, byte string]])\n return dict((bytes_to_str(k), bytes_to_str(v)) for (k, v) in headers)\n return {}\n\n\ndef _default_handle_exception_span(exc, span):\n \"\"\"Default handler for exception for span\"\"\"\n span.set_tag(http.STATUS_CODE, 500)\n\n\ndef span_from_scope(scope):\n # type: (Mapping[str, Any]) -> Optional[Span]\n \"\"\"Retrieve the top-level ASGI span from the scope.\"\"\"\n return scope.get(\"datadog\", {}).get(\"request_spans\", [None])[0]\n\n\nclass TraceMiddleware:\n \"\"\"\n ASGI application middleware that traces the requests.\n Args:\n app: The ASGI application.\n tracer: Custom tracer. 
Defaults to the global tracer.\n \"\"\"\n\n def __init__(\n self,\n app,\n tracer=None,\n integration_config=config.asgi,\n handle_exception_span=_default_handle_exception_span,\n span_modifier=None,\n ):\n self.app = guarantee_single_callable(app)\n self.tracer = tracer or ddtrace.tracer\n self.integration_config = integration_config\n self.handle_exception_span = handle_exception_span\n self.span_modifier = span_modifier\n\n async def __call__(self, scope, receive, send):\n if scope[\"type\"] != \"http\":\n return await self.app(scope, receive, send)\n\n try:\n headers = _extract_headers(scope)\n except Exception:\n log.warning(\"failed to decode headers for distributed tracing\", exc_info=True)\n headers = {}\n else:\n trace_utils.activate_distributed_headers(\n self.tracer, int_config=self.integration_config, request_headers=headers\n )\n\n resource = \"{} {}\".format(scope[\"method\"], scope[\"path\"])\n\n span = self.tracer.trace(\n name=self.integration_config.get(\"request_span_name\", \"asgi.request\"),\n service=trace_utils.int_service(None, self.integration_config),\n resource=resource,\n span_type=SpanTypes.WEB,\n )\n\n if \"datadog\" not in scope:\n scope[\"datadog\"] = {\"request_spans\": [span]}\n else:\n scope[\"datadog\"][\"request_spans\"].append(span)\n\n if self.span_modifier:\n self.span_modifier(span, scope)\n\n sample_rate = self.integration_config.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = scope.get(\"method\")\n server = scope.get(\"server\")\n if server and len(server) == 2:\n port = server[1]\n server_host = server[0] + (\":\" + str(port) if port is not None and port != 80 else \"\")\n full_path = scope.get(\"root_path\", \"\") + scope.get(\"path\", \"\")\n url = scope.get(\"scheme\", \"http\") + \"://\" + server_host + full_path\n else:\n url = None\n\n if self.integration_config.trace_query_string:\n query_string = scope.get(\"query_string\")\n if len(query_string) > 0:\n query_string = bytes_to_str(query_string)\n else:\n query_string = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, method=method, url=url, query=query_string, request_headers=headers\n )\n\n tags = _extract_versions_from_scope(scope, self.integration_config)\n span.set_tags(tags)\n\n async def wrapped_send(message):\n if span and message.get(\"type\") == \"http.response.start\" and \"status\" in message:\n status_code = message[\"status\"]\n else:\n status_code = None\n\n if \"headers\" in message:\n response_headers = message[\"headers\"]\n else:\n response_headers = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, status_code=status_code, response_headers=response_headers\n )\n\n return await send(message)\n\n try:\n return await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n (exc_type, exc_val, exc_tb) = sys.exc_info()\n span.set_exc_info(exc_type, exc_val, exc_tb)\n self.handle_exception_span(exc, span)\n reraise(exc_type, exc_val, exc_tb)\n finally:\n try:\n del scope[\"datadog\"][\"request_span\"]\n except KeyError:\n pass\n span.finish()\n", "path": "ddtrace/contrib/asgi/middleware.py"}]} | 2,884 | 232 |
gh_patches_debug_28087 | rasdani/github-patches | git_diff | sktime__sktime-1493 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] ValueError when fitting Prophet and X.index.name=='ds'
**Describe the bug**
The following error is raised when trying to fit Prophet using X whose DateTimeIndex is named "ds":
ValueError: 'ds' is both an index level and a column label, which is ambiguous.
It happens in the _merge_X method in the _ProphetAdapter, line 193, because after running line 191, X contains a column and an index called "ds".
A suggested solution would be to add this one-liner before line 189 of .forecasting.base.adapters._fbprophet.py:
```python
X.index.name = "index"
```
Note: This assumes that X never has a MultiIndex. Not sure if that ever happens, though.
This way, the user is still allowed to call X's index "ds". Since "ds" is the standard name that Prophet users use for their DateTime column, this is the most intuitive and least restrictive solution.
**To Reproduce**
```python
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.fbprophet import Prophet
from sktime.forecasting.model_selection import ForecastingGridSearchCV
from sktime.forecasting.model_selection import ExpandingWindowSplitter
y_train = load_airline().to_timestamp(freq='M')
# Commenting out the following line makes everything run smoothly
y_train.index.name = 'ds'
X_train = pd.DataFrame({'x1':y_train+200, 'x2':y_train+100})
forecaster = Prophet(yearly_seasonality=False, weekly_seasonality=False, daily_seasonality=False)
forecaster.fit(y_train, X_train)
```
**Expected behavior**
No error
**Additional context**
**Versions**
<details>
System:
python: 3.9.7 | packaged by conda-forge | (default, Sep 23 2021, 07:24:41) [MSC v.1916 64 bit (AMD64)]
executable: C:\Users\............\env\python.exe
machine: Windows-10-10.0.19043-SP0
Python dependencies:
pip: 21.2.4
setuptools: 58.1.0
sklearn: 1.0
sktime: 0.7.0
statsmodels: 0.12.2
numpy: 1.21.2
scipy: 1.7.1
Cython: 0.29.24
pandas: 1.3.3
matplotlib: 3.4.3
joblib: 1.0.1
numba: 0.53.1
pmdarima: 1.8.3
tsfresh: None
</details>
</issue>
<code>
[start of sktime/utils/validation/series.py]
1 #!/usr/bin/env python3 -u
2 # -*- coding: utf-8 -*-
3
4 """Functions for checking input data."""
5
6 __author__ = ["Markus Löning", "Drishti Bhasin"]
7 __all__ = [
8 "check_series",
9 "check_time_index",
10 "check_equal_time_index",
11 "check_consistent_index_type",
12 ]
13 import pandas as pd
14 import numpy as np
15
16 # We currently support the following types for input data and time index types.
17 VALID_DATA_TYPES = (pd.DataFrame, pd.Series, np.ndarray)
18 VALID_INDEX_TYPES = (pd.Int64Index, pd.RangeIndex, pd.PeriodIndex, pd.DatetimeIndex)
19
20
21 def _check_is_univariate(y, var_name="input"):
22 """Check if series is univariate."""
23 if isinstance(y, pd.DataFrame):
24 nvars = y.shape[1]
25 if nvars > 1:
26 raise ValueError(
27 f"{var_name} must be univariate, but found {nvars} variables."
28 )
29 if isinstance(y, np.ndarray) and y.ndim > 1 and y.shape[1] > 1:
30 raise ValueError(
31 f"{var_name} must be univariate, but found np.ndarray with more than "
32 "one column"
33 )
34
35
36 def _check_is_multivariate(Z, var_name="input"):
37 """Check if series is multivariate."""
38 if isinstance(Z, pd.Series):
39 raise ValueError(f"{var_name} must have 2 or more variables, but found 1.")
40 if isinstance(Z, pd.DataFrame):
41 nvars = Z.shape[1]
42 if nvars < 2:
43 raise ValueError(
44 f"{var_name} must have 2 or more variables, but found {nvars}."
45 )
46 if isinstance(Z, np.ndarray):
47 if Z.ndim == 1 or (Z.ndim == 2 and Z.shape[1] == 1):
48 raise ValueError(f"{var_name} must have 2 or more variables, but found 1.")
49
50
51 def check_series(
52 Z,
53 enforce_univariate=False,
54 enforce_multivariate=False,
55 allow_empty=False,
56 allow_numpy=True,
57 allow_None=True,
58 enforce_index_type=None,
59 var_name="input",
60 ):
61 """Validate input data to be a valid mtype for Series.
62
63 Parameters
64 ----------
65 Z : pd.Series, pd.DataFrame, np.ndarray, or None
66 Univariate or multivariate time series.
67 enforce_univariate : bool, default = False
68 If True, multivariate Z will raise an error.
69 enforce_multivariate: bool, default = False
70 If True, univariate Z will raise an error.
71 allow_empty : bool, default = False
72 whether a container with zero samples is allowed
73 allow_numpy : bool, default = True
74 whether no error is raised if Z is in a valid numpy.ndarray format
75 allow_None : bool, default = True
76 whether no error is raised if Z is None
77 enforce_index_type : type, default = None
78 type of time index
79 var_name : str, default = "input" - variable name printed in error messages
80
81 Returns
82 -------
83 Z : pd.Series, pd.DataFrame, np.ndarray, or None
84 Validated time series - a reference to the input Z
85
86 Raises
87 ------
88 TypeError - if Z is not in a valid type or format for scitype Series
89 if enforce_univariate is True:
90 ValueError if Z has 2 or more columns
91 if enforce_multivariate is True:
92 ValueError if Z has 1 column
93 if allow_numpy is false:
94 TypeError - if Z is of type np.ndarray
95 if allow_empty is false:
96 ValueError - if Z has length 0
97 if allow_None is false:
98 ValueError - if Z is None
99 if enforce_index_type is not None and Z is pandas type:
100 ValueError - if Z has index type other than enforce_index_type
101 """
102 if Z is None:
103 if allow_None:
104 return Z
105 else:
106 raise ValueError(var_name + " cannot be None")
107
108 # Check if pandas series or numpy array
109 if not allow_numpy:
110 valid_data_types = tuple(
111 filter(lambda x: x is not np.ndarray, VALID_DATA_TYPES)
112 )
113 else:
114 valid_data_types = VALID_DATA_TYPES
115
116 if not isinstance(Z, valid_data_types):
117 raise TypeError(
118 f"{var_name} must be a one of {valid_data_types}, but found type: {type(Z)}"
119 )
120
121 if enforce_univariate and enforce_multivariate:
122 raise ValueError(
123 "`enforce_univariate` and `enforce_multivariate` cannot both be set to "
124 "True."
125 )
126
127 if enforce_univariate:
128 _check_is_univariate(Z, var_name=var_name)
129
130 if enforce_multivariate:
131 _check_is_multivariate(Z, var_name=var_name)
132
133 # check time index if input data is not an NumPy ndarray
134 if not isinstance(Z, np.ndarray):
135 check_time_index(
136 Z.index,
137 allow_empty=allow_empty,
138 enforce_index_type=enforce_index_type,
139 var_name=var_name,
140 )
141
142 return Z
143
144
145 def check_time_index(
146 index, allow_empty=False, enforce_index_type=None, var_name="input"
147 ):
148 """Check time index.
149
150 Parameters
151 ----------
152 index : pd.Index or np.array
153 Time index
154 allow_empty : bool, optional (default=False)
155 If False, empty `index` raises an error.
156 enforce_index_type : type, optional (default=None)
157 type of time index
158 var_name : str, default = "input" - variable name printed in error messages
159
160 Returns
161 -------
162 time_index : pd.Index
163 Validated time index - a reference to the input index
164 """
165 if isinstance(index, np.ndarray):
166 index = pd.Index(index)
167
168 # We here check for type equality because isinstance does not
169 # work reliably because index types inherit from each other.
170 if not type(index) in VALID_INDEX_TYPES:
171 raise NotImplementedError(
172 f"{type(index)} is not supported for {var_name}, use "
173 f"one of {VALID_INDEX_TYPES} instead."
174 )
175
176 if enforce_index_type and type(index) is not enforce_index_type:
177 raise NotImplementedError(
178 f"{type(index)} is not supported for {var_name}, use "
179 f"type: {enforce_index_type} instead."
180 )
181
182 # Check time index is ordered in time
183 if not index.is_monotonic:
184 raise ValueError(
185 f"The (time) index of {var_name} must be sorted monotonically increasing, "
186 f"but found: {index}"
187 )
188
189 # Check that index is not empty
190 if not allow_empty and len(index) < 1:
191 raise ValueError(
192 f"{var_name} must contain at least some values, but found none."
193 )
194
195 return index
196
197
198 def check_equal_time_index(*ys):
199 """Check that time series have the same (time) indices.
200
201 Parameters
202 ----------
203 *ys : tuple of pd.Series, pd.DataFrame or np.ndarray, or None
204 One or more time series
205
206 Raises
207 ------
208 ValueError
209 If there are at least two no=-None entries of ys
210 of which pandas indices are not the same
211 np.ndarray are considered having integer range index on axis 0
212 """
213 # None entries are ignored
214 y_not_None = [y for y in ys if y is not None]
215
216 # if there is no or just one element, there is nothing to compare
217 if len(y_not_None) < 2:
218 return None
219
220 # only validate indices if data is passed as pd.Series
221 if isinstance(y_not_None[0], np.ndarray):
222 first_index = pd.Index(range(len(y_not_None[0])))
223 else:
224 first_index = y_not_None[0].index
225
226 check_time_index(first_index)
227
228 for y in y_not_None[1:]:
229 if isinstance(y, np.ndarray):
230 y_index = pd.Index(y)
231 else:
232 y_index = y.index
233
234 check_time_index(y_index)
235
236 if not first_index.equals(y_index):
237 raise ValueError("Some (time) indices are not the same.")
238
239
240 def _is_int_index(index):
241 """Check if index type is one of pd.RangeIndex or pd.Int64Index."""
242 return type(index) in (pd.Int64Index, pd.RangeIndex)
243
244
245 def check_consistent_index_type(a, b):
246 """Check that two indices have consistent types.
247
248 Parameters
249 ----------
250 a : pd.Index
251 Index being checked for consistency
252 b : pd.Index
253 Index being checked for consistency
254
255 Raises
256 ------
257 TypeError
258 If index types are inconsistent
259 """
260 msg = (
261 "Found series with inconsistent index types, please make sure all "
262 "series have the same index type."
263 )
264
265 if _is_int_index(a):
266 if not _is_int_index(b):
267 raise TypeError(msg)
268
269 else:
270 # check types, note that isinstance() does not work here because index
271 # types inherit from each other, hence we check for type equality
272 if not type(a) is type(b): # noqa
273 raise TypeError(msg)
274
[end of sktime/utils/validation/series.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sktime/utils/validation/series.py b/sktime/utils/validation/series.py
--- a/sktime/utils/validation/series.py
+++ b/sktime/utils/validation/series.py
@@ -10,8 +10,8 @@
"check_equal_time_index",
"check_consistent_index_type",
]
-import pandas as pd
import numpy as np
+import pandas as pd
# We currently support the following types for input data and time index types.
VALID_DATA_TYPES = (pd.DataFrame, pd.Series, np.ndarray)
@@ -56,6 +56,7 @@
allow_numpy=True,
allow_None=True,
enforce_index_type=None,
+ allow_index_names=False,
var_name="input",
):
"""Validate input data to be a valid mtype for Series.
@@ -76,6 +77,8 @@
whether no error is raised if Z is None
enforce_index_type : type, default = None
type of time index
+ allow_index_names : bool, default = False
+ If False, names of Z.index will be set to None
var_name : str, default = "input" - variable name printed in error messages
Returns
@@ -139,6 +142,9 @@
var_name=var_name,
)
+ if not allow_index_names and not isinstance(Z, np.ndarray):
+ Z.index.names = [None for name in Z.index.names]
+
return Z
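A hedged usage sketch of the `allow_index_names` behaviour added above (the import path is assumed from the file shown in this patch): with the new default, index names such as `'ds'` are cleared during validation, which avoids the "'ds' is both an index level and a column label" ambiguity from the issue.

```python
import pandas as pd
from sktime.utils.validation.series import check_series  # assumed import path

y = pd.Series(
    [112.0, 118.0, 132.0],
    index=pd.date_range("1949-01-31", periods=3, freq="M"),
)
y.index.name = "ds"          # the index name that confused Prophet's _merge_X

y = check_series(y)          # allow_index_names defaults to False
assert y.index.name is None  # index names are dropped during validation
```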
| {"golden_diff": "diff --git a/sktime/utils/validation/series.py b/sktime/utils/validation/series.py\n--- a/sktime/utils/validation/series.py\n+++ b/sktime/utils/validation/series.py\n@@ -10,8 +10,8 @@\n \"check_equal_time_index\",\n \"check_consistent_index_type\",\n ]\n-import pandas as pd\n import numpy as np\n+import pandas as pd\n \n # We currently support the following types for input data and time index types.\n VALID_DATA_TYPES = (pd.DataFrame, pd.Series, np.ndarray)\n@@ -56,6 +56,7 @@\n allow_numpy=True,\n allow_None=True,\n enforce_index_type=None,\n+ allow_index_names=False,\n var_name=\"input\",\n ):\n \"\"\"Validate input data to be a valid mtype for Series.\n@@ -76,6 +77,8 @@\n whether no error is raised if Z is None\n enforce_index_type : type, default = None\n type of time index\n+ allow_index_names : bool, default = False\n+ If False, names of Z.index will be set to None\n var_name : str, default = \"input\" - variable name printed in error messages\n \n Returns\n@@ -139,6 +142,9 @@\n var_name=var_name,\n )\n \n+ if not allow_index_names and not isinstance(Z, np.ndarray):\n+ Z.index.names = [None for name in Z.index.names]\n+\n return Z\n", "issue": "[BUG] ValueError when fitting Prophet and X.index.name=='ds' \n**Describe the bug**\r\nThe following error is raised when trying to fit Prophet using X where the name of X's DateTimeIndex is named \"ds\":\r\nValueError: 'ds' is both an index level and a column label, which is ambiguous.\r\n\r\nIt happens in the _merge_X method in the _ProphetAdapter, line 193, because after running line 191, X contains a column and an index called \"ds\".\r\n\r\nA suggested solution would be to add this oneliner before line 189 of .forecasting.base.adapters._fbprophet.py:\r\n```python\r\nX.index.name = \"index\"\r\n```\r\nNote: This assumes that X never has a MultiIndex. Not sure if that ever happens though..\r\n\r\nThis way, the user is still allowed to call X's index \"ds\". 
Since \"ds\" is the standard name that Prophet users use for their DateTime column, this is the most intuitive and least restrictive solution.\r\n\r\n**To Reproduce**\r\n```python\r\nimport pandas as pd\r\nfrom sktime.datasets import load_airline\r\nfrom sktime.forecasting.fbprophet import Prophet\r\nfrom sktime.forecasting.model_selection import ForecastingGridSearchCV\r\nfrom sktime.forecasting.model_selection import ExpandingWindowSplitter\r\n\r\ny_train = load_airline().to_timestamp(freq='M')\r\n\r\n# Commenting out the following line makes everything run smoothly\r\ny_train.index.name = 'ds'\r\n\r\nX_train = pd.DataFrame({'x1':y_train+200, 'x2':y_train+100})\r\n\r\nforecaster = Prophet(yearly_seasonality=False, weekly_seasonality=False, daily_seasonality=False)\r\nforecaster.fit(y_train, X_train)\r\n```\r\n\r\n**Expected behavior**\r\nNo error\r\n\r\n**Additional context**\r\n\r\n\r\n**Versions**\r\n<details>\r\n\r\nSystem:\r\n python: 3.9.7 | packaged by conda-forge | (default, Sep 23 2021, 07:24:41) [MSC v.1916 64 bit (AMD64)]\r\nexecutable: C:\\Users\\............\\env\\python.exe\r\n machine: Windows-10-10.0.19043-SP0\r\n\r\nPython dependencies:\r\n pip: 21.2.4\r\n setuptools: 58.1.0\r\n sklearn: 1.0\r\n sktime: 0.7.0\r\n statsmodels: 0.12.2\r\n numpy: 1.21.2\r\n scipy: 1.7.1\r\n Cython: 0.29.24\r\n pandas: 1.3.3\r\n matplotlib: 3.4.3\r\n joblib: 1.0.1\r\n numba: 0.53.1\r\n pmdarima: 1.8.3\r\n tsfresh: None\r\n</details>\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3 -u\n# -*- coding: utf-8 -*-\n\n\"\"\"Functions for checking input data.\"\"\"\n\n__author__ = [\"Markus L\u00f6ning\", \"Drishti Bhasin\"]\n__all__ = [\n \"check_series\",\n \"check_time_index\",\n \"check_equal_time_index\",\n \"check_consistent_index_type\",\n]\nimport pandas as pd\nimport numpy as np\n\n# We currently support the following types for input data and time index types.\nVALID_DATA_TYPES = (pd.DataFrame, pd.Series, np.ndarray)\nVALID_INDEX_TYPES = (pd.Int64Index, pd.RangeIndex, pd.PeriodIndex, pd.DatetimeIndex)\n\n\ndef _check_is_univariate(y, var_name=\"input\"):\n \"\"\"Check if series is univariate.\"\"\"\n if isinstance(y, pd.DataFrame):\n nvars = y.shape[1]\n if nvars > 1:\n raise ValueError(\n f\"{var_name} must be univariate, but found {nvars} variables.\"\n )\n if isinstance(y, np.ndarray) and y.ndim > 1 and y.shape[1] > 1:\n raise ValueError(\n f\"{var_name} must be univariate, but found np.ndarray with more than \"\n \"one column\"\n )\n\n\ndef _check_is_multivariate(Z, var_name=\"input\"):\n \"\"\"Check if series is multivariate.\"\"\"\n if isinstance(Z, pd.Series):\n raise ValueError(f\"{var_name} must have 2 or more variables, but found 1.\")\n if isinstance(Z, pd.DataFrame):\n nvars = Z.shape[1]\n if nvars < 2:\n raise ValueError(\n f\"{var_name} must have 2 or more variables, but found {nvars}.\"\n )\n if isinstance(Z, np.ndarray):\n if Z.ndim == 1 or (Z.ndim == 2 and Z.shape[1] == 1):\n raise ValueError(f\"{var_name} must have 2 or more variables, but found 1.\")\n\n\ndef check_series(\n Z,\n enforce_univariate=False,\n enforce_multivariate=False,\n allow_empty=False,\n allow_numpy=True,\n allow_None=True,\n enforce_index_type=None,\n var_name=\"input\",\n):\n \"\"\"Validate input data to be a valid mtype for Series.\n\n Parameters\n ----------\n Z : pd.Series, pd.DataFrame, np.ndarray, or None\n Univariate or multivariate time series.\n enforce_univariate : bool, default = False\n If True, multivariate Z will raise an error.\n enforce_multivariate: bool, default = False\n 
If True, univariate Z will raise an error.\n allow_empty : bool, default = False\n whether a container with zero samples is allowed\n allow_numpy : bool, default = True\n whether no error is raised if Z is in a valid numpy.ndarray format\n allow_None : bool, default = True\n whether no error is raised if Z is None\n enforce_index_type : type, default = None\n type of time index\n var_name : str, default = \"input\" - variable name printed in error messages\n\n Returns\n -------\n Z : pd.Series, pd.DataFrame, np.ndarray, or None\n Validated time series - a reference to the input Z\n\n Raises\n ------\n TypeError - if Z is not in a valid type or format for scitype Series\n if enforce_univariate is True:\n ValueError if Z has 2 or more columns\n if enforce_multivariate is True:\n ValueError if Z has 1 column\n if allow_numpy is false:\n TypeError - if Z is of type np.ndarray\n if allow_empty is false:\n ValueError - if Z has length 0\n if allow_None is false:\n ValueError - if Z is None\n if enforce_index_type is not None and Z is pandas type:\n ValueError - if Z has index type other than enforce_index_type\n \"\"\"\n if Z is None:\n if allow_None:\n return Z\n else:\n raise ValueError(var_name + \" cannot be None\")\n\n # Check if pandas series or numpy array\n if not allow_numpy:\n valid_data_types = tuple(\n filter(lambda x: x is not np.ndarray, VALID_DATA_TYPES)\n )\n else:\n valid_data_types = VALID_DATA_TYPES\n\n if not isinstance(Z, valid_data_types):\n raise TypeError(\n f\"{var_name} must be a one of {valid_data_types}, but found type: {type(Z)}\"\n )\n\n if enforce_univariate and enforce_multivariate:\n raise ValueError(\n \"`enforce_univariate` and `enforce_multivariate` cannot both be set to \"\n \"True.\"\n )\n\n if enforce_univariate:\n _check_is_univariate(Z, var_name=var_name)\n\n if enforce_multivariate:\n _check_is_multivariate(Z, var_name=var_name)\n\n # check time index if input data is not an NumPy ndarray\n if not isinstance(Z, np.ndarray):\n check_time_index(\n Z.index,\n allow_empty=allow_empty,\n enforce_index_type=enforce_index_type,\n var_name=var_name,\n )\n\n return Z\n\n\ndef check_time_index(\n index, allow_empty=False, enforce_index_type=None, var_name=\"input\"\n):\n \"\"\"Check time index.\n\n Parameters\n ----------\n index : pd.Index or np.array\n Time index\n allow_empty : bool, optional (default=False)\n If False, empty `index` raises an error.\n enforce_index_type : type, optional (default=None)\n type of time index\n var_name : str, default = \"input\" - variable name printed in error messages\n\n Returns\n -------\n time_index : pd.Index\n Validated time index - a reference to the input index\n \"\"\"\n if isinstance(index, np.ndarray):\n index = pd.Index(index)\n\n # We here check for type equality because isinstance does not\n # work reliably because index types inherit from each other.\n if not type(index) in VALID_INDEX_TYPES:\n raise NotImplementedError(\n f\"{type(index)} is not supported for {var_name}, use \"\n f\"one of {VALID_INDEX_TYPES} instead.\"\n )\n\n if enforce_index_type and type(index) is not enforce_index_type:\n raise NotImplementedError(\n f\"{type(index)} is not supported for {var_name}, use \"\n f\"type: {enforce_index_type} instead.\"\n )\n\n # Check time index is ordered in time\n if not index.is_monotonic:\n raise ValueError(\n f\"The (time) index of {var_name} must be sorted monotonically increasing, \"\n f\"but found: {index}\"\n )\n\n # Check that index is not empty\n if not allow_empty and len(index) < 1:\n raise 
ValueError(\n f\"{var_name} must contain at least some values, but found none.\"\n )\n\n return index\n\n\ndef check_equal_time_index(*ys):\n \"\"\"Check that time series have the same (time) indices.\n\n Parameters\n ----------\n *ys : tuple of pd.Series, pd.DataFrame or np.ndarray, or None\n One or more time series\n\n Raises\n ------\n ValueError\n If there are at least two no=-None entries of ys\n of which pandas indices are not the same\n np.ndarray are considered having integer range index on axis 0\n \"\"\"\n # None entries are ignored\n y_not_None = [y for y in ys if y is not None]\n\n # if there is no or just one element, there is nothing to compare\n if len(y_not_None) < 2:\n return None\n\n # only validate indices if data is passed as pd.Series\n if isinstance(y_not_None[0], np.ndarray):\n first_index = pd.Index(range(len(y_not_None[0])))\n else:\n first_index = y_not_None[0].index\n\n check_time_index(first_index)\n\n for y in y_not_None[1:]:\n if isinstance(y, np.ndarray):\n y_index = pd.Index(y)\n else:\n y_index = y.index\n\n check_time_index(y_index)\n\n if not first_index.equals(y_index):\n raise ValueError(\"Some (time) indices are not the same.\")\n\n\ndef _is_int_index(index):\n \"\"\"Check if index type is one of pd.RangeIndex or pd.Int64Index.\"\"\"\n return type(index) in (pd.Int64Index, pd.RangeIndex)\n\n\ndef check_consistent_index_type(a, b):\n \"\"\"Check that two indices have consistent types.\n\n Parameters\n ----------\n a : pd.Index\n Index being checked for consistency\n b : pd.Index\n Index being checked for consistency\n\n Raises\n ------\n TypeError\n If index types are inconsistent\n \"\"\"\n msg = (\n \"Found series with inconsistent index types, please make sure all \"\n \"series have the same index type.\"\n )\n\n if _is_int_index(a):\n if not _is_int_index(b):\n raise TypeError(msg)\n\n else:\n # check types, note that isinstance() does not work here because index\n # types inherit from each other, hence we check for type equality\n if not type(a) is type(b): # noqa\n raise TypeError(msg)\n", "path": "sktime/utils/validation/series.py"}]} | 3,902 | 321 |
gh_patches_debug_9715 | rasdani/github-patches | git_diff | OCA__server-tools-74 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[7.0] [base_optional_quick_create] AttributeError: 'NoneType' object has no attribute 'name_create'
The error occurs at startup, before a migration is run, if a model has been removed
</issue>
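For context, a minimal sketch of the guard that avoids the crash — an excerpt of `_register_hook`, mirroring the patch shown further down in this entry (surrounding code elided):

```python
for model in self.browse(cr, SUPERUSER_ID, ids):
    if model.avoid_quick_create:
        model_name = model.model
        # pool.get() returns None when the model's Python class is gone,
        # e.g. its module was removed before this migration ran.
        model_obj = self.pool.get(model_name)
        if model_obj and not hasattr(model_obj, 'check_quick_create'):
            model_obj.name_create = self._wrap_name_create(
                model_obj.name_create, model_name)
            model_obj.check_quick_create = True
```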
<code>
[start of base_optional_quick_create/model.py]
1 # -*- coding: utf-8 -*-
2 ##############################################################################
3 #
4 # Copyright (C) 2013 Agile Business Group sagl (<http://www.agilebg.com>)
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU Affero General Public License as published
8 # by the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU Affero General Public License for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 ##############################################################################
20
21 from openerp.osv import orm, fields
22 from openerp import SUPERUSER_ID
23 from openerp.tools.translate import _
24
25
26 class ir_model(orm.Model):
27
28 _inherit = 'ir.model'
29
30 _columns = {
31 'avoid_quick_create': fields.boolean('Avoid quick create'),
32 }
33
34 def _wrap_name_create(self, old_create, model):
35 def wrapper(cr, uid, name, context=None):
36 raise orm.except_orm(_('Error'),
37 _("Can't create quickly. "
38 "Opening create form"))
39 return wrapper
40
41 def _register_hook(self, cr, ids=None):
42 if ids is None:
43 ids = self.search(cr, SUPERUSER_ID, [])
44 for model in self.browse(cr, SUPERUSER_ID, ids):
45 if model.avoid_quick_create:
46 model_name = model.model
47 model_obj = self.pool.get(model_name)
48 if not hasattr(model_obj, 'check_quick_create'):
49 model_obj.name_create = self._wrap_name_create(
50 model_obj.name_create,
51 model_name)
52 model_obj.check_quick_create = True
53 return True
54
55 def create(self, cr, uid, vals, context=None):
56 res_id = super(ir_model, self).create(cr, uid, vals, context=context)
57 self._register_hook(cr, [res_id])
58 return res_id
59
60 def write(self, cr, uid, ids, vals, context=None):
61 if isinstance(ids, (int, long)):
62 ids = [ids]
63 super(ir_model, self).write(cr, uid, ids, vals, context=context)
64 self._register_hook(cr, ids)
65 return True
66
[end of base_optional_quick_create/model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/base_optional_quick_create/model.py b/base_optional_quick_create/model.py
--- a/base_optional_quick_create/model.py
+++ b/base_optional_quick_create/model.py
@@ -45,7 +45,7 @@
if model.avoid_quick_create:
model_name = model.model
model_obj = self.pool.get(model_name)
- if not hasattr(model_obj, 'check_quick_create'):
+ if model_obj and not hasattr(model_obj, 'check_quick_create'):
model_obj.name_create = self._wrap_name_create(
model_obj.name_create,
model_name)
| {"golden_diff": "diff --git a/base_optional_quick_create/model.py b/base_optional_quick_create/model.py\n--- a/base_optional_quick_create/model.py\n+++ b/base_optional_quick_create/model.py\n@@ -45,7 +45,7 @@\n if model.avoid_quick_create:\n model_name = model.model\n model_obj = self.pool.get(model_name)\n- if not hasattr(model_obj, 'check_quick_create'):\n+ if model_obj and not hasattr(model_obj, 'check_quick_create'):\n model_obj.name_create = self._wrap_name_create(\n model_obj.name_create,\n model_name)\n", "issue": "[7.0] [base_optional_quick_create] AttributeError: 'NoneType' object has no attribute 'name_create'\nError at starting before a migration if a model has been removed\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# Copyright (C) 2013 Agile Business Group sagl (<http://www.agilebg.com>)\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n##############################################################################\n\nfrom openerp.osv import orm, fields\nfrom openerp import SUPERUSER_ID\nfrom openerp.tools.translate import _\n\n\nclass ir_model(orm.Model):\n\n _inherit = 'ir.model'\n\n _columns = {\n 'avoid_quick_create': fields.boolean('Avoid quick create'),\n }\n\n def _wrap_name_create(self, old_create, model):\n def wrapper(cr, uid, name, context=None):\n raise orm.except_orm(_('Error'),\n _(\"Can't create quickly. \"\n \"Opening create form\"))\n return wrapper\n\n def _register_hook(self, cr, ids=None):\n if ids is None:\n ids = self.search(cr, SUPERUSER_ID, [])\n for model in self.browse(cr, SUPERUSER_ID, ids):\n if model.avoid_quick_create:\n model_name = model.model\n model_obj = self.pool.get(model_name)\n if not hasattr(model_obj, 'check_quick_create'):\n model_obj.name_create = self._wrap_name_create(\n model_obj.name_create,\n model_name)\n model_obj.check_quick_create = True\n return True\n\n def create(self, cr, uid, vals, context=None):\n res_id = super(ir_model, self).create(cr, uid, vals, context=context)\n self._register_hook(cr, [res_id])\n return res_id\n\n def write(self, cr, uid, ids, vals, context=None):\n if isinstance(ids, (int, long)):\n ids = [ids]\n super(ir_model, self).write(cr, uid, ids, vals, context=context)\n self._register_hook(cr, ids)\n return True\n", "path": "base_optional_quick_create/model.py"}]} | 1,244 | 124 |
gh_patches_debug_27735 | rasdani/github-patches | git_diff | e-valuation__EvaP-1263 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove ViewTest where possible
Right now we have a `class ViewTest`, for which there is one subclass for each view that we have.
For views that we have tested properly, it provides no additional value, and I propose to replace it with the original `WebTest`. 
Originally I proposed to remove it altogether and copy-paste its test into all the test cases that wouldn't have any valuable test otherwise. @janno42 convinced me to leave it there and rename it to `WebTestWith200Check` instead.
</issue>
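As an illustration of the proposed `WebTestWith200Check`, a minimal sketch — the class body and attribute names here are assumed for the example, not taken from the EvaP code base:

```python
from django_webtest import WebTest


class WebTestWith200Check(WebTest):
    """Shared base class whose only test is that `url` answers with HTTP 200."""
    url = '/'
    test_users = []

    def test_check_response_code_200(self):
        for user in self.test_users:
            self.app.get(self.url, user=user, status=200)
```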
<code>
[start of evap/evaluation/migrations/0002_initial_data.py]
1 from django.db import migrations
2 from django.contrib.auth.models import Group
3
4
5 def insert_emailtemplates(apps, _schema_editor):
6 emailtemplates = [
7 ("Lecturer Review Notice", "[EvaP] New Course ready for approval"),
8 ("Student Reminder", "[EvaP] Evaluation period is ending"),
9 ("Publishing Notice", "[EvaP] A course has been published"),
10 ("Login Key Created", "[EvaP] A login key was created"),
11 ]
12
13 EmailTemplate = apps.get_model("evaluation", "EmailTemplate")
14
15 for name, subject in emailtemplates:
16 if not EmailTemplate.objects.filter(name=name).exists():
17 EmailTemplate.objects.create(name=name, subject=subject, body="")
18
19 Group.objects.create(name="Staff")
20
21
22 class Migration(migrations.Migration):
23
24 dependencies = [
25 ('evaluation', '0001_initial'),
26 ]
27
28 operations = [
29 migrations.RunPython(insert_emailtemplates),
30 ]
31
[end of evap/evaluation/migrations/0002_initial_data.py]
[start of evap/grades/migrations/0002_initial_data.py]
1 from django.db import migrations
2 from django.contrib.auth.models import Group
3
4
5 def add_group(_apps, _schema_editor):
6 Group.objects.create(name="Grade publisher")
7
8
9 class Migration(migrations.Migration):
10
11 dependencies = [
12 ('grades', '0001_initial'),
13 ]
14
15 operations = [
16 migrations.RunPython(add_group),
17 ]
18
[end of evap/grades/migrations/0002_initial_data.py]
[start of evap/evaluation/migrations/0055_reviewer_group.py]
1 from django.contrib.auth.models import Group
2 from django.db import migrations
3
4
5 def add_group(_apps, _schema_editor):
6 Group.objects.create(name="Reviewer")
7
8
9 def delete_group(_apps, _schema_editor):
10 Group.objects.get(name="Reviewer").delete()
11
12
13 class Migration(migrations.Migration):
14
15 dependencies = [
16 ('evaluation', '0054_userprofile_language'),
17 ]
18
19 operations = [
20 migrations.RunPython(add_group, reverse_code=delete_group),
21 ]
22
[end of evap/evaluation/migrations/0055_reviewer_group.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/evaluation/migrations/0002_initial_data.py b/evap/evaluation/migrations/0002_initial_data.py
--- a/evap/evaluation/migrations/0002_initial_data.py
+++ b/evap/evaluation/migrations/0002_initial_data.py
@@ -1,5 +1,4 @@
from django.db import migrations
-from django.contrib.auth.models import Group
def insert_emailtemplates(apps, _schema_editor):
@@ -16,6 +15,7 @@
if not EmailTemplate.objects.filter(name=name).exists():
EmailTemplate.objects.create(name=name, subject=subject, body="")
+ Group = apps.get_model("auth", "Group")
Group.objects.create(name="Staff")
diff --git a/evap/evaluation/migrations/0055_reviewer_group.py b/evap/evaluation/migrations/0055_reviewer_group.py
--- a/evap/evaluation/migrations/0055_reviewer_group.py
+++ b/evap/evaluation/migrations/0055_reviewer_group.py
@@ -1,12 +1,13 @@
-from django.contrib.auth.models import Group
from django.db import migrations
-def add_group(_apps, _schema_editor):
+def add_group(apps, _schema_editor):
+ Group = apps.get_model("auth", "Group")
Group.objects.create(name="Reviewer")
-def delete_group(_apps, _schema_editor):
+def delete_group(apps, _schema_editor):
+ Group = apps.get_model("auth", "Group")
Group.objects.get(name="Reviewer").delete()
diff --git a/evap/grades/migrations/0002_initial_data.py b/evap/grades/migrations/0002_initial_data.py
--- a/evap/grades/migrations/0002_initial_data.py
+++ b/evap/grades/migrations/0002_initial_data.py
@@ -1,8 +1,8 @@
from django.db import migrations
-from django.contrib.auth.models import Group
-def add_group(_apps, _schema_editor):
+def add_group(apps, _schema_editor):
+ Group = apps.get_model("auth", "Group")
Group.objects.create(name="Grade publisher")
| {"golden_diff": "diff --git a/evap/evaluation/migrations/0002_initial_data.py b/evap/evaluation/migrations/0002_initial_data.py\n--- a/evap/evaluation/migrations/0002_initial_data.py\n+++ b/evap/evaluation/migrations/0002_initial_data.py\n@@ -1,5 +1,4 @@\n from django.db import migrations\n-from django.contrib.auth.models import Group\n \n \n def insert_emailtemplates(apps, _schema_editor):\n@@ -16,6 +15,7 @@\n if not EmailTemplate.objects.filter(name=name).exists():\n EmailTemplate.objects.create(name=name, subject=subject, body=\"\")\n \n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Staff\")\n \n \ndiff --git a/evap/evaluation/migrations/0055_reviewer_group.py b/evap/evaluation/migrations/0055_reviewer_group.py\n--- a/evap/evaluation/migrations/0055_reviewer_group.py\n+++ b/evap/evaluation/migrations/0055_reviewer_group.py\n@@ -1,12 +1,13 @@\n-from django.contrib.auth.models import Group\n from django.db import migrations\n \n \n-def add_group(_apps, _schema_editor):\n+def add_group(apps, _schema_editor):\n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Reviewer\")\n \n \n-def delete_group(_apps, _schema_editor):\n+def delete_group(apps, _schema_editor):\n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.get(name=\"Reviewer\").delete()\n \n \ndiff --git a/evap/grades/migrations/0002_initial_data.py b/evap/grades/migrations/0002_initial_data.py\n--- a/evap/grades/migrations/0002_initial_data.py\n+++ b/evap/grades/migrations/0002_initial_data.py\n@@ -1,8 +1,8 @@\n from django.db import migrations\n-from django.contrib.auth.models import Group\n \n \n-def add_group(_apps, _schema_editor):\n+def add_group(apps, _schema_editor):\n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Grade publisher\")\n", "issue": "Remove ViewTest where possible\nRight now we have a `class ViewTest`, for which there is one subclass for each view that we have.\r\n\r\nFor views that we have tested properly, it provides no additional value and I I propose to replace it with the original `WebTest`. \r\n\r\nOriginally I proposed to remove it altogether and copypaste its test to all the test cases that wouldn't have any valuable test otherwise. 
@janno42 convinced me to leave it there and rename it to `WebTestWith200Check` instead.\n", "before_files": [{"content": "from django.db import migrations\nfrom django.contrib.auth.models import Group\n\n\ndef insert_emailtemplates(apps, _schema_editor):\n emailtemplates = [\n (\"Lecturer Review Notice\", \"[EvaP] New Course ready for approval\"),\n (\"Student Reminder\", \"[EvaP] Evaluation period is ending\"),\n (\"Publishing Notice\", \"[EvaP] A course has been published\"),\n (\"Login Key Created\", \"[EvaP] A login key was created\"),\n ]\n\n EmailTemplate = apps.get_model(\"evaluation\", \"EmailTemplate\")\n\n for name, subject in emailtemplates:\n if not EmailTemplate.objects.filter(name=name).exists():\n EmailTemplate.objects.create(name=name, subject=subject, body=\"\")\n\n Group.objects.create(name=\"Staff\")\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0001_initial'),\n ]\n\n operations = [\n migrations.RunPython(insert_emailtemplates),\n ]\n", "path": "evap/evaluation/migrations/0002_initial_data.py"}, {"content": "from django.db import migrations\nfrom django.contrib.auth.models import Group\n\n\ndef add_group(_apps, _schema_editor):\n Group.objects.create(name=\"Grade publisher\")\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('grades', '0001_initial'),\n ]\n\n operations = [\n migrations.RunPython(add_group),\n ]\n", "path": "evap/grades/migrations/0002_initial_data.py"}, {"content": "from django.contrib.auth.models import Group\nfrom django.db import migrations\n\n\ndef add_group(_apps, _schema_editor):\n Group.objects.create(name=\"Reviewer\")\n\n\ndef delete_group(_apps, _schema_editor):\n Group.objects.get(name=\"Reviewer\").delete()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0054_userprofile_language'),\n ]\n\n operations = [\n migrations.RunPython(add_group, reverse_code=delete_group),\n ]\n", "path": "evap/evaluation/migrations/0055_reviewer_group.py"}]} | 1,235 | 500 |
gh_patches_debug_8092 | rasdani/github-patches | git_diff | vega__altair-1907 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Interval Selection Example Bug
I am having trouble with the [Interval Selection Example](https://altair-viz.github.io/gallery/interval_selection.html). 

```python
import altair as alt
from vega_datasets import data
source = data.sp500.url
brush = alt.selection(type='interval', encodings=['x'])
upper = alt.Chart(source).mark_area().encode(
alt.X('date:T', scale=alt.Scale(domain=brush)),
y='price:Q'
).properties(
width=600,
height=200
)
lower = upper.properties(
height=60
).add_selection(brush)
upper & lower
```
The example looks correct so I am unsure what is causing this behavior.
</issue>
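For reference, the restructured example that the patch further down in this entry adopts — both charts share a `base` spec and only the upper chart rescales its x-axis to the brush:

```python
import altair as alt
from vega_datasets import data

source = data.sp500.url

brush = alt.selection(type='interval', encodings=['x'])

base = alt.Chart(source).mark_area().encode(
    x='date:T',
    y='price:Q'
).properties(
    width=600,
    height=200
)

upper = base.encode(alt.X('date:T', scale=alt.Scale(domain=brush)))

lower = base.properties(height=60).add_selection(brush)

upper & lower
```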
<code>
[start of altair/examples/interval_selection.py]
1 """
2 Interval Selection Example
3 ==========================
4
5 This is an example of creating a stacked chart for which the domain of the
6 top chart can be selected by interacting with the bottom chart.
7 """
8 # category: area charts
9 import altair as alt
10 from vega_datasets import data
11
12 source = data.sp500.url
13
14 brush = alt.selection(type='interval', encodings=['x'])
15
16 upper = alt.Chart(source).mark_area().encode(
17 alt.X('date:T', scale=alt.Scale(domain=brush)),
18 y='price:Q'
19 ).properties(
20 width=600,
21 height=200
22 )
23
24 lower = upper.properties(
25 height=60
26 ).add_selection(brush)
27
28 upper & lower
29
[end of altair/examples/interval_selection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/examples/interval_selection.py b/altair/examples/interval_selection.py
--- a/altair/examples/interval_selection.py
+++ b/altair/examples/interval_selection.py
@@ -13,15 +13,19 @@
brush = alt.selection(type='interval', encodings=['x'])
-upper = alt.Chart(source).mark_area().encode(
- alt.X('date:T', scale=alt.Scale(domain=brush)),
- y='price:Q'
+base = alt.Chart(source).mark_area().encode(
+ x = 'date:T',
+ y = 'price:Q'
).properties(
width=600,
height=200
)
-lower = upper.properties(
+upper = base.encode(
+ alt.X('date:T', scale=alt.Scale(domain=brush))
+)
+
+lower = base.properties(
height=60
).add_selection(brush)
| {"golden_diff": "diff --git a/altair/examples/interval_selection.py b/altair/examples/interval_selection.py\n--- a/altair/examples/interval_selection.py\n+++ b/altair/examples/interval_selection.py\n@@ -13,15 +13,19 @@\n \n brush = alt.selection(type='interval', encodings=['x'])\n \n-upper = alt.Chart(source).mark_area().encode(\n- alt.X('date:T', scale=alt.Scale(domain=brush)),\n- y='price:Q'\n+base = alt.Chart(source).mark_area().encode(\n+ x = 'date:T',\n+ y = 'price:Q'\n ).properties(\n width=600,\n height=200\n )\n \n-lower = upper.properties(\n+upper = base.encode(\n+ alt.X('date:T', scale=alt.Scale(domain=brush))\n+)\n+\n+lower = base.properties(\n height=60\n ).add_selection(brush)\n", "issue": "Interval Selection Example Bug\nI am having trouble with the the [Interval Selection Example](https://altair-viz.github.io/gallery/interval_selection.html). \r\n\r\n\r\n```python\r\nimport altair as alt\r\nfrom vega_datasets import data\r\n\r\nsource = data.sp500.url\r\n\r\nbrush = alt.selection(type='interval', encodings=['x'])\r\n\r\nupper = alt.Chart(source).mark_area().encode(\r\n alt.X('date:T', scale=alt.Scale(domain=brush)),\r\n y='price:Q'\r\n).properties(\r\n width=600,\r\n height=200\r\n)\r\n\r\nlower = upper.properties(\r\n height=60\r\n).add_selection(brush)\r\n\r\nupper & lower\r\n```\r\n\r\nThe example looks correct so I am unsure what is causing this behavior. \n", "before_files": [{"content": "\"\"\"\nInterval Selection Example\n==========================\n\nThis is an example of creating a stacked chart for which the domain of the\ntop chart can be selected by interacting with the bottom chart.\n\"\"\"\n# category: area charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.sp500.url\n\nbrush = alt.selection(type='interval', encodings=['x'])\n\nupper = alt.Chart(source).mark_area().encode(\n alt.X('date:T', scale=alt.Scale(domain=brush)),\n y='price:Q'\n).properties(\n width=600,\n height=200\n)\n\nlower = upper.properties(\n height=60\n).add_selection(brush)\n\nupper & lower\n", "path": "altair/examples/interval_selection.py"}]} | 965 | 205 |
gh_patches_debug_21929 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-210 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NLTK being loaded on image classification
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
```python
from flash.data import labels_from_csv
from flash.vision import ImageClassificationData
from flash.vision import ImageClassifier
from flash import Trainer
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
[nltk_data] Error loading punkt: <urlopen error [Errno -3] Temporary
[nltk_data] failure in name resolution>
</issue>
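One way to avoid this kind of import-time side effect — shown here only as an illustrative sketch of the pattern (the wrapper function name is made up); the patch further down in this entry applies a variant of the same idea — is to defer the NLTK-dependent import until it is actually needed:

```python
def add_newlines(text: str) -> str:
    # Import inside the function so that importing unrelated parts of the
    # package does not transitively load the NLTK-based summarization utils.
    from flash.text.seq2seq.summarization.utils import (
        add_newline_to_end_of_each_sentence,
    )
    return add_newline_to_end_of_each_sentence(text)
```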
<code>
[start of flash/text/seq2seq/summarization/metric.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Dict, List, Tuple
15
16 import numpy as np
17 from rouge_score import rouge_scorer, scoring
18 from rouge_score.scoring import AggregateScore, Score
19 from torch import tensor
20 from torchmetrics import Metric
21
22 from flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence
23
24
25 class RougeMetric(Metric):
26 """
27 Metric used for automatic summarization. https://www.aclweb.org/anthology/W04-1013/
28
29 Example:
30
31 >>> target = "Is your name John".split()
32 >>> preds = "My name is John".split()
33 >>> rouge = RougeMetric()
34 >>> from pprint import pprint
35 >>> pprint(rouge(preds, target)) # doctest: +NORMALIZE_WHITESPACE
36 {'rouge1_fmeasure': 0.25,
37 'rouge1_precision': 0.25,
38 'rouge1_recall': 0.25,
39 'rouge2_fmeasure': 0.0,
40 'rouge2_precision': 0.0,
41 'rouge2_recall': 0.0,
42 'rougeL_fmeasure': 0.25,
43 'rougeL_precision': 0.25,
44 'rougeL_recall': 0.25,
45 'rougeLsum_fmeasure': 0.25,
46 'rougeLsum_precision': 0.25,
47 'rougeLsum_recall': 0.25}
48 """
49
50 def __init__(
51 self,
52 rouge_newline_sep: bool = False,
53 use_stemmer: bool = False,
54 rouge_keys: Tuple[str] = ("rouge1", "rouge2", "rougeL", "rougeLsum"),
55 ):
56 super().__init__()
57 self.rouge_newline_sep = rouge_newline_sep
58 self.rouge_keys = rouge_keys
59 self.use_stemmer = use_stemmer
60 self.aggregator = RougeBatchAggregator()
61 self.scorer = rouge_scorer.RougeScorer(rouge_keys, use_stemmer=self.use_stemmer)
62
63 for key in rouge_keys:
64 self.add_state(key, [])
65
66 def update(self, pred_lns: List[str], tgt_lns: List[str]):
67 for pred, tgt in zip(pred_lns, tgt_lns):
68 # rougeLsum expects "\n" separated sentences within a summary
69 if self.rouge_newline_sep:
70 pred = add_newline_to_end_of_each_sentence(pred)
71 tgt = add_newline_to_end_of_each_sentence(tgt)
72 results = self.scorer.score(pred, tgt)
73 for key, score in results.items():
74 score = tensor([score.precision, score.recall, score.fmeasure])
75 getattr(self, key).append(score)
76
77 def compute(self) -> Dict[str, float]:
78 scores = {key: getattr(self, key) for key in self.rouge_keys}
79 self.aggregator.add_scores(scores)
80 result = self.aggregator.aggregate()
81 return format_rouge_results(result)
82
83 def __hash__(self):
84 # override to hash list objects.
85 # this is a bug in the upstream pytorch release.
86 hash_vals = [self.__class__.__name__]
87
88 for key in self._defaults.keys():
89 value = getattr(self, key)
90 if isinstance(value, list):
91 value = tuple(value)
92 hash_vals.append(value)
93
94 return hash(tuple(hash_vals))
95
96
97 class RougeBatchAggregator(scoring.BootstrapAggregator):
98 """
99 Aggregates rouge scores and provides confidence intervals.
100 """
101
102 def aggregate(self):
103 """
104 Override function to wrap the final results in `Score` objects.
105 This is due to the scores being replaced with a list of torch tensors.
106 """
107 result = {}
108 for score_type, scores in self._scores.items():
109 # Stack scores into a 2-d matrix of (sample, measure).
110 score_matrix = np.vstack(tuple(scores))
111 # Percentiles are returned as (interval, measure).
112 percentiles = self._bootstrap_resample(score_matrix)
113 # Extract the three intervals (low, mid, high).
114 intervals = tuple((Score(*percentiles[j, :]) for j in range(3)))
115 result[score_type] = AggregateScore(low=intervals[0], mid=intervals[1], high=intervals[2])
116 return result
117
118 def add_scores(self, scores):
119 self._scores = scores
120
121
122 def format_rouge_results(result: Dict[str, AggregateScore], decimal_places: int = 4) -> Dict[str, float]:
123 flattened_result = {}
124 for rouge_key, rouge_aggregate_score in result.items():
125 for stat in ["precision", "recall", "fmeasure"]:
126 mid = rouge_aggregate_score.mid
127 score = round(getattr(mid, stat), decimal_places)
128 flattened_result[f"{rouge_key}_{stat}"] = score
129 return flattened_result
130
[end of flash/text/seq2seq/summarization/metric.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flash/text/seq2seq/summarization/metric.py b/flash/text/seq2seq/summarization/metric.py
--- a/flash/text/seq2seq/summarization/metric.py
+++ b/flash/text/seq2seq/summarization/metric.py
@@ -19,7 +19,7 @@
from torch import tensor
from torchmetrics import Metric
-from flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence
+from flash.text.seq2seq import summarization
class RougeMetric(Metric):
@@ -67,8 +67,8 @@
for pred, tgt in zip(pred_lns, tgt_lns):
# rougeLsum expects "\n" separated sentences within a summary
if self.rouge_newline_sep:
- pred = add_newline_to_end_of_each_sentence(pred)
- tgt = add_newline_to_end_of_each_sentence(tgt)
+ pred = summarization.utils.add_newline_to_end_of_each_sentence(pred)
+ tgt = summarization.utils.add_newline_to_end_of_each_sentence(tgt)
results = self.scorer.score(pred, tgt)
for key, score in results.items():
score = tensor([score.precision, score.recall, score.fmeasure])
| {"golden_diff": "diff --git a/flash/text/seq2seq/summarization/metric.py b/flash/text/seq2seq/summarization/metric.py\n--- a/flash/text/seq2seq/summarization/metric.py\n+++ b/flash/text/seq2seq/summarization/metric.py\n@@ -19,7 +19,7 @@\n from torch import tensor\n from torchmetrics import Metric\n \n-from flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence\n+from flash.text.seq2seq import summarization\n \n \n class RougeMetric(Metric):\n@@ -67,8 +67,8 @@\n for pred, tgt in zip(pred_lns, tgt_lns):\n # rougeLsum expects \"\\n\" separated sentences within a summary\n if self.rouge_newline_sep:\n- pred = add_newline_to_end_of_each_sentence(pred)\n- tgt = add_newline_to_end_of_each_sentence(tgt)\n+ pred = summarization.utils.add_newline_to_end_of_each_sentence(pred)\n+ tgt = summarization.utils.add_newline_to_end_of_each_sentence(tgt)\n results = self.scorer.score(pred, tgt)\n for key, score in results.items():\n score = tensor([score.precision, score.recall, score.fmeasure])\n", "issue": "NLTK being loaded on image classifcation\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n### To Reproduce\r\n\r\n\r\n```python\r\nfrom flash.data import labels_from_csv\r\nfrom flash.vision import ImageClassificationData\r\nfrom flash.vision import ImageClassifier\r\nfrom flash import Trainer\r\n```\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n[nltk_data] Error loading punkt: <urlopen error [Errno -3] Temporary\r\n[nltk_data] failure in name resolution>\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Dict, List, Tuple\n\nimport numpy as np\nfrom rouge_score import rouge_scorer, scoring\nfrom rouge_score.scoring import AggregateScore, Score\nfrom torch import tensor\nfrom torchmetrics import Metric\n\nfrom flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence\n\n\nclass RougeMetric(Metric):\n \"\"\"\n Metric used for automatic summarization. 
https://www.aclweb.org/anthology/W04-1013/\n\n Example:\n\n >>> target = \"Is your name John\".split()\n >>> preds = \"My name is John\".split()\n >>> rouge = RougeMetric()\n >>> from pprint import pprint\n >>> pprint(rouge(preds, target)) # doctest: +NORMALIZE_WHITESPACE\n {'rouge1_fmeasure': 0.25,\n 'rouge1_precision': 0.25,\n 'rouge1_recall': 0.25,\n 'rouge2_fmeasure': 0.0,\n 'rouge2_precision': 0.0,\n 'rouge2_recall': 0.0,\n 'rougeL_fmeasure': 0.25,\n 'rougeL_precision': 0.25,\n 'rougeL_recall': 0.25,\n 'rougeLsum_fmeasure': 0.25,\n 'rougeLsum_precision': 0.25,\n 'rougeLsum_recall': 0.25}\n \"\"\"\n\n def __init__(\n self,\n rouge_newline_sep: bool = False,\n use_stemmer: bool = False,\n rouge_keys: Tuple[str] = (\"rouge1\", \"rouge2\", \"rougeL\", \"rougeLsum\"),\n ):\n super().__init__()\n self.rouge_newline_sep = rouge_newline_sep\n self.rouge_keys = rouge_keys\n self.use_stemmer = use_stemmer\n self.aggregator = RougeBatchAggregator()\n self.scorer = rouge_scorer.RougeScorer(rouge_keys, use_stemmer=self.use_stemmer)\n\n for key in rouge_keys:\n self.add_state(key, [])\n\n def update(self, pred_lns: List[str], tgt_lns: List[str]):\n for pred, tgt in zip(pred_lns, tgt_lns):\n # rougeLsum expects \"\\n\" separated sentences within a summary\n if self.rouge_newline_sep:\n pred = add_newline_to_end_of_each_sentence(pred)\n tgt = add_newline_to_end_of_each_sentence(tgt)\n results = self.scorer.score(pred, tgt)\n for key, score in results.items():\n score = tensor([score.precision, score.recall, score.fmeasure])\n getattr(self, key).append(score)\n\n def compute(self) -> Dict[str, float]:\n scores = {key: getattr(self, key) for key in self.rouge_keys}\n self.aggregator.add_scores(scores)\n result = self.aggregator.aggregate()\n return format_rouge_results(result)\n\n def __hash__(self):\n # override to hash list objects.\n # this is a bug in the upstream pytorch release.\n hash_vals = [self.__class__.__name__]\n\n for key in self._defaults.keys():\n value = getattr(self, key)\n if isinstance(value, list):\n value = tuple(value)\n hash_vals.append(value)\n\n return hash(tuple(hash_vals))\n\n\nclass RougeBatchAggregator(scoring.BootstrapAggregator):\n \"\"\"\n Aggregates rouge scores and provides confidence intervals.\n \"\"\"\n\n def aggregate(self):\n \"\"\"\n Override function to wrap the final results in `Score` objects.\n This is due to the scores being replaced with a list of torch tensors.\n \"\"\"\n result = {}\n for score_type, scores in self._scores.items():\n # Stack scores into a 2-d matrix of (sample, measure).\n score_matrix = np.vstack(tuple(scores))\n # Percentiles are returned as (interval, measure).\n percentiles = self._bootstrap_resample(score_matrix)\n # Extract the three intervals (low, mid, high).\n intervals = tuple((Score(*percentiles[j, :]) for j in range(3)))\n result[score_type] = AggregateScore(low=intervals[0], mid=intervals[1], high=intervals[2])\n return result\n\n def add_scores(self, scores):\n self._scores = scores\n\n\ndef format_rouge_results(result: Dict[str, AggregateScore], decimal_places: int = 4) -> Dict[str, float]:\n flattened_result = {}\n for rouge_key, rouge_aggregate_score in result.items():\n for stat in [\"precision\", \"recall\", \"fmeasure\"]:\n mid = rouge_aggregate_score.mid\n score = round(getattr(mid, stat), decimal_places)\n flattened_result[f\"{rouge_key}_{stat}\"] = score\n return flattened_result\n", "path": "flash/text/seq2seq/summarization/metric.py"}]} | 2,169 | 282 |
gh_patches_debug_34681 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-323 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
submit master pod using k8s python client instead of kubectl
Use the Kubernetes Python client to submit the master pod instead of the shell command below:
`os.system('kubectl run ...')`
</issue>
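A minimal sketch of pod submission through the official Kubernetes Python client — it uses the same calls as the patch shown further down in this entry:

```python
import yaml
from kubernetes import config
from kubernetes.client.apis import core_v1_api


def submit_pod(yaml_content, namespace="default"):
    config.load_kube_config()                # read the local kubeconfig
    pod_desc = yaml.safe_load(yaml_content)  # pod spec as a plain dict
    api = core_v1_api.CoreV1Api()
    resp = api.create_namespaced_pod(body=pod_desc, namespace=namespace)
    print("Pod created. status='%s'" % str(resp.status))
```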
<code>
[start of elasticdl/client/client.py]
1 import os
2 import inspect
3 import shutil
4 import time
5 import getpass
6 from string import Template
7 import docker
8
9
10 def run(model_class, train_data_dir=None,
11 num_epoch=1, minibatch_size=10,
12 record_per_task=100, num_worker=1, grads_to_wait=2):
13 m_path, m_file = _getModelFile()
14 m_file_in_docker = "/model/" + m_file
15 timestamp = int(round(time.time() * 1000))
16 _build_docker_image(m_path, m_file, m_file_in_docker, timestamp)
17 yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir,
18 num_epoch=num_epoch, minibatch_size=minibatch_size,
19 record_per_task=record_per_task, num_worker=num_worker,
20 grads_to_wait=grads_to_wait, timestamp=timestamp)
21 _submit(yaml_file)
22
23 def _getModelFile():
24 m_file = inspect.currentframe().f_back.f_back.f_code.co_filename
25 m_path = os.path.abspath(os.path.dirname(m_file))
26 return m_path, m_file
27
28 def _build_docker_image(m_path, m_file, m_file_in_docker, timestamp):
29 d_path = os.path.abspath(os.path.dirname(
30 inspect.currentframe().f_back.f_code.co_filename))
31 new_dfile = m_path + "/Dockerfile"
32 shutil.copyfile(d_path + "/../Dockerfile.dev", new_dfile)
33
34 with open(new_dfile, 'a') as df:
35 df.write("COPY " + m_file + " " + m_file_in_docker)
36 client = docker.APIClient(base_url='unix://var/run/docker.sock')
37 for line in client.build(dockerfile='Dockerfile', path='.', tag='elasticdl:dev_' + str(timestamp)):
38 print(str(line, encoding = "utf-8"))
39
40 # TODO: upload docker image to docker hub.
41
42 def _generate_yaml(m_file, m_class,
43 train_data_dir=None, num_epoch=1,
44 minibatch_size=10, record_per_task=100,
45 num_worker=1, grads_to_wait=2, timestamp=1):
46 YAML_TEMPLATE = """
47 apiVersion: v1
48 kind: Pod
49 metadata:
50 name: elasticdl-master-$timestamp
51 labels:
52 purpose: test-command
53 spec:
54 containers:
55 - name: elasticdl-master-$timestamp
56 image: elasticdl:dev_$timestamp
57 command: ["python"]
58 args: ["-m", "elasticdl.master.main",
59 "--model-file", "$m_file",
60 "--num_worker", "$num_worker",
61 "--worker_image", "elasticdl:dev_$timestamp",
62 "--job_name", "elasticdl-$timestamp",
63 "--model-class", "$m_class",
64 "--train_data_dir", "$train_data_dir",
65 "--num_epoch", "$num_epoch",
66 "--grads_to_wait", "$grads_to_wait",
67 "--minibatch_size", "$minibatch_size",
68 "--record_per_task", "$record_per_task"]
69 imagePullPolicy: Never
70 env:
71 - name: MY_POD_IP
72 valueFrom:
73 fieldRef:
74 fieldPath: status.podIP
75 restartPolicy: Never
76 """
77 t = Template(YAML_TEMPLATE)
78 yaml_file = 'job_desc.yaml'
79 with open(yaml_file, "w") as yaml:
80 yaml.write(t.substitute(m_file=m_file, m_class=m_class,
81 train_data_dir=train_data_dir,
82 timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,
83 minibatch_size=minibatch_size, record_per_task=record_per_task,
84 user=getpass.getuser(), grads_to_wait=grads_to_wait))
85 return yaml_file
86
87 def _submit(yaml_file):
88 os.system('kubectl create -f ' + yaml_file)
89
[end of elasticdl/client/client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py
--- a/elasticdl/client/client.py
+++ b/elasticdl/client/client.py
@@ -5,6 +5,9 @@
import getpass
from string import Template
import docker
+import yaml
+from kubernetes.client.apis import core_v1_api
+from kubernetes import config
def run(model_class, train_data_dir=None,
@@ -14,11 +17,11 @@
m_file_in_docker = "/model/" + m_file
timestamp = int(round(time.time() * 1000))
_build_docker_image(m_path, m_file, m_file_in_docker, timestamp)
- yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir,
+ yaml_content = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir,
num_epoch=num_epoch, minibatch_size=minibatch_size,
record_per_task=record_per_task, num_worker=num_worker,
grads_to_wait=grads_to_wait, timestamp=timestamp)
- _submit(yaml_file)
+ _submit(yaml_content)
def _getModelFile():
m_file = inspect.currentframe().f_back.f_back.f_code.co_filename
@@ -75,14 +78,15 @@
restartPolicy: Never
"""
t = Template(YAML_TEMPLATE)
- yaml_file = 'job_desc.yaml'
- with open(yaml_file, "w") as yaml:
- yaml.write(t.substitute(m_file=m_file, m_class=m_class,
- train_data_dir=train_data_dir,
- timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,
- minibatch_size=minibatch_size, record_per_task=record_per_task,
- user=getpass.getuser(), grads_to_wait=grads_to_wait))
- return yaml_file
+ return t.substitute(m_file=m_file, m_class=m_class,
+ train_data_dir=train_data_dir,
+ timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,
+ minibatch_size=minibatch_size, record_per_task=record_per_task,
+ user=getpass.getuser(), grads_to_wait=grads_to_wait)
-def _submit(yaml_file):
- os.system('kubectl create -f ' + yaml_file)
+def _submit(yaml_content):
+ config.load_kube_config()
+ pod_desc = yaml.safe_load(yaml_content)
+ api = core_v1_api.CoreV1Api()
+ resp = api.create_namespaced_pod(body=pod_desc, namespace='default')
+ print("Pod created. status='%s'" % str(resp.status))
| {"golden_diff": "diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py\n--- a/elasticdl/client/client.py\n+++ b/elasticdl/client/client.py\n@@ -5,6 +5,9 @@\n import getpass\n from string import Template\n import docker\n+import yaml\n+from kubernetes.client.apis import core_v1_api\n+from kubernetes import config\n \n \n def run(model_class, train_data_dir=None, \n@@ -14,11 +17,11 @@\n m_file_in_docker = \"/model/\" + m_file \n timestamp = int(round(time.time() * 1000))\n _build_docker_image(m_path, m_file, m_file_in_docker, timestamp)\n- yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir, \n+ yaml_content = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir, \n num_epoch=num_epoch, minibatch_size=minibatch_size, \n record_per_task=record_per_task, num_worker=num_worker, \n grads_to_wait=grads_to_wait, timestamp=timestamp)\n- _submit(yaml_file)\n+ _submit(yaml_content)\n \n def _getModelFile():\n m_file = inspect.currentframe().f_back.f_back.f_code.co_filename\n@@ -75,14 +78,15 @@\n restartPolicy: Never\n \"\"\"\n t = Template(YAML_TEMPLATE)\n- yaml_file = 'job_desc.yaml'\n- with open(yaml_file, \"w\") as yaml:\n- yaml.write(t.substitute(m_file=m_file, m_class=m_class, \n- train_data_dir=train_data_dir, \n- timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,\n- minibatch_size=minibatch_size, record_per_task=record_per_task,\n- user=getpass.getuser(), grads_to_wait=grads_to_wait))\n- return yaml_file\n+ return t.substitute(m_file=m_file, m_class=m_class, \n+ train_data_dir=train_data_dir, \n+ timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,\n+ minibatch_size=minibatch_size, record_per_task=record_per_task,\n+ user=getpass.getuser(), grads_to_wait=grads_to_wait)\n \n-def _submit(yaml_file):\n- os.system('kubectl create -f ' + yaml_file)\n+def _submit(yaml_content):\n+ config.load_kube_config()\n+ pod_desc = yaml.safe_load(yaml_content)\n+ api = core_v1_api.CoreV1Api()\n+ resp = api.create_namespaced_pod(body=pod_desc, namespace='default')\n+ print(\"Pod created. 
status='%s'\" % str(resp.status))\n", "issue": "submit master pod using k8s python client instead of kubectl \nuse k8s python client to submit master pod instead of using the command below\r\n`os.system('kubectl run ...')`\n", "before_files": [{"content": "import os\nimport inspect\nimport shutil\nimport time\nimport getpass\nfrom string import Template\nimport docker\n\n\ndef run(model_class, train_data_dir=None, \n num_epoch=1, minibatch_size=10, \n record_per_task=100, num_worker=1, grads_to_wait=2):\n m_path, m_file = _getModelFile()\n m_file_in_docker = \"/model/\" + m_file \n timestamp = int(round(time.time() * 1000))\n _build_docker_image(m_path, m_file, m_file_in_docker, timestamp)\n yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir, \n num_epoch=num_epoch, minibatch_size=minibatch_size, \n record_per_task=record_per_task, num_worker=num_worker, \n grads_to_wait=grads_to_wait, timestamp=timestamp)\n _submit(yaml_file)\n\ndef _getModelFile():\n m_file = inspect.currentframe().f_back.f_back.f_code.co_filename\n m_path = os.path.abspath(os.path.dirname(m_file))\n return m_path, m_file\n\ndef _build_docker_image(m_path, m_file, m_file_in_docker, timestamp):\n d_path = os.path.abspath(os.path.dirname(\n inspect.currentframe().f_back.f_code.co_filename))\n new_dfile = m_path + \"/Dockerfile\"\n shutil.copyfile(d_path + \"/../Dockerfile.dev\", new_dfile)\n\n with open(new_dfile, 'a') as df:\n df.write(\"COPY \" + m_file + \" \" + m_file_in_docker)\n client = docker.APIClient(base_url='unix://var/run/docker.sock') \n for line in client.build(dockerfile='Dockerfile', path='.', tag='elasticdl:dev_' + str(timestamp)):\n print(str(line, encoding = \"utf-8\"))\n\n # TODO: upload docker image to docker hub.\n\ndef _generate_yaml(m_file, m_class,\n train_data_dir=None, num_epoch=1,\n minibatch_size=10, record_per_task=100, \n num_worker=1, grads_to_wait=2, timestamp=1):\n YAML_TEMPLATE = \"\"\"\n apiVersion: v1\n kind: Pod\n metadata:\n name: elasticdl-master-$timestamp\n labels:\n purpose: test-command\n spec:\n containers:\n - name: elasticdl-master-$timestamp\n image: elasticdl:dev_$timestamp\n command: [\"python\"]\n args: [\"-m\", \"elasticdl.master.main\",\n \"--model-file\", \"$m_file\",\n \"--num_worker\", \"$num_worker\",\n \"--worker_image\", \"elasticdl:dev_$timestamp\",\n \"--job_name\", \"elasticdl-$timestamp\",\n \"--model-class\", \"$m_class\",\n \"--train_data_dir\", \"$train_data_dir\",\n \"--num_epoch\", \"$num_epoch\",\n \"--grads_to_wait\", \"$grads_to_wait\",\n \"--minibatch_size\", \"$minibatch_size\",\n \"--record_per_task\", \"$record_per_task\"]\n imagePullPolicy: Never\n env:\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n restartPolicy: Never\n \"\"\"\n t = Template(YAML_TEMPLATE)\n yaml_file = 'job_desc.yaml'\n with open(yaml_file, \"w\") as yaml:\n yaml.write(t.substitute(m_file=m_file, m_class=m_class, \n train_data_dir=train_data_dir, \n timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,\n minibatch_size=minibatch_size, record_per_task=record_per_task,\n user=getpass.getuser(), grads_to_wait=grads_to_wait))\n return yaml_file\n\ndef _submit(yaml_file):\n os.system('kubectl create -f ' + yaml_file)\n", "path": "elasticdl/client/client.py"}]} | 1,593 | 608 |
gh_patches_debug_27572 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1358 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add log message to get_user_config function
There should be a debug message for cases when the default config overwrites the user config. 
Currently, it is done silently.
</issue>
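For illustration, the kind of logging the issue asks for — an excerpt of the `get_user_config` branches with debug messages added, mirroring the patch shown further down in this entry (module-level names as in the file below):

```python
# Do NOT load a config. Return defaults instead.
if default_config:
    logger.debug("Force ignoring user config with default_config switch.")
    return copy.copy(DEFAULT_CONFIG)

# Load the given config file
if config_file and config_file is not USER_CONFIG_PATH:
    logger.debug("Loading custom config from %s.", config_file)
    return get_config(config_file)

if os.path.exists(USER_CONFIG_PATH):
    logger.debug("Loading config from %s.", USER_CONFIG_PATH)
    return get_config(USER_CONFIG_PATH)

logger.debug("User config not found. Loading default config.")
return copy.copy(DEFAULT_CONFIG)
```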
<code>
[start of cookiecutter/config.py]
1 # -*- coding: utf-8 -*-
2
3 """Global configuration handling."""
4
5 from __future__ import unicode_literals
6 import copy
7 import logging
8 import os
9 import io
10 import collections
11
12 import poyo
13
14 from cookiecutter.exceptions import ConfigDoesNotExistException
15 from cookiecutter.exceptions import InvalidConfiguration
16
17
18 logger = logging.getLogger(__name__)
19
20 USER_CONFIG_PATH = os.path.expanduser('~/.cookiecutterrc')
21
22 BUILTIN_ABBREVIATIONS = {
23 'gh': 'https://github.com/{0}.git',
24 'gl': 'https://gitlab.com/{0}.git',
25 'bb': 'https://bitbucket.org/{0}',
26 }
27
28 DEFAULT_CONFIG = {
29 'cookiecutters_dir': os.path.expanduser('~/.cookiecutters/'),
30 'replay_dir': os.path.expanduser('~/.cookiecutter_replay/'),
31 'default_context': collections.OrderedDict([]),
32 'abbreviations': BUILTIN_ABBREVIATIONS,
33 }
34
35
36 def _expand_path(path):
37 """Expand both environment variables and user home in the given path."""
38 path = os.path.expandvars(path)
39 path = os.path.expanduser(path)
40 return path
41
42
43 def merge_configs(default, overwrite):
44 """Recursively update a dict with the key/value pair of another.
45
46 Dict values that are dictionaries themselves will be updated, whilst
47 preserving existing keys.
48 """
49 new_config = copy.deepcopy(default)
50
51 for k, v in overwrite.items():
52 # Make sure to preserve existing items in
53 # nested dicts, for example `abbreviations`
54 if isinstance(v, dict):
55 new_config[k] = merge_configs(default[k], v)
56 else:
57 new_config[k] = v
58
59 return new_config
60
61
62 def get_config(config_path):
63 """Retrieve the config from the specified path, returning a config dict."""
64 if not os.path.exists(config_path):
65 raise ConfigDoesNotExistException
66
67 logger.debug('config_path is %s', config_path)
68 with io.open(config_path, encoding='utf-8') as file_handle:
69 try:
70 yaml_dict = poyo.parse_string(file_handle.read())
71 except poyo.exceptions.PoyoException as e:
72 raise InvalidConfiguration(
73 'Unable to parse YAML file {}. Error: {}'
74 ''.format(config_path, e)
75 )
76
77 config_dict = merge_configs(DEFAULT_CONFIG, yaml_dict)
78
79 raw_replay_dir = config_dict['replay_dir']
80 config_dict['replay_dir'] = _expand_path(raw_replay_dir)
81
82 raw_cookies_dir = config_dict['cookiecutters_dir']
83 config_dict['cookiecutters_dir'] = _expand_path(raw_cookies_dir)
84
85 return config_dict
86
87
88 def get_user_config(config_file=None, default_config=False):
89 """Return the user config as a dict.
90
91 If ``default_config`` is True, ignore ``config_file`` and return default
92 values for the config parameters.
93
94 If a path to a ``config_file`` is given, that is different from the default
95 location, load the user config from that.
96
97 Otherwise look up the config file path in the ``COOKIECUTTER_CONFIG``
98 environment variable. If set, load the config from this path. This will
99 raise an error if the specified path is not valid.
100
101 If the environment variable is not set, try the default config file path
102 before falling back to the default config values.
103 """
104 # Do NOT load a config. Return defaults instead.
105 if default_config:
106 return copy.copy(DEFAULT_CONFIG)
107
108 # Load the given config file
109 if config_file and config_file is not USER_CONFIG_PATH:
110 return get_config(config_file)
111
112 try:
113 # Does the user set up a config environment variable?
114 env_config_file = os.environ['COOKIECUTTER_CONFIG']
115 except KeyError:
116 # Load an optional user config if it exists
117 # otherwise return the defaults
118 if os.path.exists(USER_CONFIG_PATH):
119 return get_config(USER_CONFIG_PATH)
120 else:
121 return copy.copy(DEFAULT_CONFIG)
122 else:
123 # There is a config environment variable. Try to load it.
124 # Do not check for existence, so invalid file paths raise an error.
125 return get_config(env_config_file)
126
[end of cookiecutter/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/config.py b/cookiecutter/config.py
--- a/cookiecutter/config.py
+++ b/cookiecutter/config.py
@@ -103,10 +103,12 @@
"""
# Do NOT load a config. Return defaults instead.
if default_config:
+ logger.debug("Force ignoring user config with default_config switch.")
return copy.copy(DEFAULT_CONFIG)
# Load the given config file
if config_file and config_file is not USER_CONFIG_PATH:
+ logger.debug("Loading custom config from %s.", config_file)
return get_config(config_file)
try:
@@ -116,10 +118,13 @@
# Load an optional user config if it exists
# otherwise return the defaults
if os.path.exists(USER_CONFIG_PATH):
+ logger.debug("Loading config from %s.", USER_CONFIG_PATH)
return get_config(USER_CONFIG_PATH)
else:
+ logger.debug("User config not found. Loading default config.")
return copy.copy(DEFAULT_CONFIG)
else:
# There is a config environment variable. Try to load it.
# Do not check for existence, so invalid file paths raise an error.
+ logger.debug("User config not found or not specified. Loading default config.")
return get_config(env_config_file)
| {"golden_diff": "diff --git a/cookiecutter/config.py b/cookiecutter/config.py\n--- a/cookiecutter/config.py\n+++ b/cookiecutter/config.py\n@@ -103,10 +103,12 @@\n \"\"\"\n # Do NOT load a config. Return defaults instead.\n if default_config:\n+ logger.debug(\"Force ignoring user config with default_config switch.\")\n return copy.copy(DEFAULT_CONFIG)\n \n # Load the given config file\n if config_file and config_file is not USER_CONFIG_PATH:\n+ logger.debug(\"Loading custom config from %s.\", config_file)\n return get_config(config_file)\n \n try:\n@@ -116,10 +118,13 @@\n # Load an optional user config if it exists\n # otherwise return the defaults\n if os.path.exists(USER_CONFIG_PATH):\n+ logger.debug(\"Loading config from %s.\", USER_CONFIG_PATH)\n return get_config(USER_CONFIG_PATH)\n else:\n+ logger.debug(\"User config not found. Loading default config.\")\n return copy.copy(DEFAULT_CONFIG)\n else:\n # There is a config environment variable. Try to load it.\n # Do not check for existence, so invalid file paths raise an error.\n+ logger.debug(\"User config not found or not specified. Loading default config.\")\n return get_config(env_config_file)\n", "issue": "Add log message to get_user_config function\nthere should be debug message for cases when default config overwrites user_config. \r\nCurrently, it is done silently.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Global configuration handling.\"\"\"\n\nfrom __future__ import unicode_literals\nimport copy\nimport logging\nimport os\nimport io\nimport collections\n\nimport poyo\n\nfrom cookiecutter.exceptions import ConfigDoesNotExistException\nfrom cookiecutter.exceptions import InvalidConfiguration\n\n\nlogger = logging.getLogger(__name__)\n\nUSER_CONFIG_PATH = os.path.expanduser('~/.cookiecutterrc')\n\nBUILTIN_ABBREVIATIONS = {\n 'gh': 'https://github.com/{0}.git',\n 'gl': 'https://gitlab.com/{0}.git',\n 'bb': 'https://bitbucket.org/{0}',\n}\n\nDEFAULT_CONFIG = {\n 'cookiecutters_dir': os.path.expanduser('~/.cookiecutters/'),\n 'replay_dir': os.path.expanduser('~/.cookiecutter_replay/'),\n 'default_context': collections.OrderedDict([]),\n 'abbreviations': BUILTIN_ABBREVIATIONS,\n}\n\n\ndef _expand_path(path):\n \"\"\"Expand both environment variables and user home in the given path.\"\"\"\n path = os.path.expandvars(path)\n path = os.path.expanduser(path)\n return path\n\n\ndef merge_configs(default, overwrite):\n \"\"\"Recursively update a dict with the key/value pair of another.\n\n Dict values that are dictionaries themselves will be updated, whilst\n preserving existing keys.\n \"\"\"\n new_config = copy.deepcopy(default)\n\n for k, v in overwrite.items():\n # Make sure to preserve existing items in\n # nested dicts, for example `abbreviations`\n if isinstance(v, dict):\n new_config[k] = merge_configs(default[k], v)\n else:\n new_config[k] = v\n\n return new_config\n\n\ndef get_config(config_path):\n \"\"\"Retrieve the config from the specified path, returning a config dict.\"\"\"\n if not os.path.exists(config_path):\n raise ConfigDoesNotExistException\n\n logger.debug('config_path is %s', config_path)\n with io.open(config_path, encoding='utf-8') as file_handle:\n try:\n yaml_dict = poyo.parse_string(file_handle.read())\n except poyo.exceptions.PoyoException as e:\n raise InvalidConfiguration(\n 'Unable to parse YAML file {}. 
Error: {}'\n ''.format(config_path, e)\n )\n\n config_dict = merge_configs(DEFAULT_CONFIG, yaml_dict)\n\n raw_replay_dir = config_dict['replay_dir']\n config_dict['replay_dir'] = _expand_path(raw_replay_dir)\n\n raw_cookies_dir = config_dict['cookiecutters_dir']\n config_dict['cookiecutters_dir'] = _expand_path(raw_cookies_dir)\n\n return config_dict\n\n\ndef get_user_config(config_file=None, default_config=False):\n \"\"\"Return the user config as a dict.\n\n If ``default_config`` is True, ignore ``config_file`` and return default\n values for the config parameters.\n\n If a path to a ``config_file`` is given, that is different from the default\n location, load the user config from that.\n\n Otherwise look up the config file path in the ``COOKIECUTTER_CONFIG``\n environment variable. If set, load the config from this path. This will\n raise an error if the specified path is not valid.\n\n If the environment variable is not set, try the default config file path\n before falling back to the default config values.\n \"\"\"\n # Do NOT load a config. Return defaults instead.\n if default_config:\n return copy.copy(DEFAULT_CONFIG)\n\n # Load the given config file\n if config_file and config_file is not USER_CONFIG_PATH:\n return get_config(config_file)\n\n try:\n # Does the user set up a config environment variable?\n env_config_file = os.environ['COOKIECUTTER_CONFIG']\n except KeyError:\n # Load an optional user config if it exists\n # otherwise return the defaults\n if os.path.exists(USER_CONFIG_PATH):\n return get_config(USER_CONFIG_PATH)\n else:\n return copy.copy(DEFAULT_CONFIG)\n else:\n # There is a config environment variable. Try to load it.\n # Do not check for existence, so invalid file paths raise an error.\n return get_config(env_config_file)\n", "path": "cookiecutter/config.py"}]} | 1,745 | 288 |
gh_patches_debug_27158 | rasdani/github-patches | git_diff | archlinux__archinstall-702 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[2.3.0-RC1] Automatic partitioning does not fill drive with btrfs and no encryption

[My installation log](https://github.com/archlinux/archinstall/files/7500204/install.log.txt)
</issue>
<code>
[start of archinstall/lib/disk/user_guides.py]
1 import logging
2 from .helpers import sort_block_devices_based_on_performance, select_largest_device, select_disk_larger_than_or_close_to
3 from ..output import log
4
5 def suggest_single_disk_layout(block_device, default_filesystem=None):
6 if not default_filesystem:
7 from ..user_interaction import ask_for_main_filesystem_format
8 default_filesystem = ask_for_main_filesystem_format()
9
10 MIN_SIZE_TO_ALLOW_HOME_PART = 40 # Gb
11
12 layout = {
13 block_device.path : {
14 "wipe" : True,
15 "partitions" : []
16 }
17 }
18
19 layout[block_device.path]['partitions'].append({
20 # Boot
21 "type" : "primary",
22 "start" : "1MiB",
23 "size" : "513MiB",
24 "boot" : True,
25 "encrypted" : False,
26 "format" : True,
27 "mountpoint" : "/boot",
28 "filesystem" : {
29 "format" : "fat32"
30 }
31 })
32 layout[block_device.path]['partitions'].append({
33 # Root
34 "type" : "primary",
35 "start" : "513MiB",
36 "encrypted" : False,
37 "format" : True,
38 "size" : "100%" if block_device.size < MIN_SIZE_TO_ALLOW_HOME_PART else f"{min(block_device.size, 20)*1024}MiB",
39 "mountpoint" : "/",
40 "filesystem" : {
41 "format" : default_filesystem
42 }
43 })
44
45 if default_filesystem == 'btrfs' and input('Would you like to use BTRFS subvolumes? (Y/n): ').strip().lower() in ('', 'y', 'yes'):
46 if input('Do you want to use a recommended structure? (Y/n): ').strip().lower() in ('', 'y', 'yes'):
47 # https://btrfs.wiki.kernel.org/index.php/FAQ
48 # https://unix.stackexchange.com/questions/246976/btrfs-subvolume-uuid-clash
49 # https://github.com/classy-giraffe/easy-arch/blob/main/easy-arch.sh
50 layout[block_device.path]['partitions'][1]['btrfs'] = {
51 "subvolumes" : {
52 "@home" : "/home",
53 "@log" : "/var/log",
54 "@pkgs" : "/var/cache/pacman/pkg",
55 "@.snapshots" : "/.snapshots"
56 }
57 }
58 else:
59 pass # ... implement a guided setup
60
61 elif block_device.size >= MIN_SIZE_TO_ALLOW_HOME_PART:
62 # If we don't want to use subvolumes,
63 # But we want to be able to re-use data between re-installs..
64 # A second partition for /home would be nice if we have the space for it
65 layout[block_device.path]['partitions'].append({
66 # Home
67 "type" : "primary",
68 "encrypted" : False,
69 "format" : True,
70 "start" : f"{min(block_device.size*0.2, 20)*1024}MiB",
71 "size" : "100%",
72 "mountpoint" : "/home",
73 "filesystem" : {
74 "format" : default_filesystem
75 }
76 })
77
78 return layout
79
80
81 def suggest_multi_disk_layout(block_devices, default_filesystem=None):
82 if not default_filesystem:
83 from ..user_interaction import ask_for_main_filesystem_format
84 default_filesystem = ask_for_main_filesystem_format()
85
86 # Not really a rock solid foundation of information to stand on, but it's a start:
87 # https://www.reddit.com/r/btrfs/comments/m287gp/partition_strategy_for_two_physical_disks/
88 # https://www.reddit.com/r/btrfs/comments/9us4hr/what_is_your_btrfs_partitionsubvolumes_scheme/
89
90 MIN_SIZE_TO_ALLOW_HOME_PART = 40 # Gb
91 ARCH_LINUX_INSTALLED_SIZE = 20 # Gb, rough estimate taking in to account user desktops etc. TODO: Catch user packages to detect size?
92
93 block_devices = sort_block_devices_based_on_performance(block_devices).keys()
94
95 home_device = select_largest_device(block_devices, gigabytes=MIN_SIZE_TO_ALLOW_HOME_PART)
96 root_device = select_disk_larger_than_or_close_to(block_devices, gigabytes=ARCH_LINUX_INSTALLED_SIZE, filter_out=[home_device])
97
98 log(f"Suggesting multi-disk-layout using {len(block_devices)} disks, where {root_device} will be /root and {home_device} will be /home", level=logging.DEBUG)
99
100 layout = {
101 root_device.path : {
102 "wipe" : True,
103 "partitions" : []
104 },
105 home_device.path : {
106 "wipe" : True,
107 "partitions" : []
108 },
109 }
110
111 layout[root_device.path]['partitions'].append({
112 # Boot
113 "type" : "primary",
114 "start" : "1MiB",
115 "size" : "513MiB",
116 "boot" : True,
117 "encrypted" : False,
118 "format" : True,
119 "mountpoint" : "/boot",
120 "filesystem" : {
121 "format" : "fat32"
122 }
123 })
124 layout[root_device.path]['partitions'].append({
125 # Root
126 "type" : "primary",
127 "start" : "513MiB",
128 "encrypted" : False,
129 "format" : True,
130 "size" : "100%",
131 "mountpoint" : "/",
132 "filesystem" : {
133 "format" : default_filesystem
134 }
135 })
136
137 layout[home_device.path]['partitions'].append({
138 # Home
139 "type" : "primary",
140 "encrypted" : False,
141 "format" : True,
142 "start" : "4MiB",
143 "size" : "100%",
144 "mountpoint" : "/home",
145 "filesystem" : {
146 "format" : default_filesystem
147 }
148 })
149
150 return layout
151
[end of archinstall/lib/disk/user_guides.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/archinstall/lib/disk/user_guides.py b/archinstall/lib/disk/user_guides.py
--- a/archinstall/lib/disk/user_guides.py
+++ b/archinstall/lib/disk/user_guides.py
@@ -8,6 +8,10 @@
default_filesystem = ask_for_main_filesystem_format()
MIN_SIZE_TO_ALLOW_HOME_PART = 40 # Gb
+ using_subvolumes = False
+
+ if default_filesystem == 'btrfs':
+ using_subvolumes = input('Would you like to use BTRFS subvolumes? (Y/n): ').strip().lower() in ('', 'y', 'yes')
layout = {
block_device.path : {
@@ -35,14 +39,14 @@
"start" : "513MiB",
"encrypted" : False,
"format" : True,
- "size" : "100%" if block_device.size < MIN_SIZE_TO_ALLOW_HOME_PART else f"{min(block_device.size, 20)*1024}MiB",
+ "size" : "100%" if (using_subvolumes or block_device.size < MIN_SIZE_TO_ALLOW_HOME_PART) else f"{min(block_device.size, 20)*1024}MiB",
"mountpoint" : "/",
"filesystem" : {
"format" : default_filesystem
}
})
- if default_filesystem == 'btrfs' and input('Would you like to use BTRFS subvolumes? (Y/n): ').strip().lower() in ('', 'y', 'yes'):
+ if default_filesystem == 'btrfs' and using_subvolumes:
if input('Do you want to use a recommended structure? (Y/n): ').strip().lower() in ('', 'y', 'yes'):
# https://btrfs.wiki.kernel.org/index.php/FAQ
# https://unix.stackexchange.com/questions/246976/btrfs-subvolume-uuid-clash
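
To make the size rule in the diff above concrete, here is a standalone, illustrative-only sketch of the partition-size expression; `root_partition_size` and its arguments are stand-ins invented for this note, not archinstall API.

```python
# Illustrative sketch of the patched "size" rule; the disk sizes are made up.
MIN_SIZE_TO_ALLOW_HOME_PART = 40  # Gb, mirrors the constant in suggest_single_disk_layout

def root_partition_size(block_device_size_gb, using_subvolumes):
    # Same expression as the patched "size" entry, lifted out for clarity:
    # btrfs subvolumes (or small disks) give the root partition the whole drive.
    if using_subvolumes or block_device_size_gb < MIN_SIZE_TO_ALLOW_HOME_PART:
        return "100%"
    return f"{min(block_device_size_gb, 20) * 1024}MiB"

print(root_partition_size(500, using_subvolumes=True))   # '100%' -> drive is filled, fixing the report
print(root_partition_size(500, using_subvolumes=False))  # '20480MiB' -> leaves room for a /home partition
```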
| {"golden_diff": "diff --git a/archinstall/lib/disk/user_guides.py b/archinstall/lib/disk/user_guides.py\n--- a/archinstall/lib/disk/user_guides.py\n+++ b/archinstall/lib/disk/user_guides.py\n@@ -8,6 +8,10 @@\n \t\tdefault_filesystem = ask_for_main_filesystem_format()\n \t\t\n \tMIN_SIZE_TO_ALLOW_HOME_PART = 40 # Gb\n+\tusing_subvolumes = False\n+\n+\tif default_filesystem == 'btrfs':\n+\t\tusing_subvolumes = input('Would you like to use BTRFS subvolumes? (Y/n): ').strip().lower() in ('', 'y', 'yes')\n \n \tlayout = {\n \t\tblock_device.path : {\n@@ -35,14 +39,14 @@\n \t\t\"start\" : \"513MiB\",\n \t\t\"encrypted\" : False,\n \t\t\"format\" : True,\n-\t\t\"size\" : \"100%\" if block_device.size < MIN_SIZE_TO_ALLOW_HOME_PART else f\"{min(block_device.size, 20)*1024}MiB\",\n+\t\t\"size\" : \"100%\" if (using_subvolumes or block_device.size < MIN_SIZE_TO_ALLOW_HOME_PART) else f\"{min(block_device.size, 20)*1024}MiB\",\n \t\t\"mountpoint\" : \"/\",\n \t\t\"filesystem\" : {\n \t\t\t\"format\" : default_filesystem\n \t\t}\n \t})\n \n-\tif default_filesystem == 'btrfs' and input('Would you like to use BTRFS subvolumes? (Y/n): ').strip().lower() in ('', 'y', 'yes'):\n+\tif default_filesystem == 'btrfs' and using_subvolumes:\n \t\tif input('Do you want to use a recommended structure? (Y/n): ').strip().lower() in ('', 'y', 'yes'):\n \t\t\t# https://btrfs.wiki.kernel.org/index.php/FAQ\n \t\t\t# https://unix.stackexchange.com/questions/246976/btrfs-subvolume-uuid-clash\n", "issue": "[2.3.0-RC1] Automatic partitioning does not fill drive with btrfs and no encryption\n\r\n\r\n[My installation log](https://github.com/archlinux/archinstall/files/7500204/install.log.txt)\r\n\n", "before_files": [{"content": "import logging\nfrom .helpers import sort_block_devices_based_on_performance, select_largest_device, select_disk_larger_than_or_close_to\nfrom ..output import log\n\ndef suggest_single_disk_layout(block_device, default_filesystem=None):\n\tif not default_filesystem:\n\t\tfrom ..user_interaction import ask_for_main_filesystem_format\n\t\tdefault_filesystem = ask_for_main_filesystem_format()\n\t\t\n\tMIN_SIZE_TO_ALLOW_HOME_PART = 40 # Gb\n\n\tlayout = {\n\t\tblock_device.path : {\n\t\t\t\"wipe\" : True,\n\t\t\t\"partitions\" : []\n\t\t}\n\t}\n\n\tlayout[block_device.path]['partitions'].append({\n\t\t# Boot\n\t\t\"type\" : \"primary\",\n\t\t\"start\" : \"1MiB\",\n\t\t\"size\" : \"513MiB\",\n\t\t\"boot\" : True,\n\t\t\"encrypted\" : False,\n\t\t\"format\" : True,\n\t\t\"mountpoint\" : \"/boot\",\n\t\t\"filesystem\" : {\n\t\t\t\"format\" : \"fat32\"\n\t\t}\n\t})\n\tlayout[block_device.path]['partitions'].append({\n\t\t# Root\n\t\t\"type\" : \"primary\",\n\t\t\"start\" : \"513MiB\",\n\t\t\"encrypted\" : False,\n\t\t\"format\" : True,\n\t\t\"size\" : \"100%\" if block_device.size < MIN_SIZE_TO_ALLOW_HOME_PART else f\"{min(block_device.size, 20)*1024}MiB\",\n\t\t\"mountpoint\" : \"/\",\n\t\t\"filesystem\" : {\n\t\t\t\"format\" : default_filesystem\n\t\t}\n\t})\n\n\tif default_filesystem == 'btrfs' and input('Would you like to use BTRFS subvolumes? (Y/n): ').strip().lower() in ('', 'y', 'yes'):\n\t\tif input('Do you want to use a recommended structure? 
(Y/n): ').strip().lower() in ('', 'y', 'yes'):\n\t\t\t# https://btrfs.wiki.kernel.org/index.php/FAQ\n\t\t\t# https://unix.stackexchange.com/questions/246976/btrfs-subvolume-uuid-clash\n\t\t\t# https://github.com/classy-giraffe/easy-arch/blob/main/easy-arch.sh\n\t\t\tlayout[block_device.path]['partitions'][1]['btrfs'] = {\n\t\t\t\t\"subvolumes\" : {\n\t\t\t\t\t\"@home\" : \"/home\",\n\t\t\t\t\t\"@log\" : \"/var/log\",\n\t\t\t\t\t\"@pkgs\" : \"/var/cache/pacman/pkg\",\n\t\t\t\t\t\"@.snapshots\" : \"/.snapshots\"\n\t\t\t\t}\n\t\t\t}\n\t\telse:\n\t\t\tpass # ... implement a guided setup\n\n\telif block_device.size >= MIN_SIZE_TO_ALLOW_HOME_PART:\n\t\t# If we don't want to use subvolumes,\n\t\t# But we want to be able to re-use data between re-installs..\n\t\t# A second partition for /home would be nice if we have the space for it\n\t\tlayout[block_device.path]['partitions'].append({\n\t\t\t# Home\n\t\t\t\"type\" : \"primary\",\n\t\t\t\"encrypted\" : False,\n\t\t\t\"format\" : True,\n\t\t\t\"start\" : f\"{min(block_device.size*0.2, 20)*1024}MiB\",\n\t\t\t\"size\" : \"100%\",\n\t\t\t\"mountpoint\" : \"/home\",\n\t\t\t\"filesystem\" : {\n\t\t\t\t\"format\" : default_filesystem\n\t\t\t}\n\t\t})\n\n\treturn layout\n\n\ndef suggest_multi_disk_layout(block_devices, default_filesystem=None):\n\tif not default_filesystem:\n\t\tfrom ..user_interaction import ask_for_main_filesystem_format\n\t\tdefault_filesystem = ask_for_main_filesystem_format()\n\n\t# Not really a rock solid foundation of information to stand on, but it's a start:\n\t# https://www.reddit.com/r/btrfs/comments/m287gp/partition_strategy_for_two_physical_disks/\n\t# https://www.reddit.com/r/btrfs/comments/9us4hr/what_is_your_btrfs_partitionsubvolumes_scheme/\n\n\tMIN_SIZE_TO_ALLOW_HOME_PART = 40 # Gb\n\tARCH_LINUX_INSTALLED_SIZE = 20 # Gb, rough estimate taking in to account user desktops etc. 
TODO: Catch user packages to detect size?\n\n\tblock_devices = sort_block_devices_based_on_performance(block_devices).keys()\n\n\thome_device = select_largest_device(block_devices, gigabytes=MIN_SIZE_TO_ALLOW_HOME_PART)\n\troot_device = select_disk_larger_than_or_close_to(block_devices, gigabytes=ARCH_LINUX_INSTALLED_SIZE, filter_out=[home_device])\n\n\tlog(f\"Suggesting multi-disk-layout using {len(block_devices)} disks, where {root_device} will be /root and {home_device} will be /home\", level=logging.DEBUG)\n\n\tlayout = {\n\t\troot_device.path : {\n\t\t\t\"wipe\" : True,\n\t\t\t\"partitions\" : []\n\t\t},\n\t\thome_device.path : {\n\t\t\t\"wipe\" : True,\n\t\t\t\"partitions\" : []\n\t\t},\n\t}\n\n\tlayout[root_device.path]['partitions'].append({\n\t\t# Boot\n\t\t\"type\" : \"primary\",\n\t\t\"start\" : \"1MiB\",\n\t\t\"size\" : \"513MiB\",\n\t\t\"boot\" : True,\n\t\t\"encrypted\" : False,\n\t\t\"format\" : True,\n\t\t\"mountpoint\" : \"/boot\",\n\t\t\"filesystem\" : {\n\t\t\t\"format\" : \"fat32\"\n\t\t}\n\t})\n\tlayout[root_device.path]['partitions'].append({\n\t\t# Root\n\t\t\"type\" : \"primary\",\n\t\t\"start\" : \"513MiB\",\n\t\t\"encrypted\" : False,\n\t\t\"format\" : True,\n\t\t\"size\" : \"100%\",\n\t\t\"mountpoint\" : \"/\",\n\t\t\"filesystem\" : {\n\t\t\t\"format\" : default_filesystem\n\t\t}\n\t})\n\n\tlayout[home_device.path]['partitions'].append({\n\t\t# Home\n\t\t\"type\" : \"primary\",\n\t\t\"encrypted\" : False,\n\t\t\"format\" : True,\n\t\t\"start\" : \"4MiB\",\n\t\t\"size\" : \"100%\",\n\t\t\"mountpoint\" : \"/home\",\n\t\t\"filesystem\" : {\n\t\t\t\"format\" : default_filesystem\n\t\t}\n\t})\n\n\treturn layout\n", "path": "archinstall/lib/disk/user_guides.py"}]} | 2,436 | 458 |
gh_patches_debug_14365 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1084 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create a simple interface for fetching datatypes schemas on grand-challenge
**Problem**
The grand-challenge datatypes are currently only stored in the [gc-api](https://github.com/DIAGNijmegen/rse-gcapi/tree/master/gcapi/schemas) repository. However, the information is required by other libraries as well. Duplication of this information seems bad.
**Solution**
* [x] It would be nice to have this information in a central location like grand-challenge and provide a simple GET interface to allow the libraries /comic/evalutils and /DIAGNijmegen/rse-gcapi/ to fetch and cache this information (only a few kbs) from grand-challenge.
* [x] The answer type schemas should be added to the generated schema
</issue>
<code>
[start of app/grandchallenge/reader_studies/serializers.py]
1 from rest_framework.exceptions import ValidationError
2 from rest_framework.fields import CharField
3 from rest_framework.relations import HyperlinkedRelatedField, SlugRelatedField
4 from rest_framework.serializers import (
5 HyperlinkedModelSerializer,
6 SerializerMethodField,
7 )
8
9 from grandchallenge.api.swagger import swagger_schema_fields_for_charfield
10 from grandchallenge.cases.models import Image
11 from grandchallenge.reader_studies.models import Answer, Question, ReaderStudy
12
13
14 class QuestionSerializer(HyperlinkedModelSerializer):
15 answer_type = CharField(source="get_answer_type_display")
16 reader_study = HyperlinkedRelatedField(
17 view_name="api:reader-study-detail", read_only=True
18 )
19 form_direction = CharField(source="get_direction_display")
20 image_port = CharField(source="get_image_port_display")
21
22 class Meta:
23 model = Question
24 fields = (
25 "answer_type",
26 "api_url",
27 "form_direction",
28 "help_text",
29 "image_port",
30 "pk",
31 "question_text",
32 "reader_study",
33 "required",
34 )
35 swagger_schema_fields = swagger_schema_fields_for_charfield(
36 answer_type=model._meta.get_field("answer_type"),
37 form_direction=model._meta.get_field(
38 "direction"
39 ), # model.direction gets remapped
40 image_port=model._meta.get_field("image_port"),
41 )
42
43
44 class ReaderStudySerializer(HyperlinkedModelSerializer):
45 questions = QuestionSerializer(many=True, read_only=True)
46 hanging_list_images = SerializerMethodField()
47
48 class Meta:
49 model = ReaderStudy
50 fields = (
51 "api_url",
52 "description",
53 "hanging_list_images",
54 "is_valid",
55 "pk",
56 "questions",
57 "title",
58 )
59
60 def get_hanging_list_images(self, obj: ReaderStudy):
61 """Used by hanging_list_images serializer field."""
62 return obj.get_hanging_list_images_for_user(
63 user=self.context["request"].user
64 )
65
66
67 class AnswerSerializer(HyperlinkedModelSerializer):
68 creator = SlugRelatedField(read_only=True, slug_field="username")
69 question = HyperlinkedRelatedField(
70 view_name="api:reader-studies-question-detail",
71 queryset=Question.objects.all(),
72 )
73 images = HyperlinkedRelatedField(
74 many=True, queryset=Image.objects.all(), view_name="api:image-detail"
75 )
76
77 def validate(self, attrs):
78 question = attrs["question"]
79 images = attrs["images"]
80 answer = attrs["answer"]
81 creator = self.context.get("request").user
82
83 if not question.reader_study.is_reader(user=creator):
84 raise ValidationError("This user is not a reader for this study.")
85
86 if not question.is_answer_valid(answer=answer):
87 raise ValidationError(
88 f"You answer is not the correct type. "
89 f"{question.get_answer_type_display()} expected, "
90 f"{type(answer)} found."
91 )
92
93 if len(images) == 0:
94 raise ValidationError(
95 "You must specify the images that this answer corresponds to."
96 )
97
98 reader_study_images = question.reader_study.images.all()
99 for im in images:
100 if im not in reader_study_images:
101 raise ValidationError(
102 f"Image {im} does not belong to this reader study."
103 )
104
105 if Answer.objects.filter(
106 creator=creator, question=question, images__in=images
107 ).exists():
108 raise ValidationError(
109 f"User {creator} has already answered this question "
110 f"for at least 1 of these images."
111 )
112
113 return attrs
114
115 class Meta:
116 model = Answer
117 fields = (
118 "answer",
119 "api_url",
120 "created",
121 "creator",
122 "images",
123 "pk",
124 "question",
125 )
126
[end of app/grandchallenge/reader_studies/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/grandchallenge/reader_studies/serializers.py b/app/grandchallenge/reader_studies/serializers.py
--- a/app/grandchallenge/reader_studies/serializers.py
+++ b/app/grandchallenge/reader_studies/serializers.py
@@ -8,7 +8,12 @@
from grandchallenge.api.swagger import swagger_schema_fields_for_charfield
from grandchallenge.cases.models import Image
-from grandchallenge.reader_studies.models import Answer, Question, ReaderStudy
+from grandchallenge.reader_studies.models import (
+ ANSWER_TYPE_SCHEMA,
+ Answer,
+ Question,
+ ReaderStudy,
+)
class QuestionSerializer(HyperlinkedModelSerializer):
@@ -123,3 +128,6 @@
"pk",
"question",
)
+ swagger_schema_fields = {
+ "properties": {"answer": {"title": "Answer", **ANSWER_TYPE_SCHEMA}}
+ }
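
A hypothetical client-side check of what the issue asks for: fetching the generated schema over HTTP and reading the answer-type definition that the `swagger_schema_fields` addition above exposes. The URL and the JSON paths are assumptions for illustration, not a documented grand-challenge endpoint.

```python
# Assumed endpoint and response layout -- adjust to the deployment's actual
# swagger/OpenAPI schema URL before relying on this.
import requests

SCHEMA_URL = "https://grand-challenge.org/api/schema/"  # assumption

schema = requests.get(SCHEMA_URL, timeout=10).json()
answer_schema = (
    schema.get("definitions", {})
    .get("Answer", {})
    .get("properties", {})
    .get("answer", {})
)
print(answer_schema)  # should carry the ANSWER_TYPE_SCHEMA merged in by the patch
```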
| {"golden_diff": "diff --git a/app/grandchallenge/reader_studies/serializers.py b/app/grandchallenge/reader_studies/serializers.py\n--- a/app/grandchallenge/reader_studies/serializers.py\n+++ b/app/grandchallenge/reader_studies/serializers.py\n@@ -8,7 +8,12 @@\n \n from grandchallenge.api.swagger import swagger_schema_fields_for_charfield\n from grandchallenge.cases.models import Image\n-from grandchallenge.reader_studies.models import Answer, Question, ReaderStudy\n+from grandchallenge.reader_studies.models import (\n+ ANSWER_TYPE_SCHEMA,\n+ Answer,\n+ Question,\n+ ReaderStudy,\n+)\n \n \n class QuestionSerializer(HyperlinkedModelSerializer):\n@@ -123,3 +128,6 @@\n \"pk\",\n \"question\",\n )\n+ swagger_schema_fields = {\n+ \"properties\": {\"answer\": {\"title\": \"Answer\", **ANSWER_TYPE_SCHEMA}}\n+ }\n", "issue": "Create a simple interface for fetching datatypes schemas on grand-challenge \n**Problem**\r\nThe grand-challenge datatypes are currently only stored in the [gc-api](https://github.com/DIAGNijmegen/rse-gcapi/tree/master/gcapi/schemas) repository. However, the information is required by other libraries as well. Duplication of this information seems bad.\r\n\r\n**Solution**\r\n* [x] It would be nice to have this information in a central location like grand-challenge and provide a simple GET interface to allow the libraries /comic/evalutils and /DIAGNijmegen/rse-gcapi/ to fetch and cache this information (only a few kbs) from grand-challenge.\r\n* [x] The answer type schemas should be added to the generated schema\r\n\n", "before_files": [{"content": "from rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import CharField\nfrom rest_framework.relations import HyperlinkedRelatedField, SlugRelatedField\nfrom rest_framework.serializers import (\n HyperlinkedModelSerializer,\n SerializerMethodField,\n)\n\nfrom grandchallenge.api.swagger import swagger_schema_fields_for_charfield\nfrom grandchallenge.cases.models import Image\nfrom grandchallenge.reader_studies.models import Answer, Question, ReaderStudy\n\n\nclass QuestionSerializer(HyperlinkedModelSerializer):\n answer_type = CharField(source=\"get_answer_type_display\")\n reader_study = HyperlinkedRelatedField(\n view_name=\"api:reader-study-detail\", read_only=True\n )\n form_direction = CharField(source=\"get_direction_display\")\n image_port = CharField(source=\"get_image_port_display\")\n\n class Meta:\n model = Question\n fields = (\n \"answer_type\",\n \"api_url\",\n \"form_direction\",\n \"help_text\",\n \"image_port\",\n \"pk\",\n \"question_text\",\n \"reader_study\",\n \"required\",\n )\n swagger_schema_fields = swagger_schema_fields_for_charfield(\n answer_type=model._meta.get_field(\"answer_type\"),\n form_direction=model._meta.get_field(\n \"direction\"\n ), # model.direction gets remapped\n image_port=model._meta.get_field(\"image_port\"),\n )\n\n\nclass ReaderStudySerializer(HyperlinkedModelSerializer):\n questions = QuestionSerializer(many=True, read_only=True)\n hanging_list_images = SerializerMethodField()\n\n class Meta:\n model = ReaderStudy\n fields = (\n \"api_url\",\n \"description\",\n \"hanging_list_images\",\n \"is_valid\",\n \"pk\",\n \"questions\",\n \"title\",\n )\n\n def get_hanging_list_images(self, obj: ReaderStudy):\n \"\"\"Used by hanging_list_images serializer field.\"\"\"\n return obj.get_hanging_list_images_for_user(\n user=self.context[\"request\"].user\n )\n\n\nclass AnswerSerializer(HyperlinkedModelSerializer):\n creator = SlugRelatedField(read_only=True, 
slug_field=\"username\")\n question = HyperlinkedRelatedField(\n view_name=\"api:reader-studies-question-detail\",\n queryset=Question.objects.all(),\n )\n images = HyperlinkedRelatedField(\n many=True, queryset=Image.objects.all(), view_name=\"api:image-detail\"\n )\n\n def validate(self, attrs):\n question = attrs[\"question\"]\n images = attrs[\"images\"]\n answer = attrs[\"answer\"]\n creator = self.context.get(\"request\").user\n\n if not question.reader_study.is_reader(user=creator):\n raise ValidationError(\"This user is not a reader for this study.\")\n\n if not question.is_answer_valid(answer=answer):\n raise ValidationError(\n f\"You answer is not the correct type. \"\n f\"{question.get_answer_type_display()} expected, \"\n f\"{type(answer)} found.\"\n )\n\n if len(images) == 0:\n raise ValidationError(\n \"You must specify the images that this answer corresponds to.\"\n )\n\n reader_study_images = question.reader_study.images.all()\n for im in images:\n if im not in reader_study_images:\n raise ValidationError(\n f\"Image {im} does not belong to this reader study.\"\n )\n\n if Answer.objects.filter(\n creator=creator, question=question, images__in=images\n ).exists():\n raise ValidationError(\n f\"User {creator} has already answered this question \"\n f\"for at least 1 of these images.\"\n )\n\n return attrs\n\n class Meta:\n model = Answer\n fields = (\n \"answer\",\n \"api_url\",\n \"created\",\n \"creator\",\n \"images\",\n \"pk\",\n \"question\",\n )\n", "path": "app/grandchallenge/reader_studies/serializers.py"}]} | 1,780 | 205 |
gh_patches_debug_12030 | rasdani/github-patches | git_diff | pytorch__vision-8256 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`to_image` does not handle numpy 2D arrays
### 🐛 Describe the bug
[`to_image`](https://github.com/pytorch/vision/blob/806dba678d5b01f6e8a46f7c48fdf8c09369a267/torchvision/transforms/v2/functional/_type_conversion.py#L11) should be able to handle [numpy arrays](https://numpy.org/doc/stable/reference/generated/numpy.array.html) with shape `(H, W)`. This corresponds to the previous behavior of [`to_tensor`](https://github.com/pytorch/vision/blob/806dba678d5b01f6e8a46f7c48fdf8c09369a267/torchvision/transforms/functional.py#L149). Running the following:
```python
import numpy as np
from torchvision.transforms.v2.functional import to_image
img_npy = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
to_image(img_npy)
```
results in error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mantasu/programs/anaconda/envs/glasses-detector-312/lib/python3.12/site-packages/torchvision/transforms/v2/functional/_type_conversion.py", line 14, in to_image
output = torch.from_numpy(inpt).permute((2, 0, 1)).contiguous()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 2 is not equal to len(dims) = 3
```
PIL grayscale images are handled correctly:
```python
from PIL import Image
img_pil = Image.fromarray(img_npy)
print(to_image(img_pil).shape) # (1, 224, 224)
```
### Versions
```
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU
Nvidia driver version: 546.33
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12800HX
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
BogoMIPS: 4607.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 576 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 15 MiB (12 instances)
L3 cache: 25 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.2.0
[pip3] torchaudio==2.2.0
[pip3] torchvision==0.17.0
[pip3] triton==2.2.0
[conda] blas 1.0 mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] numpy 1.26.3 py311h64a7726_0 conda-forge
[conda] pytorch 2.2.0 py3.11_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.2.0 py311_cu121 pytorch
[conda] torchtriton 2.2.0 py311 pytorch
[conda] torchvision 0.17.0 py311_cu121 pytorch
```
</issue>
<code>
[start of torchvision/transforms/v2/functional/_type_conversion.py]
1 from typing import Union
2
3 import numpy as np
4 import PIL.Image
5 import torch
6 from torchvision import tv_tensors
7 from torchvision.transforms import functional as _F
8
9
10 @torch.jit.unused
11 def to_image(inpt: Union[torch.Tensor, PIL.Image.Image, np.ndarray]) -> tv_tensors.Image:
12 """See :class:`~torchvision.transforms.v2.ToImage` for details."""
13 if isinstance(inpt, np.ndarray):
14 output = torch.from_numpy(inpt).permute((2, 0, 1)).contiguous()
15 elif isinstance(inpt, PIL.Image.Image):
16 output = pil_to_tensor(inpt)
17 elif isinstance(inpt, torch.Tensor):
18 output = inpt
19 else:
20 raise TypeError(
21 f"Input can either be a pure Tensor, a numpy array, or a PIL image, but got {type(inpt)} instead."
22 )
23 return tv_tensors.Image(output)
24
25
26 to_pil_image = _F.to_pil_image
27 pil_to_tensor = _F.pil_to_tensor
28
[end of torchvision/transforms/v2/functional/_type_conversion.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/transforms/v2/functional/_type_conversion.py b/torchvision/transforms/v2/functional/_type_conversion.py
--- a/torchvision/transforms/v2/functional/_type_conversion.py
+++ b/torchvision/transforms/v2/functional/_type_conversion.py
@@ -11,7 +11,7 @@
def to_image(inpt: Union[torch.Tensor, PIL.Image.Image, np.ndarray]) -> tv_tensors.Image:
"""See :class:`~torchvision.transforms.v2.ToImage` for details."""
if isinstance(inpt, np.ndarray):
- output = torch.from_numpy(inpt).permute((2, 0, 1)).contiguous()
+ output = torch.from_numpy(np.atleast_3d(inpt)).permute((2, 0, 1)).contiguous()
elif isinstance(inpt, PIL.Image.Image):
output = pil_to_tensor(inpt)
elif isinstance(inpt, torch.Tensor):
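
A quick way to confirm the fix, reusing the reproduction from the issue; this assumes a torchvision build that already contains the `np.atleast_3d` change shown above.

```python
# np.atleast_3d turns an (H, W) array into (H, W, 1), so the permute((2, 0, 1))
# now yields a (1, H, W) image, matching the PIL grayscale path.
import numpy as np
from torchvision.transforms.v2.functional import to_image

img_npy = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
print(to_image(img_npy).shape)  # expected: torch.Size([1, 224, 224])
```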
| {"golden_diff": "diff --git a/torchvision/transforms/v2/functional/_type_conversion.py b/torchvision/transforms/v2/functional/_type_conversion.py\n--- a/torchvision/transforms/v2/functional/_type_conversion.py\n+++ b/torchvision/transforms/v2/functional/_type_conversion.py\n@@ -11,7 +11,7 @@\n def to_image(inpt: Union[torch.Tensor, PIL.Image.Image, np.ndarray]) -> tv_tensors.Image:\n \"\"\"See :class:`~torchvision.transforms.v2.ToImage` for details.\"\"\"\n if isinstance(inpt, np.ndarray):\n- output = torch.from_numpy(inpt).permute((2, 0, 1)).contiguous()\n+ output = torch.from_numpy(np.atleast_3d(inpt)).permute((2, 0, 1)).contiguous()\n elif isinstance(inpt, PIL.Image.Image):\n output = pil_to_tensor(inpt)\n elif isinstance(inpt, torch.Tensor):\n", "issue": "`to_image` does not handle numpy 2D arrays\n### \ud83d\udc1b Describe the bug\r\n\r\n[`to_image`](https://github.com/pytorch/vision/blob/806dba678d5b01f6e8a46f7c48fdf8c09369a267/torchvision/transforms/v2/functional/_type_conversion.py#L11) should be able to handle [numpy arrays](https://numpy.org/doc/stable/reference/generated/numpy.array.html) with shape `(H, W)`. This corresponds to the previous behavior of [`to_tensor`](https://github.com/pytorch/vision/blob/806dba678d5b01f6e8a46f7c48fdf8c09369a267/torchvision/transforms/functional.py#L149). Running the following:\r\n\r\n\r\n```python\r\nimport numpy as np\r\nfrom torchvision.transforms.v2.functional import to_image\r\n\r\nimg_npy = np.random.randint(0, 256, (224, 224), dtype=np.uint8)\r\nto_image(img_npy)\r\n```\r\n\r\nresults in error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/mantasu/programs/anaconda/envs/glasses-detector-312/lib/python3.12/site-packages/torchvision/transforms/v2/functional/_type_conversion.py\", line 14, in to_image\r\n output = torch.from_numpy(inpt).permute((2, 0, 1)).contiguous()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. 
input.dim() = 2 is not equal to len(dims) = 3\r\n```\r\n\r\n\r\nPIL grayscale images are handled correctly:\r\n```python\r\nfrom PIL import Image\r\n\r\nimg_pil = Image.fromarray(img_npy)\r\nprint(to_image(img_pil).shape) # (1, 224, 224)\r\n```\r\n\r\n\r\n\r\n### Versions\r\n\r\n```\r\nPyTorch version: 2.2.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0] (64-bit runtime)\r\nPython platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU\r\nNvidia driver version: 546.33\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 39 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 24\r\nOn-line CPU(s) list: 0-23\r\nVendor ID: GenuineIntel\r\nModel name: 12th Gen Intel(R) Core(TM) i7-12800HX\r\nCPU family: 6\r\nModel: 151\r\nThread(s) per core: 2\r\nCore(s) per socket: 12\r\nSocket(s): 1\r\nStepping: 2\r\nBogoMIPS: 4607.99\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm serialize flush_l1d arch_capabilities\r\nVirtualization: VT-x\r\nHypervisor vendor: Microsoft\r\nVirtualization type: full\r\nL1d cache: 576 KiB (12 instances)\r\nL1i cache: 384 KiB (12 instances)\r\nL2 cache: 15 MiB (12 instances)\r\nL3 cache: 25 MiB (1 instance)\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Mitigation; Enhanced IBRS\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\r\nVulnerability Srbds: Not 
affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.3\r\n[pip3] torch==2.2.0\r\n[pip3] torchaudio==2.2.0\r\n[pip3] torchvision==0.17.0\r\n[pip3] triton==2.2.0\r\n[conda] blas 1.0 mkl conda-forge\r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch\r\n[conda] mkl 2023.1.0 h213fc3f_46344 \r\n[conda] numpy 1.26.3 py311h64a7726_0 conda-forge\r\n[conda] pytorch 2.2.0 py3.11_cuda12.1_cudnn8.9.2_0 pytorch\r\n[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torchaudio 2.2.0 py311_cu121 pytorch\r\n[conda] torchtriton 2.2.0 py311 pytorch\r\n[conda] torchvision 0.17.0 py311_cu121 pytorch\r\n```\n", "before_files": [{"content": "from typing import Union\n\nimport numpy as np\nimport PIL.Image\nimport torch\nfrom torchvision import tv_tensors\nfrom torchvision.transforms import functional as _F\n\n\[email protected]\ndef to_image(inpt: Union[torch.Tensor, PIL.Image.Image, np.ndarray]) -> tv_tensors.Image:\n \"\"\"See :class:`~torchvision.transforms.v2.ToImage` for details.\"\"\"\n if isinstance(inpt, np.ndarray):\n output = torch.from_numpy(inpt).permute((2, 0, 1)).contiguous()\n elif isinstance(inpt, PIL.Image.Image):\n output = pil_to_tensor(inpt)\n elif isinstance(inpt, torch.Tensor):\n output = inpt\n else:\n raise TypeError(\n f\"Input can either be a pure Tensor, a numpy array, or a PIL image, but got {type(inpt)} instead.\"\n )\n return tv_tensors.Image(output)\n\n\nto_pil_image = _F.to_pil_image\npil_to_tensor = _F.pil_to_tensor\n", "path": "torchvision/transforms/v2/functional/_type_conversion.py"}]} | 2,838 | 207 |
gh_patches_debug_36244 | rasdani/github-patches | git_diff | mars-project__mars-632 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[ENH] More accurate memory stats using memory.stat in cgroup
Given kernel doc https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt, we may use ``/sys/fs/cgroup/memory/memory.stat`` to provide more accurate memory size estimations in Docker. Note that parsing this file is not supported in ``psutil`` now, thus we may parse it ourselves.
</issue>
<code>
[start of mars/resource.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 1999-2018 Alibaba Group Holding Ltd.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import subprocess # nosec
18 import sys
19 import time
20 from collections import namedtuple
21
22 import psutil
23
24 from .lib import nvutils
25
26 _proc = psutil.Process()
27 _timer = getattr(time, 'monotonic', time.time)
28
29 _cpu_use_process_stat = bool(int(os.environ.get('MARS_CPU_USE_PROCESS_STAT', '0').strip('"')))
30 _mem_use_process_stat = bool(int(os.environ.get('MARS_MEM_USE_PROCESS_STAT', '0').strip('"')))
31
32 if 'MARS_USE_PROCESS_STAT' in os.environ:
33 _cpu_use_process_stat = _mem_use_process_stat = \
34 bool(int(os.environ['MARS_USE_PROCESS_STAT'].strip('"')))
35
36 if 'MARS_CPU_TOTAL' in os.environ:
37 _cpu_total = int(os.environ['MARS_CPU_TOTAL'].strip('"'))
38 else:
39 _cpu_total = psutil.cpu_count(logical=True)
40
41 if 'MARS_MEMORY_TOTAL' in os.environ:
42 _mem_total = int(os.environ['MARS_MEMORY_TOTAL'].strip('"'))
43 else:
44 _mem_total = None
45
46 _virt_memory_stat = namedtuple('virtual_memory', 'total available percent used free')
47
48 _shm_path = [pt.mountpoint for pt in psutil.disk_partitions(all=True)
49 if pt.mountpoint in ('/tmp', '/dev/shm') and pt.fstype == 'tmpfs']
50 if not _shm_path:
51 _shm_path = None
52 else:
53 _shm_path = _shm_path[0]
54
55
56 def virtual_memory():
57 sys_mem = psutil.virtual_memory()
58 if not _mem_use_process_stat:
59 total = sys_mem.total
60 used = sys_mem.used + getattr(sys_mem, 'shared', 0)
61 available = sys_mem.available
62 free = sys_mem.free
63 percent = 100.0 * (total - available) / total
64 return _virt_memory_stat(total, available, percent, used, free)
65 else:
66 used = 0
67 for p in psutil.process_iter():
68 try:
69 used += p.memory_info().rss
70 except (psutil.NoSuchProcess, psutil.AccessDenied):
71 pass
72
73 if _shm_path:
74 shm_stats = psutil.disk_usage(_shm_path)
75 used += shm_stats.used
76
77 total = min(_mem_total or sys_mem.total, sys_mem.total)
78 # TODO sys_mem.available does not work in container
79 # available = min(sys_mem.available, total - used)
80 available = total - used
81 free = min(sys_mem.free, total - used)
82 percent = 100.0 * (total - available) / total
83 return _virt_memory_stat(total, available, percent, used, free)
84
85
86 def cpu_count():
87 return _cpu_total
88
89
90 _last_cpu_measure = None
91
92
93 def _take_process_cpu_snapshot():
94 num_cpus = cpu_count() or 1
95
96 def timer():
97 return _timer() * num_cpus
98
99 processes = [p for p in psutil.process_iter() if p.pid != _proc.pid]
100
101 pts = dict()
102 sts = dict()
103 for p in processes:
104 try:
105 pts[p.pid] = p.cpu_times()
106 sts[p.pid] = timer()
107 except (psutil.NoSuchProcess, psutil.AccessDenied):
108 pass
109
110 pts[_proc.pid] = _proc.cpu_times()
111 sts[_proc.pid] = timer()
112 return pts, sts
113
114
115 def cpu_percent():
116 global _last_cpu_measure
117 if not _cpu_use_process_stat:
118 return sum(psutil.cpu_percent(percpu=True))
119
120 num_cpus = cpu_count() or 1
121 pts, sts = _take_process_cpu_snapshot()
122
123 if _last_cpu_measure is None:
124 _last_cpu_measure = (pts, sts)
125 return None
126
127 old_pts, old_sts = _last_cpu_measure
128
129 percents = []
130 for pid in pts:
131 if pid not in old_pts:
132 continue
133 pt1 = old_pts[pid]
134 pt2 = pts[pid]
135 delta_proc = (pt2.user - pt1.user) + (pt2.system - pt1.system)
136 delta_time = sts[pid] - old_sts[pid]
137
138 try:
139 overall_cpus_percent = (delta_proc / delta_time) * 100
140 except ZeroDivisionError:
141 percents.append(0.0)
142 else:
143 single_cpu_percent = overall_cpus_percent * num_cpus
144 percents.append(single_cpu_percent)
145 _last_cpu_measure = (pts, sts)
146 return round(sum(percents), 1)
147
148
149 def disk_usage(d):
150 return psutil.disk_usage(d)
151
152
153 def iowait():
154 cpu_percent = psutil.cpu_times_percent()
155 try:
156 return cpu_percent.iowait
157 except AttributeError:
158 return None
159
160
161 _last_disk_io_meta = None
162 _win_diskperf_called = False
163
164
165 def disk_io_usage():
166 global _last_disk_io_meta, _win_diskperf_called
167
168 # Needed by psutil.disk_io_counters() under newer version of Windows.
169 # diskperf -y need to be called or no disk information can be found.
170 if sys.platform == 'win32' and not _win_diskperf_called: # pragma: no cover
171 CREATE_NO_WINDOW = 0x08000000
172 try:
173 proc = subprocess.Popen(['diskperf', '-y'], shell=False,
174 creationflags=CREATE_NO_WINDOW) # nosec
175 proc.wait()
176 except (subprocess.CalledProcessError, OSError):
177 pass
178 _win_diskperf_called = True
179
180 disk_counters = psutil.disk_io_counters()
181 tst = time.time()
182
183 read_bytes = disk_counters.read_bytes
184 write_bytes = disk_counters.write_bytes
185 if _last_disk_io_meta is None:
186 _last_disk_io_meta = (read_bytes, write_bytes, tst)
187 return None
188
189 last_read_bytes, last_write_bytes, last_time = _last_disk_io_meta
190 delta_time = tst - last_time
191 read_speed = (read_bytes - last_read_bytes) / delta_time
192 write_speed = (write_bytes - last_write_bytes) / delta_time
193
194 _last_disk_io_meta = (read_bytes, write_bytes, tst)
195 return read_speed, write_speed
196
197
198 _last_net_io_meta = None
199
200
201 def net_io_usage():
202 global _last_net_io_meta
203
204 net_counters = psutil.net_io_counters()
205 tst = time.time()
206
207 send_bytes = net_counters.bytes_sent
208 recv_bytes = net_counters.bytes_recv
209 if _last_net_io_meta is None:
210 _last_net_io_meta = (send_bytes, recv_bytes, tst)
211 return None
212
213 last_send_bytes, last_recv_bytes, last_time = _last_net_io_meta
214 delta_time = tst - last_time
215 recv_speed = (recv_bytes - last_recv_bytes) / delta_time
216 send_speed = (send_bytes - last_send_bytes) / delta_time
217
218 _last_net_io_meta = (send_bytes, recv_bytes, tst)
219 return recv_speed, send_speed
220
221
222 _cuda_info = namedtuple('cuda_info', 'driver_version cuda_version products gpu_count')
223 _cuda_card_stat = namedtuple('cuda_card_stat', 'product_name gpu_usage temperature fb_mem_info')
224
225
226 def cuda_info(): # pragma: no cover
227 driver_info = nvutils.get_driver_info()
228 if not driver_info:
229 return
230 gpu_count = nvutils.get_device_count()
231 return _cuda_info(
232 driver_version=driver_info.driver_version,
233 cuda_version=driver_info.cuda_version,
234 products=[nvutils.get_device_info(idx).name for idx in range(gpu_count)],
235 gpu_count=gpu_count,
236 )
237
238
239 def cuda_card_stats(): # pragma: no cover
240 infos = []
241 device_count = nvutils.get_device_count()
242 if not device_count:
243 return
244 for device_idx in range(device_count):
245 device_info = nvutils.get_device_info(device_idx)
246 device_status = nvutils.get_device_status(device_idx)
247
248 infos.append(_cuda_card_stat(
249 product_name=device_info.name,
250 gpu_usage=device_status.gpu_util,
251 temperature=device_status.temperature,
252 fb_mem_info=_virt_memory_stat(
253 total=device_status.fb_total_mem, used=device_status.fb_used_mem,
254 free=device_status.fb_free_mem, available=device_status.fb_free_mem,
255 percent=device_status.mem_util,
256 )
257 ))
258 return infos
259
[end of mars/resource.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/resource.py b/mars/resource.py
--- a/mars/resource.py
+++ b/mars/resource.py
@@ -23,22 +23,25 @@
from .lib import nvutils
+CGROUP_MEM_STAT_FILE = '/sys/fs/cgroup/memory/memory.stat'
+
_proc = psutil.Process()
_timer = getattr(time, 'monotonic', time.time)
_cpu_use_process_stat = bool(int(os.environ.get('MARS_CPU_USE_PROCESS_STAT', '0').strip('"')))
_mem_use_process_stat = bool(int(os.environ.get('MARS_MEM_USE_PROCESS_STAT', '0').strip('"')))
+_mem_use_cgroup_stat = bool(int(os.environ.get('MARS_MEM_USE_CGROUP_STAT', '0').strip('"')))
if 'MARS_USE_PROCESS_STAT' in os.environ:
_cpu_use_process_stat = _mem_use_process_stat = \
bool(int(os.environ['MARS_USE_PROCESS_STAT'].strip('"')))
-if 'MARS_CPU_TOTAL' in os.environ:
+if _cpu_use_process_stat and 'MARS_CPU_TOTAL' in os.environ:
_cpu_total = int(os.environ['MARS_CPU_TOTAL'].strip('"'))
else:
_cpu_total = psutil.cpu_count(logical=True)
-if 'MARS_MEMORY_TOTAL' in os.environ:
+if _mem_use_process_stat and 'MARS_MEMORY_TOTAL' in os.environ:
_mem_total = int(os.environ['MARS_MEMORY_TOTAL'].strip('"'))
else:
_mem_total = None
@@ -53,9 +56,28 @@
_shm_path = _shm_path[0]
+def _read_cgroup_stat_file():
+ with open(CGROUP_MEM_STAT_FILE, 'r') as cg_file:
+ contents = cg_file.read()
+ kvs = dict()
+ for l in contents.splitlines():
+ parts = l.split(' ')
+ if len(parts) == 2:
+ kvs[parts[0]] = int(parts[1])
+ return kvs
+
+
def virtual_memory():
sys_mem = psutil.virtual_memory()
- if not _mem_use_process_stat:
+ if _mem_use_cgroup_stat:
+ # see section 5.5 in https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
+ cgroup_mem_info = _read_cgroup_stat_file()
+ total = cgroup_mem_info['hierarchical_memory_limit']
+ used = cgroup_mem_info['cache'] + cgroup_mem_info['rss'] + cgroup_mem_info.get('swap', 0)
+ available = free = total - used
+ percent = 100.0 * (total - available) / total
+ return _virt_memory_stat(total, available, percent, used, free)
+ elif not _mem_use_process_stat:
total = sys_mem.total
used = sys_mem.used + getattr(sys_mem, 'shared', 0)
available = sys_mem.available
| {"golden_diff": "diff --git a/mars/resource.py b/mars/resource.py\n--- a/mars/resource.py\n+++ b/mars/resource.py\n@@ -23,22 +23,25 @@\n \n from .lib import nvutils\n \n+CGROUP_MEM_STAT_FILE = '/sys/fs/cgroup/memory/memory.stat'\n+\n _proc = psutil.Process()\n _timer = getattr(time, 'monotonic', time.time)\n \n _cpu_use_process_stat = bool(int(os.environ.get('MARS_CPU_USE_PROCESS_STAT', '0').strip('\"')))\n _mem_use_process_stat = bool(int(os.environ.get('MARS_MEM_USE_PROCESS_STAT', '0').strip('\"')))\n+_mem_use_cgroup_stat = bool(int(os.environ.get('MARS_MEM_USE_CGROUP_STAT', '0').strip('\"')))\n \n if 'MARS_USE_PROCESS_STAT' in os.environ:\n _cpu_use_process_stat = _mem_use_process_stat = \\\n bool(int(os.environ['MARS_USE_PROCESS_STAT'].strip('\"')))\n \n-if 'MARS_CPU_TOTAL' in os.environ:\n+if _cpu_use_process_stat and 'MARS_CPU_TOTAL' in os.environ:\n _cpu_total = int(os.environ['MARS_CPU_TOTAL'].strip('\"'))\n else:\n _cpu_total = psutil.cpu_count(logical=True)\n \n-if 'MARS_MEMORY_TOTAL' in os.environ:\n+if _mem_use_process_stat and 'MARS_MEMORY_TOTAL' in os.environ:\n _mem_total = int(os.environ['MARS_MEMORY_TOTAL'].strip('\"'))\n else:\n _mem_total = None\n@@ -53,9 +56,28 @@\n _shm_path = _shm_path[0]\n \n \n+def _read_cgroup_stat_file():\n+ with open(CGROUP_MEM_STAT_FILE, 'r') as cg_file:\n+ contents = cg_file.read()\n+ kvs = dict()\n+ for l in contents.splitlines():\n+ parts = l.split(' ')\n+ if len(parts) == 2:\n+ kvs[parts[0]] = int(parts[1])\n+ return kvs\n+\n+\n def virtual_memory():\n sys_mem = psutil.virtual_memory()\n- if not _mem_use_process_stat:\n+ if _mem_use_cgroup_stat:\n+ # see section 5.5 in https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt\n+ cgroup_mem_info = _read_cgroup_stat_file()\n+ total = cgroup_mem_info['hierarchical_memory_limit']\n+ used = cgroup_mem_info['cache'] + cgroup_mem_info['rss'] + cgroup_mem_info.get('swap', 0)\n+ available = free = total - used\n+ percent = 100.0 * (total - available) / total\n+ return _virt_memory_stat(total, available, percent, used, free)\n+ elif not _mem_use_process_stat:\n total = sys_mem.total\n used = sys_mem.used + getattr(sys_mem, 'shared', 0)\n available = sys_mem.available\n", "issue": "[ENH] More accurate memory stats using memory.stat in cgroup\nGiven kernel doc https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt, we may use ``/sys/fs/cgroup/memory/memory.stat`` to provide more accurate memory size estimations in Docker. 
Note that parsing this file is not supported in ``psutil`` now, thus we may parse it ourselves.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport subprocess # nosec\nimport sys\nimport time\nfrom collections import namedtuple\n\nimport psutil\n\nfrom .lib import nvutils\n\n_proc = psutil.Process()\n_timer = getattr(time, 'monotonic', time.time)\n\n_cpu_use_process_stat = bool(int(os.environ.get('MARS_CPU_USE_PROCESS_STAT', '0').strip('\"')))\n_mem_use_process_stat = bool(int(os.environ.get('MARS_MEM_USE_PROCESS_STAT', '0').strip('\"')))\n\nif 'MARS_USE_PROCESS_STAT' in os.environ:\n _cpu_use_process_stat = _mem_use_process_stat = \\\n bool(int(os.environ['MARS_USE_PROCESS_STAT'].strip('\"')))\n\nif 'MARS_CPU_TOTAL' in os.environ:\n _cpu_total = int(os.environ['MARS_CPU_TOTAL'].strip('\"'))\nelse:\n _cpu_total = psutil.cpu_count(logical=True)\n\nif 'MARS_MEMORY_TOTAL' in os.environ:\n _mem_total = int(os.environ['MARS_MEMORY_TOTAL'].strip('\"'))\nelse:\n _mem_total = None\n\n_virt_memory_stat = namedtuple('virtual_memory', 'total available percent used free')\n\n_shm_path = [pt.mountpoint for pt in psutil.disk_partitions(all=True)\n if pt.mountpoint in ('/tmp', '/dev/shm') and pt.fstype == 'tmpfs']\nif not _shm_path:\n _shm_path = None\nelse:\n _shm_path = _shm_path[0]\n\n\ndef virtual_memory():\n sys_mem = psutil.virtual_memory()\n if not _mem_use_process_stat:\n total = sys_mem.total\n used = sys_mem.used + getattr(sys_mem, 'shared', 0)\n available = sys_mem.available\n free = sys_mem.free\n percent = 100.0 * (total - available) / total\n return _virt_memory_stat(total, available, percent, used, free)\n else:\n used = 0\n for p in psutil.process_iter():\n try:\n used += p.memory_info().rss\n except (psutil.NoSuchProcess, psutil.AccessDenied):\n pass\n\n if _shm_path:\n shm_stats = psutil.disk_usage(_shm_path)\n used += shm_stats.used\n\n total = min(_mem_total or sys_mem.total, sys_mem.total)\n # TODO sys_mem.available does not work in container\n # available = min(sys_mem.available, total - used)\n available = total - used\n free = min(sys_mem.free, total - used)\n percent = 100.0 * (total - available) / total\n return _virt_memory_stat(total, available, percent, used, free)\n\n\ndef cpu_count():\n return _cpu_total\n\n\n_last_cpu_measure = None\n\n\ndef _take_process_cpu_snapshot():\n num_cpus = cpu_count() or 1\n\n def timer():\n return _timer() * num_cpus\n\n processes = [p for p in psutil.process_iter() if p.pid != _proc.pid]\n\n pts = dict()\n sts = dict()\n for p in processes:\n try:\n pts[p.pid] = p.cpu_times()\n sts[p.pid] = timer()\n except (psutil.NoSuchProcess, psutil.AccessDenied):\n pass\n\n pts[_proc.pid] = _proc.cpu_times()\n sts[_proc.pid] = timer()\n return pts, sts\n\n\ndef cpu_percent():\n global _last_cpu_measure\n if not _cpu_use_process_stat:\n return sum(psutil.cpu_percent(percpu=True))\n\n num_cpus = cpu_count() or 1\n pts, sts = 
_take_process_cpu_snapshot()\n\n if _last_cpu_measure is None:\n _last_cpu_measure = (pts, sts)\n return None\n\n old_pts, old_sts = _last_cpu_measure\n\n percents = []\n for pid in pts:\n if pid not in old_pts:\n continue\n pt1 = old_pts[pid]\n pt2 = pts[pid]\n delta_proc = (pt2.user - pt1.user) + (pt2.system - pt1.system)\n delta_time = sts[pid] - old_sts[pid]\n\n try:\n overall_cpus_percent = (delta_proc / delta_time) * 100\n except ZeroDivisionError:\n percents.append(0.0)\n else:\n single_cpu_percent = overall_cpus_percent * num_cpus\n percents.append(single_cpu_percent)\n _last_cpu_measure = (pts, sts)\n return round(sum(percents), 1)\n\n\ndef disk_usage(d):\n return psutil.disk_usage(d)\n\n\ndef iowait():\n cpu_percent = psutil.cpu_times_percent()\n try:\n return cpu_percent.iowait\n except AttributeError:\n return None\n\n\n_last_disk_io_meta = None\n_win_diskperf_called = False\n\n\ndef disk_io_usage():\n global _last_disk_io_meta, _win_diskperf_called\n\n # Needed by psutil.disk_io_counters() under newer version of Windows.\n # diskperf -y need to be called or no disk information can be found.\n if sys.platform == 'win32' and not _win_diskperf_called: # pragma: no cover\n CREATE_NO_WINDOW = 0x08000000\n try:\n proc = subprocess.Popen(['diskperf', '-y'], shell=False,\n creationflags=CREATE_NO_WINDOW) # nosec\n proc.wait()\n except (subprocess.CalledProcessError, OSError):\n pass\n _win_diskperf_called = True\n\n disk_counters = psutil.disk_io_counters()\n tst = time.time()\n\n read_bytes = disk_counters.read_bytes\n write_bytes = disk_counters.write_bytes\n if _last_disk_io_meta is None:\n _last_disk_io_meta = (read_bytes, write_bytes, tst)\n return None\n\n last_read_bytes, last_write_bytes, last_time = _last_disk_io_meta\n delta_time = tst - last_time\n read_speed = (read_bytes - last_read_bytes) / delta_time\n write_speed = (write_bytes - last_write_bytes) / delta_time\n\n _last_disk_io_meta = (read_bytes, write_bytes, tst)\n return read_speed, write_speed\n\n\n_last_net_io_meta = None\n\n\ndef net_io_usage():\n global _last_net_io_meta\n\n net_counters = psutil.net_io_counters()\n tst = time.time()\n\n send_bytes = net_counters.bytes_sent\n recv_bytes = net_counters.bytes_recv\n if _last_net_io_meta is None:\n _last_net_io_meta = (send_bytes, recv_bytes, tst)\n return None\n\n last_send_bytes, last_recv_bytes, last_time = _last_net_io_meta\n delta_time = tst - last_time\n recv_speed = (recv_bytes - last_recv_bytes) / delta_time\n send_speed = (send_bytes - last_send_bytes) / delta_time\n\n _last_net_io_meta = (send_bytes, recv_bytes, tst)\n return recv_speed, send_speed\n\n\n_cuda_info = namedtuple('cuda_info', 'driver_version cuda_version products gpu_count')\n_cuda_card_stat = namedtuple('cuda_card_stat', 'product_name gpu_usage temperature fb_mem_info')\n\n\ndef cuda_info(): # pragma: no cover\n driver_info = nvutils.get_driver_info()\n if not driver_info:\n return\n gpu_count = nvutils.get_device_count()\n return _cuda_info(\n driver_version=driver_info.driver_version,\n cuda_version=driver_info.cuda_version,\n products=[nvutils.get_device_info(idx).name for idx in range(gpu_count)],\n gpu_count=gpu_count,\n )\n\n\ndef cuda_card_stats(): # pragma: no cover\n infos = []\n device_count = nvutils.get_device_count()\n if not device_count:\n return\n for device_idx in range(device_count):\n device_info = nvutils.get_device_info(device_idx)\n device_status = nvutils.get_device_status(device_idx)\n\n infos.append(_cuda_card_stat(\n product_name=device_info.name,\n 
gpu_usage=device_status.gpu_util,\n temperature=device_status.temperature,\n fb_mem_info=_virt_memory_stat(\n total=device_status.fb_total_mem, used=device_status.fb_used_mem,\n free=device_status.fb_free_mem, available=device_status.fb_free_mem,\n percent=device_status.mem_util,\n )\n ))\n return infos\n", "path": "mars/resource.py"}]} | 3,261 | 646 |
gh_patches_debug_27914 | rasdani/github-patches | git_diff | localstack__localstack-1460 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kinesis putRecords error
It seems that putRecords causes the following error:
```
Starting mock Kinesis (http port 4568)...
Starting mock S3 (http port 4572)...
Starting mock Firehose service (http port 4573)...
Starting mock Lambda service (http port 4574)...
Listening at http://:::4565
* Running on http://0.0.0.0:4563/ (Press CTRL+C to quit)
127.0.0.1 - - [08/May/2018 13:52:25] "GET / HTTP/1.1" 200 -
Ready.
r127.0.0.1 - - [08/May/2018 13:56:43] "PUT /prd1541-qa1-vf-sms-send-stream-archive HTTP/1.1" 200 -
127.0.0.1 - - [08/May/2018 13:56:43] "HEAD /prd1541-qa1-vf-sms-send-stream-archive HTTP/1.1" 200 -
2018-05-08T13:56:43:ERROR:localstack.services.generic_proxy: Error forwarding request: 'Records' Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/generic_proxy.py", line 215, in forward
updated_response = self.proxy.update_listener.return_response(**kwargs)
File "/opt/code/localstack/localstack/services/kinesis/kinesis_listener.py", line 49, in return_response
response_records = response_body['Records']
KeyError: 'Records'
2018-05-08T13:56:43:ERROR:localstack.services.generic_proxy: Error forwarding request: 'Records' Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/generic_proxy.py", line 215, in forward
updated_response = self.proxy.update_listener.return_response(**kwargs)
File "/opt/code/localstack/localstack/services/kinesis/kinesis_listener.py", line 49, in return_response
response_records = response_body['Records']
KeyError: 'Records'
2018-05-08T13:56:43:ERROR:localstack.services.generic_proxy: Error forwarding request: 'Records' Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/generic_proxy.py", line 215, in forward
updated_response = self.proxy.update_listener.return_response(**kwargs)
File "/opt/code/localstack/localstack/services/kinesis/kinesis_listener.py", line 49, in return_response
response_records = response_body['Records']
KeyError: 'Records'
```
I'm using the latest localstack image and my `Kinesis.putRecords` call goes through the Javascript AWS SDK:
```
...
const result = await client.putRecords({
StreamName,
Records: records.map(data => ({
Data: new Buffer(JSON.stringify(data)),
PartitionKey: 'testPartition'
}))
}).promise();
...
```
It seems like there's a mismatch in the expected format in `kinesis_listener.py`.
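
The `KeyError` above means the proxied response body carried no `Records` key at all. One way to avoid the crash — sketched below as a standalone helper and an assumption about the fix, not necessarily the project's actual patch — is to treat a body without `Records` as an empty result set instead of indexing it directly:

```
import json

def extract_put_records(raw_content):
    """Per-record results from a PutRecords response body, or [] when absent."""
    body = json.loads(raw_content)
    # Error responses carry no 'Records' key, so avoid body['Records'] here.
    return body.get("Records", [])
```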
</issue>
<code>
[start of localstack/services/kinesis/kinesis_listener.py]
1 import json
2 import random
3 from requests.models import Response
4 from localstack import config
5 from localstack.utils.common import to_str
6 from localstack.utils.analytics import event_publisher
7 from localstack.services.awslambda import lambda_api
8 from localstack.services.generic_proxy import ProxyListener
9
10 # action headers
11 ACTION_PREFIX = 'Kinesis_20131202'
12 ACTION_PUT_RECORD = '%s.PutRecord' % ACTION_PREFIX
13 ACTION_PUT_RECORDS = '%s.PutRecords' % ACTION_PREFIX
14 ACTION_CREATE_STREAM = '%s.CreateStream' % ACTION_PREFIX
15 ACTION_DELETE_STREAM = '%s.DeleteStream' % ACTION_PREFIX
16 ACTION_UPDATE_SHARD_COUNT = '%s.UpdateShardCount' % ACTION_PREFIX
17
18
19 class ProxyListenerKinesis(ProxyListener):
20
21 def forward_request(self, method, path, data, headers):
22 data = json.loads(to_str(data))
23 action = headers.get('X-Amz-Target')
24
25 if action == '%s.DescribeStreamSummary' % ACTION_PREFIX:
26 stream_arn = data.get('StreamARN') or data['StreamName']
27 # TODO fix values below
28 result = {
29 'StreamDescriptionSummary': {
30 'ConsumerCount': 0,
31 'EnhancedMonitoring': [],
32 'KeyId': 'string',
33 'OpenShardCount': 0,
34 'RetentionPeriodHours': 1,
35 'StreamARN': stream_arn,
36 # 'StreamCreationTimestamp': number,
37 'StreamName': data['StreamName'],
38 'StreamStatus': 'ACTIVE'
39 }
40 }
41 return result
42 if action == '%s.DescribeStreamConsumer' % ACTION_PREFIX:
43 consumer_arn = data.get('ConsumerARN') or data['ConsumerName']
44 consumer_name = data.get('ConsumerName') or data['ConsumerARN']
45 result = {
46 'ConsumerDescription': {
47 'ConsumerARN': consumer_arn,
48 # 'ConsumerCreationTimestamp': number,
49 'ConsumerName': consumer_name,
50 'ConsumerStatus': 'ACTIVE',
51 'StreamARN': data.get('StreamARN')
52 }
53 }
54 return result
55
56 if random.random() < config.KINESIS_ERROR_PROBABILITY:
57 action = headers.get('X-Amz-Target')
58 if action in [ACTION_PUT_RECORD, ACTION_PUT_RECORDS]:
59 return kinesis_error_response(data, action)
60 return True
61
62 def return_response(self, method, path, data, headers, response):
63 action = headers.get('X-Amz-Target')
64 data = json.loads(to_str(data))
65
66 records = []
67 if action in (ACTION_CREATE_STREAM, ACTION_DELETE_STREAM):
68 event_type = (event_publisher.EVENT_KINESIS_CREATE_STREAM if action == ACTION_CREATE_STREAM
69 else event_publisher.EVENT_KINESIS_DELETE_STREAM)
70 payload = {'n': event_publisher.get_hash(data.get('StreamName'))}
71 if action == ACTION_CREATE_STREAM:
72 payload['s'] = data.get('ShardCount')
73 event_publisher.fire_event(event_type, payload=payload)
74 elif action == ACTION_PUT_RECORD:
75 response_body = json.loads(to_str(response.content))
76 event_record = {
77 'data': data['Data'],
78 'partitionKey': data['PartitionKey'],
79 'sequenceNumber': response_body.get('SequenceNumber')
80 }
81 event_records = [event_record]
82 stream_name = data['StreamName']
83 lambda_api.process_kinesis_records(event_records, stream_name)
84 elif action == ACTION_PUT_RECORDS:
85 event_records = []
86 response_body = json.loads(to_str(response.content))
87 response_records = response_body['Records']
88 records = data['Records']
89 for index in range(0, len(records)):
90 record = records[index]
91 event_record = {
92 'data': record['Data'],
93 'partitionKey': record['PartitionKey'],
94 'sequenceNumber': response_records[index].get('SequenceNumber')
95 }
96 event_records.append(event_record)
97 stream_name = data['StreamName']
98 lambda_api.process_kinesis_records(event_records, stream_name)
99 elif action == ACTION_UPDATE_SHARD_COUNT:
100 # Currently kinesalite, which backs the Kinesis implementation for localstack, does
101 # not support UpdateShardCount:
102 # https://github.com/mhart/kinesalite/issues/61
103 #
104 # [Terraform](https://www.terraform.io) makes the call to UpdateShardCount when it
105 # applies Kinesis resources. A Terraform run fails when this is not present.
106 #
107 # The code that follows just returns a successful response, bypassing the 400
108 # response that kinesalite returns.
109 #
110 response = Response()
111 response.status_code = 200
112 content = {
113 'CurrentShardCount': 1,
114 'StreamName': data['StreamName'],
115 'TargetShardCount': data['TargetShardCount']
116 }
117 response.encoding = 'UTF-8'
118 response._content = json.dumps(content)
119 return response
120
121
122 # instantiate listener
123 UPDATE_KINESIS = ProxyListenerKinesis()
124
125
126 def kinesis_error_response(data, action):
127 error_response = Response()
128
129 if action == ACTION_PUT_RECORD:
130 error_response.status_code = 400
131 content = {
132 'ErrorCode': 'ProvisionedThroughputExceededException',
133 'ErrorMessage': 'Rate exceeded for shard X in stream Y under account Z.'
134 }
135 else:
136 error_response.status_code = 200
137 content = {'FailedRecordCount': 1, 'Records': []}
138 for record in data.get('Records', []):
139 content['Records'].append({
140 'ErrorCode': 'ProvisionedThroughputExceededException',
141 'ErrorMessage': 'Rate exceeded for shard X in stream Y under account Z.'
142 })
143
144 error_response._content = json.dumps(content)
145 return error_response
146
[end of localstack/services/kinesis/kinesis_listener.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/localstack/services/kinesis/kinesis_listener.py b/localstack/services/kinesis/kinesis_listener.py
--- a/localstack/services/kinesis/kinesis_listener.py
+++ b/localstack/services/kinesis/kinesis_listener.py
@@ -84,18 +84,19 @@
elif action == ACTION_PUT_RECORDS:
event_records = []
response_body = json.loads(to_str(response.content))
- response_records = response_body['Records']
- records = data['Records']
- for index in range(0, len(records)):
- record = records[index]
- event_record = {
- 'data': record['Data'],
- 'partitionKey': record['PartitionKey'],
- 'sequenceNumber': response_records[index].get('SequenceNumber')
- }
- event_records.append(event_record)
- stream_name = data['StreamName']
- lambda_api.process_kinesis_records(event_records, stream_name)
+ if 'Records' in response_body:
+ response_records = response_body['Records']
+ records = data['Records']
+ for index in range(0, len(records)):
+ record = records[index]
+ event_record = {
+ 'data': record['Data'],
+ 'partitionKey': record['PartitionKey'],
+ 'sequenceNumber': response_records[index].get('SequenceNumber')
+ }
+ event_records.append(event_record)
+ stream_name = data['StreamName']
+ lambda_api.process_kinesis_records(event_records, stream_name)
elif action == ACTION_UPDATE_SHARD_COUNT:
# Currently kinesalite, which backs the Kinesis implementation for localstack, does
# not support UpdateShardCount:
| {"golden_diff": "diff --git a/localstack/services/kinesis/kinesis_listener.py b/localstack/services/kinesis/kinesis_listener.py\n--- a/localstack/services/kinesis/kinesis_listener.py\n+++ b/localstack/services/kinesis/kinesis_listener.py\n@@ -84,18 +84,19 @@\n elif action == ACTION_PUT_RECORDS:\n event_records = []\n response_body = json.loads(to_str(response.content))\n- response_records = response_body['Records']\n- records = data['Records']\n- for index in range(0, len(records)):\n- record = records[index]\n- event_record = {\n- 'data': record['Data'],\n- 'partitionKey': record['PartitionKey'],\n- 'sequenceNumber': response_records[index].get('SequenceNumber')\n- }\n- event_records.append(event_record)\n- stream_name = data['StreamName']\n- lambda_api.process_kinesis_records(event_records, stream_name)\n+ if 'Records' in response_body:\n+ response_records = response_body['Records']\n+ records = data['Records']\n+ for index in range(0, len(records)):\n+ record = records[index]\n+ event_record = {\n+ 'data': record['Data'],\n+ 'partitionKey': record['PartitionKey'],\n+ 'sequenceNumber': response_records[index].get('SequenceNumber')\n+ }\n+ event_records.append(event_record)\n+ stream_name = data['StreamName']\n+ lambda_api.process_kinesis_records(event_records, stream_name)\n elif action == ACTION_UPDATE_SHARD_COUNT:\n # Currently kinesalite, which backs the Kinesis implementation for localstack, does\n # not support UpdateShardCount:\n", "issue": "Kinesis putRecords error\nIt seems that putRecords causes the following error:\r\n\r\n```\r\nStarting mock Kinesis (http port 4568)...\r\nStarting mock S3 (http port 4572)...\r\nStarting mock Firehose service (http port 4573)...\r\nStarting mock Lambda service (http port 4574)...\r\nListening at http://:::4565\r\n* Running on http://0.0.0.0:4563/ (Press CTRL+C to quit)\r\n127.0.0.1 - - [08/May/2018 13:52:25] \"GET / HTTP/1.1\" 200 -\r\nReady.\r\nr127.0.0.1 - - [08/May/2018 13:56:43] \"PUT /prd1541-qa1-vf-sms-send-stream-archive HTTP/1.1\" 200 -\r\n127.0.0.1 - - [08/May/2018 13:56:43] \"HEAD /prd1541-qa1-vf-sms-send-stream-archive HTTP/1.1\" 200 -\r\n2018-05-08T13:56:43:ERROR:localstack.services.generic_proxy: Error forwarding request: 'Records' Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 215, in forward\r\n updated_response = self.proxy.update_listener.return_response(**kwargs)\r\n File \"/opt/code/localstack/localstack/services/kinesis/kinesis_listener.py\", line 49, in return_response\r\n response_records = response_body['Records']\r\nKeyError: 'Records'\r\n\r\n2018-05-08T13:56:43:ERROR:localstack.services.generic_proxy: Error forwarding request: 'Records' Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 215, in forward\r\n updated_response = self.proxy.update_listener.return_response(**kwargs)\r\n File \"/opt/code/localstack/localstack/services/kinesis/kinesis_listener.py\", line 49, in return_response\r\n response_records = response_body['Records']\r\nKeyError: 'Records'\r\n\r\n2018-05-08T13:56:43:ERROR:localstack.services.generic_proxy: Error forwarding request: 'Records' Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 215, in forward\r\n updated_response = self.proxy.update_listener.return_response(**kwargs)\r\n File \"/opt/code/localstack/localstack/services/kinesis/kinesis_listener.py\", line 49, in return_response\r\n response_records = 
response_body['Records']\r\nKeyError: 'Records'\r\n```\r\n\r\nI'm using the latest localstack image and my `Kinesis.putRecords` call goes through the Javascript AWS SDK: \r\n\r\n```\r\n...\r\n const result = await client.putRecords({\r\n StreamName,\r\n Records: records.map(data => ({\r\n Data: new Buffer(JSON.stringify(data)),\r\n PartitionKey: 'testPartition'\r\n }))\r\n }).promise();\r\n...\r\n```\r\n\r\nIt seems like there's a mismatch int the expected format in `kinesis_listener.py`.\n", "before_files": [{"content": "import json\nimport random\nfrom requests.models import Response\nfrom localstack import config\nfrom localstack.utils.common import to_str\nfrom localstack.utils.analytics import event_publisher\nfrom localstack.services.awslambda import lambda_api\nfrom localstack.services.generic_proxy import ProxyListener\n\n# action headers\nACTION_PREFIX = 'Kinesis_20131202'\nACTION_PUT_RECORD = '%s.PutRecord' % ACTION_PREFIX\nACTION_PUT_RECORDS = '%s.PutRecords' % ACTION_PREFIX\nACTION_CREATE_STREAM = '%s.CreateStream' % ACTION_PREFIX\nACTION_DELETE_STREAM = '%s.DeleteStream' % ACTION_PREFIX\nACTION_UPDATE_SHARD_COUNT = '%s.UpdateShardCount' % ACTION_PREFIX\n\n\nclass ProxyListenerKinesis(ProxyListener):\n\n def forward_request(self, method, path, data, headers):\n data = json.loads(to_str(data))\n action = headers.get('X-Amz-Target')\n\n if action == '%s.DescribeStreamSummary' % ACTION_PREFIX:\n stream_arn = data.get('StreamARN') or data['StreamName']\n # TODO fix values below\n result = {\n 'StreamDescriptionSummary': {\n 'ConsumerCount': 0,\n 'EnhancedMonitoring': [],\n 'KeyId': 'string',\n 'OpenShardCount': 0,\n 'RetentionPeriodHours': 1,\n 'StreamARN': stream_arn,\n # 'StreamCreationTimestamp': number,\n 'StreamName': data['StreamName'],\n 'StreamStatus': 'ACTIVE'\n }\n }\n return result\n if action == '%s.DescribeStreamConsumer' % ACTION_PREFIX:\n consumer_arn = data.get('ConsumerARN') or data['ConsumerName']\n consumer_name = data.get('ConsumerName') or data['ConsumerARN']\n result = {\n 'ConsumerDescription': {\n 'ConsumerARN': consumer_arn,\n # 'ConsumerCreationTimestamp': number,\n 'ConsumerName': consumer_name,\n 'ConsumerStatus': 'ACTIVE',\n 'StreamARN': data.get('StreamARN')\n }\n }\n return result\n\n if random.random() < config.KINESIS_ERROR_PROBABILITY:\n action = headers.get('X-Amz-Target')\n if action in [ACTION_PUT_RECORD, ACTION_PUT_RECORDS]:\n return kinesis_error_response(data, action)\n return True\n\n def return_response(self, method, path, data, headers, response):\n action = headers.get('X-Amz-Target')\n data = json.loads(to_str(data))\n\n records = []\n if action in (ACTION_CREATE_STREAM, ACTION_DELETE_STREAM):\n event_type = (event_publisher.EVENT_KINESIS_CREATE_STREAM if action == ACTION_CREATE_STREAM\n else event_publisher.EVENT_KINESIS_DELETE_STREAM)\n payload = {'n': event_publisher.get_hash(data.get('StreamName'))}\n if action == ACTION_CREATE_STREAM:\n payload['s'] = data.get('ShardCount')\n event_publisher.fire_event(event_type, payload=payload)\n elif action == ACTION_PUT_RECORD:\n response_body = json.loads(to_str(response.content))\n event_record = {\n 'data': data['Data'],\n 'partitionKey': data['PartitionKey'],\n 'sequenceNumber': response_body.get('SequenceNumber')\n }\n event_records = [event_record]\n stream_name = data['StreamName']\n lambda_api.process_kinesis_records(event_records, stream_name)\n elif action == ACTION_PUT_RECORDS:\n event_records = []\n response_body = json.loads(to_str(response.content))\n response_records = 
response_body['Records']\n records = data['Records']\n for index in range(0, len(records)):\n record = records[index]\n event_record = {\n 'data': record['Data'],\n 'partitionKey': record['PartitionKey'],\n 'sequenceNumber': response_records[index].get('SequenceNumber')\n }\n event_records.append(event_record)\n stream_name = data['StreamName']\n lambda_api.process_kinesis_records(event_records, stream_name)\n elif action == ACTION_UPDATE_SHARD_COUNT:\n # Currently kinesalite, which backs the Kinesis implementation for localstack, does\n # not support UpdateShardCount:\n # https://github.com/mhart/kinesalite/issues/61\n #\n # [Terraform](https://www.terraform.io) makes the call to UpdateShardCount when it\n # applies Kinesis resources. A Terraform run fails when this is not present.\n #\n # The code that follows just returns a successful response, bypassing the 400\n # response that kinesalite returns.\n #\n response = Response()\n response.status_code = 200\n content = {\n 'CurrentShardCount': 1,\n 'StreamName': data['StreamName'],\n 'TargetShardCount': data['TargetShardCount']\n }\n response.encoding = 'UTF-8'\n response._content = json.dumps(content)\n return response\n\n\n# instantiate listener\nUPDATE_KINESIS = ProxyListenerKinesis()\n\n\ndef kinesis_error_response(data, action):\n error_response = Response()\n\n if action == ACTION_PUT_RECORD:\n error_response.status_code = 400\n content = {\n 'ErrorCode': 'ProvisionedThroughputExceededException',\n 'ErrorMessage': 'Rate exceeded for shard X in stream Y under account Z.'\n }\n else:\n error_response.status_code = 200\n content = {'FailedRecordCount': 1, 'Records': []}\n for record in data.get('Records', []):\n content['Records'].append({\n 'ErrorCode': 'ProvisionedThroughputExceededException',\n 'ErrorMessage': 'Rate exceeded for shard X in stream Y under account Z.'\n })\n\n error_response._content = json.dumps(content)\n return error_response\n", "path": "localstack/services/kinesis/kinesis_listener.py"}]} | 2,862 | 362 |
gh_patches_debug_17748 | rasdani/github-patches | git_diff | canonical__microk8s-2148 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Indentation error in yaml output of microk8s.status
The output of `microk8s.status` is
```
microk8s:
running: False
high-availability:
enabled: False
message: microk8s is not running. Use microk8s inspect for a deeper inspection.
```
which confuses some parsers (e.g. the built-in Python 3.8) due to the extraneous space before `message`.
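
For illustration, PyYAML also rejects the document, because the one-space indent before `message` is inconsistent with the two-space indent of `enabled:` under the same `high-availability:` mapping (sketch below assumes PyYAML is installed):

```
import yaml  # PyYAML, assumed installed for this illustration

doc = (
    "microk8s:\n"
    "  running: False\n"
    "high-availability:\n"
    "  enabled: False\n"
    " message: microk8s is not running. Use microk8s inspect for a deeper inspection.\n"
)

try:
    yaml.safe_load(doc)
except yaml.YAMLError as exc:
    print("parse failed:", exc)  # fails on the one-space indent before 'message'
```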
</issue>
<code>
[start of scripts/wrappers/status.py]
1 #!/usr/bin/python3
2 import os
3 import argparse
4
5 from common.utils import (
6 exit_if_no_permission,
7 exit_if_stopped,
8 is_cluster_locked,
9 is_ha_enabled,
10 get_dqlite_info,
11 wait_for_ready,
12 is_cluster_ready,
13 get_available_addons,
14 get_current_arch,
15 get_addon_by_name,
16 kubectl_get,
17 kubectl_get_clusterroles,
18 )
19
20
21 def is_enabled(addon, item):
22 if addon in item:
23 return True
24 else:
25 filepath = os.path.expandvars(addon)
26 return os.path.isfile(filepath)
27
28 return False
29
30
31 def print_short(isReady, enabled_addons, disabled_addons):
32 if isReady:
33 print("microk8s is running")
34 print("addons:")
35 if enabled_addons and len(enabled_addons) > 0:
36 for enabled in enabled_addons:
37 print("{}: enabled".format(enabled["name"]))
38 if disabled_addons and len(disabled_addons) > 0:
39 for disabled in disabled_addons:
40 print("{}: disabled".format(disabled["name"]))
41 else:
42 print("microk8s is not running. Use microk8s inspect for a deeper inspection.")
43
44
45 def print_pretty(isReady, enabled_addons, disabled_addons):
46 console_formatter = "{:>3} {:<20} # {}"
47 if isReady:
48 print("microk8s is running")
49 if not is_ha_enabled():
50 print("high-availability: no")
51 else:
52 info = get_dqlite_info()
53 if ha_cluster_formed(info):
54 print("high-availability: yes")
55 else:
56 print("high-availability: no")
57
58 masters = "none"
59 standby = "none"
60 for node in info:
61 if node[1] == "voter":
62 if masters == "none":
63 masters = "{}".format(node[0])
64 else:
65 masters = "{} {}".format(masters, node[0])
66 if node[1] == "standby":
67 if standby == "none":
68 standby = "{}".format(node[0])
69 else:
70 standby = "{} {}".format(standby, node[0])
71
72 print("{:>2}{} {}".format("", "datastore master nodes:", masters))
73 print("{:>2}{} {}".format("", "datastore standby nodes:", standby))
74
75 print("addons:")
76 if enabled_addons and len(enabled_addons) > 0:
77 print("{:>2}{}".format("", "enabled:"))
78 for enabled in enabled_addons:
79 print(console_formatter.format("", enabled["name"], enabled["description"]))
80 if disabled_addons and len(disabled_addons) > 0:
81 print("{:>2}{}".format("", "disabled:"))
82 for disabled in disabled_addons:
83 print(console_formatter.format("", disabled["name"], disabled["description"]))
84 else:
85 print("microk8s is not running. Use microk8s inspect for a deeper inspection.")
86
87
88 def print_short_yaml(isReady, enabled_addons, disabled_addons):
89 print("microk8s:")
90 print("{:>2}{} {}".format("", "running:", isReady))
91
92 if isReady:
93 print("addons:")
94 for enabled in enabled_addons:
95 print(" {}: enabled".format(enabled["name"]))
96
97 for disabled in disabled_addons:
98 print(" {}: disabled".format(disabled["name"]))
99 else:
100 print(
101 "{:>2} {} {}".format(
102 "",
103 "message:",
104 "microk8s is not running. Use microk8s inspect for a deeper inspection.",
105 )
106 )
107
108
109 def print_yaml(isReady, enabled_addons, disabled_addons):
110 print("microk8s:")
111 print("{:>2}{} {}".format("", "running:", isReady))
112
113 print("{:>2}".format("high-availability:"))
114 ha_enabled = is_ha_enabled()
115 print("{:>2}{} {}".format("", "enabled:", ha_enabled))
116 if ha_enabled:
117 info = get_dqlite_info()
118 print("{:>2}{}".format("", "nodes:"))
119 for node in info:
120 print("{:>6}address: {:<1}".format("- ", node[0]))
121 print("{:>6}role: {:<1}".format("", node[1]))
122
123 if isReady:
124 print("{:>2}".format("addons:"))
125 for enabled in enabled_addons:
126 print("{:>4}name: {:<1}".format("- ", enabled["name"]))
127 print("{:>4}description: {:<1}".format("", enabled["description"]))
128 print("{:>4}version: {:<1}".format("", enabled["version"]))
129 print("{:>4}status: enabled".format(""))
130
131 for disabled in disabled_addons:
132 print("{:>4}name: {:<1}".format("- ", disabled["name"]))
133 print("{:>4}description: {:<1}".format("", disabled["description"]))
134 print("{:>4}version: {:<1}".format("", disabled["version"]))
135 print("{:>4}status: disabled".format(""))
136 else:
137 print(
138 "{:>2} {} {}".format(
139 "",
140 "message:",
141 "microk8s is not running. Use microk8s inspect for a deeper inspection.",
142 )
143 )
144
145
146 def print_addon_status(enabled):
147 if len(enabled) > 0:
148 print("enabled")
149 else:
150 print("disabled")
151
152
153 def get_status(available_addons, isReady):
154 enabled = []
155 disabled = []
156 if isReady:
157 kube_output = kubectl_get("all")
158 cluster_output = kubectl_get_clusterroles()
159 kube_output = kube_output + cluster_output
160 for addon in available_addons:
161 found = False
162 for row in kube_output.split("\n"):
163 if is_enabled(addon["check_status"], row):
164 enabled.append(addon)
165 found = True
166 break
167 if not found:
168 disabled.append(addon)
169
170 return enabled, disabled
171
172
173 def ha_cluster_formed(info):
174 voters = 0
175 for node in info:
176 if node[1] == "voter":
177 voters += 1
178 ha_formed = False
179 if voters > 2:
180 ha_formed = True
181 return ha_formed
182
183
184 if __name__ == "__main__":
185 exit_if_no_permission()
186 exit_if_stopped()
187 is_cluster_locked()
188
189 # initiate the parser with a description
190 parser = argparse.ArgumentParser(
191 description="Microk8s cluster status check.", prog="microk8s status"
192 )
193 parser.add_argument(
194 "--format",
195 help="print cluster and addon status, output can be in yaml, pretty or short",
196 default="pretty",
197 choices={"pretty", "yaml", "short"},
198 )
199 parser.add_argument(
200 "-w", "--wait-ready", action="store_true", help="wait until the cluster is in ready state"
201 )
202 parser.add_argument(
203 "-t",
204 "--timeout",
205 help="specify a timeout in seconds when waiting for the cluster to be ready.",
206 type=int,
207 default=0,
208 )
209 parser.add_argument("-a", "--addon", help="check the status of an addon.", default="all")
210 parser.add_argument(
211 "--yaml", action="store_true", help="DEPRECATED, use '--format yaml' instead"
212 )
213
214 # read arguments from the command line
215 args = parser.parse_args()
216
217 wait_ready = args.wait_ready
218 timeout = args.timeout
219 yaml_short = args.yaml
220
221 if wait_ready:
222 isReady = wait_for_ready(timeout)
223 else:
224 isReady = is_cluster_ready()
225
226 available_addons = get_available_addons(get_current_arch())
227
228 if args.addon != "all":
229 available_addons = get_addon_by_name(available_addons, args.addon)
230
231 enabled, disabled = get_status(available_addons, isReady)
232
233 if args.addon != "all":
234 print_addon_status(enabled)
235 else:
236 if args.format == "yaml":
237 print_yaml(isReady, enabled, disabled)
238 elif args.format == "short":
239 print_short(isReady, enabled, disabled)
240 else:
241 if yaml_short:
242 print_short_yaml(isReady, enabled, disabled)
243 else:
244 print_pretty(isReady, enabled, disabled)
245
[end of scripts/wrappers/status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/wrappers/status.py b/scripts/wrappers/status.py
--- a/scripts/wrappers/status.py
+++ b/scripts/wrappers/status.py
@@ -98,7 +98,7 @@
print(" {}: disabled".format(disabled["name"]))
else:
print(
- "{:>2} {} {}".format(
+ "{:>2}{} {}".format(
"",
"message:",
"microk8s is not running. Use microk8s inspect for a deeper inspection.",
@@ -135,7 +135,7 @@
print("{:>4}status: disabled".format(""))
else:
print(
- "{:>2} {} {}".format(
+ "{:>2}{} {}".format(
"",
"message:",
"microk8s is not running. Use microk8s inspect for a deeper inspection.",
| {"golden_diff": "diff --git a/scripts/wrappers/status.py b/scripts/wrappers/status.py\n--- a/scripts/wrappers/status.py\n+++ b/scripts/wrappers/status.py\n@@ -98,7 +98,7 @@\n print(\" {}: disabled\".format(disabled[\"name\"]))\n else:\n print(\n- \"{:>2} {} {}\".format(\n+ \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. Use microk8s inspect for a deeper inspection.\",\n@@ -135,7 +135,7 @@\n print(\"{:>4}status: disabled\".format(\"\"))\n else:\n print(\n- \"{:>2} {} {}\".format(\n+ \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. Use microk8s inspect for a deeper inspection.\",\n", "issue": "Indentation error in yaml output of microk8s.status\nThe output of `microk8s.status` is\r\n\r\n```\r\nmicrok8s:\r\n running: False\r\nhigh-availability:\r\n enabled: False\r\n message: microk8s is not running. Use microk8s inspect for a deeper inspection.\r\n```\r\n\r\nwhich confuses some parsers (e.g. the built-in Python 3.8) due to the extraneous space before `message`.\n", "before_files": [{"content": "#!/usr/bin/python3\nimport os\nimport argparse\n\nfrom common.utils import (\n exit_if_no_permission,\n exit_if_stopped,\n is_cluster_locked,\n is_ha_enabled,\n get_dqlite_info,\n wait_for_ready,\n is_cluster_ready,\n get_available_addons,\n get_current_arch,\n get_addon_by_name,\n kubectl_get,\n kubectl_get_clusterroles,\n)\n\n\ndef is_enabled(addon, item):\n if addon in item:\n return True\n else:\n filepath = os.path.expandvars(addon)\n return os.path.isfile(filepath)\n\n return False\n\n\ndef print_short(isReady, enabled_addons, disabled_addons):\n if isReady:\n print(\"microk8s is running\")\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n for enabled in enabled_addons:\n print(\"{}: enabled\".format(enabled[\"name\"]))\n if disabled_addons and len(disabled_addons) > 0:\n for disabled in disabled_addons:\n print(\"{}: disabled\".format(disabled[\"name\"]))\n else:\n print(\"microk8s is not running. Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_pretty(isReady, enabled_addons, disabled_addons):\n console_formatter = \"{:>3} {:<20} # {}\"\n if isReady:\n print(\"microk8s is running\")\n if not is_ha_enabled():\n print(\"high-availability: no\")\n else:\n info = get_dqlite_info()\n if ha_cluster_formed(info):\n print(\"high-availability: yes\")\n else:\n print(\"high-availability: no\")\n\n masters = \"none\"\n standby = \"none\"\n for node in info:\n if node[1] == \"voter\":\n if masters == \"none\":\n masters = \"{}\".format(node[0])\n else:\n masters = \"{} {}\".format(masters, node[0])\n if node[1] == \"standby\":\n if standby == \"none\":\n standby = \"{}\".format(node[0])\n else:\n standby = \"{} {}\".format(standby, node[0])\n\n print(\"{:>2}{} {}\".format(\"\", \"datastore master nodes:\", masters))\n print(\"{:>2}{} {}\".format(\"\", \"datastore standby nodes:\", standby))\n\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"enabled:\"))\n for enabled in enabled_addons:\n print(console_formatter.format(\"\", enabled[\"name\"], enabled[\"description\"]))\n if disabled_addons and len(disabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"disabled:\"))\n for disabled in disabled_addons:\n print(console_formatter.format(\"\", disabled[\"name\"], disabled[\"description\"]))\n else:\n print(\"microk8s is not running. 
Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_short_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n if isReady:\n print(\"addons:\")\n for enabled in enabled_addons:\n print(\" {}: enabled\".format(enabled[\"name\"]))\n\n for disabled in disabled_addons:\n print(\" {}: disabled\".format(disabled[\"name\"]))\n else:\n print(\n \"{:>2} {} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n print(\"{:>2}\".format(\"high-availability:\"))\n ha_enabled = is_ha_enabled()\n print(\"{:>2}{} {}\".format(\"\", \"enabled:\", ha_enabled))\n if ha_enabled:\n info = get_dqlite_info()\n print(\"{:>2}{}\".format(\"\", \"nodes:\"))\n for node in info:\n print(\"{:>6}address: {:<1}\".format(\"- \", node[0]))\n print(\"{:>6}role: {:<1}\".format(\"\", node[1]))\n\n if isReady:\n print(\"{:>2}\".format(\"addons:\"))\n for enabled in enabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", enabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", enabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", enabled[\"version\"]))\n print(\"{:>4}status: enabled\".format(\"\"))\n\n for disabled in disabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", disabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", disabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", disabled[\"version\"]))\n print(\"{:>4}status: disabled\".format(\"\"))\n else:\n print(\n \"{:>2} {} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. 
Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_addon_status(enabled):\n if len(enabled) > 0:\n print(\"enabled\")\n else:\n print(\"disabled\")\n\n\ndef get_status(available_addons, isReady):\n enabled = []\n disabled = []\n if isReady:\n kube_output = kubectl_get(\"all\")\n cluster_output = kubectl_get_clusterroles()\n kube_output = kube_output + cluster_output\n for addon in available_addons:\n found = False\n for row in kube_output.split(\"\\n\"):\n if is_enabled(addon[\"check_status\"], row):\n enabled.append(addon)\n found = True\n break\n if not found:\n disabled.append(addon)\n\n return enabled, disabled\n\n\ndef ha_cluster_formed(info):\n voters = 0\n for node in info:\n if node[1] == \"voter\":\n voters += 1\n ha_formed = False\n if voters > 2:\n ha_formed = True\n return ha_formed\n\n\nif __name__ == \"__main__\":\n exit_if_no_permission()\n exit_if_stopped()\n is_cluster_locked()\n\n # initiate the parser with a description\n parser = argparse.ArgumentParser(\n description=\"Microk8s cluster status check.\", prog=\"microk8s status\"\n )\n parser.add_argument(\n \"--format\",\n help=\"print cluster and addon status, output can be in yaml, pretty or short\",\n default=\"pretty\",\n choices={\"pretty\", \"yaml\", \"short\"},\n )\n parser.add_argument(\n \"-w\", \"--wait-ready\", action=\"store_true\", help=\"wait until the cluster is in ready state\"\n )\n parser.add_argument(\n \"-t\",\n \"--timeout\",\n help=\"specify a timeout in seconds when waiting for the cluster to be ready.\",\n type=int,\n default=0,\n )\n parser.add_argument(\"-a\", \"--addon\", help=\"check the status of an addon.\", default=\"all\")\n parser.add_argument(\n \"--yaml\", action=\"store_true\", help=\"DEPRECATED, use '--format yaml' instead\"\n )\n\n # read arguments from the command line\n args = parser.parse_args()\n\n wait_ready = args.wait_ready\n timeout = args.timeout\n yaml_short = args.yaml\n\n if wait_ready:\n isReady = wait_for_ready(timeout)\n else:\n isReady = is_cluster_ready()\n\n available_addons = get_available_addons(get_current_arch())\n\n if args.addon != \"all\":\n available_addons = get_addon_by_name(available_addons, args.addon)\n\n enabled, disabled = get_status(available_addons, isReady)\n\n if args.addon != \"all\":\n print_addon_status(enabled)\n else:\n if args.format == \"yaml\":\n print_yaml(isReady, enabled, disabled)\n elif args.format == \"short\":\n print_short(isReady, enabled, disabled)\n else:\n if yaml_short:\n print_short_yaml(isReady, enabled, disabled)\n else:\n print_pretty(isReady, enabled, disabled)\n", "path": "scripts/wrappers/status.py"}]} | 3,080 | 193 |
gh_patches_debug_14371 | rasdani/github-patches | git_diff | aio-libs__aiohttp-6164 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tests_require: add trustme
It is required since https://github.com/aio-libs/aiohttp/pull/3487.
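
For context, the tests pull in `trustme` to mint throwaway TLS certificates for HTTPS test servers, which is why it belongs in the test requirements. A rough sketch of that kind of usage (hypothetical illustration, not aiohttp's actual fixture code):

```
import ssl

import trustme  # assumed installed as a test dependency

ca = trustme.CA()                                      # throwaway certificate authority
server_cert = ca.issue_cert("localhost", "127.0.0.1")  # cert for the test server

server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_cert.configure_cert(server_ctx)                 # server presents the test cert

client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ca.configure_trust(client_ctx)                         # client trusts only the test CA
```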
<!-- Thank you for your contribution! -->
## What do these changes do?
<!-- Please give a short brief about these changes. -->
## Are there changes in behavior for the user?
<!-- Outline any notable behaviour for the end users. -->
## Related issue number
<!-- Are there any issues opened that will be resolved by merging this change? -->
## Checklist
- [ ] I think the code is well written
- [ ] Unit tests for the changes exist
- [ ] Documentation reflects the changes
- [ ] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
* The format is <Name> <Surname>.
* Please keep alphabetical order, the file is sorted by names.
- [ ] Add a new news fragment into the `CHANGES` folder
* name it `<issue_id>.<type>` for example (588.bugfix)
* if you don't have an `issue_id` change it to the pr id after creating the pr
* ensure type is one of the following:
* `.feature`: Signifying a new feature.
* `.bugfix`: Signifying a bug fix.
* `.doc`: Signifying a documentation improvement.
* `.removal`: Signifying a deprecation or removal of public API.
* `.misc`: A ticket has been closed, but it is not of interest to users.
* Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
</issue>
<code>
[start of setup.py]
1 import pathlib
2 import re
3 import sys
4 from distutils.command.build_ext import build_ext
5 from distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError
6
7 from setuptools import Extension, setup
8
9 if sys.version_info < (3, 6):
10 raise RuntimeError("aiohttp 3.7+ requires Python 3.6+")
11
12 here = pathlib.Path(__file__).parent
13
14 if (here / ".git").exists() and not (here / "vendor/http-parser/README.md").exists():
15 print("Install submodules when building from git clone", file=sys.stderr)
16 print("Hint:", file=sys.stderr)
17 print(" git submodule update --init", file=sys.stderr)
18 sys.exit(2)
19
20
21 # NOTE: makefile cythonizes all Cython modules
22
23 extensions = [
24 Extension("aiohttp._websocket", ["aiohttp/_websocket.c"]),
25 Extension(
26 "aiohttp._http_parser",
27 [
28 "aiohttp/_http_parser.c",
29 "vendor/http-parser/http_parser.c",
30 "aiohttp/_find_header.c",
31 ],
32 define_macros=[("HTTP_PARSER_STRICT", 0)],
33 ),
34 Extension("aiohttp._helpers", ["aiohttp/_helpers.c"]),
35 Extension("aiohttp._http_writer", ["aiohttp/_http_writer.c"]),
36 ]
37
38
39 class BuildFailed(Exception):
40 pass
41
42
43 class ve_build_ext(build_ext):
44 # This class allows C extension building to fail.
45
46 def run(self):
47 try:
48 build_ext.run(self)
49 except (DistutilsPlatformError, FileNotFoundError):
50 raise BuildFailed()
51
52 def build_extension(self, ext):
53 try:
54 build_ext.build_extension(self, ext)
55 except (CCompilerError, DistutilsExecError, DistutilsPlatformError, ValueError):
56 raise BuildFailed()
57
58
59 txt = (here / "aiohttp" / "__init__.py").read_text("utf-8")
60 try:
61 version = re.findall(r'^__version__ = "([^"]+)"\r?$', txt, re.M)[0]
62 except IndexError:
63 raise RuntimeError("Unable to determine version.")
64
65 install_requires = [
66 "attrs>=17.3.0",
67 "charset-normalizer>=2.0,<3.0",
68 "multidict>=4.5,<7.0",
69 "async_timeout>=4.0.0a3,<5.0",
70 'asynctest==0.13.0; python_version<"3.8"',
71 "yarl>=1.0,<2.0",
72 'idna-ssl>=1.0; python_version<"3.7"',
73 'typing_extensions>=3.7.4; python_version<"3.8"',
74 "frozenlist>=1.1.1",
75 "aiosignal>=1.1.2",
76 ]
77
78
79 def read(f):
80 return (here / f).read_text("utf-8").strip()
81
82
83 NEEDS_PYTEST = {"pytest", "test"}.intersection(sys.argv)
84 pytest_runner = ["pytest-runner"] if NEEDS_PYTEST else []
85
86 tests_require = [
87 "pytest",
88 "gunicorn",
89 "pytest-timeout",
90 "async-generator",
91 "pytest-xdist",
92 ]
93
94
95 args = dict(
96 name="aiohttp",
97 version=version,
98 description="Async http client/server framework (asyncio)",
99 long_description=read("README.rst"),
100 long_description_content_type="text/x-rst",
101 classifiers=[
102 "License :: OSI Approved :: Apache Software License",
103 "Intended Audience :: Developers",
104 "Programming Language :: Python",
105 "Programming Language :: Python :: 3",
106 "Programming Language :: Python :: 3.6",
107 "Programming Language :: Python :: 3.7",
108 "Programming Language :: Python :: 3.8",
109 "Programming Language :: Python :: 3.9",
110 "Programming Language :: Python :: 3.10",
111 "Development Status :: 5 - Production/Stable",
112 "Operating System :: POSIX",
113 "Operating System :: MacOS :: MacOS X",
114 "Operating System :: Microsoft :: Windows",
115 "Topic :: Internet :: WWW/HTTP",
116 "Framework :: AsyncIO",
117 ],
118 author="Nikolay Kim",
119 author_email="[email protected]",
120 maintainer=", ".join(
121 (
122 "Nikolay Kim <[email protected]>",
123 "Andrew Svetlov <[email protected]>",
124 )
125 ),
126 maintainer_email="[email protected]",
127 url="https://github.com/aio-libs/aiohttp",
128 project_urls={
129 "Chat: Gitter": "https://gitter.im/aio-libs/Lobby",
130 "CI: GitHub Actions": "https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI", # noqa
131 "Coverage: codecov": "https://codecov.io/github/aio-libs/aiohttp",
132 "Docs: RTD": "https://docs.aiohttp.org",
133 "GitHub: issues": "https://github.com/aio-libs/aiohttp/issues",
134 "GitHub: repo": "https://github.com/aio-libs/aiohttp",
135 },
136 license="Apache 2",
137 packages=["aiohttp"],
138 python_requires=">=3.6",
139 install_requires=install_requires,
140 extras_require={
141 "speedups": [
142 "aiodns",
143 "Brotli",
144 "cchardet",
145 ],
146 },
147 tests_require=tests_require,
148 setup_requires=pytest_runner,
149 include_package_data=True,
150 ext_modules=extensions,
151 cmdclass=dict(build_ext=ve_build_ext),
152 )
153
154 try:
155 setup(**args)
156 except BuildFailed:
157 print("************************************************************")
158 print("Cannot compile C accelerator module, use pure python version")
159 print("************************************************************")
160 del args["ext_modules"]
161 del args["cmdclass"]
162 setup(**args)
163
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -80,18 +80,6 @@
return (here / f).read_text("utf-8").strip()
-NEEDS_PYTEST = {"pytest", "test"}.intersection(sys.argv)
-pytest_runner = ["pytest-runner"] if NEEDS_PYTEST else []
-
-tests_require = [
- "pytest",
- "gunicorn",
- "pytest-timeout",
- "async-generator",
- "pytest-xdist",
-]
-
-
args = dict(
name="aiohttp",
version=version,
@@ -144,8 +132,6 @@
"cchardet",
],
},
- tests_require=tests_require,
- setup_requires=pytest_runner,
include_package_data=True,
ext_modules=extensions,
cmdclass=dict(build_ext=ve_build_ext),
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -80,18 +80,6 @@\n return (here / f).read_text(\"utf-8\").strip()\n \n \n-NEEDS_PYTEST = {\"pytest\", \"test\"}.intersection(sys.argv)\n-pytest_runner = [\"pytest-runner\"] if NEEDS_PYTEST else []\n-\n-tests_require = [\n- \"pytest\",\n- \"gunicorn\",\n- \"pytest-timeout\",\n- \"async-generator\",\n- \"pytest-xdist\",\n-]\n-\n-\n args = dict(\n name=\"aiohttp\",\n version=version,\n@@ -144,8 +132,6 @@\n \"cchardet\",\n ],\n },\n- tests_require=tests_require,\n- setup_requires=pytest_runner,\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n", "issue": "tests_require: add trustme\nIt is required since https://github.com/aio-libs/aiohttp/pull/3487.\r\n\r\n<!-- Thank you for your contribution! -->\r\n\r\n## What do these changes do?\r\n\r\n<!-- Please give a short brief about these changes. -->\r\n\r\n## Are there changes in behavior for the user?\r\n\r\n<!-- Outline any notable behaviour for the end users. -->\r\n\r\n## Related issue number\r\n\r\n<!-- Are there any issues opened that will be resolved by merging this change? -->\r\n\r\n## Checklist\r\n\r\n- [ ] I think the code is well written\r\n- [ ] Unit tests for the changes exist\r\n- [ ] Documentation reflects the changes\r\n- [ ] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`\r\n * The format is <Name> <Surname>.\r\n * Please keep alphabetical order, the file is sorted by names. \r\n- [ ] Add a new news fragment into the `CHANGES` folder\r\n * name it `<issue_id>.<type>` for example (588.bugfix)\r\n * if you don't have an `issue_id` change it to the pr id after creating the pr\r\n * ensure type is one of the following:\r\n * `.feature`: Signifying a new feature.\r\n * `.bugfix`: Signifying a bug fix.\r\n * `.doc`: Signifying a documentation improvement.\r\n * `.removal`: Signifying a deprecation or removal of public API.\r\n * `.misc`: A ticket has been closed, but it is not of interest to users.\r\n * Make sure to use full sentences with correct case and punctuation, for example: \"Fix issue with non-ascii contents in doctest text files.\"\r\n\n", "before_files": [{"content": "import pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError\n\nfrom setuptools import Extension, setup\n\nif sys.version_info < (3, 6):\n raise RuntimeError(\"aiohttp 3.7+ requires Python 3.6+\")\n\nhere = pathlib.Path(__file__).parent\n\nif (here / \".git\").exists() and not (here / \"vendor/http-parser/README.md\").exists():\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [\n Extension(\"aiohttp._websocket\", [\"aiohttp/_websocket.c\"]),\n Extension(\n \"aiohttp._http_parser\",\n [\n \"aiohttp/_http_parser.c\",\n \"vendor/http-parser/http_parser.c\",\n \"aiohttp/_find_header.c\",\n ],\n define_macros=[(\"HTTP_PARSER_STRICT\", 0)],\n ),\n Extension(\"aiohttp._helpers\", [\"aiohttp/_helpers.c\"]),\n Extension(\"aiohttp._http_writer\", [\"aiohttp/_http_writer.c\"]),\n]\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except (DistutilsPlatformError, 
FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError, DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\ntxt = (here / \"aiohttp\" / \"__init__.py\").read_text(\"utf-8\")\ntry:\n version = re.findall(r'^__version__ = \"([^\"]+)\"\\r?$', txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\ninstall_requires = [\n \"attrs>=17.3.0\",\n \"charset-normalizer>=2.0,<3.0\",\n \"multidict>=4.5,<7.0\",\n \"async_timeout>=4.0.0a3,<5.0\",\n 'asynctest==0.13.0; python_version<\"3.8\"',\n \"yarl>=1.0,<2.0\",\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.7.4; python_version<\"3.8\"',\n \"frozenlist>=1.1.1\",\n \"aiosignal>=1.1.2\",\n]\n\n\ndef read(f):\n return (here / f).read_text(\"utf-8\").strip()\n\n\nNEEDS_PYTEST = {\"pytest\", \"test\"}.intersection(sys.argv)\npytest_runner = [\"pytest-runner\"] if NEEDS_PYTEST else []\n\ntests_require = [\n \"pytest\",\n \"gunicorn\",\n \"pytest-timeout\",\n \"async-generator\",\n \"pytest-xdist\",\n]\n\n\nargs = dict(\n name=\"aiohttp\",\n version=version,\n description=\"Async http client/server framework (asyncio)\",\n long_description=read(\"README.rst\"),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Development Status :: 5 - Production/Stable\",\n \"Operating System :: POSIX\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Framework :: AsyncIO\",\n ],\n author=\"Nikolay Kim\",\n author_email=\"[email protected]\",\n maintainer=\", \".join(\n (\n \"Nikolay Kim <[email protected]>\",\n \"Andrew Svetlov <[email protected]>\",\n )\n ),\n maintainer_email=\"[email protected]\",\n url=\"https://github.com/aio-libs/aiohttp\",\n project_urls={\n \"Chat: Gitter\": \"https://gitter.im/aio-libs/Lobby\",\n \"CI: GitHub Actions\": \"https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI\", # noqa\n \"Coverage: codecov\": \"https://codecov.io/github/aio-libs/aiohttp\",\n \"Docs: RTD\": \"https://docs.aiohttp.org\",\n \"GitHub: issues\": \"https://github.com/aio-libs/aiohttp/issues\",\n \"GitHub: repo\": \"https://github.com/aio-libs/aiohttp\",\n },\n license=\"Apache 2\",\n packages=[\"aiohttp\"],\n python_requires=\">=3.6\",\n install_requires=install_requires,\n extras_require={\n \"speedups\": [\n \"aiodns\",\n \"Brotli\",\n \"cchardet\",\n ],\n },\n tests_require=tests_require,\n setup_requires=pytest_runner,\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n)\n\ntry:\n setup(**args)\nexcept BuildFailed:\n print(\"************************************************************\")\n print(\"Cannot compile C accelerator module, use pure python version\")\n print(\"************************************************************\")\n del args[\"ext_modules\"]\n del args[\"cmdclass\"]\n setup(**args)\n", "path": "setup.py"}]} | 2,568 | 204 |
gh_patches_debug_20065 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-1287 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Device Support Request] New manufacturerName for DDS238-2 Zigbee
In one chat I got information that, as of December 2021, this device ships with a new manufacturerName attribute: _TZE200_ewxhg6o9. The new version also contains the fix for the switch issue (see the details in the #994 thread) and maybe something else, like an EnergyFactor attribute.
I can update the quirk, but I do not physically have the new version of the DDS238-2 device to test with, so if someone has the new version of the device and can support me with the testing, I can help with contributing this update.
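For reference, the change being asked for is small; a minimal, untested sketch of the idea is shown below. The new identifier is taken from this report, and the tuple would sit next to the existing entry in the quirk's `MODELS_INFO` signature (see the code further down):

```python
from zhaquirks.const import MODELS_INFO

# Hypothetical sketch (untested): the quirk's signature would list both names.
signature_models = {
    MODELS_INFO: [
        ("_TZE200_byzdayie", "TS0601"),  # existing DDS238-2 identifier
        ("_TZE200_ewxhg6o9", "TS0601"),  # identifier reported above
    ],
}
```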
</issue>
<code>
[start of zhaquirks/tuya/ts0601_din_power.py]
1 """Tuya Din Power Meter."""
2 from zigpy.profiles import zha
3 import zigpy.types as t
4 from zigpy.zcl.clusters.general import Basic, Groups, Ota, Scenes, Time
5 from zigpy.zcl.clusters.homeautomation import ElectricalMeasurement
6 from zigpy.zcl.clusters.smartenergy import Metering
7
8 from zhaquirks import Bus, LocalDataCluster
9 from zhaquirks.const import (
10 DEVICE_TYPE,
11 ENDPOINTS,
12 INPUT_CLUSTERS,
13 MODELS_INFO,
14 OUTPUT_CLUSTERS,
15 PROFILE_ID,
16 )
17 from zhaquirks.tuya import TuyaManufClusterAttributes, TuyaOnOff, TuyaSwitch
18
19 TUYA_TOTAL_ENERGY_ATTR = 0x0211
20 TUYA_CURRENT_ATTR = 0x0212
21 TUYA_POWER_ATTR = 0x0213
22 TUYA_VOLTAGE_ATTR = 0x0214
23 TUYA_DIN_SWITCH_ATTR = 0x0101
24
25 SWITCH_EVENT = "switch_event"
26
27
28 class TuyaManufClusterDinPower(TuyaManufClusterAttributes):
29 """Manufacturer Specific Cluster of the Tuya Power Meter device."""
30
31 manufacturer_attributes = {
32 TUYA_TOTAL_ENERGY_ATTR: ("energy", t.uint16_t),
33 TUYA_CURRENT_ATTR: ("current", t.int16s),
34 TUYA_POWER_ATTR: ("power", t.uint16_t),
35 TUYA_VOLTAGE_ATTR: ("voltage", t.uint16_t),
36 TUYA_DIN_SWITCH_ATTR: ("switch", t.uint8_t),
37 }
38
39 def _update_attribute(self, attrid, value):
40 super()._update_attribute(attrid, value)
41 if attrid == TUYA_TOTAL_ENERGY_ATTR:
42 self.endpoint.smartenergy_metering.energy_reported(value / 100)
43 elif attrid == TUYA_CURRENT_ATTR:
44 self.endpoint.electrical_measurement.current_reported(value)
45 elif attrid == TUYA_POWER_ATTR:
46 self.endpoint.electrical_measurement.power_reported(value / 10)
47 elif attrid == TUYA_VOLTAGE_ATTR:
48 self.endpoint.electrical_measurement.voltage_reported(value / 10)
49 elif attrid == TUYA_DIN_SWITCH_ATTR:
50 self.endpoint.device.switch_bus.listener_event(SWITCH_EVENT, attrid, value)
51
52
53 class TuyaPowerMeasurement(LocalDataCluster, ElectricalMeasurement):
54 """Custom class for power, voltage and current measurement."""
55
56 cluster_id = ElectricalMeasurement.cluster_id
57
58 POWER_ID = 0x050B
59 VOLTAGE_ID = 0x0505
60 CURRENT_ID = 0x0508
61
62 AC_CURRENT_MULTIPLIER = 0x0602
63 AC_CURRENT_DIVISOR = 0x0603
64
65 _CONSTANT_ATTRIBUTES = {AC_CURRENT_MULTIPLIER: 1, AC_CURRENT_DIVISOR: 1000}
66
67 def voltage_reported(self, value):
68 """Voltage reported."""
69 self._update_attribute(self.VOLTAGE_ID, value)
70
71 def power_reported(self, value):
72 """Power reported."""
73 self._update_attribute(self.POWER_ID, value)
74
75 def current_reported(self, value):
76 """Ampers reported."""
77 self._update_attribute(self.CURRENT_ID, value)
78
79
80 class TuyaElectricalMeasurement(LocalDataCluster, Metering):
81 """Custom class for total energy measurement."""
82
83 cluster_id = Metering.cluster_id
84 CURRENT_ID = 0x0000
85 POWER_WATT = 0x0000
86
87 """Setting unit of measurement."""
88 _CONSTANT_ATTRIBUTES = {0x0300: POWER_WATT}
89
90 def energy_reported(self, value):
91 """Summation Energy reported."""
92 self._update_attribute(self.CURRENT_ID, value)
93
94
95 class TuyaPowerMeter(TuyaSwitch):
96 """Tuya power meter device."""
97
98 def __init__(self, *args, **kwargs):
99 """Init device."""
100 self.switch_bus = Bus()
101 super().__init__(*args, **kwargs)
102
103 signature = {
104 # "node_descriptor": "<NodeDescriptor byte1=1 byte2=64 mac_capability_flags=142 manufacturer_code=4098
105 # maximum_buffer_size=82 maximum_incoming_transfer_size=82 server_mask=11264
106 # maximum_outgoing_transfer_size=82 descriptor_capability_field=0>",
107 # device_version=1
108 # input_clusters=[0x0000, 0x0004, 0x0005, 0xef00]
109 # output_clusters=[0x000a, 0x0019]
110 MODELS_INFO: [
111 ("_TZE200_byzdayie", "TS0601"),
112 ],
113 ENDPOINTS: {
114 # <SimpleDescriptor endpoint=1 profile=260 device_type=51
115 # device_version=1
116 # input_clusters=[0, 4, 5, 61184]
117 # output_clusters=[10, 25]>
118 1: {
119 PROFILE_ID: zha.PROFILE_ID,
120 DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
121 INPUT_CLUSTERS: [
122 Basic.cluster_id,
123 Groups.cluster_id,
124 Scenes.cluster_id,
125 TuyaManufClusterAttributes.cluster_id,
126 ],
127 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
128 }
129 },
130 }
131
132 replacement = {
133 ENDPOINTS: {
134 1: {
135 PROFILE_ID: zha.PROFILE_ID,
136 DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
137 INPUT_CLUSTERS: [
138 Basic.cluster_id,
139 Groups.cluster_id,
140 Scenes.cluster_id,
141 TuyaManufClusterDinPower,
142 TuyaPowerMeasurement,
143 TuyaElectricalMeasurement,
144 TuyaOnOff,
145 ],
146 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
147 }
148 }
149 }
150
[end of zhaquirks/tuya/ts0601_din_power.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zhaquirks/tuya/ts0601_din_power.py b/zhaquirks/tuya/ts0601_din_power.py
--- a/zhaquirks/tuya/ts0601_din_power.py
+++ b/zhaquirks/tuya/ts0601_din_power.py
@@ -47,7 +47,9 @@
elif attrid == TUYA_VOLTAGE_ATTR:
self.endpoint.electrical_measurement.voltage_reported(value / 10)
elif attrid == TUYA_DIN_SWITCH_ATTR:
- self.endpoint.device.switch_bus.listener_event(SWITCH_EVENT, attrid, value)
+ self.endpoint.device.switch_bus.listener_event(
+ SWITCH_EVENT, self.endpoint.endpoint_id, value
+ )
class TuyaPowerMeasurement(LocalDataCluster, ElectricalMeasurement):
@@ -109,6 +111,7 @@
# output_clusters=[0x000a, 0x0019]
MODELS_INFO: [
("_TZE200_byzdayie", "TS0601"),
+ ("_TZE200_ewxhg6o9", "TS0601"),
],
ENDPOINTS: {
# <SimpleDescriptor endpoint=1 profile=260 device_type=51
| {"golden_diff": "diff --git a/zhaquirks/tuya/ts0601_din_power.py b/zhaquirks/tuya/ts0601_din_power.py\n--- a/zhaquirks/tuya/ts0601_din_power.py\n+++ b/zhaquirks/tuya/ts0601_din_power.py\n@@ -47,7 +47,9 @@\n elif attrid == TUYA_VOLTAGE_ATTR:\n self.endpoint.electrical_measurement.voltage_reported(value / 10)\n elif attrid == TUYA_DIN_SWITCH_ATTR:\n- self.endpoint.device.switch_bus.listener_event(SWITCH_EVENT, attrid, value)\n+ self.endpoint.device.switch_bus.listener_event(\n+ SWITCH_EVENT, self.endpoint.endpoint_id, value\n+ )\n \n \n class TuyaPowerMeasurement(LocalDataCluster, ElectricalMeasurement):\n@@ -109,6 +111,7 @@\n # output_clusters=[0x000a, 0x0019]\n MODELS_INFO: [\n (\"_TZE200_byzdayie\", \"TS0601\"),\n+ (\"_TZE200_ewxhg6o9\", \"TS0601\"),\n ],\n ENDPOINTS: {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=51\n", "issue": "[Device Support Request] New manufacturerName for DDS238-2 Zigbee\nIn one chat I got information that from December 2021 this device now is sending with a new manufacturerName attribute: _TZE200_ewxhg6o9. The new version also contains the fix the switch issue (see the details in the #994 thread) and maybe something else, like EnergyFactor attribute.\r\nI can update the quirck but I have no physically the new version of the DDS238-2 device to test, so if someone has the new version of device and can support me with the testing, I can help with the contribution of this update.\n", "before_files": [{"content": "\"\"\"Tuya Din Power Meter.\"\"\"\nfrom zigpy.profiles import zha\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import Basic, Groups, Ota, Scenes, Time\nfrom zigpy.zcl.clusters.homeautomation import ElectricalMeasurement\nfrom zigpy.zcl.clusters.smartenergy import Metering\n\nfrom zhaquirks import Bus, LocalDataCluster\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\nfrom zhaquirks.tuya import TuyaManufClusterAttributes, TuyaOnOff, TuyaSwitch\n\nTUYA_TOTAL_ENERGY_ATTR = 0x0211\nTUYA_CURRENT_ATTR = 0x0212\nTUYA_POWER_ATTR = 0x0213\nTUYA_VOLTAGE_ATTR = 0x0214\nTUYA_DIN_SWITCH_ATTR = 0x0101\n\nSWITCH_EVENT = \"switch_event\"\n\n\nclass TuyaManufClusterDinPower(TuyaManufClusterAttributes):\n \"\"\"Manufacturer Specific Cluster of the Tuya Power Meter device.\"\"\"\n\n manufacturer_attributes = {\n TUYA_TOTAL_ENERGY_ATTR: (\"energy\", t.uint16_t),\n TUYA_CURRENT_ATTR: (\"current\", t.int16s),\n TUYA_POWER_ATTR: (\"power\", t.uint16_t),\n TUYA_VOLTAGE_ATTR: (\"voltage\", t.uint16_t),\n TUYA_DIN_SWITCH_ATTR: (\"switch\", t.uint8_t),\n }\n\n def _update_attribute(self, attrid, value):\n super()._update_attribute(attrid, value)\n if attrid == TUYA_TOTAL_ENERGY_ATTR:\n self.endpoint.smartenergy_metering.energy_reported(value / 100)\n elif attrid == TUYA_CURRENT_ATTR:\n self.endpoint.electrical_measurement.current_reported(value)\n elif attrid == TUYA_POWER_ATTR:\n self.endpoint.electrical_measurement.power_reported(value / 10)\n elif attrid == TUYA_VOLTAGE_ATTR:\n self.endpoint.electrical_measurement.voltage_reported(value / 10)\n elif attrid == TUYA_DIN_SWITCH_ATTR:\n self.endpoint.device.switch_bus.listener_event(SWITCH_EVENT, attrid, value)\n\n\nclass TuyaPowerMeasurement(LocalDataCluster, ElectricalMeasurement):\n \"\"\"Custom class for power, voltage and current measurement.\"\"\"\n\n cluster_id = ElectricalMeasurement.cluster_id\n\n POWER_ID = 0x050B\n VOLTAGE_ID = 0x0505\n CURRENT_ID = 0x0508\n\n AC_CURRENT_MULTIPLIER = 0x0602\n AC_CURRENT_DIVISOR = 
0x0603\n\n _CONSTANT_ATTRIBUTES = {AC_CURRENT_MULTIPLIER: 1, AC_CURRENT_DIVISOR: 1000}\n\n def voltage_reported(self, value):\n \"\"\"Voltage reported.\"\"\"\n self._update_attribute(self.VOLTAGE_ID, value)\n\n def power_reported(self, value):\n \"\"\"Power reported.\"\"\"\n self._update_attribute(self.POWER_ID, value)\n\n def current_reported(self, value):\n \"\"\"Ampers reported.\"\"\"\n self._update_attribute(self.CURRENT_ID, value)\n\n\nclass TuyaElectricalMeasurement(LocalDataCluster, Metering):\n \"\"\"Custom class for total energy measurement.\"\"\"\n\n cluster_id = Metering.cluster_id\n CURRENT_ID = 0x0000\n POWER_WATT = 0x0000\n\n \"\"\"Setting unit of measurement.\"\"\"\n _CONSTANT_ATTRIBUTES = {0x0300: POWER_WATT}\n\n def energy_reported(self, value):\n \"\"\"Summation Energy reported.\"\"\"\n self._update_attribute(self.CURRENT_ID, value)\n\n\nclass TuyaPowerMeter(TuyaSwitch):\n \"\"\"Tuya power meter device.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Init device.\"\"\"\n self.switch_bus = Bus()\n super().__init__(*args, **kwargs)\n\n signature = {\n # \"node_descriptor\": \"<NodeDescriptor byte1=1 byte2=64 mac_capability_flags=142 manufacturer_code=4098\n # maximum_buffer_size=82 maximum_incoming_transfer_size=82 server_mask=11264\n # maximum_outgoing_transfer_size=82 descriptor_capability_field=0>\",\n # device_version=1\n # input_clusters=[0x0000, 0x0004, 0x0005, 0xef00]\n # output_clusters=[0x000a, 0x0019]\n MODELS_INFO: [\n (\"_TZE200_byzdayie\", \"TS0601\"),\n ],\n ENDPOINTS: {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=51\n # device_version=1\n # input_clusters=[0, 4, 5, 61184]\n # output_clusters=[10, 25]>\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaManufClusterAttributes.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaManufClusterDinPower,\n TuyaPowerMeasurement,\n TuyaElectricalMeasurement,\n TuyaOnOff,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n }\n }\n }\n", "path": "zhaquirks/tuya/ts0601_din_power.py"}]} | 2,372 | 293 |
gh_patches_debug_21733 | rasdani/github-patches | git_diff | getredash__redash-3619 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for Presto username and password
Currently the Presto query runner supports username only. We should support password as well.
This probably requires upgrading the PyHive library.
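As a rough sketch of what the change would involve, assuming the installed PyHive version accepts a `password` keyword (which is what the note about upgrading refers to), the runner's connection call would gain one extra argument and the configuration schema one extra field. The values below are placeholders:

```python
from pyhive import presto

# Hypothetical sketch only: thread a configured password through to PyHive.
connection = presto.connect(
    host="presto.example.com",
    port=8080,
    protocol="https",   # PyHive generally expects HTTPS when a password is used
    username="redash",
    password="s3cret",  # new optional field in the query runner configuration
    catalog="hive",
    schema="default",
)
```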
</issue>
<code>
[start of redash/query_runner/presto.py]
1 from redash.query_runner import *
2 from redash.utils import json_dumps, json_loads
3
4 import logging
5 logger = logging.getLogger(__name__)
6
7 from collections import defaultdict
8
9 try:
10 from pyhive import presto
11 from pyhive.exc import DatabaseError
12 enabled = True
13
14 except ImportError:
15 enabled = False
16
17 PRESTO_TYPES_MAPPING = {
18 "integer": TYPE_INTEGER,
19 "tinyint": TYPE_INTEGER,
20 "smallint": TYPE_INTEGER,
21 "long": TYPE_INTEGER,
22 "bigint": TYPE_INTEGER,
23 "float": TYPE_FLOAT,
24 "double": TYPE_FLOAT,
25 "boolean": TYPE_BOOLEAN,
26 "string": TYPE_STRING,
27 "varchar": TYPE_STRING,
28 "date": TYPE_DATE,
29 }
30
31
32 class Presto(BaseQueryRunner):
33 noop_query = 'SHOW TABLES'
34
35 @classmethod
36 def configuration_schema(cls):
37 return {
38 'type': 'object',
39 'properties': {
40 'host': {
41 'type': 'string'
42 },
43 'protocol': {
44 'type': 'string',
45 'default': 'http'
46 },
47 'port': {
48 'type': 'number'
49 },
50 'schema': {
51 'type': 'string'
52 },
53 'catalog': {
54 'type': 'string'
55 },
56 'username': {
57 'type': 'string'
58 },
59 },
60 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],
61 'required': ['host']
62 }
63
64 @classmethod
65 def enabled(cls):
66 return enabled
67
68 @classmethod
69 def type(cls):
70 return "presto"
71
72 def get_schema(self, get_stats=False):
73 schema = {}
74 query = """
75 SELECT table_schema, table_name, column_name
76 FROM information_schema.columns
77 WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
78 """
79
80 results, error = self.run_query(query, None)
81
82 if error is not None:
83 raise Exception("Failed getting schema.")
84
85 results = json_loads(results)
86
87 for row in results['rows']:
88 table_name = '{}.{}'.format(row['table_schema'], row['table_name'])
89
90 if table_name not in schema:
91 schema[table_name] = {'name': table_name, 'columns': []}
92
93 schema[table_name]['columns'].append(row['column_name'])
94
95 return schema.values()
96
97 def run_query(self, query, user):
98 connection = presto.connect(
99 host=self.configuration.get('host', ''),
100 port=self.configuration.get('port', 8080),
101 protocol=self.configuration.get('protocol', 'http'),
102 username=self.configuration.get('username', 'redash'),
103 catalog=self.configuration.get('catalog', 'hive'),
104 schema=self.configuration.get('schema', 'default'))
105
106 cursor = connection.cursor()
107
108
109 try:
110 cursor.execute(query)
111 column_tuples = [(i[0], PRESTO_TYPES_MAPPING.get(i[1], None)) for i in cursor.description]
112 columns = self.fetch_columns(column_tuples)
113 rows = [dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())]
114 data = {'columns': columns, 'rows': rows}
115 json_data = json_dumps(data)
116 error = None
117 except DatabaseError as db:
118 json_data = None
119 default_message = 'Unspecified DatabaseError: {0}'.format(db.message)
120 if isinstance(db.message, dict):
121 message = db.message.get('failureInfo', {'message', None}).get('message')
122 else:
123 message = None
124 error = default_message if message is None else message
125 except (KeyboardInterrupt, InterruptException) as e:
126 cursor.cancel()
127 error = "Query cancelled by user."
128 json_data = None
129 except Exception as ex:
130 json_data = None
131 error = ex.message
132 if not isinstance(error, basestring):
133 error = unicode(error)
134
135 return json_data, error
136
137 register(Presto)
138
[end of redash/query_runner/presto.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/query_runner/presto.py b/redash/query_runner/presto.py
--- a/redash/query_runner/presto.py
+++ b/redash/query_runner/presto.py
@@ -56,8 +56,11 @@
'username': {
'type': 'string'
},
+ 'password': {
+ 'type': 'string'
+ },
},
- 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],
+ 'order': ['host', 'protocol', 'port', 'username', 'password', 'schema', 'catalog'],
'required': ['host']
}
@@ -100,6 +103,7 @@
port=self.configuration.get('port', 8080),
protocol=self.configuration.get('protocol', 'http'),
username=self.configuration.get('username', 'redash'),
+ password=self.configuration.get('password', ''),
catalog=self.configuration.get('catalog', 'hive'),
schema=self.configuration.get('schema', 'default'))
| {"golden_diff": "diff --git a/redash/query_runner/presto.py b/redash/query_runner/presto.py\n--- a/redash/query_runner/presto.py\n+++ b/redash/query_runner/presto.py\n@@ -56,8 +56,11 @@\n 'username': {\n 'type': 'string'\n },\n+ 'password': {\n+ 'type': 'string'\n+ },\n },\n- 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],\n+ 'order': ['host', 'protocol', 'port', 'username', 'password', 'schema', 'catalog'],\n 'required': ['host']\n }\n \n@@ -100,6 +103,7 @@\n port=self.configuration.get('port', 8080),\n protocol=self.configuration.get('protocol', 'http'),\n username=self.configuration.get('username', 'redash'),\n+ password=self.configuration.get('password', ''),\n catalog=self.configuration.get('catalog', 'hive'),\n schema=self.configuration.get('schema', 'default'))\n", "issue": "Support for Presto username and password\nCurrently the Presto query runner supports username only. We should support password as well.\r\n\r\nThis probably requires upgrading the PyHive library.\n", "before_files": [{"content": "from redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nfrom collections import defaultdict\n\ntry:\n from pyhive import presto\n from pyhive.exc import DatabaseError\n enabled = True\n\nexcept ImportError:\n enabled = False\n\nPRESTO_TYPES_MAPPING = {\n \"integer\": TYPE_INTEGER,\n \"tinyint\": TYPE_INTEGER,\n \"smallint\": TYPE_INTEGER,\n \"long\": TYPE_INTEGER,\n \"bigint\": TYPE_INTEGER,\n \"float\": TYPE_FLOAT,\n \"double\": TYPE_FLOAT,\n \"boolean\": TYPE_BOOLEAN,\n \"string\": TYPE_STRING,\n \"varchar\": TYPE_STRING,\n \"date\": TYPE_DATE,\n}\n\n\nclass Presto(BaseQueryRunner):\n noop_query = 'SHOW TABLES'\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'host': {\n 'type': 'string'\n },\n 'protocol': {\n 'type': 'string',\n 'default': 'http'\n },\n 'port': {\n 'type': 'number'\n },\n 'schema': {\n 'type': 'string'\n },\n 'catalog': {\n 'type': 'string'\n },\n 'username': {\n 'type': 'string'\n },\n },\n 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],\n 'required': ['host']\n }\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def type(cls):\n return \"presto\"\n\n def get_schema(self, get_stats=False):\n schema = {}\n query = \"\"\"\n SELECT table_schema, table_name, column_name\n FROM information_schema.columns\n WHERE table_schema NOT IN ('pg_catalog', 'information_schema')\n \"\"\"\n\n results, error = self.run_query(query, None)\n\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results = json_loads(results)\n\n for row in results['rows']:\n table_name = '{}.{}'.format(row['table_schema'], row['table_name'])\n\n if table_name not in schema:\n schema[table_name] = {'name': table_name, 'columns': []}\n\n schema[table_name]['columns'].append(row['column_name'])\n\n return schema.values()\n\n def run_query(self, query, user):\n connection = presto.connect(\n host=self.configuration.get('host', ''),\n port=self.configuration.get('port', 8080),\n protocol=self.configuration.get('protocol', 'http'),\n username=self.configuration.get('username', 'redash'),\n catalog=self.configuration.get('catalog', 'hive'),\n schema=self.configuration.get('schema', 'default'))\n\n cursor = connection.cursor()\n\n\n try:\n cursor.execute(query)\n column_tuples = [(i[0], PRESTO_TYPES_MAPPING.get(i[1], None)) for i in cursor.description]\n columns = 
self.fetch_columns(column_tuples)\n rows = [dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())]\n data = {'columns': columns, 'rows': rows}\n json_data = json_dumps(data)\n error = None\n except DatabaseError as db:\n json_data = None\n default_message = 'Unspecified DatabaseError: {0}'.format(db.message)\n if isinstance(db.message, dict):\n message = db.message.get('failureInfo', {'message', None}).get('message')\n else:\n message = None\n error = default_message if message is None else message\n except (KeyboardInterrupt, InterruptException) as e:\n cursor.cancel()\n error = \"Query cancelled by user.\"\n json_data = None\n except Exception as ex:\n json_data = None\n error = ex.message\n if not isinstance(error, basestring):\n error = unicode(error)\n\n return json_data, error\n\nregister(Presto)\n", "path": "redash/query_runner/presto.py"}]} | 1,749 | 228 |
gh_patches_debug_21440 | rasdani/github-patches | git_diff | UTNkar__moore-120 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Application drafts in limbo
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the
following:
* Reproduced the problem with clear cache.
* (If running the application locally:) Made sure your running the newest version on the development branch
* Checked that your issue isn't already filed: https://github.com/UTNkar/moore/issues
### Description
The problem has not been reproduced, but it is the kind of problem that will probably occur anyway.

When a user starts an application draft and forgets to submit it, the person cannot be appointed to the position with the overturn function. If an application for a position is still saved as a draft when the application period ends, the application becomes stuck in limbo. Before the application period has ended, the group administrator can see that there is a draft for the person. When the application period ends, the draft is not visible among the submitted applications (very reasonable). The problem occurs when the administrator wants to appoint that person anyway with the overturn function: an error message appears saying "You can not appoint this person since an application has been submitted". This should not be the case; a draft application should not be treated as a submitted application by the overturn function. The user can neither see nor delete the draft application after the application period has ended.

Quick fix: an application system administrator can access the applications and set the status to submitted.
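A sketch of the direction a code fix could take, purely illustrative and using the field and status names from the form code further down, is to ignore drafts when the overturn validation looks for an existing application:

```python
# Hypothetical fragment of AppointmentForm.clean_overturn (see forms.py below):
# a draft should not count as an existing application for the overturn check.
already_applied = self.position.applications.filter(
    applicant__username=username,
).exclude(status='draft').exists()

if already_applied:
    raise forms.ValidationError(
        _('User %(user)s already applied for this position.'),
        params={'user': username},
    )
```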
### Steps to Reproduce
1. Apply to a position and save the application as a draft
2. Wait for the application time to end
3. Go to appoint for the given position
4. Try overturn with the user who saved the application as draft
5. Error message occurs
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
</issue>
<code>
[start of website/involvement/forms.py]
1 from django import forms
2 from django.contrib.auth import get_user_model
3 from django.utils.translation import ugettext_lazy as _
4
5 from involvement.models import Application, Reference
6 from utils.forms import AdvancedModelMultipleChoiceField
7
8
9 class ApplicationForm(forms.ModelForm):
10 class Meta:
11 model = Application
12 exclude = ['position', 'applicant']
13 widgets = {
14 'cover_letter': forms.Textarea(attrs={'style': 'height: 200px',
15 'class': 'form-control'}),
16 'qualifications': forms.Textarea(attrs={'style': 'height: 200px',
17 'class': 'form-control'}),
18 }
19
20 def clean_status(self):
21 status = self.cleaned_data['status']
22 if status not in ['draft', 'submitted'] \
23 or (self.initial['status'] == 'submitted'
24 and status == 'draft'):
25 raise forms.ValidationError(_('The submitted status was invalid.'))
26 return status
27
28
29 ReferenceFormSet = forms.inlineformset_factory(
30 Application,
31 Reference,
32 fields=('name', 'position', 'email', 'phone_number', 'comment'),
33 widgets={
34 'name': forms.TextInput(attrs={'class': 'form-control'}),
35 'position': forms.TextInput(attrs={'class': 'form-control'}),
36 'email': forms.TextInput(attrs={'class': 'form-control'}),
37 'phone_number': forms.TextInput(attrs={'class': 'form-control'}),
38 'comment': forms.TextInput(attrs={'class': 'form-control'}),
39 },
40 extra=0,
41 )
42
43
44 class ApprovalForm(forms.ModelForm):
45 status = forms.ChoiceField(
46 choices=(
47 ('submitted', '---------'),
48 ('approved', _('Approved')),
49 ('disapproved', _('Disapproved')),
50 ),
51 )
52
53 class Meta:
54 model = Application
55 fields = []
56
57 def clean_status(self):
58 status = self.cleaned_data['status']
59 if status not in ['submitted', 'approved', 'disapproved']:
60 raise forms.ValidationError(_('The submitted status was invalid.'))
61 return status
62
63 def save(self, commit=True):
64 self.instance.status = self.cleaned_data['status']
65
66 super(ApprovalForm, self).save(commit)
67
68
69 class AppointmentForm(forms.Form):
70 appoint = AdvancedModelMultipleChoiceField(
71 Application.objects.none(),
72 widget=forms.CheckboxSelectMultiple(),
73 required=False,
74 )
75 overturn = forms.CharField(
76 required=False,
77 label=_('Overturn'),
78 help_text=_('Enter a comma separated list of users you want to '
79 'appoint to the position, even though did not apply for '
80 'the position.')
81 )
82
83 def __init__(self, position, *args, **kwargs):
84 super(AppointmentForm, self).__init__(*args, **kwargs)
85 self.position = position
86 self.fields['appoint'].queryset = position.applications.filter(
87 status__in=['submitted', 'approved', 'appointed', 'turned_down']
88 )
89 self.initial['appoint'] = position.applications.filter(
90 status='appointed'
91 )
92
93 def clean_overturn(self):
94 string = self.cleaned_data['overturn']
95 string = string.replace(' ', '')
96 if string == '':
97 return []
98 else:
99 users = string.split(',')
100 for u in users:
101 if not get_user_model().objects.filter(
102 username=u
103 ).exists():
104 raise forms.ValidationError(
105 _('No user with the username %(user)s exists.'),
106 params={'user': u},
107 )
108 elif self.position.applications.filter(
109 applicant__username=u
110 ).exists():
111 raise forms.ValidationError(
112 _('User %(user)s already applied for this position '
113 'and can not be appointed through the overturn '
114 'field.'),
115 params={'user': u},
116 )
117 return users
118
119 def clean(self):
120 super(AppointmentForm, self).clean()
121 appoint = self.cleaned_data.get('appoint', [])
122 overturn = self.cleaned_data.get('overturn', [])
123 nr_appointment = len(appoint) + len(overturn)
124 if nr_appointment > self.position.appointments:
125 raise forms.ValidationError(
126 _('You cannot appoint %(current)s applicants. The maximum '
127 'for this position is %(max)s.'),
128 params={
129 'current': nr_appointment,
130 'max': self.position.appointments,
131 },
132 )
133 return self.cleaned_data
134
135 def save(self):
136 for application in self.fields['appoint'].queryset:
137 if application in self.cleaned_data['appoint']:
138 application.status = 'appointed'
139 else:
140 application.status = 'turned_down'
141 application.save()
142
143 for user in self.cleaned_data['overturn']:
144 user = get_user_model().objects.get(
145 username=user
146 )
147 Application.objects.create(
148 position=self.position,
149 applicant=user,
150 status='appointed',
151 )
152
[end of website/involvement/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/involvement/forms.py b/website/involvement/forms.py
--- a/website/involvement/forms.py
+++ b/website/involvement/forms.py
@@ -107,6 +107,8 @@
)
elif self.position.applications.filter(
applicant__username=u
+ ).exclude(
+ status='draft'
).exists():
raise forms.ValidationError(
_('User %(user)s already applied for this position '
@@ -144,8 +146,11 @@
user = get_user_model().objects.get(
username=user
)
- Application.objects.create(
+ appl, created = Application.objects.get_or_create(
position=self.position,
applicant=user,
- status='appointed',
+ defaults={'status': 'appointed'}
)
+ if not created:
+ appl.status = 'appointed'
+ appl.save()
| {"golden_diff": "diff --git a/website/involvement/forms.py b/website/involvement/forms.py\n--- a/website/involvement/forms.py\n+++ b/website/involvement/forms.py\n@@ -107,6 +107,8 @@\n )\n elif self.position.applications.filter(\n applicant__username=u\n+ ).exclude(\n+ status='draft'\n ).exists():\n raise forms.ValidationError(\n _('User %(user)s already applied for this position '\n@@ -144,8 +146,11 @@\n user = get_user_model().objects.get(\n username=user\n )\n- Application.objects.create(\n+ appl, created = Application.objects.get_or_create(\n position=self.position,\n applicant=user,\n- status='appointed',\n+ defaults={'status': 'appointed'}\n )\n+ if not created:\n+ appl.status = 'appointed'\n+ appl.save()\n", "issue": "Application drafts in limbo\n<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->\r\n\r\n### Prerequisites\r\n\r\n* [x] Put an X between the brackets on this line if you have done all of the\r\nfollowing:\r\n * Reproduced the problem with clear cache.\r\n * (If running the application locally:) Made sure your running the newest version on the development branch\r\n * Checked that your issue isn't already filed: https://github.com/UTNkar/moore/issues\r\n\r\n### Description\r\n\r\nThe problem has not been reproduced but it's the kind of problem that probably will occur any way.\r\n\r\nWhen a user starts an application draft and forgets to submit it the person can not be appointed to the position with the overturn function. If the application for a position has been saved as draft when the application period ends the application becomes stuck in limbo. The group administrator can before the application period has ended see that there is a draft for the person. When the application period ends the draft is not visible among the submitted applications (very resonable). The problem occurs when the administrator wants to appoint that person anyway with the overturn function then an error message occurs saying. \"You can not appoint this person since an application has been submitted\". This should not be the case, a draft application should not be seen as a submitted application by the overturn function. The user can not see or delete the draft application after the application period has ended.\r\n\r\nQuick fix, an application system administrator can access the applications and set the status to submitted. \r\n\r\n### Steps to Reproduce\r\n\r\n1. Apply to an position and save the application as draft\r\n2. Wait for the application time to end\r\n3. Go to appoint for the given position\r\n4. Try overturn with the user who saved the application as draft\r\n5. 
Error message occurs \r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\n", "before_files": [{"content": "from django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom involvement.models import Application, Reference\nfrom utils.forms import AdvancedModelMultipleChoiceField\n\n\nclass ApplicationForm(forms.ModelForm):\n class Meta:\n model = Application\n exclude = ['position', 'applicant']\n widgets = {\n 'cover_letter': forms.Textarea(attrs={'style': 'height: 200px',\n 'class': 'form-control'}),\n 'qualifications': forms.Textarea(attrs={'style': 'height: 200px',\n 'class': 'form-control'}),\n }\n\n def clean_status(self):\n status = self.cleaned_data['status']\n if status not in ['draft', 'submitted'] \\\n or (self.initial['status'] == 'submitted'\n and status == 'draft'):\n raise forms.ValidationError(_('The submitted status was invalid.'))\n return status\n\n\nReferenceFormSet = forms.inlineformset_factory(\n Application,\n Reference,\n fields=('name', 'position', 'email', 'phone_number', 'comment'),\n widgets={\n 'name': forms.TextInput(attrs={'class': 'form-control'}),\n 'position': forms.TextInput(attrs={'class': 'form-control'}),\n 'email': forms.TextInput(attrs={'class': 'form-control'}),\n 'phone_number': forms.TextInput(attrs={'class': 'form-control'}),\n 'comment': forms.TextInput(attrs={'class': 'form-control'}),\n },\n extra=0,\n)\n\n\nclass ApprovalForm(forms.ModelForm):\n status = forms.ChoiceField(\n choices=(\n ('submitted', '---------'),\n ('approved', _('Approved')),\n ('disapproved', _('Disapproved')),\n ),\n )\n\n class Meta:\n model = Application\n fields = []\n\n def clean_status(self):\n status = self.cleaned_data['status']\n if status not in ['submitted', 'approved', 'disapproved']:\n raise forms.ValidationError(_('The submitted status was invalid.'))\n return status\n\n def save(self, commit=True):\n self.instance.status = self.cleaned_data['status']\n\n super(ApprovalForm, self).save(commit)\n\n\nclass AppointmentForm(forms.Form):\n appoint = AdvancedModelMultipleChoiceField(\n Application.objects.none(),\n widget=forms.CheckboxSelectMultiple(),\n required=False,\n )\n overturn = forms.CharField(\n required=False,\n label=_('Overturn'),\n help_text=_('Enter a comma separated list of users you want to '\n 'appoint to the position, even though did not apply for '\n 'the position.')\n )\n\n def __init__(self, position, *args, **kwargs):\n super(AppointmentForm, self).__init__(*args, **kwargs)\n self.position = position\n self.fields['appoint'].queryset = position.applications.filter(\n status__in=['submitted', 'approved', 'appointed', 'turned_down']\n )\n self.initial['appoint'] = position.applications.filter(\n status='appointed'\n )\n\n def clean_overturn(self):\n string = self.cleaned_data['overturn']\n string = string.replace(' ', '')\n if string == '':\n return []\n else:\n users = string.split(',')\n for u in users:\n if not get_user_model().objects.filter(\n username=u\n ).exists():\n raise forms.ValidationError(\n _('No user with the username %(user)s exists.'),\n params={'user': u},\n )\n elif self.position.applications.filter(\n applicant__username=u\n ).exists():\n raise forms.ValidationError(\n _('User %(user)s already applied for this position '\n 'and can not be appointed through the overturn '\n 'field.'),\n params={'user': u},\n )\n return users\n\n def clean(self):\n super(AppointmentForm, self).clean()\n appoint = 
self.cleaned_data.get('appoint', [])\n overturn = self.cleaned_data.get('overturn', [])\n nr_appointment = len(appoint) + len(overturn)\n if nr_appointment > self.position.appointments:\n raise forms.ValidationError(\n _('You cannot appoint %(current)s applicants. The maximum '\n 'for this position is %(max)s.'),\n params={\n 'current': nr_appointment,\n 'max': self.position.appointments,\n },\n )\n return self.cleaned_data\n\n def save(self):\n for application in self.fields['appoint'].queryset:\n if application in self.cleaned_data['appoint']:\n application.status = 'appointed'\n else:\n application.status = 'turned_down'\n application.save()\n\n for user in self.cleaned_data['overturn']:\n user = get_user_model().objects.get(\n username=user\n )\n Application.objects.create(\n position=self.position,\n applicant=user,\n status='appointed',\n )\n", "path": "website/involvement/forms.py"}]} | 2,296 | 196 |
gh_patches_debug_4296 | rasdani/github-patches | git_diff | beetbox__beets-1181 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ftintitle: does not gracefully handle duplicate artist name
Using ftintitle on one of my tracks, it seemed to get tripped up and not be able to fix it. I've tracked it down to being a problem with the fact that the Album Artist's name is in the Artist field twice.
```
Artist: The Roots feat. Talib Kweli / The Roots
Album Artist: The Roots
```
When trying to find the album artist in the artist field, it does a string split using the album artist as a separator. This returns a list with the following values `['', 'feat. Talib Kweli / ', '']`.
The code that tries to find the `feat_part` is then only expecting a two-element list, but is instead given a three-element one. It then checks whether the `-1th` element, `2` in this case, is blank. If it's not, it extracts the featured artist from it.
If it is blank, it goes on to assume the featured artist must be on the left-hand side of the split and checks element `0`.
Both elements `0` and `2` are blank, so no featured part is found.
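The split behaviour described above is easy to reproduce in isolation; the short snippet below (using the artist strings from the example) also shows why limiting the split to a single occurrence is one possible way out:

```python
artist = "The Roots feat. Talib Kweli / The Roots"
albumartist = "The Roots"

print(artist.split(albumartist))
# ['', ' feat. Talib Kweli / ', '']  -> three parts, first and last both blank

print(artist.split(albumartist, 1))
# ['', ' feat. Talib Kweli / The Roots']  -> only two parts, right-hand side non-blank
```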
I've thought of two solutions, but am not sure which one would make more sense
- Attempt to remove duplicate album artists from the artist string before splitting
- Add another/change the current case to iterate over the split parts to find the first non-blank item
Either way, these methods would presumably still leave the trailing slash on the `feat. Talib Kweli /` and add the extraneous trailing slash to the track title. This, I'm not quite sure how to handle if at all.
Thoughts?
</issue>
<code>
[start of beetsplug/ftintitle.py]
1 # This file is part of beets.
2 # Copyright 2013, Verrus, <github.com/Verrus/beets-plugin-featInTitle>
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Moves "featured" artists to the title from the artist field.
16 """
17 from beets import plugins
18 from beets import ui
19 from beets.util import displayable_path
20 from beets import config
21 import logging
22 import re
23
24 log = logging.getLogger('beets')
25
26
27 def split_on_feat(artist):
28 """Given an artist string, split the "main" artist from any artist
29 on the right-hand side of a string like "feat". Return the main
30 artist, which is always a string, and the featuring artist, which
31 may be a string or None if none is present.
32 """
33 # split on the first "feat".
34 regex = re.compile(plugins.feat_tokens(), re.IGNORECASE)
35 parts = [s.strip() for s in regex.split(artist, 1)]
36 if len(parts) == 1:
37 return parts[0], None
38 else:
39 return tuple(parts)
40
41
42 def contains_feat(title):
43 """Determine whether the title contains a "featured" marker.
44 """
45 return bool(re.search(plugins.feat_tokens(), title, flags=re.IGNORECASE))
46
47
48 def update_metadata(item, feat_part, drop_feat, loglevel=logging.DEBUG):
49 """Choose how to add new artists to the title and set the new
50 metadata. Also, print out messages about any changes that are made.
51 If `drop_feat` is set, then do not add the artist to the title; just
52 remove it from the artist field.
53 """
54 # In all cases, update the artist fields.
55 log.log(loglevel, u'artist: {0} -> {1}'.format(
56 item.artist, item.albumartist))
57 item.artist = item.albumartist
58 if item.artist_sort:
59 # Just strip the featured artist from the sort name.
60 item.artist_sort, _ = split_on_feat(item.artist_sort)
61
62 # Only update the title if it does not already contain a featured
63 # artist and if we do not drop featuring information.
64 if not drop_feat and not contains_feat(item.title):
65 new_title = u"{0} feat. {1}".format(item.title, feat_part)
66 log.log(loglevel, u'title: {0} -> {1}'.format(item.title, new_title))
67 item.title = new_title
68
69
70 def ft_in_title(item, drop_feat, loglevel=logging.DEBUG):
71 """Look for featured artists in the item's artist fields and move
72 them to the title.
73 """
74 artist = item.artist.strip()
75 albumartist = item.albumartist.strip()
76
77 # Check whether there is a featured artist on this track and the
78 # artist field does not exactly match the album artist field. In
79 # that case, we attempt to move the featured artist to the title.
80 _, featured = split_on_feat(artist)
81 if featured and albumartist != artist and albumartist:
82 log.log(loglevel, displayable_path(item.path))
83 feat_part = None
84
85 # Look for the album artist in the artist field. If it's not
86 # present, give up.
87 albumartist_split = artist.split(albumartist)
88 if len(albumartist_split) <= 1:
89 log.log(loglevel, 'album artist not present in artist')
90
91 # If the last element of the split (the right-hand side of the
92 # album artist) is nonempty, then it probably contains the
93 # featured artist.
94 elif albumartist_split[-1] != '':
95 # Extract the featured artist from the right-hand side.
96 _, feat_part = split_on_feat(albumartist_split[-1])
97
98 # Otherwise, if there's nothing on the right-hand side, look for a
99 # featuring artist on the left-hand side.
100 else:
101 lhs, rhs = split_on_feat(albumartist_split[0])
102 if rhs:
103 feat_part = lhs
104
105 # If we have a featuring artist, move it to the title.
106 if feat_part:
107 update_metadata(item, feat_part, drop_feat, loglevel)
108 else:
109 log.log(loglevel, u'no featuring artists found')
110
111
112 class FtInTitlePlugin(plugins.BeetsPlugin):
113 def __init__(self):
114 super(FtInTitlePlugin, self).__init__()
115
116 self.config.add({
117 'auto': True,
118 'drop': False,
119 })
120
121 self._command = ui.Subcommand(
122 'ftintitle',
123 help='move featured artists to the title field')
124
125 self._command.parser.add_option(
126 '-d', '--drop', dest='drop',
127 action='store_true', default=False,
128 help='drop featuring from artists and ignore title update')
129
130 if self.config['auto']:
131 self.import_stages = [self.imported]
132
133 def commands(self):
134
135 def func(lib, opts, args):
136 self.config.set_args(opts)
137 drop_feat = self.config['drop'].get(bool)
138 write = config['import']['write'].get(bool)
139
140 for item in lib.items(ui.decargs(args)):
141 ft_in_title(item, drop_feat, logging.INFO)
142 item.store()
143 if write:
144 item.try_write()
145
146 self._command.func = func
147 return [self._command]
148
149 def imported(self, session, task):
150 """Import hook for moving featuring artist automatically.
151 """
152 drop_feat = self.config['drop'].get(bool)
153
154 for item in task.imported_items():
155 ft_in_title(item, drop_feat, logging.DEBUG)
156 item.store()
157
[end of beetsplug/ftintitle.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/ftintitle.py b/beetsplug/ftintitle.py
--- a/beetsplug/ftintitle.py
+++ b/beetsplug/ftintitle.py
@@ -83,7 +83,7 @@
# Look for the album artist in the artist field. If it's not
# present, give up.
- albumartist_split = artist.split(albumartist)
+ albumartist_split = artist.split(albumartist, 1)
if len(albumartist_split) <= 1:
ui.print_('album artist not present in artist')
| {"golden_diff": "diff --git a/beetsplug/ftintitle.py b/beetsplug/ftintitle.py\n--- a/beetsplug/ftintitle.py\n+++ b/beetsplug/ftintitle.py\n@@ -83,7 +83,7 @@\n \n # Look for the album artist in the artist field. If it's not\n # present, give up.\n- albumartist_split = artist.split(albumartist)\n+ albumartist_split = artist.split(albumartist, 1)\n if len(albumartist_split) <= 1:\n ui.print_('album artist not present in artist')\n", "issue": "ftintitle: does not gracefully handle duplicate artist name\nUsing ftintitle on one of my tracks, it seemed to get tripped up and not be able to fix it. I've tracked it down to being a problem with the fact that the Album Artist's name is in the Artist field twice.\n\n```\nArtist: The Roots feat. Talib Kweli / The Roots\nAlbum Artist: The Roots\n```\n\nWhen trying to find the album artist in the artist field, it does a string split using the album artist as a separator. This returns a list with the following values `['', 'feat. Talib Kweli / ', '']`.\n\nThe code that tries to find the `feat_part` is then only expecting a two element list, but is instead given a three. It then checks if the `-1th` element, `2` in this case, is blank. If it's not, it extracts the featured artist. \n\nIf it is blank, it goes on to assume the featured artist must be on the left-hand side of the split and checks element `0`.\n\nBoth elements `0` and `2` are blank, so no featured part is found.\n\nI've thought of two solutions, but am not sure which one would make more sense\n- Attempt to remove duplicate album artists from the artist string before splitting\n- Add another/change the current case to iterate over the split parts to find the first non-blank item\n\nEither way, these methods would presumably still leave the trailing slash on the `feat. Talib Kweli /` and add the extraneous trailing slash to the track title. This, I'm not quite sure how to handle if at all.\n\nThoughts?\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2013, Verrus, <github.com/Verrus/beets-plugin-featInTitle>\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Moves \"featured\" artists to the title from the artist field.\n\"\"\"\nfrom beets import plugins\nfrom beets import ui\nfrom beets.util import displayable_path\nfrom beets import config\nimport logging\nimport re\n\nlog = logging.getLogger('beets')\n\n\ndef split_on_feat(artist):\n \"\"\"Given an artist string, split the \"main\" artist from any artist\n on the right-hand side of a string like \"feat\". 
Return the main\n artist, which is always a string, and the featuring artist, which\n may be a string or None if none is present.\n \"\"\"\n # split on the first \"feat\".\n regex = re.compile(plugins.feat_tokens(), re.IGNORECASE)\n parts = [s.strip() for s in regex.split(artist, 1)]\n if len(parts) == 1:\n return parts[0], None\n else:\n return tuple(parts)\n\n\ndef contains_feat(title):\n \"\"\"Determine whether the title contains a \"featured\" marker.\n \"\"\"\n return bool(re.search(plugins.feat_tokens(), title, flags=re.IGNORECASE))\n\n\ndef update_metadata(item, feat_part, drop_feat, loglevel=logging.DEBUG):\n \"\"\"Choose how to add new artists to the title and set the new\n metadata. Also, print out messages about any changes that are made.\n If `drop_feat` is set, then do not add the artist to the title; just\n remove it from the artist field.\n \"\"\"\n # In all cases, update the artist fields.\n log.log(loglevel, u'artist: {0} -> {1}'.format(\n item.artist, item.albumartist))\n item.artist = item.albumartist\n if item.artist_sort:\n # Just strip the featured artist from the sort name.\n item.artist_sort, _ = split_on_feat(item.artist_sort)\n\n # Only update the title if it does not already contain a featured\n # artist and if we do not drop featuring information.\n if not drop_feat and not contains_feat(item.title):\n new_title = u\"{0} feat. {1}\".format(item.title, feat_part)\n log.log(loglevel, u'title: {0} -> {1}'.format(item.title, new_title))\n item.title = new_title\n\n\ndef ft_in_title(item, drop_feat, loglevel=logging.DEBUG):\n \"\"\"Look for featured artists in the item's artist fields and move\n them to the title.\n \"\"\"\n artist = item.artist.strip()\n albumartist = item.albumartist.strip()\n\n # Check whether there is a featured artist on this track and the\n # artist field does not exactly match the album artist field. In\n # that case, we attempt to move the featured artist to the title.\n _, featured = split_on_feat(artist)\n if featured and albumartist != artist and albumartist:\n log.log(loglevel, displayable_path(item.path))\n feat_part = None\n\n # Look for the album artist in the artist field. 
If it's not\n # present, give up.\n albumartist_split = artist.split(albumartist)\n if len(albumartist_split) <= 1:\n log.log(loglevel, 'album artist not present in artist')\n\n # If the last element of the split (the right-hand side of the\n # album artist) is nonempty, then it probably contains the\n # featured artist.\n elif albumartist_split[-1] != '':\n # Extract the featured artist from the right-hand side.\n _, feat_part = split_on_feat(albumartist_split[-1])\n\n # Otherwise, if there's nothing on the right-hand side, look for a\n # featuring artist on the left-hand side.\n else:\n lhs, rhs = split_on_feat(albumartist_split[0])\n if rhs:\n feat_part = lhs\n\n # If we have a featuring artist, move it to the title.\n if feat_part:\n update_metadata(item, feat_part, drop_feat, loglevel)\n else:\n log.log(loglevel, u'no featuring artists found')\n\n\nclass FtInTitlePlugin(plugins.BeetsPlugin):\n def __init__(self):\n super(FtInTitlePlugin, self).__init__()\n\n self.config.add({\n 'auto': True,\n 'drop': False,\n })\n\n self._command = ui.Subcommand(\n 'ftintitle',\n help='move featured artists to the title field')\n\n self._command.parser.add_option(\n '-d', '--drop', dest='drop',\n action='store_true', default=False,\n help='drop featuring from artists and ignore title update')\n\n if self.config['auto']:\n self.import_stages = [self.imported]\n\n def commands(self):\n\n def func(lib, opts, args):\n self.config.set_args(opts)\n drop_feat = self.config['drop'].get(bool)\n write = config['import']['write'].get(bool)\n\n for item in lib.items(ui.decargs(args)):\n ft_in_title(item, drop_feat, logging.INFO)\n item.store()\n if write:\n item.try_write()\n\n self._command.func = func\n return [self._command]\n\n def imported(self, session, task):\n \"\"\"Import hook for moving featuring artist automatically.\n \"\"\"\n drop_feat = self.config['drop'].get(bool)\n\n for item in task.imported_items():\n ft_in_title(item, drop_feat, logging.DEBUG)\n item.store()\n", "path": "beetsplug/ftintitle.py"}]} | 2,575 | 128 |
gh_patches_debug_1064 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1220 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pytest v6.2.0 causing test_optim_with_value to fail
# Description
The `v0.5.4` `bump2version` changes were swept into `master` on 2020-12-12 with f824afe, and the CI on `master` succeeded. Later that day [`pytest` `v6.2.0`](https://github.com/pytest-dev/pytest/releases/tag/6.2.0) was released, and the nightly scheduled CI failed on
```pytb
_______________________ test_optim_with_value[jax-mu=1] ________________________
backend = (<pyhf.tensor.jax_backend.jax_backend object at 0x7f6bf92def50>, None)
source = {'bindata': {'bkg': [100.0, 150.0], 'bkgsys_dn': [98, 100], 'bkgsys_up': [102, 190], 'data': [120.0, 180.0], ...}, 'binning': [2, -0.5, 1.5]}
spec = {'channels': [{'name': 'singlechannel', 'samples': [{'data': [30.0, 95.0], 'modifiers': [{...}], 'name': 'signal'}, {'data': [100.0, 150.0], 'modifiers': [{...}], 'name': 'background'}]}]}
mu = 1.0
@pytest.mark.parametrize('mu', [1.0], ids=['mu=1'])
def test_optim_with_value(backend, source, spec, mu):
pdf = pyhf.Model(spec)
data = source['bindata']['data'] + pdf.config.auxdata
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
optim = pyhf.optimizer
result = optim.minimize(pyhf.infer.mle.twice_nll, data, pdf, init_pars, par_bounds)
assert pyhf.tensorlib.tolist(result)
result, fitted_val = optim.minimize(
pyhf.infer.mle.twice_nll,
data,
pdf,
init_pars,
par_bounds,
fixed_vals=[(pdf.config.poi_index, mu)],
return_fitted_val=True,
)
assert pyhf.tensorlib.tolist(result)
assert pyhf.tensorlib.shape(fitted_val) == ()
> assert pytest.approx(17.52954975, rel=1e-5) == fitted_val
E assert 17.52954975 ± 1.8e-04 == DeviceArray(17.52954975, dtype=float64)
E + where 17.52954975 ± 1.8e-04 = <function approx at 0x7f6cc1747f80>(17.52954975, rel=1e-05)
E + where <function approx at 0x7f6cc1747f80> = pytest.approx
tests/test_optim.py:383: AssertionError
```
Diffing the installed libraries between the two (in [f824afe_install.txt](https://github.com/scikit-hep/pyhf/files/5684241/f824afe_install.txt) and [failing_install.txt](https://github.com/scikit-hep/pyhf/files/5684242/failing_install.txt)) shows that the relevant change is `pytest`
```
$ diff f824afe_install.txt failing_install.txt
33a34
> importlib-metadata 3.1.1
83c84
< py 1.9.0
---
> py 1.10.0
96c97
< pytest 6.1.2
---
> pytest 6.2.0
143a145
> zipp 3.4.0
```
This is confirmed: if the `pytest` requirement is pinned back with
```diff
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,7 @@
+ extras_require['contrib']
+ extras_require['shellcomplete']
+ [
- 'pytest~=6.0',
+ 'pytest~=6.1.0',
'pytest-cov>=2.5.1',
'pytest-mock',
'pytest-benchmark[histogram]',
```
then the [CI installs `v6.1.2` and passes](https://github.com/scikit-hep/pyhf/actions/runs/418404132).
This behavior is confusing, as the only mention of `pytest.approx` in the [`v6.2.0` release notes](https://github.com/pytest-dev/pytest/releases/tag/6.2.0) is under "Improvements":
> 7710: Use strict equality comparison for non-numeric types in pytest.approx instead of
> raising TypeError.
>
> This was the undocumented behavior before 3.7, but is now officially a supported feature.
</issue>
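A minimal illustration of the behavior change described above (not taken from the report; the numeric value comes from the traceback): with `pytest` >= 6.2, `pytest.approx` appears to fall back to strict equality for operands it does not recognize as numeric, such as a 0-d JAX `DeviceArray`, so one hedged workaround is to coerce the backend scalar to a plain Python `float` before comparing.

```python
import pytest


def test_fitted_val_comparison():
    # Stand-in for the 0-d tensor returned by the fit; with the JAX backend
    # this would be a DeviceArray rather than a plain Python float.
    fitted_val = 17.52954975

    # Coercing with float() keeps pytest.approx on its numeric comparison
    # path regardless of which tensor backend produced the value.
    assert pytest.approx(17.52954975, rel=1e-5) == float(fitted_val)
```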
<code>
[start of setup.py]
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
7 'tensorflow-probability~=0.10.0',
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],
11 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes
12 'minuit': ['iminuit~=1.5.3'],
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
23 extras_require['lint'] = sorted({'flake8', 'black'})
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'pytest~=6.0',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'jupyter',
43 'graphviz',
44 'jsonpatch',
45 ]
46 )
47 )
48 extras_require['docs'] = sorted(
49 {
50 'sphinx>=3.1.2',
51 'sphinxcontrib-bibtex',
52 'sphinx-click',
53 'sphinx_rtd_theme',
54 'nbsphinx',
55 'ipywidgets',
56 'sphinx-issues',
57 'sphinx-copybutton>0.2.9',
58 }
59 )
60 extras_require['develop'] = sorted(
61 set(
62 extras_require['docs']
63 + extras_require['lint']
64 + extras_require['test']
65 + [
66 'nbdime',
67 'bump2version',
68 'ipython',
69 'pre-commit',
70 'check-manifest',
71 'codemetapy>=0.3.4',
72 'twine',
73 ]
74 )
75 )
76 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
77
78
79 setup(
80 extras_require=extras_require,
81 use_scm_version=lambda: {'local_scheme': lambda version: ''},
82 )
83
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -48,7 +48,7 @@
extras_require['docs'] = sorted(
{
'sphinx>=3.1.2',
- 'sphinxcontrib-bibtex',
+ 'sphinxcontrib-bibtex~=1.0',
'sphinx-click',
'sphinx_rtd_theme',
'nbsphinx',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -48,7 +48,7 @@\n extras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n- 'sphinxcontrib-bibtex',\n+ 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n", "issue": "pytest v6.2.0 causing test_optim_with_value to fail\n# Description\r\n\r\n`v0.5.4` `bump2version` changes were swept into `master` 2020-12-12 with f824afe and the CI on `master` succeeded. Later that day [`pytest` `v6.2.0`](https://github.com/pytest-dev/pytest/releases/tag/6.2.0) was released and the nightly scheduled CI failed on \r\n\r\n```pytb\r\n_______________________ test_optim_with_value[jax-mu=1] ________________________\r\n\r\nbackend = (<pyhf.tensor.jax_backend.jax_backend object at 0x7f6bf92def50>, None)\r\nsource = {'bindata': {'bkg': [100.0, 150.0], 'bkgsys_dn': [98, 100], 'bkgsys_up': [102, 190], 'data': [120.0, 180.0], ...}, 'binning': [2, -0.5, 1.5]}\r\nspec = {'channels': [{'name': 'singlechannel', 'samples': [{'data': [30.0, 95.0], 'modifiers': [{...}], 'name': 'signal'}, {'data': [100.0, 150.0], 'modifiers': [{...}], 'name': 'background'}]}]}\r\nmu = 1.0\r\n\r\n @pytest.mark.parametrize('mu', [1.0], ids=['mu=1'])\r\n def test_optim_with_value(backend, source, spec, mu):\r\n pdf = pyhf.Model(spec)\r\n data = source['bindata']['data'] + pdf.config.auxdata\r\n \r\n init_pars = pdf.config.suggested_init()\r\n par_bounds = pdf.config.suggested_bounds()\r\n \r\n optim = pyhf.optimizer\r\n \r\n result = optim.minimize(pyhf.infer.mle.twice_nll, data, pdf, init_pars, par_bounds)\r\n assert pyhf.tensorlib.tolist(result)\r\n \r\n result, fitted_val = optim.minimize(\r\n pyhf.infer.mle.twice_nll,\r\n data,\r\n pdf,\r\n init_pars,\r\n par_bounds,\r\n fixed_vals=[(pdf.config.poi_index, mu)],\r\n return_fitted_val=True,\r\n )\r\n assert pyhf.tensorlib.tolist(result)\r\n assert pyhf.tensorlib.shape(fitted_val) == ()\r\n> assert pytest.approx(17.52954975, rel=1e-5) == fitted_val\r\nE assert 17.52954975 \u00b1 1.8e-04 == DeviceArray(17.52954975, dtype=float64)\r\nE + where 17.52954975 \u00b1 1.8e-04 = <function approx at 0x7f6cc1747f80>(17.52954975, rel=1e-05)\r\nE + where <function approx at 0x7f6cc1747f80> = pytest.approx\r\n\r\ntests/test_optim.py:383: AssertionError\r\n```\r\n\r\nDiffing the installed libraries between the two (in [f824afe_install.txt](https://github.com/scikit-hep/pyhf/files/5684241/f824afe_install.txt) and [failing_install.txt](https://github.com/scikit-hep/pyhf/files/5684242/failing_install.txt)) shows that the relevant change is `pytest`\r\n\r\n```\r\n$ diff f824afe_install.txt failing_install.txt \r\n33a34\r\n> importlib-metadata 3.1.1\r\n83c84\r\n< py 1.9.0\r\n---\r\n> py 1.10.0\r\n96c97\r\n< pytest 6.1.2\r\n---\r\n> pytest 6.2.0\r\n143a145\r\n> zipp 3.4.0\r\n```\r\n\r\nThis is confirmed as if\r\n\r\n```diff\r\n--- a/setup.py\r\n+++ b/setup.py\r\n@@ -29,7 +29,7 @@\r\n + extras_require['contrib']\r\n + extras_require['shellcomplete']\r\n + [\r\n- 'pytest~=6.0',\r\n+ 'pytest~=6.1.0',\r\n 'pytest-cov>=2.5.1',\r\n 'pytest-mock',\r\n 'pytest-benchmark[histogram]',\r\n```\r\n\r\nthe [CI installs `v6.1.2` and passes](https://github.com/scikit-hep/pyhf/actions/runs/418404132).\r\n\r\nThis behavior is confusing as the only mention of `pytest.approx`in the [`v6.2.0` release notes](https://github.com/pytest-dev/pytest/releases/tag/6.2.0) is under \"Improvements\"\r\n\r\n> 7710: Use strict equality comparison for non-numeric types in pytest.approx instead of\r\nraising TypeError.\r\n>\r\n> 
This was the undocumented behavior before 3.7, but is now officially a supported feature.\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.5.3'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 2,410 | 97 |
gh_patches_debug_15908 | rasdani/github-patches | git_diff | mkdocs__mkdocs-288 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
If the mkdocs.yml is completely empty, there is a traceback
```
Traceback (most recent call last):
File "/home/dougalmatthews/.virtualenvs/mkdocs/bin/mkdocs", line 9, in <module>
load_entry_point('mkdocs==0.11.1', 'console_scripts', 'mkdocs')()
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python3.4/site-packages/mkdocs/main.py", line 60, in run_main
main(cmd, args=sys.argv[2:], options=dict(opts))
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python3.4/site-packages/mkdocs/main.py", line 32, in main
config = load_config(options=options)
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python3.4/site-packages/mkdocs/config.py", line 82, in load_config
user_config.update(options)
AttributeError: 'NoneType' object has no attribute 'update'
```
</issue>
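A minimal sketch of the underlying behavior (not from the issue itself): PyYAML parses an empty document to `None`, so `load_config` ends up calling `.update()` on `None`. Checking the parsed type first, as the accompanying patch does, turns the crash into a clear configuration error; `yaml.safe_load` is used here only to keep the snippet self-contained.

```python
import yaml

# An empty mkdocs.yml parses to None, not to an empty mapping.
user_config = yaml.safe_load("")
print(user_config)  # -> None

# A type check before calling dict methods gives a clear error instead
# of an AttributeError deep inside load_config.
if not isinstance(user_config, dict):
    raise SystemExit("The mkdocs.yml file is empty or not a YAML mapping.")
```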
<code>
[start of mkdocs/config.py]
1 # coding: utf-8
2
3 from mkdocs import utils
4 from mkdocs.compat import urlparse
5 from mkdocs.exceptions import ConfigurationError
6
7 import os
8 import yaml
9
10 DEFAULT_CONFIG = {
11 'site_name': None,
12 'pages': None,
13
14 'site_url': None,
15 'site_description': None,
16 'site_author': None,
17 'site_favicon': None,
18
19 'theme': 'mkdocs',
20 'docs_dir': 'docs',
21 'site_dir': 'site',
22 'theme_dir': None,
23
24 'copyright': None,
25 'google_analytics': None,
26
27 # The address on which to serve the livereloading docs server.
28 'dev_addr': '127.0.0.1:8000',
29
30 # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.
31 # If `False`, use `<page_name>.html style file with hyperlinks to the file.
32 # True generates nicer URLs, but False is useful if browsing the output on a filesystem.
33 'use_directory_urls': True,
34
35 # Specify a link to the project source repo to be included
36 # in the documentation pages.
37 'repo_url': None,
38
39 # A name to use for the link to the project source repo.
40 # Default: If repo_url is unset then None, otherwise
41 # "GitHub" or "Bitbucket" for known url or Hostname for unknown urls.
42 'repo_name': None,
43
44 # Specify which css or javascript files from the docs
45 # directionary should be additionally included in the site.
46 # Default: List of all .css and .js files in the docs dir.
47 'extra_css': None,
48 'extra_javascript': None,
49
50 # Determine if the site should include the nav and next/prev elements.
51 # Default: True if the site has more than one page, False otherwise.
52 'include_nav': None,
53 'include_next_prev': None,
54
55 # PyMarkdown extension names.
56 'markdown_extensions': (),
57
58 # Determine if the site should generate a json search index and include
59 # search elements in the theme. - TODO
60 'include_search': False,
61
62 # Determine if the site should include a 404.html page.
63 # TODO: Implment this. Make this None, have it True if a 404.html
64 # template exists in the theme or docs dir.
65 'include_404': False,
66
67 # Determine if the site should include a sitemap.xml page.
68 # TODO: Implement this. Make this None, have it True if a sitemap.xml
69 # template exists in the theme or docs dir.
70 'include_sitemap': False,
71 }
72
73
74 def load_config(filename='mkdocs.yml', options=None):
75 options = options or {}
76 if 'config' in options:
77 filename = options['config']
78 if not os.path.exists(filename):
79 raise ConfigurationError("Config file '%s' does not exist." % filename)
80 with open(filename, 'r') as fp:
81 user_config = yaml.load(fp)
82 user_config.update(options)
83 return validate_config(user_config)
84
85
86 def validate_config(user_config):
87 config = DEFAULT_CONFIG.copy()
88 config.update(user_config)
89
90 if not config['site_name']:
91 raise ConfigurationError("Config must contain 'site_name' setting.")
92
93 # If not specified, then the 'pages' config simply includes all
94 # markdown files in the docs dir, without generating any header items
95 # for them.
96 pages = []
97 extra_css = []
98 extra_javascript = []
99 for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):
100 for filename in sorted(filenames):
101 fullpath = os.path.join(dirpath, filename)
102 relpath = os.path.relpath(fullpath, config['docs_dir'])
103
104 if utils.is_markdown_file(filename):
105 # index pages should always be the first listed page.
106 if os.path.splitext(relpath)[0] == 'index':
107 pages.insert(0, relpath)
108 else:
109 pages.append(relpath)
110 elif utils.is_css_file(filename):
111 extra_css.append(relpath)
112 elif utils.is_javascript_file(filename):
113 extra_javascript.append(relpath)
114
115 if config['pages'] is None:
116 config['pages'] = pages
117
118 if config['extra_css'] is None:
119 config['extra_css'] = extra_css
120
121 if config['extra_javascript'] is None:
122 config['extra_javascript'] = extra_javascript
123
124 package_dir = os.path.dirname(__file__)
125 theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]
126
127 if config['theme_dir'] is not None:
128 theme_dir.insert(0, config['theme_dir'])
129
130 config['theme_dir'] = theme_dir
131
132 if config['repo_url'] is not None and config['repo_name'] is None:
133 repo_host = urlparse(config['repo_url']).netloc.lower()
134 if repo_host == 'github.com':
135 config['repo_name'] = 'GitHub'
136 elif repo_host == 'bitbucket.com':
137 config['repo_name'] = 'Bitbucket'
138 else:
139 config['repo_name'] = repo_host.split('.')[0].title()
140
141 if config['include_next_prev'] is None:
142 config['include_next_prev'] = len(config['pages']) > 1
143
144 if config['include_nav'] is None:
145 config['include_nav'] = len(config['pages']) > 1
146
147 # To Do:
148
149 # The docs dir must exist.
150 # The theme dir must exist.
151 # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'
152 # A homepage 'index' must exist.
153 # The theme 'base.html' file must exist.
154 # Cannot set repo_name without setting repo_url.
155 # Cannot set 'include_next_prev: true' when only one page exists.
156 # Cannot set 'include_nav: true' when only one page exists.
157 # Error if any config keys provided that are not in the DEFAULT_CONFIG.
158
159 return config
160
[end of mkdocs/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/config.py b/mkdocs/config.py
--- a/mkdocs/config.py
+++ b/mkdocs/config.py
@@ -74,11 +74,13 @@
def load_config(filename='mkdocs.yml', options=None):
options = options or {}
if 'config' in options:
- filename = options['config']
+ filename = options.pop('config')
if not os.path.exists(filename):
raise ConfigurationError("Config file '%s' does not exist." % filename)
with open(filename, 'r') as fp:
user_config = yaml.load(fp)
+ if not isinstance(user_config, dict):
+ raise ConfigurationError("The mkdocs.yml file is invalid. See http://www.mkdocs.org/user-guide/configuration/ for more information.")
user_config.update(options)
return validate_config(user_config)
| {"golden_diff": "diff --git a/mkdocs/config.py b/mkdocs/config.py\n--- a/mkdocs/config.py\n+++ b/mkdocs/config.py\n@@ -74,11 +74,13 @@\n def load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n- filename = options['config']\n+ filename = options.pop('config')\n if not os.path.exists(filename):\n raise ConfigurationError(\"Config file '%s' does not exist.\" % filename)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n+ if not isinstance(user_config, dict):\n+ raise ConfigurationError(\"The mkdocs.yml file is invalid. See http://www.mkdocs.org/user-guide/configuration/ for more information.\")\n user_config.update(options)\n return validate_config(user_config)\n", "issue": "If the mkdocs.yml is completely empty there is a traceback\n```\nTraceback (most recent call last):\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/bin/mkdocs\", line 9, in <module>\n load_entry_point('mkdocs==0.11.1', 'console_scripts', 'mkdocs')()\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python3.4/site-packages/mkdocs/main.py\", line 60, in run_main\n main(cmd, args=sys.argv[2:], options=dict(opts))\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python3.4/site-packages/mkdocs/main.py\", line 32, in main\n config = load_config(options=options)\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python3.4/site-packages/mkdocs/config.py\", line 82, in load_config\n user_config.update(options)\nAttributeError: 'NoneType' object has no attribute 'update'\n```\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nfrom mkdocs.exceptions import ConfigurationError\n\nimport os\nimport yaml\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google_analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. 
Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # Determine if the site should include a sitemap.xml page.\n # TODO: Implement this. Make this None, have it True if a sitemap.xml\n # template exists in the theme or docs dir.\n 'include_sitemap': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options['config']\n if not os.path.exists(filename):\n raise ConfigurationError(\"Config file '%s' does not exist.\" % filename)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n config.update(user_config)\n\n if not config['site_name']:\n raise ConfigurationError(\"Config must contain 'site_name' setting.\")\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.com':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}]} | 2,477 | 185 |
gh_patches_debug_5113 | rasdani/github-patches | git_diff | fedora-infra__bodhi-4102 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bad characters in username
There's a bot with a bad username of `packagerbot/os-master01.phx2.fedoraproject.org` that makes the CI tests fail.
https://bodhi.fedoraproject.org/users/packagerbot/os-master01.phx2.fedoraproject.org
I'm pushing a PR to add a safety check to the CI tests, but do we want to make Bodhi robust to bad usernames like this? Since usernames come from the outside world, should we sanitize them before storing them in the database?
</issue>
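For context, a small sketch in plain `re` (not Bodhi code) of why the embedded slash breaks the `/users/{name}` route: a default placeholder in this kind of route pattern typically matches a single path segment, while the `{name:\S+}` pattern used in the accompanying patch also accepts slashes inside the name.

```python
import re

# Rough equivalents of the two route patterns: a default placeholder
# stops at the first "/", while a \S+ placeholder does not.
default_placeholder = re.compile(r"^/users/(?P<name>[^/]+)$")
greedy_placeholder = re.compile(r"^/users/(?P<name>\S+)$")

path = "/users/packagerbot/os-master01.phx2.fedoraproject.org"

print(default_placeholder.match(path))               # None: the "/" ends the segment
print(greedy_placeholder.match(path).group("name"))  # the full problematic username
```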
<code>
[start of bodhi/server/services/user.py]
1 # Copyright 2014-2019 Red Hat, Inc. and others
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program; if not, write to the Free Software
17 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """Defines API services that pertain to users."""
19 import math
20
21 from cornice import Service
22 from cornice.validators import colander_querystring_validator
23 from pyramid.exceptions import HTTPNotFound
24 from sqlalchemy import func, distinct
25 from sqlalchemy.sql import or_
26
27 from bodhi.server.models import Group, Update, User
28 from bodhi.server.validators import (validate_updates, validate_groups)
29 import bodhi.server.schemas
30 import bodhi.server.security
31 import bodhi.server.services.errors
32 import bodhi.server.services.updates
33
34
35 user = Service(name='user', path='/users/{name}',
36 description='Bodhi users',
37 # These we leave wide-open since these are only GETs
38 cors_origins=bodhi.server.security.cors_origins_ro)
39
40 users = Service(name='users', path='/users/',
41 description='Bodhi users',
42 # These we leave wide-open since these are only GETs
43 cors_origins=bodhi.server.security.cors_origins_ro)
44
45 users_rss = Service(name='users_rss', path='/rss/users/', description='Bodhi users RSS feed',
46 cors_origins=bodhi.server.security.cors_origins_ro)
47
48
49 @user.get(accept=("application/json", "text/json"), renderer="json",
50 error_handler=bodhi.server.services.errors.json_handler)
51 @user.get(accept=("application/javascript"), renderer="jsonp",
52 error_handler=bodhi.server.services.errors.json_handler)
53 @user.get(accept="text/html", renderer="user.html",
54 error_handler=bodhi.server.services.errors.html_handler)
55 def get_user(request):
56 """
57 Return a user given by username.
58
59 Args:
60 request (pyramid.request): The current request.
61 Returns:
62 dict: A dictionary with two keys. "user" maps to a dictionary representation of the User
63 object. "urls" maps to various URLs that describe various other objects related to the
64 user.
65 """
66 id = request.matchdict.get('name')
67 user = User.get(id)
68
69 if not user:
70 request.errors.add('body', 'name', 'No such user')
71 request.errors.status = HTTPNotFound.code
72 return
73
74 user = user.__json__(request)
75
76 # Throw some extra information in there
77 rurl = request.route_url # Just shorthand
78 urls = {
79 'comments_by': rurl('comments') + '?user=%s' % id,
80 'comments_on': rurl('comments') + '?update_owner=%s' % id,
81 'recent_updates': rurl('updates') + '?user=%s' % id,
82 'recent_overrides': rurl('overrides') + '?user=%s' % id,
83 'comments_by_rss': rurl('comments_rss') + '?user=%s' % id,
84 'comments_on_rss': rurl('comments_rss') + '?update_owner=%s' % id,
85 'recent_updates_rss': rurl('updates_rss') + '?user=%s' % id,
86 'recent_overrides_rss': rurl('overrides_rss') + '?user=%s' % id,
87 }
88
89 return dict(user=user, urls=urls)
90
91
92 validators = (
93 colander_querystring_validator,
94 validate_groups,
95 validate_updates,
96 )
97
98
99 @users.get(schema=bodhi.server.schemas.ListUserSchema,
100 accept=("application/json", "text/json"), renderer="json",
101 error_handler=bodhi.server.services.errors.json_handler,
102 validators=validators)
103 @users.get(schema=bodhi.server.schemas.ListUserSchema,
104 accept=("application/javascript"), renderer="jsonp",
105 error_handler=bodhi.server.services.errors.jsonp_handler,
106 validators=validators)
107 @users.get(schema=bodhi.server.schemas.ListUserSchema, renderer="rss",
108 accept=('application/atom+xml',),
109 error_handler=bodhi.server.services.errors.html_handler,
110 validators=validators)
111 @users_rss.get(schema=bodhi.server.schemas.ListUserSchema, renderer="rss",
112 error_handler=bodhi.server.services.errors.html_handler,
113 validators=validators)
114 def query_users(request):
115 """
116 Search for users by various criteria.
117
118 Args:
119 request (pyramid.request): The current web request.
120 Returns:
121 dict: A dictionary with the follow key mappings:
122 users: A list of users matching the search criteria.
123 page: The current page of results.
124 pages: The total number of pages available.
125 rows_per_page: The number of users on the page.
126 total: The total number of users matching the search criteria.
127 """
128 db = request.db
129 data = request.validated
130 query = db.query(User)
131
132 like = data.get('like')
133 if like is not None:
134 query = query.filter(or_(*[
135 User.name.like('%%%s%%' % like)
136 ]))
137
138 search = data.get('search')
139 if search is not None:
140 query = query.filter(User.name.ilike('%%%s%%' % search))
141
142 name = data.get('name')
143 if name is not None:
144 query = query.filter(User.name.like(name))
145
146 groups = data.get('groups')
147 if groups is not None:
148 query = query.join(User.groups)
149 query = query.filter(or_(*[Group.id == grp.id for grp in groups]))
150
151 updates = data.get('updates')
152 if updates is not None:
153 query = query.join(User.updates)
154 args = [Update.alias == update.alias for update in updates]
155 query = query.filter(or_(*args))
156
157 # We can't use ``query.count()`` here because it is naive with respect to
158 # all the joins that we're doing above.
159 count_query = query.with_labels().statement\
160 .with_only_columns([func.count(distinct(User.id))])\
161 .order_by(None)
162 total = request.db.execute(count_query).scalar()
163
164 page = data.get('page')
165 rows_per_page = data.get('rows_per_page')
166 pages = int(math.ceil(total / float(rows_per_page)))
167 query = query.offset(rows_per_page * (page - 1)).limit(rows_per_page)
168
169 return dict(
170 users=query.all(),
171 page=page,
172 pages=pages,
173 rows_per_page=rows_per_page,
174 total=total,
175 )
176
[end of bodhi/server/services/user.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bodhi/server/services/user.py b/bodhi/server/services/user.py
--- a/bodhi/server/services/user.py
+++ b/bodhi/server/services/user.py
@@ -32,7 +32,7 @@
import bodhi.server.services.updates
-user = Service(name='user', path='/users/{name}',
+user = Service(name='user', path=r'/users/{name:\S+}',
description='Bodhi users',
# These we leave wide-open since these are only GETs
cors_origins=bodhi.server.security.cors_origins_ro)
| {"golden_diff": "diff --git a/bodhi/server/services/user.py b/bodhi/server/services/user.py\n--- a/bodhi/server/services/user.py\n+++ b/bodhi/server/services/user.py\n@@ -32,7 +32,7 @@\n import bodhi.server.services.updates\n \n \n-user = Service(name='user', path='/users/{name}',\n+user = Service(name='user', path=r'/users/{name:\\S+}',\n description='Bodhi users',\n # These we leave wide-open since these are only GETs\n cors_origins=bodhi.server.security.cors_origins_ro)\n", "issue": "Bad characters in username\nThere's a bot with a bad username of `packagerbot/os-master01.phx2.fedoraproject.org` that makes CI tests failing.\r\nhttps://bodhi.fedoraproject.org/users/packagerbot/os-master01.phx2.fedoraproject.org\r\n\r\nI'm pushing a PR to safe check CI tests, but do we want to make Bodhi safe to bad usernames like this? Since usernames are from outside world, should we modify them in a safe way before storing in the database?\n", "before_files": [{"content": "# Copyright 2014-2019 Red Hat, Inc. and others\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"Defines API services that pertain to users.\"\"\"\nimport math\n\nfrom cornice import Service\nfrom cornice.validators import colander_querystring_validator\nfrom pyramid.exceptions import HTTPNotFound\nfrom sqlalchemy import func, distinct\nfrom sqlalchemy.sql import or_\n\nfrom bodhi.server.models import Group, Update, User\nfrom bodhi.server.validators import (validate_updates, validate_groups)\nimport bodhi.server.schemas\nimport bodhi.server.security\nimport bodhi.server.services.errors\nimport bodhi.server.services.updates\n\n\nuser = Service(name='user', path='/users/{name}',\n description='Bodhi users',\n # These we leave wide-open since these are only GETs\n cors_origins=bodhi.server.security.cors_origins_ro)\n\nusers = Service(name='users', path='/users/',\n description='Bodhi users',\n # These we leave wide-open since these are only GETs\n cors_origins=bodhi.server.security.cors_origins_ro)\n\nusers_rss = Service(name='users_rss', path='/rss/users/', description='Bodhi users RSS feed',\n cors_origins=bodhi.server.security.cors_origins_ro)\n\n\[email protected](accept=(\"application/json\", \"text/json\"), renderer=\"json\",\n error_handler=bodhi.server.services.errors.json_handler)\[email protected](accept=(\"application/javascript\"), renderer=\"jsonp\",\n error_handler=bodhi.server.services.errors.json_handler)\[email protected](accept=\"text/html\", renderer=\"user.html\",\n error_handler=bodhi.server.services.errors.html_handler)\ndef get_user(request):\n \"\"\"\n Return a user given by username.\n\n Args:\n request (pyramid.request): The current request.\n Returns:\n dict: A dictionary with two keys. \"user\" maps to a dictionary representation of the User\n object. 
\"urls\" maps to various URLs that describe various other objects related to the\n user.\n \"\"\"\n id = request.matchdict.get('name')\n user = User.get(id)\n\n if not user:\n request.errors.add('body', 'name', 'No such user')\n request.errors.status = HTTPNotFound.code\n return\n\n user = user.__json__(request)\n\n # Throw some extra information in there\n rurl = request.route_url # Just shorthand\n urls = {\n 'comments_by': rurl('comments') + '?user=%s' % id,\n 'comments_on': rurl('comments') + '?update_owner=%s' % id,\n 'recent_updates': rurl('updates') + '?user=%s' % id,\n 'recent_overrides': rurl('overrides') + '?user=%s' % id,\n 'comments_by_rss': rurl('comments_rss') + '?user=%s' % id,\n 'comments_on_rss': rurl('comments_rss') + '?update_owner=%s' % id,\n 'recent_updates_rss': rurl('updates_rss') + '?user=%s' % id,\n 'recent_overrides_rss': rurl('overrides_rss') + '?user=%s' % id,\n }\n\n return dict(user=user, urls=urls)\n\n\nvalidators = (\n colander_querystring_validator,\n validate_groups,\n validate_updates,\n)\n\n\[email protected](schema=bodhi.server.schemas.ListUserSchema,\n accept=(\"application/json\", \"text/json\"), renderer=\"json\",\n error_handler=bodhi.server.services.errors.json_handler,\n validators=validators)\[email protected](schema=bodhi.server.schemas.ListUserSchema,\n accept=(\"application/javascript\"), renderer=\"jsonp\",\n error_handler=bodhi.server.services.errors.jsonp_handler,\n validators=validators)\[email protected](schema=bodhi.server.schemas.ListUserSchema, renderer=\"rss\",\n accept=('application/atom+xml',),\n error_handler=bodhi.server.services.errors.html_handler,\n validators=validators)\n@users_rss.get(schema=bodhi.server.schemas.ListUserSchema, renderer=\"rss\",\n error_handler=bodhi.server.services.errors.html_handler,\n validators=validators)\ndef query_users(request):\n \"\"\"\n Search for users by various criteria.\n\n Args:\n request (pyramid.request): The current web request.\n Returns:\n dict: A dictionary with the follow key mappings:\n users: A list of users matching the search criteria.\n page: The current page of results.\n pages: The total number of pages available.\n rows_per_page: The number of users on the page.\n total: The total number of users matching the search criteria.\n \"\"\"\n db = request.db\n data = request.validated\n query = db.query(User)\n\n like = data.get('like')\n if like is not None:\n query = query.filter(or_(*[\n User.name.like('%%%s%%' % like)\n ]))\n\n search = data.get('search')\n if search is not None:\n query = query.filter(User.name.ilike('%%%s%%' % search))\n\n name = data.get('name')\n if name is not None:\n query = query.filter(User.name.like(name))\n\n groups = data.get('groups')\n if groups is not None:\n query = query.join(User.groups)\n query = query.filter(or_(*[Group.id == grp.id for grp in groups]))\n\n updates = data.get('updates')\n if updates is not None:\n query = query.join(User.updates)\n args = [Update.alias == update.alias for update in updates]\n query = query.filter(or_(*args))\n\n # We can't use ``query.count()`` here because it is naive with respect to\n # all the joins that we're doing above.\n count_query = query.with_labels().statement\\\n .with_only_columns([func.count(distinct(User.id))])\\\n .order_by(None)\n total = request.db.execute(count_query).scalar()\n\n page = data.get('page')\n rows_per_page = data.get('rows_per_page')\n pages = int(math.ceil(total / float(rows_per_page)))\n query = query.offset(rows_per_page * (page - 1)).limit(rows_per_page)\n\n return dict(\n 
users=query.all(),\n page=page,\n pages=pages,\n rows_per_page=rows_per_page,\n total=total,\n )\n", "path": "bodhi/server/services/user.py"}]} | 2,616 | 129 |
gh_patches_debug_39224 | rasdani/github-patches | git_diff | spack__spack-12207 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spack broken on Blue Waters
On the current develop, no Spack command works on Blue Waters.
### Steps to reproduce the issue
Any Spack command:
```console
$ spack help
```
### Error Message
```
Traceback (most recent call last):
File "/u/sciteam/stewart1/spack/bin/spack", line 48, in <module>
sys.exit(spack.main.main())
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/main.py", line 704, in main
if spack.config.get('config:debug'):
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/config.py", line 627, in get
return config.get(path, default, scope)
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/llnl/util/lang.py", line 558, in __getattr__
return getattr(self.instance, name)
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/llnl/util/lang.py", line 554, in instance
self._instance = self.factory()
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/config.py", line 609, in _config
_add_platform_scope(cfg, ConfigScope, name, path)
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/config.py", line 562, in _add_platform_scope
platform = spack.architecture.platform().name
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/llnl/util/lang.py", line 184, in _memoized_function
func.cache[args] = func(*args)
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/architecture.py", line 388, in platform
return platform_cls()
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/platforms/cray.py", line 76, in __init__
back_distro = Cnl()
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/operating_systems/cnl.py", line 57, in __init__
version = self._detect_crayos_version()
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/operating_systems/cnl.py", line 66, in _detect_crayos_version
release_attrs = read_cle_release_file()
File "/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/operating_systems/cnl.py", line 37, in read_cle_release_file
with open(_cle_release_file) as release_file:
IOError: [Errno 2] No such file or directory: '/etc/opt/cray/release/cle-release'
```
### Information on your system
```console
$ cat /etc/*-release
Cluster Manager v6.1
slave
LSB_VERSION="core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64"
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3
$ uname -a
Linux h2ologin2 3.0.101-0.47.106.59-default #1 SMP Wed Jan 23 09:00:24 UTC 2019 (624897e) x86_64 x86_64 x86_64 GNU/Linux
```
</issue>
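A small illustrative sketch (not Spack code; both file paths come from the traceback and the accompanying patch): newer CLE releases describe the OS in `/etc/opt/cray/release/cle-release` as `KEY=value` lines, while older releases (the patch's example is `5.2.UP04`) only provide `/etc/opt/cray/release/clerelease` with a bare version string, so detection has to try both before giving up.

```python
import os

CLE_RELEASE = "/etc/opt/cray/release/cle-release"  # newer CLE: KEY=value lines
CLERELEASE = "/etc/opt/cray/release/clerelease"    # older CLE: bare version string


def detect_crayos_version():
    """Return the CLE release string, or None if neither file is present."""
    if os.path.isfile(CLE_RELEASE):
        with open(CLE_RELEASE) as f:
            for line in f:
                key, _, value = line.partition("=")
                if key.strip() == "RELEASE":
                    return value.strip()
    elif os.path.isfile(CLERELEASE):
        with open(CLERELEASE) as f:
            return f.readline().strip()
    return None


print(detect_crayos_version())
```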
<code>
[start of lib/spack/spack/operating_systems/cnl.py]
1 # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import re
7
8 import llnl.util.tty as tty
9
10 import spack.version
11 from spack.architecture import OperatingSystem
12 from spack.util.module_cmd import module
13
14 #: Location of the Cray CLE release file, which we look at to get the CNL
15 #: OS version.
16 _cle_release_file = '/etc/opt/cray/release/cle-release'
17
18
19 def read_cle_release_file():
20 """Read the CLE release file and return a dict with its attributes.
21
22 The release file looks something like this::
23
24 RELEASE=6.0.UP07
25 BUILD=6.0.7424
26 ...
27
28 The dictionary we produce looks like this::
29
30 {
31 "RELEASE": "6.0.UP07",
32 "BUILD": "6.0.7424",
33 ...
34 }
35
36 """
37 with open(_cle_release_file) as release_file:
38 result = {}
39 for line in release_file:
40 # use partition instead of split() to ensure we only split on
41 # the first '=' in the line.
42 key, _, value = line.partition('=')
43 result[key] = value.strip()
44 return result
45
46
47 class Cnl(OperatingSystem):
48 """ Compute Node Linux (CNL) is the operating system used for the Cray XC
49 series super computers. It is a very stripped down version of GNU/Linux.
50 Any compilers found through this operating system will be used with
51 modules. If updated, user must make sure that version and name are
52 updated to indicate that OS has been upgraded (or downgraded)
53 """
54
55 def __init__(self):
56 name = 'cnl'
57 version = self._detect_crayos_version()
58 super(Cnl, self).__init__(name, version)
59 self.modulecmd = module
60
61 def __str__(self):
62 return self.name + str(self.version)
63
64 @classmethod
65 def _detect_crayos_version(cls):
66 release_attrs = read_cle_release_file()
67 v = spack.version.Version(release_attrs['RELEASE'])
68 return v[0]
69
70 def arguments_to_detect_version_fn(self, paths):
71 import spack.compilers
72
73 command_arguments = []
74 for compiler_name in spack.compilers.supported_compilers():
75 cmp_cls = spack.compilers.class_for_compiler_name(compiler_name)
76
77 # If the compiler doesn't have a corresponding
78 # Programming Environment, skip to the next
79 if cmp_cls.PrgEnv is None:
80 continue
81
82 if cmp_cls.PrgEnv_compiler is None:
83 tty.die('Must supply PrgEnv_compiler with PrgEnv')
84
85 compiler_id = spack.compilers.CompilerID(self, compiler_name, None)
86 detect_version_args = spack.compilers.DetectVersionArgs(
87 id=compiler_id, variation=(None, None),
88 language='cc', path='cc'
89 )
90 command_arguments.append(detect_version_args)
91 return command_arguments
92
93 def detect_version(self, detect_version_args):
94 import spack.compilers
95 modulecmd = self.modulecmd
96 compiler_name = detect_version_args.id.compiler_name
97 compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
98 output = modulecmd('avail', compiler_cls.PrgEnv_compiler)
99 version_regex = r'(%s)/([\d\.]+[\d])' % compiler_cls.PrgEnv_compiler
100 matches = re.findall(version_regex, output)
101 version = tuple(version for _, version in matches)
102 compiler_id = detect_version_args.id
103 value = detect_version_args._replace(
104 id=compiler_id._replace(version=version)
105 )
106 return value, None
107
108 def make_compilers(self, compiler_id, paths):
109 import spack.spec
110 name = compiler_id.compiler_name
111 cmp_cls = spack.compilers.class_for_compiler_name(name)
112 compilers = []
113 for v in compiler_id.version:
114 comp = cmp_cls(
115 spack.spec.CompilerSpec(name + '@' + v),
116 self, "any",
117 ['cc', 'CC', 'ftn'], [cmp_cls.PrgEnv, name + '/' + v])
118
119 compilers.append(comp)
120 return compilers
121
[end of lib/spack/spack/operating_systems/cnl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/spack/spack/operating_systems/cnl.py b/lib/spack/spack/operating_systems/cnl.py
--- a/lib/spack/spack/operating_systems/cnl.py
+++ b/lib/spack/spack/operating_systems/cnl.py
@@ -3,22 +3,27 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+import os
import re
import llnl.util.tty as tty
+import spack.error
import spack.version
from spack.architecture import OperatingSystem
from spack.util.module_cmd import module
-#: Location of the Cray CLE release file, which we look at to get the CNL
-#: OS version.
+#: Possible locations of the Cray CLE release file,
+#: which we look at to get the CNL OS version.
_cle_release_file = '/etc/opt/cray/release/cle-release'
+_clerelease_file = '/etc/opt/cray/release/clerelease'
def read_cle_release_file():
"""Read the CLE release file and return a dict with its attributes.
+ This file is present on newer versions of Cray.
+
The release file looks something like this::
RELEASE=6.0.UP07
@@ -33,6 +38,8 @@
...
}
+ Returns:
+ dict: dictionary of release attributes
"""
with open(_cle_release_file) as release_file:
result = {}
@@ -44,8 +51,25 @@
return result
+def read_clerelease_file():
+ """Read the CLE release file and return the Cray OS version.
+
+ This file is present on older versions of Cray.
+
+ The release file looks something like this::
+
+ 5.2.UP04
+
+ Returns:
+ str: the Cray OS version
+ """
+ with open(_clerelease_file) as release_file:
+ for line in release_file:
+ return line.strip()
+
+
class Cnl(OperatingSystem):
- """ Compute Node Linux (CNL) is the operating system used for the Cray XC
+ """Compute Node Linux (CNL) is the operating system used for the Cray XC
series super computers. It is a very stripped down version of GNU/Linux.
Any compilers found through this operating system will be used with
modules. If updated, user must make sure that version and name are
@@ -63,9 +87,16 @@
@classmethod
def _detect_crayos_version(cls):
- release_attrs = read_cle_release_file()
- v = spack.version.Version(release_attrs['RELEASE'])
- return v[0]
+ if os.path.isfile(_cle_release_file):
+ release_attrs = read_cle_release_file()
+ v = spack.version.Version(release_attrs['RELEASE'])
+ return v[0]
+ elif os.path.isfile(_clerelease_file):
+ v = read_clerelease_file()
+ return spack.version.Version(v)[0]
+ else:
+ raise spack.error.UnsupportedPlatformError(
+ 'Unable to detect Cray OS version')
def arguments_to_detect_version_fn(self, paths):
import spack.compilers
| {"golden_diff": "diff --git a/lib/spack/spack/operating_systems/cnl.py b/lib/spack/spack/operating_systems/cnl.py\n--- a/lib/spack/spack/operating_systems/cnl.py\n+++ b/lib/spack/spack/operating_systems/cnl.py\n@@ -3,22 +3,27 @@\n #\n # SPDX-License-Identifier: (Apache-2.0 OR MIT)\n \n+import os\n import re\n \n import llnl.util.tty as tty\n \n+import spack.error\n import spack.version\n from spack.architecture import OperatingSystem\n from spack.util.module_cmd import module\n \n-#: Location of the Cray CLE release file, which we look at to get the CNL\n-#: OS version.\n+#: Possible locations of the Cray CLE release file,\n+#: which we look at to get the CNL OS version.\n _cle_release_file = '/etc/opt/cray/release/cle-release'\n+_clerelease_file = '/etc/opt/cray/release/clerelease'\n \n \n def read_cle_release_file():\n \"\"\"Read the CLE release file and return a dict with its attributes.\n \n+ This file is present on newer versions of Cray.\n+\n The release file looks something like this::\n \n RELEASE=6.0.UP07\n@@ -33,6 +38,8 @@\n ...\n }\n \n+ Returns:\n+ dict: dictionary of release attributes\n \"\"\"\n with open(_cle_release_file) as release_file:\n result = {}\n@@ -44,8 +51,25 @@\n return result\n \n \n+def read_clerelease_file():\n+ \"\"\"Read the CLE release file and return the Cray OS version.\n+\n+ This file is present on older versions of Cray.\n+\n+ The release file looks something like this::\n+\n+ 5.2.UP04\n+\n+ Returns:\n+ str: the Cray OS version\n+ \"\"\"\n+ with open(_clerelease_file) as release_file:\n+ for line in release_file:\n+ return line.strip()\n+\n+\n class Cnl(OperatingSystem):\n- \"\"\" Compute Node Linux (CNL) is the operating system used for the Cray XC\n+ \"\"\"Compute Node Linux (CNL) is the operating system used for the Cray XC\n series super computers. It is a very stripped down version of GNU/Linux.\n Any compilers found through this operating system will be used with\n modules. 
If updated, user must make sure that version and name are\n@@ -63,9 +87,16 @@\n \n @classmethod\n def _detect_crayos_version(cls):\n- release_attrs = read_cle_release_file()\n- v = spack.version.Version(release_attrs['RELEASE'])\n- return v[0]\n+ if os.path.isfile(_cle_release_file):\n+ release_attrs = read_cle_release_file()\n+ v = spack.version.Version(release_attrs['RELEASE'])\n+ return v[0]\n+ elif os.path.isfile(_clerelease_file):\n+ v = read_clerelease_file()\n+ return spack.version.Version(v)[0]\n+ else:\n+ raise spack.error.UnsupportedPlatformError(\n+ 'Unable to detect Cray OS version')\n \n def arguments_to_detect_version_fn(self, paths):\n import spack.compilers\n", "issue": "Spack broken on Blue Waters\nOn the current develop, no Spack command works on Blue Waters.\r\n\r\n### Steps to reproduce the issue\r\n\r\nAny Spack command:\r\n```console\r\n$ spack help\r\n```\r\n\r\n### Error Message\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/u/sciteam/stewart1/spack/bin/spack\", line 48, in <module>\r\n sys.exit(spack.main.main())\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/main.py\", line 704, in main\r\n if spack.config.get('config:debug'):\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/config.py\", line 627, in get\r\n return config.get(path, default, scope)\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/llnl/util/lang.py\", line 558, in __getattr__\r\n return getattr(self.instance, name)\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/llnl/util/lang.py\", line 554, in instance\r\n self._instance = self.factory()\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/config.py\", line 609, in _config\r\n _add_platform_scope(cfg, ConfigScope, name, path)\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/config.py\", line 562, in _add_platform_scope\r\n platform = spack.architecture.platform().name\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/llnl/util/lang.py\", line 184, in _memoized_function\r\n func.cache[args] = func(*args)\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/architecture.py\", line 388, in platform\r\n return platform_cls()\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/platforms/cray.py\", line 76, in __init__\r\n back_distro = Cnl()\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/operating_systems/cnl.py\", line 57, in __init__\r\n version = self._detect_crayos_version()\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/operating_systems/cnl.py\", line 66, in _detect_crayos_version\r\n release_attrs = read_cle_release_file()\r\n File \"/mnt/a/u/sciteam/stewart1/spack/lib/spack/spack/operating_systems/cnl.py\", line 37, in read_cle_release_file\r\n with open(_cle_release_file) as release_file:\r\nIOError: [Errno 2] No such file or directory: '/etc/opt/cray/release/cle-release'\r\n```\r\n\r\n### Information on your system\r\n\r\n```console\r\n$ cat /etc/*-release\r\nCluster Manager v6.1\r\nslave\r\nLSB_VERSION=\"core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64\"\r\nSUSE Linux Enterprise Server 11 (x86_64)\r\nVERSION = 11\r\nPATCHLEVEL = 3\r\n$ uname -a\r\nLinux h2ologin2 3.0.101-0.47.106.59-default #1 SMP Wed Jan 23 09:00:24 UTC 2019 (624897e) x86_64 x86_64 x86_64 GNU/Linux\r\n```\n", "before_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport re\n\nimport llnl.util.tty as tty\n\nimport spack.version\nfrom spack.architecture import OperatingSystem\nfrom spack.util.module_cmd import module\n\n#: Location of the Cray CLE release file, which we look at to get the CNL\n#: OS version.\n_cle_release_file = '/etc/opt/cray/release/cle-release'\n\n\ndef read_cle_release_file():\n \"\"\"Read the CLE release file and return a dict with its attributes.\n\n The release file looks something like this::\n\n RELEASE=6.0.UP07\n BUILD=6.0.7424\n ...\n\n The dictionary we produce looks like this::\n\n {\n \"RELEASE\": \"6.0.UP07\",\n \"BUILD\": \"6.0.7424\",\n ...\n }\n\n \"\"\"\n with open(_cle_release_file) as release_file:\n result = {}\n for line in release_file:\n # use partition instead of split() to ensure we only split on\n # the first '=' in the line.\n key, _, value = line.partition('=')\n result[key] = value.strip()\n return result\n\n\nclass Cnl(OperatingSystem):\n \"\"\" Compute Node Linux (CNL) is the operating system used for the Cray XC\n series super computers. It is a very stripped down version of GNU/Linux.\n Any compilers found through this operating system will be used with\n modules. If updated, user must make sure that version and name are\n updated to indicate that OS has been upgraded (or downgraded)\n \"\"\"\n\n def __init__(self):\n name = 'cnl'\n version = self._detect_crayos_version()\n super(Cnl, self).__init__(name, version)\n self.modulecmd = module\n\n def __str__(self):\n return self.name + str(self.version)\n\n @classmethod\n def _detect_crayos_version(cls):\n release_attrs = read_cle_release_file()\n v = spack.version.Version(release_attrs['RELEASE'])\n return v[0]\n\n def arguments_to_detect_version_fn(self, paths):\n import spack.compilers\n\n command_arguments = []\n for compiler_name in spack.compilers.supported_compilers():\n cmp_cls = spack.compilers.class_for_compiler_name(compiler_name)\n\n # If the compiler doesn't have a corresponding\n # Programming Environment, skip to the next\n if cmp_cls.PrgEnv is None:\n continue\n\n if cmp_cls.PrgEnv_compiler is None:\n tty.die('Must supply PrgEnv_compiler with PrgEnv')\n\n compiler_id = spack.compilers.CompilerID(self, compiler_name, None)\n detect_version_args = spack.compilers.DetectVersionArgs(\n id=compiler_id, variation=(None, None),\n language='cc', path='cc'\n )\n command_arguments.append(detect_version_args)\n return command_arguments\n\n def detect_version(self, detect_version_args):\n import spack.compilers\n modulecmd = self.modulecmd\n compiler_name = detect_version_args.id.compiler_name\n compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)\n output = modulecmd('avail', compiler_cls.PrgEnv_compiler)\n version_regex = r'(%s)/([\\d\\.]+[\\d])' % compiler_cls.PrgEnv_compiler\n matches = re.findall(version_regex, output)\n version = tuple(version for _, version in matches)\n compiler_id = detect_version_args.id\n value = detect_version_args._replace(\n id=compiler_id._replace(version=version)\n )\n return value, None\n\n def make_compilers(self, compiler_id, paths):\n import spack.spec\n name = compiler_id.compiler_name\n cmp_cls = spack.compilers.class_for_compiler_name(name)\n compilers = []\n for v in compiler_id.version:\n comp = cmp_cls(\n spack.spec.CompilerSpec(name + '@' + v),\n self, \"any\",\n ['cc', 'CC', 'ftn'], [cmp_cls.PrgEnv, name + '/' + v])\n\n compilers.append(comp)\n return compilers\n", "path": 
"lib/spack/spack/operating_systems/cnl.py"}]} | 2,615 | 732 |
gh_patches_debug_24151 | rasdani/github-patches | git_diff | gammapy__gammapy-4924 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove deprecated features
This is a reminder issue to remove the features deprecated since 1.1 before the next release
</issue>
<code>
[start of gammapy/utils/table.py]
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 """Table helper utilities."""
3 import numpy as np
4 from astropy.table import Table
5 from astropy.units import Quantity
6 from .deprecation import deprecated
7 from .units import standardise_unit
8
9 __all__ = [
10 "hstack_columns",
11 "table_from_row_data",
12 "table_row_to_dict",
13 "table_standardise_units_copy",
14 "table_standardise_units_inplace",
15 ]
16
17
18 def hstack_columns(table, table_other):
19 """Stack the column data horizontally.
20
21 Parameters
22 ----------
23 table : `~astropy.table.Table`
24 Input table.
25 table_other : `~astropy.table.Table`
26 Other input table.
27
28 Returns
29 -------
30 stacked : `~astropy.table.Table`
31 Stacked table.
32 """
33 stacked = Table()
34
35 for column in table.colnames:
36 data = np.hstack([table[column].data[0], table_other[column].data[0]])
37 stacked[column] = data[np.newaxis, :]
38 return stacked
39
40
41 def table_standardise_units_copy(table):
42 """Standardise units for all columns in a table in a copy.
43
44 Calls `~gammapy.utils.units.standardise_unit`.
45
46 Parameters
47 ----------
48 table : `~astropy.table.Table`
49 Input table (won't be modified).
50
51 Returns
52 -------
53 table : `~astropy.table.Table`
54 Copy of the input table with standardised column units.
55 """
56 # Note: we could add an `inplace` option (or variant of this function)
57 # See https://github.com/astropy/astropy/issues/6098
58 table = Table(table)
59 return table_standardise_units_inplace(table)
60
61
62 def table_standardise_units_inplace(table):
63 """Standardise units for all columns in a table in place."""
64 for column in table.columns.values():
65 if column.unit:
66 column.unit = standardise_unit(column.unit)
67
68 return table
69
70
71 def table_row_to_dict(row, make_quantity=True):
72 """Make one source data dictionary.
73
74 Parameters
75 ----------
76 row : `~astropy.table.Row`
77 Row.
78 make_quantity : bool, optional
79 Make quantity values for columns with units.
80 Default is True.
81
82 Returns
83 -------
84 data : dict
85 Row data.
86 """
87 data = {}
88 for name, col in row.columns.items():
89 val = row[name]
90
91 if make_quantity and col.unit:
92 val = Quantity(val, unit=col.unit)
93 data[name] = val
94 return data
95
96
97 @deprecated("v1.1", alternative="astropy.table.Table")
98 def table_from_row_data(rows, **kwargs):
99 """Helper function to create table objects from row data.
100
101 Works with quantities.
102
103 Parameters
104 ----------
105 rows : list
106 List of row data (each row a dictionary).
107 """
108 table = Table(**kwargs)
109
110 if len(rows) == 0:
111 return table
112
113 colnames = list(rows[0].keys())
114
115 for name in colnames:
116 coldata = [_[name] for _ in rows]
117 if isinstance(rows[0][name], Quantity):
118 coldata = Quantity(coldata, unit=rows[0][name].unit)
119 table[name] = coldata
120
121 return table
122
[end of gammapy/utils/table.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gammapy/utils/table.py b/gammapy/utils/table.py
--- a/gammapy/utils/table.py
+++ b/gammapy/utils/table.py
@@ -3,12 +3,10 @@
import numpy as np
from astropy.table import Table
from astropy.units import Quantity
-from .deprecation import deprecated
from .units import standardise_unit
__all__ = [
"hstack_columns",
- "table_from_row_data",
"table_row_to_dict",
"table_standardise_units_copy",
"table_standardise_units_inplace",
@@ -92,30 +90,3 @@
val = Quantity(val, unit=col.unit)
data[name] = val
return data
-
-
-@deprecated("v1.1", alternative="astropy.table.Table")
-def table_from_row_data(rows, **kwargs):
- """Helper function to create table objects from row data.
-
- Works with quantities.
-
- Parameters
- ----------
- rows : list
- List of row data (each row a dictionary).
- """
- table = Table(**kwargs)
-
- if len(rows) == 0:
- return table
-
- colnames = list(rows[0].keys())
-
- for name in colnames:
- coldata = [_[name] for _ in rows]
- if isinstance(rows[0][name], Quantity):
- coldata = Quantity(coldata, unit=rows[0][name].unit)
- table[name] = coldata
-
- return table
| {"golden_diff": "diff --git a/gammapy/utils/table.py b/gammapy/utils/table.py\n--- a/gammapy/utils/table.py\n+++ b/gammapy/utils/table.py\n@@ -3,12 +3,10 @@\n import numpy as np\n from astropy.table import Table\n from astropy.units import Quantity\n-from .deprecation import deprecated\n from .units import standardise_unit\n \n __all__ = [\n \"hstack_columns\",\n- \"table_from_row_data\",\n \"table_row_to_dict\",\n \"table_standardise_units_copy\",\n \"table_standardise_units_inplace\",\n@@ -92,30 +90,3 @@\n val = Quantity(val, unit=col.unit)\n data[name] = val\n return data\n-\n-\n-@deprecated(\"v1.1\", alternative=\"astropy.table.Table\")\n-def table_from_row_data(rows, **kwargs):\n- \"\"\"Helper function to create table objects from row data.\n-\n- Works with quantities.\n-\n- Parameters\n- ----------\n- rows : list\n- List of row data (each row a dictionary).\n- \"\"\"\n- table = Table(**kwargs)\n-\n- if len(rows) == 0:\n- return table\n-\n- colnames = list(rows[0].keys())\n-\n- for name in colnames:\n- coldata = [_[name] for _ in rows]\n- if isinstance(rows[0][name], Quantity):\n- coldata = Quantity(coldata, unit=rows[0][name].unit)\n- table[name] = coldata\n-\n- return table\n", "issue": "Remove deprecated features\nThis is a reminder issue to remove the features deprecated since 1.1 before the next release\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Table helper utilities.\"\"\"\nimport numpy as np\nfrom astropy.table import Table\nfrom astropy.units import Quantity\nfrom .deprecation import deprecated\nfrom .units import standardise_unit\n\n__all__ = [\n \"hstack_columns\",\n \"table_from_row_data\",\n \"table_row_to_dict\",\n \"table_standardise_units_copy\",\n \"table_standardise_units_inplace\",\n]\n\n\ndef hstack_columns(table, table_other):\n \"\"\"Stack the column data horizontally.\n\n Parameters\n ----------\n table : `~astropy.table.Table`\n Input table.\n table_other : `~astropy.table.Table`\n Other input table.\n\n Returns\n -------\n stacked : `~astropy.table.Table`\n Stacked table.\n \"\"\"\n stacked = Table()\n\n for column in table.colnames:\n data = np.hstack([table[column].data[0], table_other[column].data[0]])\n stacked[column] = data[np.newaxis, :]\n return stacked\n\n\ndef table_standardise_units_copy(table):\n \"\"\"Standardise units for all columns in a table in a copy.\n\n Calls `~gammapy.utils.units.standardise_unit`.\n\n Parameters\n ----------\n table : `~astropy.table.Table`\n Input table (won't be modified).\n\n Returns\n -------\n table : `~astropy.table.Table`\n Copy of the input table with standardised column units.\n \"\"\"\n # Note: we could add an `inplace` option (or variant of this function)\n # See https://github.com/astropy/astropy/issues/6098\n table = Table(table)\n return table_standardise_units_inplace(table)\n\n\ndef table_standardise_units_inplace(table):\n \"\"\"Standardise units for all columns in a table in place.\"\"\"\n for column in table.columns.values():\n if column.unit:\n column.unit = standardise_unit(column.unit)\n\n return table\n\n\ndef table_row_to_dict(row, make_quantity=True):\n \"\"\"Make one source data dictionary.\n\n Parameters\n ----------\n row : `~astropy.table.Row`\n Row.\n make_quantity : bool, optional\n Make quantity values for columns with units.\n Default is True.\n\n Returns\n -------\n data : dict\n Row data.\n \"\"\"\n data = {}\n for name, col in row.columns.items():\n val = row[name]\n\n if make_quantity and col.unit:\n val = Quantity(val, 
unit=col.unit)\n data[name] = val\n return data\n\n\n@deprecated(\"v1.1\", alternative=\"astropy.table.Table\")\ndef table_from_row_data(rows, **kwargs):\n \"\"\"Helper function to create table objects from row data.\n\n Works with quantities.\n\n Parameters\n ----------\n rows : list\n List of row data (each row a dictionary).\n \"\"\"\n table = Table(**kwargs)\n\n if len(rows) == 0:\n return table\n\n colnames = list(rows[0].keys())\n\n for name in colnames:\n coldata = [_[name] for _ in rows]\n if isinstance(rows[0][name], Quantity):\n coldata = Quantity(coldata, unit=rows[0][name].unit)\n table[name] = coldata\n\n return table\n", "path": "gammapy/utils/table.py"}]} | 1,544 | 347 |
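A side note on the gammapy change above: the deprecated `table_from_row_data` helper could be dropped because `astropy.table.Table` already accepts a list of row dictionaries, which is exactly what the deprecation message points to as the alternative. A minimal sketch of that replacement (the column names and values below are invented for illustration):

```python
from astropy.table import Table

# Each row is a plain dict; astropy infers the column names from the keys,
# which is what the removed helper used to do by hand.
rows = [
    {"name": "src-1", "flux": 1.2},
    {"name": "src-2", "flux": 3.4},
]

table = Table(rows=rows)
print(table)
```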
gh_patches_debug_476 | rasdani/github-patches | git_diff | rlworkgroup__garage-2133 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unpin cloudpickle instead of pinning it to 1.3
Currently, #1879 pins cloudpickle to 1.3 because tensorflow-probability 0.11 does so. When tfp unpins cloudpickle, we should unpin it too.
</issue>
<code>
[start of setup.py]
1 """setuptools based setup module."""
2 import os
3
4 from setuptools import find_packages, setup
5
6 GARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'
7 GYM_VERSION = '0.17.2'
8
9 # Required dependencies
10 REQUIRED = [
11 # Please keep alphabetized
12 'akro',
13 'click>=2.0',
14 'cloudpickle==1.3',
15 'cma==2.7.0',
16 'dowel==0.0.3',
17 'numpy>=1.14.5',
18 'psutil',
19 'python-dateutil',
20 'ray',
21 'scikit-image',
22 'scipy',
23 'setproctitle>=1.0',
24 'tensorflow>=1.14',
25 'tensorflow-probability>=0.11.0',
26 'torch>=1.0.0,!=1.5.0',
27 'torchvision>=0.2.1',
28 ]
29
30 # Dependencies for optional features
31 EXTRAS = {}
32
33 EXTRAS['gym'] = [
34 f'gym[atari,box2d,classic_control]=={GYM_VERSION}',
35 ]
36
37 EXTRAS['mujoco'] = [
38 'mujoco-py>=2.0,<=2.0.2.8',
39 f'gym[all]=={GYM_VERSION}',
40 ]
41
42 EXTRAS['dm_control'] = [
43 # dm_control throws an error during install about not being able to
44 # find a build dependency (absl-py). Later pip executes the `install`
45 # command again and the install succeeds because absl-py has been
46 # installed. This is stupid, but harmless.
47 'dm_control',
48 ]
49
50 EXTRAS['bullet'] = ['mpi4py', 'pybullet>=2.8.7']
51
52 EXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))
53
54 # Development dependencies (*not* included in 'all')
55 EXTRAS['dev'] = [
56 # Please keep alphabetized
57 'flake8',
58 'flake8-docstrings>=1.5.0',
59 'flake8-import-order',
60 f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/0875192baaa91c43523708f55866d98eaf3facaf', # noqa: E501
61 'isort>=4.3.21,<5.0.0',
62 'pep8-naming==0.7.0',
63 'pre-commit',
64 'pycodestyle>=2.5.0',
65 'pydocstyle>=4.0.0',
66 'pylint>=2.5.3',
67 'pytest>=4.5.0', # Required for strict-markers
68 'pytest-cov',
69 'pytest-rerunfailures',
70 'pytest-timeout',
71 'pytest-xdist',
72 'recommonmark',
73 'sphinx',
74 'sphinx-autoapi>=1.4.0',
75 'sphinx_rtd_theme',
76 'sphinxcontrib-bibtex',
77 'yapf==0.30.0',
78 ] # yapf: disable
79
80 with open('README.md') as f:
81 README = f.read()
82
83 # Get the package version dynamically
84 with open('VERSION') as v:
85 VERSION = v.read().strip()
86
87 setup(
88 name='garage',
89 version=VERSION,
90 author='Reinforcement Learning Working Group',
91 description='A toolkit for reproducible reinforcement learning research',
92 url='https://github.com/rlworkgroup/garage',
93 packages=find_packages(where='src'),
94 package_dir={'': 'src'},
95 scripts=['scripts/garage'],
96 python_requires='>=3.6',
97 install_requires=REQUIRED,
98 extras_require=EXTRAS,
99 license='MIT',
100 long_description=README,
101 long_description_content_type='text/markdown',
102 classifiers=[
103 'Development Status :: 4 - Beta',
104 'Intended Audience :: Developers',
105 'Intended Audience :: Education',
106 'Intended Audience :: Science/Research',
107 'License :: OSI Approved :: MIT License',
108 'Programming Language :: Python :: 3.6',
109 'Programming Language :: Python :: 3.7',
110 'Programming Language :: Python :: 3 :: Only',
111 'Topic :: Scientific/Engineering :: Artificial Intelligence',
112 'Topic :: Scientific/Engineering :: Mathematics',
113 'Topic :: Software Development :: Libraries',
114 ],
115 )
116
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,7 +11,7 @@
# Please keep alphabetized
'akro',
'click>=2.0',
- 'cloudpickle==1.3',
+ 'cloudpickle',
'cma==2.7.0',
'dowel==0.0.3',
'numpy>=1.14.5',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,7 +11,7 @@\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n- 'cloudpickle==1.3',\n+ 'cloudpickle',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n 'numpy>=1.14.5',\n", "issue": "Unpin cloudpickle instead of pinning it to 1.3\nCurrently, #1879 pins cloudpickle to 1.3 because tensorflow-probability 0.11 does so. When tfp unpins cloudpickle, we should unpin it too.\n", "before_files": [{"content": "\"\"\"setuptools based setup module.\"\"\"\nimport os\n\nfrom setuptools import find_packages, setup\n\nGARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'\nGYM_VERSION = '0.17.2'\n\n# Required dependencies\nREQUIRED = [\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n 'cloudpickle==1.3',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n 'numpy>=1.14.5',\n 'psutil',\n 'python-dateutil',\n 'ray',\n 'scikit-image',\n 'scipy',\n 'setproctitle>=1.0',\n 'tensorflow>=1.14',\n 'tensorflow-probability>=0.11.0',\n 'torch>=1.0.0,!=1.5.0',\n 'torchvision>=0.2.1',\n]\n\n# Dependencies for optional features\nEXTRAS = {}\n\nEXTRAS['gym'] = [\n f'gym[atari,box2d,classic_control]=={GYM_VERSION}',\n]\n\nEXTRAS['mujoco'] = [\n 'mujoco-py>=2.0,<=2.0.2.8',\n f'gym[all]=={GYM_VERSION}',\n]\n\nEXTRAS['dm_control'] = [\n # dm_control throws an error during install about not being able to\n # find a build dependency (absl-py). Later pip executes the `install`\n # command again and the install succeeds because absl-py has been\n # installed. This is stupid, but harmless.\n 'dm_control',\n]\n\nEXTRAS['bullet'] = ['mpi4py', 'pybullet>=2.8.7']\n\nEXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))\n\n# Development dependencies (*not* included in 'all')\nEXTRAS['dev'] = [\n # Please keep alphabetized\n 'flake8',\n 'flake8-docstrings>=1.5.0',\n 'flake8-import-order',\n f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/0875192baaa91c43523708f55866d98eaf3facaf', # noqa: E501\n 'isort>=4.3.21,<5.0.0',\n 'pep8-naming==0.7.0',\n 'pre-commit',\n 'pycodestyle>=2.5.0',\n 'pydocstyle>=4.0.0',\n 'pylint>=2.5.3',\n 'pytest>=4.5.0', # Required for strict-markers\n 'pytest-cov',\n 'pytest-rerunfailures',\n 'pytest-timeout',\n 'pytest-xdist',\n 'recommonmark',\n 'sphinx',\n 'sphinx-autoapi>=1.4.0',\n 'sphinx_rtd_theme',\n 'sphinxcontrib-bibtex',\n 'yapf==0.30.0',\n] # yapf: disable\n\nwith open('README.md') as f:\n README = f.read()\n\n# Get the package version dynamically\nwith open('VERSION') as v:\n VERSION = v.read().strip()\n\nsetup(\n name='garage',\n version=VERSION,\n author='Reinforcement Learning Working Group',\n description='A toolkit for reproducible reinforcement learning research',\n url='https://github.com/rlworkgroup/garage',\n packages=find_packages(where='src'),\n package_dir={'': 'src'},\n scripts=['scripts/garage'],\n python_requires='>=3.6',\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n license='MIT',\n long_description=README,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries',\n 
],\n)\n", "path": "setup.py"}]} | 1,828 | 101 |
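For the garage change above, the entire fix is the requirement specifier string in `setup.py`. The snippet below only contrasts the spellings involved; the lower-bound variant is a hypothetical middle ground, not something the patch introduces:

```python
# Exact pin: blocks every cloudpickle release other than 1.3.
required_pinned = ['cloudpickle==1.3']

# What the patch switches to: any released version is acceptable.
required_unpinned = ['cloudpickle']

# Hypothetical compromise if a known-bad lower bound ever needs to be excluded.
required_floor = ['cloudpickle>=1.3']
```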
gh_patches_debug_59565 | rasdani/github-patches | git_diff | saulpw__visidata-509 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[html saver] Saving typed columns as html (int/vlen/bool) causes exception
I tried to copy (yank) a couple of rows from the frequency sheet and it produced the following error. I believe this is because the HTML saver expects string values. A similar error also occurs in other sheets whose columns hold other non-string Python types (e.g. bool).
FrequencySheet error
```
Traceback (most recent call last):
File "/Documents/pyv/py3/lib/python3.7/site-packages/visidata/threads.py", line 201, in _toplevelTryFunc
t.status = func(*args, **kwargs)
File "/Documents/pyv/py3/lib/python3.7/site-packages/visidata/loaders/html.py", line 124, in save_html
fp.write(html.escape(val))
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/html/__init__.py", line 19, in escape
    s = s.replace("&", "&amp;") # Must be done first!
AttributeError: 'vlen' object has no attribute 'replace'
```
Sheet with a bool column error:
```
Traceback (most recent call last):
File "/Documents/pyv/py3/lib/python3.7/site-packages/visidata/threads.py", line 201, in _toplevelTryFunc
t.status = func(*args, **kwargs)
File "/Documents/pyv/py3/lib/python3.7/site-packages/visidata/loaders/html.py", line 124, in save_html
fp.write(html.escape(val))
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/html/__init__.py", line 19, in escape
    s = s.replace("&", "&amp;") # Must be done first!
AttributeError: 'bool' object has no attribute 'replace'
```
</issue>
<code>
[start of visidata/loaders/html.py]
1 import html
2 from visidata import *
3
4
5 class HtmlTablesSheet(IndexSheet):
6 rowtype = 'sheets' # rowdef: HtmlTableSheet (sheet.html = lxml.html.HtmlElement)
7 columns = IndexSheet.columns + [
8 Column('tag', width=0, getter=lambda col,row: row.html.tag),
9 Column('id', getter=lambda col,row: row.html.attrib.get('id')),
10 Column('classes', getter=lambda col,row: row.html.attrib.get('class')),
11 ]
12 def iterload(self):
13 import lxml.html
14 from lxml import etree
15 utf8_parser = etree.HTMLParser(encoding='utf-8')
16 with self.source.open_text() as fp:
17 html = lxml.html.etree.parse(fp, parser=utf8_parser)
18 self.setKeys([self.column('name')])
19 self.column('keys').hide()
20 self.column('source').hide()
21
22 for i, e in enumerate(html.iter('table')):
23 if e.tag == 'table':
24 vs = HtmlTableSheet(e.attrib.get("id", "table_" + str(i)), source=e)
25 vs.reload()
26 vs.html = e
27 yield vs
28
29
30 def is_header(elem):
31 scope = elem.attrib.get('scope', '')
32
33 if elem.tag == 'th':
34 if not scope or scope == 'col':
35 return True
36
37 return False
38
39 class HtmlTableSheet(Sheet):
40 rowtype = 'rows' # list of strings
41 columns = []
42
43 def iterload(self):
44 headers = []
45
46 maxlinks = {} # [colnum] -> nlinks:int
47
48 for rownum, r in enumerate(self.source.iter('tr')):
49 row = []
50
51 colnum = 0
52 # get starting column, which might be different if there were rowspan>1 already
53 if rownum < len(headers):
54 while colnum < len(headers[rownum]):
55 if headers[rownum][colnum] is None:
56 break
57 colnum += 1
58
59 for cell in r.getchildren():
60 colspan = int(cell.attrib.get('colspan', 1))
61 rowspan = int(cell.attrib.get('rowspan', 1))
62 cellval = ' '.join(x.strip() for x in cell.itertext()) # text only without markup
63 links = [x.get('href') for x in cell.iter('a')]
64 maxlinks[colnum] = max(maxlinks.get(colnum, 0), len(links))
65
66 if is_header(cell):
67 for k in range(rownum, rownum+rowspan):
68 while k >= len(headers): # extend headers list with lists for all header rows
69 headers.append([])
70
71 for j in range(colnum, colnum+colspan):
72 while j >= len(headers[k]):
73 headers[k].append(None)
74 headers[k][j] = cellval
75 cellval = '' # use empty non-None value for subsequent rows in the rowspan
76 else:
77 while colnum >= len(row):
78 row.append(None)
79 row[colnum] = (cellval, links)
80
81 colnum += colspan
82
83 if any(row):
84 yield row
85
86 self.columns = []
87 if headers:
88 it = itertools.zip_longest(*headers, fillvalue='')
89 else:
90 it = [list(x) for x in self.rows[0]]
91 self.rows = self.rows[1:]
92
93 for colnum, names in enumerate(it):
94 name = '_'.join(str(x) for x in names if x)
95 self.addColumn(Column(name, getter=lambda c,r,i=colnum: r[i][0]))
96 for linknum in range(maxlinks.get(colnum, 0)):
97 self.addColumn(Column(name+'_link'+str(linknum), width=20, getter=lambda c,r,i=colnum,j=linknum: r[i][1][j]))
98
99
100 @VisiData.api
101 def save_html(vd, p, *vsheets):
102 'Save vsheets as HTML tables in a single file'
103
104 with open(p, 'w', encoding='ascii', errors='xmlcharrefreplace') as fp:
105 for sheet in vsheets:
106
107 fp.write('<h2 class="sheetname">%s</h2>\n'.format(sheetname=html.escape(sheet.name)))
108
109 fp.write('<table id="{sheetname}">\n'.format(sheetname=html.escape(sheet.name)))
110
111 # headers
112 fp.write('<tr>')
113 for col in sheet.visibleCols:
114 contents = html.escape(col.name)
115 fp.write('<th>{colname}</th>'.format(colname=contents))
116 fp.write('</tr>\n')
117
118 # rows
119 with Progress(gerund='saving'):
120 for typedvals in sheet.iterdispvals(format=False):
121 fp.write('<tr>')
122 for col, val in typedvals.items():
123 fp.write('<td>')
124 fp.write(html.escape(val))
125 fp.write('</td>')
126 fp.write('</tr>\n')
127
128 fp.write('</table>')
129 vd.status('%s save finished' % p)
130
131
132 VisiData.save_htm = VisiData.save_html
133
134
135 vd.filetype('html', HtmlTablesSheet)
136 vd.filetype('htm', HtmlTablesSheet)
137
[end of visidata/loaders/html.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/visidata/loaders/html.py b/visidata/loaders/html.py
--- a/visidata/loaders/html.py
+++ b/visidata/loaders/html.py
@@ -121,7 +121,7 @@
fp.write('<tr>')
for col, val in typedvals.items():
fp.write('<td>')
- fp.write(html.escape(val))
+ fp.write(html.escape(str(val)))
fp.write('</td>')
fp.write('</tr>\n')
| {"golden_diff": "diff --git a/visidata/loaders/html.py b/visidata/loaders/html.py\n--- a/visidata/loaders/html.py\n+++ b/visidata/loaders/html.py\n@@ -121,7 +121,7 @@\n fp.write('<tr>')\n for col, val in typedvals.items():\n fp.write('<td>')\n- fp.write(html.escape(val))\n+ fp.write(html.escape(str(val)))\n fp.write('</td>')\n fp.write('</tr>\\n')\n", "issue": "[html saver] Saving typed columns as html (int/vlen/bool) causes exception\nI tried to copy (yank) a couple of rows from the frequency sheet and it provided me the following error. I believe this is due to the html parser expecting strings? A similar error also occurs in other sheets when using unexpected py types (e.g. bool).\r\n\r\nFrequencySheet error\r\n```\r\nTraceback (most recent call last):\r\n File \"/Documents/pyv/py3/lib/python3.7/site-packages/visidata/threads.py\", line 201, in _toplevelTryFunc\r\n t.status = func(*args, **kwargs)\r\n File \"/Documents/pyv/py3/lib/python3.7/site-packages/visidata/loaders/html.py\", line 124, in save_html\r\n fp.write(html.escape(val))\r\n File \"/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/html/__init__.py\", line 19, in escape\r\n s = s.replace(\"&\", \"&\") # Must be done first!\r\nAttributeError: 'vlen' object has no attribute 'replace'\r\n```\r\n\r\nSheet with a bool column error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/Documents/pyv/py3/lib/python3.7/site-packages/visidata/threads.py\", line 201, in _toplevelTryFunc\r\n t.status = func(*args, **kwargs)\r\n File \"/Documents/pyv/py3/lib/python3.7/site-packages/visidata/loaders/html.py\", line 124, in save_html\r\n fp.write(html.escape(val))\r\n File \"/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/html/__init__.py\", line 19, in escape\r\n s = s.replace(\"&\", \"&\") # Must be done first!\r\nAttributeError: 'bool' object has no attribute 'replace'\r\n```\n", "before_files": [{"content": "import html\nfrom visidata import *\n\n\nclass HtmlTablesSheet(IndexSheet):\n rowtype = 'sheets' # rowdef: HtmlTableSheet (sheet.html = lxml.html.HtmlElement)\n columns = IndexSheet.columns + [\n Column('tag', width=0, getter=lambda col,row: row.html.tag),\n Column('id', getter=lambda col,row: row.html.attrib.get('id')),\n Column('classes', getter=lambda col,row: row.html.attrib.get('class')),\n ]\n def iterload(self):\n import lxml.html\n from lxml import etree\n utf8_parser = etree.HTMLParser(encoding='utf-8')\n with self.source.open_text() as fp:\n html = lxml.html.etree.parse(fp, parser=utf8_parser)\n self.setKeys([self.column('name')])\n self.column('keys').hide()\n self.column('source').hide()\n\n for i, e in enumerate(html.iter('table')):\n if e.tag == 'table':\n vs = HtmlTableSheet(e.attrib.get(\"id\", \"table_\" + str(i)), source=e)\n vs.reload()\n vs.html = e\n yield vs\n\n\ndef is_header(elem):\n scope = elem.attrib.get('scope', '')\n\n if elem.tag == 'th':\n if not scope or scope == 'col':\n return True\n\n return False\n\nclass HtmlTableSheet(Sheet):\n rowtype = 'rows' # list of strings\n columns = []\n\n def iterload(self):\n headers = []\n\n maxlinks = {} # [colnum] -> nlinks:int\n\n for rownum, r in enumerate(self.source.iter('tr')):\n row = []\n\n colnum = 0\n # get starting column, which might be different if there were rowspan>1 already\n if rownum < len(headers):\n while colnum < len(headers[rownum]):\n if headers[rownum][colnum] is None:\n break\n colnum += 1\n\n for cell in r.getchildren():\n colspan = 
int(cell.attrib.get('colspan', 1))\n rowspan = int(cell.attrib.get('rowspan', 1))\n cellval = ' '.join(x.strip() for x in cell.itertext()) # text only without markup\n links = [x.get('href') for x in cell.iter('a')]\n maxlinks[colnum] = max(maxlinks.get(colnum, 0), len(links))\n\n if is_header(cell):\n for k in range(rownum, rownum+rowspan):\n while k >= len(headers): # extend headers list with lists for all header rows\n headers.append([])\n\n for j in range(colnum, colnum+colspan):\n while j >= len(headers[k]):\n headers[k].append(None)\n headers[k][j] = cellval\n cellval = '' # use empty non-None value for subsequent rows in the rowspan\n else:\n while colnum >= len(row):\n row.append(None)\n row[colnum] = (cellval, links)\n\n colnum += colspan\n\n if any(row):\n yield row\n\n self.columns = []\n if headers:\n it = itertools.zip_longest(*headers, fillvalue='')\n else:\n it = [list(x) for x in self.rows[0]]\n self.rows = self.rows[1:]\n\n for colnum, names in enumerate(it):\n name = '_'.join(str(x) for x in names if x)\n self.addColumn(Column(name, getter=lambda c,r,i=colnum: r[i][0]))\n for linknum in range(maxlinks.get(colnum, 0)):\n self.addColumn(Column(name+'_link'+str(linknum), width=20, getter=lambda c,r,i=colnum,j=linknum: r[i][1][j]))\n\n\[email protected]\ndef save_html(vd, p, *vsheets):\n 'Save vsheets as HTML tables in a single file'\n\n with open(p, 'w', encoding='ascii', errors='xmlcharrefreplace') as fp:\n for sheet in vsheets:\n\n fp.write('<h2 class=\"sheetname\">%s</h2>\\n'.format(sheetname=html.escape(sheet.name)))\n\n fp.write('<table id=\"{sheetname}\">\\n'.format(sheetname=html.escape(sheet.name)))\n\n # headers\n fp.write('<tr>')\n for col in sheet.visibleCols:\n contents = html.escape(col.name)\n fp.write('<th>{colname}</th>'.format(colname=contents))\n fp.write('</tr>\\n')\n\n # rows\n with Progress(gerund='saving'):\n for typedvals in sheet.iterdispvals(format=False):\n fp.write('<tr>')\n for col, val in typedvals.items():\n fp.write('<td>')\n fp.write(html.escape(val))\n fp.write('</td>')\n fp.write('</tr>\\n')\n\n fp.write('</table>')\n vd.status('%s save finished' % p)\n\n\nVisiData.save_htm = VisiData.save_html\n\n\nvd.filetype('html', HtmlTablesSheet)\nvd.filetype('htm', HtmlTablesSheet)\n", "path": "visidata/loaders/html.py"}]} | 2,389 | 107 |
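The one-character visidata fix above works because `html.escape` only operates on `str`: a `bool`, an `int`, or visidata's `vlen` wrapper has no `.replace` method, which is exactly what the tracebacks in the issue show. A small illustration outside visidata, assuming nothing beyond the standard library:

```python
import html

values = [True, 42, "a & b"]

# html.escape(True) would raise AttributeError ('bool' object has no attribute
# 'replace'), so every cell value is stringified first, as the patch does.
cells = ["<td>{}</td>".format(html.escape(str(v))) for v in values]
print("".join(cells))  # <td>True</td><td>42</td><td>a &amp; b</td>
```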
gh_patches_debug_26723 | rasdani/github-patches | git_diff | OpenCTI-Platform__connectors-51 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CVE] Download link to variable
## Description
Make the CVE download link a configurable variable; otherwise the tool can hardly be used offline. When working offline, we can host the CVE feeds at a URL other than "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-".
## Reproducible Steps
https://github.com/OpenCTI-Platform/connectors/blame/9d47ffdad1c2a7fbdd709565d5c3f670693b148f/cve/src/cve.py#L103
## Expected Output
The URL exposed as a variable in the .yml configuration.
## Actual Output
A hard-coded link: "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-".
</issue>
<code>
[start of cve/src/cve.py]
1 # coding: utf-8
2
3 import os
4 import yaml
5 import time
6 import urllib.request
7 import gzip
8 import shutil
9
10 from datetime import datetime
11 from pycti import OpenCTIConnectorHelper, get_config_variable
12 from cvetostix2 import convert
13
14
15 class Cve:
16 def __init__(self):
17 # Instantiate the connector helper from config
18 config_file_path = os.path.dirname(os.path.abspath(__file__)) + "/config.yml"
19 config = (
20 yaml.load(open(config_file_path), Loader=yaml.FullLoader)
21 if os.path.isfile(config_file_path)
22 else {}
23 )
24 self.helper = OpenCTIConnectorHelper(config)
25 # Extra config
26 self.cve_import_history = get_config_variable(
27 "CVE_IMPORT_HISTORY", ["cve", "import_history"], config, False
28 )
29 self.cve_nvd_data_feed = get_config_variable(
30 "CVE_NVD_DATA_FEED", ["cve", "nvd_data_feed"], config
31 )
32 self.cve_interval = get_config_variable(
33 "CVE_INTERVAL", ["cve", "interval"], config, True
34 )
35 self.update_existing_data = get_config_variable(
36 "CONNECTOR_UPDATE_EXISTING_DATA",
37 ["connector", "update_existing_data"],
38 config,
39 )
40
41 def get_interval(self):
42 return int(self.cve_interval) * 60 * 60 * 24
43
44 def convert_and_send(self, url):
45 try:
46 # Downloading json.gz file
47 self.helper.log_info("Requesting the file " + url)
48 urllib.request.urlretrieve(
49 self.cve_nvd_data_feed,
50 os.path.dirname(os.path.abspath(__file__)) + "/data.json.gz",
51 )
52 # Unzipping the file
53 self.helper.log_info("Unzipping the file")
54 with gzip.open("data.json.gz", "rb") as f_in:
55 with open("data.json", "wb") as f_out:
56 shutil.copyfileobj(f_in, f_out)
57 # Converting the file to stix2
58 self.helper.log_info("Converting the file")
59 convert("data.json", "data-stix2.json")
60 with open("data-stix2.json") as stix_json:
61 contents = stix_json.read()
62 self.helper.send_stix2_bundle(
63 contents, self.helper.connect_scope, self.update_existing_data
64 )
65 # Remove files
66 os.remove("data.json")
67 os.remove("data.json.gz")
68 os.remove("data-stix2.json")
69 except Exception as e:
70 self.helper.log_error(str(e))
71 time.sleep(60)
72
73 def run(self):
74 self.helper.log_info("Fetching CVE knowledge...")
75 while True:
76 try:
77 # Get the current timestamp and check
78 timestamp = int(time.time())
79 current_state = self.helper.get_state()
80 if current_state is not None and "last_run" in current_state:
81 last_run = current_state["last_run"]
82 self.helper.log_info(
83 "Connector last run: "
84 + datetime.utcfromtimestamp(last_run).strftime(
85 "%Y-%m-%d %H:%M:%S"
86 )
87 )
88 else:
89 last_run = None
90 self.helper.log_info("Connector has never run")
91 # If the last_run is more than interval-1 day
92 if last_run is None or (
93 (timestamp - last_run)
94 > ((int(self.cve_interval) - 1) * 60 * 60 * 24)
95 ):
96 self.convert_and_send(self.cve_nvd_data_feed)
97 # If import history and never run
98 if last_run is None and self.cve_import_history:
99 now = datetime.now()
100 years = list(range(2002, now.year))
101 for year in years:
102 self.convert_and_send(
103 "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-"
104 + str(year)
105 + ".json.gz"
106 )
107
108 # Store the current timestamp as a last run
109 self.helper.log_info(
110 "Connector successfully run, storing last_run as "
111 + str(timestamp)
112 )
113 self.helper.set_state({"last_run": timestamp})
114 self.helper.log_info(
115 "Last_run stored, next run in: "
116 + str(round(self.get_interval() / 60 / 60 / 24, 2))
117 + " days"
118 )
119 time.sleep(60)
120 else:
121 new_interval = self.get_interval() - (timestamp - last_run)
122 self.helper.log_info(
123 "Connector will not run, next run in: "
124 + str(round(new_interval / 60 / 60 / 24, 2))
125 + " days"
126 )
127 time.sleep(60)
128 except (KeyboardInterrupt, SystemExit):
129 self.helper.log_info("Connector stop")
130 exit(0)
131 except Exception as e:
132 self.helper.log_error(str(e))
133 time.sleep(60)
134
135
136 if __name__ == "__main__":
137 try:
138 cveConnector = Cve()
139 cveConnector.run()
140 except Exception as e:
141 print(e)
142 time.sleep(10)
143 exit(0)
144
[end of cve/src/cve.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cve/src/cve.py b/cve/src/cve.py
--- a/cve/src/cve.py
+++ b/cve/src/cve.py
@@ -29,6 +29,9 @@
self.cve_nvd_data_feed = get_config_variable(
"CVE_NVD_DATA_FEED", ["cve", "nvd_data_feed"], config
)
+ self.cve_history_data_feed = get_config_variable(
+ "CVE_HISTORY_DATA_FEED", ["cve", "history_data_feed"], config
+ )
self.cve_interval = get_config_variable(
"CVE_INTERVAL", ["cve", "interval"], config, True
)
@@ -97,12 +100,10 @@
# If import history and never run
if last_run is None and self.cve_import_history:
now = datetime.now()
- years = list(range(2002, now.year))
+ years = list(range(2002, now.year+1))
for year in years:
self.convert_and_send(
- "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-"
- + str(year)
- + ".json.gz"
+ f"{self.cve_history_data_feed}nvdcve-1.1-{year}.json.gz"
)
# Store the current timestamp as a last run
| {"golden_diff": "diff --git a/cve/src/cve.py b/cve/src/cve.py\n--- a/cve/src/cve.py\n+++ b/cve/src/cve.py\n@@ -29,6 +29,9 @@\n self.cve_nvd_data_feed = get_config_variable(\n \"CVE_NVD_DATA_FEED\", [\"cve\", \"nvd_data_feed\"], config\n )\n+ self.cve_history_data_feed = get_config_variable(\n+ \"CVE_HISTORY_DATA_FEED\", [\"cve\", \"history_data_feed\"], config\n+ )\n self.cve_interval = get_config_variable(\n \"CVE_INTERVAL\", [\"cve\", \"interval\"], config, True\n )\n@@ -97,12 +100,10 @@\n # If import history and never run\n if last_run is None and self.cve_import_history:\n now = datetime.now()\n- years = list(range(2002, now.year))\n+ years = list(range(2002, now.year+1))\n for year in years:\n self.convert_and_send(\n- \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\n- + str(year)\n- + \".json.gz\"\n+ f\"{self.cve_history_data_feed}nvdcve-1.1-{year}.json.gz\"\n )\n \n # Store the current timestamp as a last run\n", "issue": "[CVE] Download link to variable\n## Description\r\n\r\nSet the download CVE link to variable, because otherwise the tool can hardly be used offline. Offline we can host the CVEs on a link that is not : \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\r\n\r\n## Reproducible Steps\r\n\r\nhttps://github.com/OpenCTI-Platform/connectors/blame/9d47ffdad1c2a7fbdd709565d5c3f670693b148f/cve/src/cve.py#L103\r\n\r\n## Expected Output\r\n\r\nUrl as a variable in the .yml\r\n\r\n## Actual Output\r\n\r\nPermanent link : \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\r\n\n", "before_files": [{"content": "# coding: utf-8\n\nimport os\nimport yaml\nimport time\nimport urllib.request\nimport gzip\nimport shutil\n\nfrom datetime import datetime\nfrom pycti import OpenCTIConnectorHelper, get_config_variable\nfrom cvetostix2 import convert\n\n\nclass Cve:\n def __init__(self):\n # Instantiate the connector helper from config\n config_file_path = os.path.dirname(os.path.abspath(__file__)) + \"/config.yml\"\n config = (\n yaml.load(open(config_file_path), Loader=yaml.FullLoader)\n if os.path.isfile(config_file_path)\n else {}\n )\n self.helper = OpenCTIConnectorHelper(config)\n # Extra config\n self.cve_import_history = get_config_variable(\n \"CVE_IMPORT_HISTORY\", [\"cve\", \"import_history\"], config, False\n )\n self.cve_nvd_data_feed = get_config_variable(\n \"CVE_NVD_DATA_FEED\", [\"cve\", \"nvd_data_feed\"], config\n )\n self.cve_interval = get_config_variable(\n \"CVE_INTERVAL\", [\"cve\", \"interval\"], config, True\n )\n self.update_existing_data = get_config_variable(\n \"CONNECTOR_UPDATE_EXISTING_DATA\",\n [\"connector\", \"update_existing_data\"],\n config,\n )\n\n def get_interval(self):\n return int(self.cve_interval) * 60 * 60 * 24\n\n def convert_and_send(self, url):\n try:\n # Downloading json.gz file\n self.helper.log_info(\"Requesting the file \" + url)\n urllib.request.urlretrieve(\n self.cve_nvd_data_feed,\n os.path.dirname(os.path.abspath(__file__)) + \"/data.json.gz\",\n )\n # Unzipping the file\n self.helper.log_info(\"Unzipping the file\")\n with gzip.open(\"data.json.gz\", \"rb\") as f_in:\n with open(\"data.json\", \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n # Converting the file to stix2\n self.helper.log_info(\"Converting the file\")\n convert(\"data.json\", \"data-stix2.json\")\n with open(\"data-stix2.json\") as stix_json:\n contents = stix_json.read()\n self.helper.send_stix2_bundle(\n contents, self.helper.connect_scope, self.update_existing_data\n )\n # Remove files\n os.remove(\"data.json\")\n 
os.remove(\"data.json.gz\")\n os.remove(\"data-stix2.json\")\n except Exception as e:\n self.helper.log_error(str(e))\n time.sleep(60)\n\n def run(self):\n self.helper.log_info(\"Fetching CVE knowledge...\")\n while True:\n try:\n # Get the current timestamp and check\n timestamp = int(time.time())\n current_state = self.helper.get_state()\n if current_state is not None and \"last_run\" in current_state:\n last_run = current_state[\"last_run\"]\n self.helper.log_info(\n \"Connector last run: \"\n + datetime.utcfromtimestamp(last_run).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n )\n else:\n last_run = None\n self.helper.log_info(\"Connector has never run\")\n # If the last_run is more than interval-1 day\n if last_run is None or (\n (timestamp - last_run)\n > ((int(self.cve_interval) - 1) * 60 * 60 * 24)\n ):\n self.convert_and_send(self.cve_nvd_data_feed)\n # If import history and never run\n if last_run is None and self.cve_import_history:\n now = datetime.now()\n years = list(range(2002, now.year))\n for year in years:\n self.convert_and_send(\n \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\n + str(year)\n + \".json.gz\"\n )\n\n # Store the current timestamp as a last run\n self.helper.log_info(\n \"Connector successfully run, storing last_run as \"\n + str(timestamp)\n )\n self.helper.set_state({\"last_run\": timestamp})\n self.helper.log_info(\n \"Last_run stored, next run in: \"\n + str(round(self.get_interval() / 60 / 60 / 24, 2))\n + \" days\"\n )\n time.sleep(60)\n else:\n new_interval = self.get_interval() - (timestamp - last_run)\n self.helper.log_info(\n \"Connector will not run, next run in: \"\n + str(round(new_interval / 60 / 60 / 24, 2))\n + \" days\"\n )\n time.sleep(60)\n except (KeyboardInterrupt, SystemExit):\n self.helper.log_info(\"Connector stop\")\n exit(0)\n except Exception as e:\n self.helper.log_error(str(e))\n time.sleep(60)\n\n\nif __name__ == \"__main__\":\n try:\n cveConnector = Cve()\n cveConnector.run()\n except Exception as e:\n print(e)\n time.sleep(10)\n exit(0)\n", "path": "cve/src/cve.py"}]} | 2,179 | 313 |
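The OpenCTI patch above makes the NVD base URL a second configuration value (`CVE_HISTORY_DATA_FEED` in the environment, `cve.history_data_feed` in the YAML) and also widens the year range so the current year is fetched. Below is a rough sketch of the resulting URL construction; the YAML layout in the comment mirrors the diff, and the base URL shown is just the public NVD feed, not a value taken from the repository:

```python
from datetime import datetime

# Would normally come from config.yml / the environment, e.g.
#   cve:
#     history_data_feed: 'https://nvd.nist.gov/feeds/json/cve/1.1/'
history_data_feed = "https://nvd.nist.gov/feeds/json/cve/1.1/"

now = datetime.now()
urls = [
    f"{history_data_feed}nvdcve-1.1-{year}.json.gz"
    for year in range(2002, now.year + 1)  # +1 so the current year is included
]
print(urls[0], urls[-1])
```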
gh_patches_debug_2261 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: Subscripted generics cannot be used with class and instance checks under python 3.9.0b1
#### Problem Description
Running mitmproxy 5.1.1 under python 3.9.0b1 fails with `TypeError: Subscripted generics cannot be used with class and instance checks`. The test suite fails as well with hundreds of ERROR and FAILED tests.
#### Steps to reproduce the behavior:
1. install mitmproxy 5.1.1 on Fedora rawhide
2. mitmproxy
3. pytest -v
The test run ends with:
```
=================== 303 failed, 994 passed, 2 xfailed, 115 warnings, 182 errors in 72.86s (0:01:12) ====================
```
Most of them throw a `TypeError: Subscripted generics cannot be used with class and instance checks` and have a stack trace similar to:
```
___________________________________ ERROR at setup of TestHTTPS.test_clientcert_dir ____________________________________
cls = <class 'test.mitmproxy.proxy.test_server.TestHTTPS'>
@classmethod
def setup_class(cls):
cls.server = pathod.test.Daemon(
ssl=cls.ssl,
ssloptions=cls.ssloptions)
cls.server2 = pathod.test.Daemon(
ssl=cls.ssl,
ssloptions=cls.ssloptions)
> cls.options = cls.get_options()
test/mitmproxy/tservers.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test/mitmproxy/tservers.py:179: in get_options
return options.Options(
mitmproxy/options.py:50: in __init__
self.add_option(
mitmproxy/optmanager.py:109: in add_option
self._options[name] = _Option(name, typespec, default, help, choices)
mitmproxy/optmanager.py:34: in __init__
typecheck.check_option_type(name, default, typespec)
mitmproxy/utils/typecheck.py:73: in check_option_type
elif not isinstance(value, typeinfo):
/usr/lib64/python3.9/typing.py:649: in __instancecheck__
return self.__subclasscheck__(type(obj))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = typing.Optional[str], cls = <class 'NoneType'>
def __subclasscheck__(self, cls):
> raise TypeError("Subscripted generics cannot be used with"
" class and instance checks")
E TypeError: Subscripted generics cannot be used with class and instance checks
/usr/lib64/python3.9/typing.py:652: TypeError
```
#### System Information
```
Traceback (most recent call last):
File "/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/bin/./mitmproxy", line 11, in <module>
load_entry_point('mitmproxy==5.1.1', 'console_scripts', 'mitmproxy')()
File "/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/tools/_main.py", line 147, in mitmproxy
run(console.master.ConsoleMaster, cmdline.mitmproxy, args)
File "/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/tools/_main.py", line 71, in run
opts = options.Options()
File "/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/options.py", line 50, in __init__
self.add_option(
File "/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/optmanager.py", line 109, in add_option
self._options[name] = _Option(name, typespec, default, help, choices)
File "/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/optmanager.py", line 34, in __init__
typecheck.check_option_type(name, default, typespec)
File "/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/utils/typecheck.py", line 73, in check_option_type
elif not isinstance(value, typeinfo):
File "/usr/lib64/python3.9/typing.py", line 649, in __instancecheck__
return self.__subclasscheck__(type(obj))
File "/usr/lib64/python3.9/typing.py", line 652, in __subclasscheck__
raise TypeError("Subscripted generics cannot be used with"
TypeError: Subscripted generics cannot be used with class and instance checks
```
</issue>
<code>
[start of mitmproxy/utils/typecheck.py]
1 import typing
2
3 Type = typing.Union[
4 typing.Any # anything more elaborate really fails with mypy at the moment.
5 ]
6
7
8 def sequence_type(typeinfo: typing.Type[typing.List]) -> Type:
9 """Return the type of a sequence, e.g. typing.List"""
10 return typeinfo.__args__[0] # type: ignore
11
12
13 def tuple_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:
14 """Return the types of a typing.Tuple"""
15 return typeinfo.__args__ # type: ignore
16
17
18 def union_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:
19 """return the types of a typing.Union"""
20 return typeinfo.__args__ # type: ignore
21
22
23 def mapping_types(typeinfo: typing.Type[typing.Mapping]) -> typing.Tuple[Type, Type]:
24 """return the types of a mapping, e.g. typing.Dict"""
25 return typeinfo.__args__ # type: ignore
26
27
28 def check_option_type(name: str, value: typing.Any, typeinfo: Type) -> None:
29 """
30 Check if the provided value is an instance of typeinfo and raises a
31 TypeError otherwise. This function supports only those types required for
32 options.
33 """
34 e = TypeError("Expected {} for {}, but got {}.".format(
35 typeinfo,
36 name,
37 type(value)
38 ))
39
40 typename = str(typeinfo)
41
42 if typename.startswith("typing.Union"):
43 for T in union_types(typeinfo):
44 try:
45 check_option_type(name, value, T)
46 except TypeError:
47 pass
48 else:
49 return
50 raise e
51 elif typename.startswith("typing.Tuple"):
52 types = tuple_types(typeinfo)
53 if not isinstance(value, (tuple, list)):
54 raise e
55 if len(types) != len(value):
56 raise e
57 for i, (x, T) in enumerate(zip(value, types)):
58 check_option_type("{}[{}]".format(name, i), x, T)
59 return
60 elif typename.startswith("typing.Sequence"):
61 T = sequence_type(typeinfo)
62 if not isinstance(value, (tuple, list)):
63 raise e
64 for v in value:
65 check_option_type(name, v, T)
66 elif typename.startswith("typing.IO"):
67 if hasattr(value, "read"):
68 return
69 else:
70 raise e
71 elif typename.startswith("typing.Any"):
72 return
73 elif not isinstance(value, typeinfo):
74 if typeinfo is float and isinstance(value, int):
75 return
76 raise e
77
78
79 def typespec_to_str(typespec: typing.Any) -> str:
80 if typespec in (str, int, bool):
81 t = typespec.__name__
82 elif typespec == typing.Optional[str]:
83 t = 'optional str'
84 elif typespec == typing.Sequence[str]:
85 t = 'sequence of str'
86 elif typespec == typing.Optional[int]:
87 t = 'optional int'
88 else:
89 raise NotImplementedError
90 return t
91
[end of mitmproxy/utils/typecheck.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/utils/typecheck.py b/mitmproxy/utils/typecheck.py
--- a/mitmproxy/utils/typecheck.py
+++ b/mitmproxy/utils/typecheck.py
@@ -39,7 +39,7 @@
typename = str(typeinfo)
- if typename.startswith("typing.Union"):
+ if typename.startswith("typing.Union") or typename.startswith("typing.Optional"):
for T in union_types(typeinfo):
try:
check_option_type(name, value, T)
| {"golden_diff": "diff --git a/mitmproxy/utils/typecheck.py b/mitmproxy/utils/typecheck.py\n--- a/mitmproxy/utils/typecheck.py\n+++ b/mitmproxy/utils/typecheck.py\n@@ -39,7 +39,7 @@\n \n typename = str(typeinfo)\n \n- if typename.startswith(\"typing.Union\"):\n+ if typename.startswith(\"typing.Union\") or typename.startswith(\"typing.Optional\"):\n for T in union_types(typeinfo):\n try:\n check_option_type(name, value, T)\n", "issue": "TypeError: Subscripted generics cannot be used with class and instance checks under python 3.9.0b1\n#### Problem Description\r\nRunning mitmproxy 5.1.1 under python 3.9.0b1 fails with `TypeError: Subscripted generics cannot be used with class and instance checks`. The test suite fails as well with hundreds of ERROR and FAILED tests.\r\n\r\n#### Steps to reproduce the behavior:\r\n1. install mitmproxy 5.1.1 on Fedora rawhide\r\n2. mitmproxy\r\n3. pytest -v\r\n\r\nThere are:\r\n```\r\n=================== 303 failed, 994 passed, 2 xfailed, 115 warnings, 182 errors in 72.86s (0:01:12) ====================\r\n```\r\nMost of them throw a `TypeError: Subscripted generics cannot be used with class and instance checks` and have a stack trace similar to:\r\n```\r\n___________________________________ ERROR at setup of TestHTTPS.test_clientcert_dir ____________________________________\r\n\r\ncls = <class 'test.mitmproxy.proxy.test_server.TestHTTPS'>\r\n\r\n @classmethod\r\n def setup_class(cls):\r\n cls.server = pathod.test.Daemon(\r\n ssl=cls.ssl,\r\n ssloptions=cls.ssloptions)\r\n cls.server2 = pathod.test.Daemon(\r\n ssl=cls.ssl,\r\n ssloptions=cls.ssloptions)\r\n \r\n> cls.options = cls.get_options()\r\n\r\ntest/mitmproxy/tservers.py:146: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntest/mitmproxy/tservers.py:179: in get_options\r\n return options.Options(\r\nmitmproxy/options.py:50: in __init__\r\n self.add_option(\r\nmitmproxy/optmanager.py:109: in add_option\r\n self._options[name] = _Option(name, typespec, default, help, choices)\r\nmitmproxy/optmanager.py:34: in __init__\r\n typecheck.check_option_type(name, default, typespec)\r\nmitmproxy/utils/typecheck.py:73: in check_option_type\r\n elif not isinstance(value, typeinfo):\r\n/usr/lib64/python3.9/typing.py:649: in __instancecheck__\r\n return self.__subclasscheck__(type(obj))\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = typing.Optional[str], cls = <class 'NoneType'>\r\n\r\n def __subclasscheck__(self, cls):\r\n> raise TypeError(\"Subscripted generics cannot be used with\"\r\n \" class and instance checks\")\r\nE TypeError: Subscripted generics cannot be used with class and instance checks\r\n\r\n/usr/lib64/python3.9/typing.py:652: TypeError\r\n```\r\n\r\n#### System Information\r\n```\r\nTraceback (most recent call last):\r\n File \"/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/bin/./mitmproxy\", line 11, in <module>\r\n load_entry_point('mitmproxy==5.1.1', 'console_scripts', 'mitmproxy')()\r\n File \"/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/tools/_main.py\", line 147, in mitmproxy\r\n run(console.master.ConsoleMaster, cmdline.mitmproxy, args)\r\n File \"/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/tools/_main.py\", line 71, in run\r\n opts = options.Options()\r\n File 
\"/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/options.py\", line 50, in __init__\r\n self.add_option(\r\n File \"/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/optmanager.py\", line 109, in add_option\r\n self._options[name] = _Option(name, typespec, default, help, choices)\r\n File \"/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/optmanager.py\", line 34, in __init__\r\n typecheck.check_option_type(name, default, typespec)\r\n File \"/builddir/build/BUILDROOT/mitmproxy-5.1.1-1.fc33.x86_64/usr/lib/python3.9/site-packages/mitmproxy/utils/typecheck.py\", line 73, in check_option_type\r\n elif not isinstance(value, typeinfo):\r\n File \"/usr/lib64/python3.9/typing.py\", line 649, in __instancecheck__\r\n return self.__subclasscheck__(type(obj))\r\n File \"/usr/lib64/python3.9/typing.py\", line 652, in __subclasscheck__\r\n raise TypeError(\"Subscripted generics cannot be used with\"\r\nTypeError: Subscripted generics cannot be used with class and instance checks\r\n```\n", "before_files": [{"content": "import typing\n\nType = typing.Union[\n typing.Any # anything more elaborate really fails with mypy at the moment.\n]\n\n\ndef sequence_type(typeinfo: typing.Type[typing.List]) -> Type:\n \"\"\"Return the type of a sequence, e.g. typing.List\"\"\"\n return typeinfo.__args__[0] # type: ignore\n\n\ndef tuple_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:\n \"\"\"Return the types of a typing.Tuple\"\"\"\n return typeinfo.__args__ # type: ignore\n\n\ndef union_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:\n \"\"\"return the types of a typing.Union\"\"\"\n return typeinfo.__args__ # type: ignore\n\n\ndef mapping_types(typeinfo: typing.Type[typing.Mapping]) -> typing.Tuple[Type, Type]:\n \"\"\"return the types of a mapping, e.g. typing.Dict\"\"\"\n return typeinfo.__args__ # type: ignore\n\n\ndef check_option_type(name: str, value: typing.Any, typeinfo: Type) -> None:\n \"\"\"\n Check if the provided value is an instance of typeinfo and raises a\n TypeError otherwise. 
This function supports only those types required for\n options.\n \"\"\"\n e = TypeError(\"Expected {} for {}, but got {}.\".format(\n typeinfo,\n name,\n type(value)\n ))\n\n typename = str(typeinfo)\n\n if typename.startswith(\"typing.Union\"):\n for T in union_types(typeinfo):\n try:\n check_option_type(name, value, T)\n except TypeError:\n pass\n else:\n return\n raise e\n elif typename.startswith(\"typing.Tuple\"):\n types = tuple_types(typeinfo)\n if not isinstance(value, (tuple, list)):\n raise e\n if len(types) != len(value):\n raise e\n for i, (x, T) in enumerate(zip(value, types)):\n check_option_type(\"{}[{}]\".format(name, i), x, T)\n return\n elif typename.startswith(\"typing.Sequence\"):\n T = sequence_type(typeinfo)\n if not isinstance(value, (tuple, list)):\n raise e\n for v in value:\n check_option_type(name, v, T)\n elif typename.startswith(\"typing.IO\"):\n if hasattr(value, \"read\"):\n return\n else:\n raise e\n elif typename.startswith(\"typing.Any\"):\n return\n elif not isinstance(value, typeinfo):\n if typeinfo is float and isinstance(value, int):\n return\n raise e\n\n\ndef typespec_to_str(typespec: typing.Any) -> str:\n if typespec in (str, int, bool):\n t = typespec.__name__\n elif typespec == typing.Optional[str]:\n t = 'optional str'\n elif typespec == typing.Sequence[str]:\n t = 'sequence of str'\n elif typespec == typing.Optional[int]:\n t = 'optional int'\n else:\n raise NotImplementedError\n return t\n", "path": "mitmproxy/utils/typecheck.py"}]} | 2,628 | 110 |
gh_patches_debug_16552 | rasdani/github-patches | git_diff | Kinto__kinto-1814 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid account_create_principals key
The `account:create` check_permission code is looking at `account_account:create_principals` settings key rather than `account_create_principals`
</issue>
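To make the key-construction bug concrete before the source listing, here is a minimal sketch. It is not Kinto source code; the helper name is invented, and the corrected branch simply mirrors the fix shown in the golden diff further down in this row (reuse the permission string itself when it already starts with the resource name).

```python
# Illustrative sketch (hypothetical helper, not Kinto code): deriving the
# settings key from a resource name and a permission such as "account:create".
def setting_key(resource_name: str, permission: str) -> str:
    if resource_name and permission.startswith(resource_name):
        # "account:create" -> "account_create_principals"
        return "{}_principals".format(permission.replace(":", "_"))
    # e.g. ("account", "write") -> "account_write_principals"
    return "{}_{}_principals".format(resource_name, permission)


# The naive format() call alone would have produced the broken key
# "account_account:create_principals" for the first case.
assert setting_key("account", "account:create") == "account_create_principals"
assert setting_key("account", "write") == "account_write_principals"
```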
<code>
[start of kinto/core/authorization.py]
1 import functools
2 import logging
3
4 from pyramid.settings import aslist
5 from pyramid.security import IAuthorizationPolicy, Authenticated
6 from zope.interface import implementer
7
8 from kinto.core import utils
9 from kinto.core.storage import exceptions as storage_exceptions
10
11
12 logger = logging.getLogger(__name__)
13
14 # A permission is called "dynamic" when it's computed at request time.
15 DYNAMIC = "dynamic"
16
17 # When permission is set to "private", only the current user is allowed.
18 PRIVATE = "private"
19
20
21 def groupfinder(userid, request):
22 """Fetch principals from permission backend for the specified `userid`.
23
24 This is plugged by default using the ``multiauth.groupfinder`` setting.
25 """
26 backend = getattr(request.registry, "permission", None)
27 # Permission backend not configured. Ignore.
28 if not backend:
29 return []
30
31 # Safety check when Kinto-Core is used without pyramid_multiauth.
32 if request.prefixed_userid:
33 userid = request.prefixed_userid
34
35 # Query the permission backend only once per request (e.g. batch).
36 reify_key = userid + "_principals"
37 if reify_key not in request.bound_data:
38 principals = backend.get_user_principals(userid)
39 request.bound_data[reify_key] = principals
40
41 return request.bound_data[reify_key]
42
43
44 @implementer(IAuthorizationPolicy)
45 class AuthorizationPolicy:
46 """Default authorization class, that leverages the permission backend
47 for shareable resources.
48 """
49
50 get_bound_permissions = None
51 """Callable that takes an object id and a permission and returns
52 a list of tuples (<object id>, <permission>). Useful when objects
53 permission depend on others."""
54
55 def permits(self, context, principals, permission):
56 if permission == PRIVATE:
57 return Authenticated in principals
58
59 principals = context.get_prefixed_principals()
60
61 if permission == DYNAMIC:
62 permission = context.required_permission
63
64 create_permission = "{}:create".format(context.resource_name)
65 if permission == "create":
66 permission = create_permission
67
68 object_id = context.permission_object_id
69 bound_perms = self._get_bound_permissions(object_id, permission)
70
71 allowed = context.check_permission(principals, bound_perms)
72
73 # Here we consider that parent URI is one path level above.
74 parent_uri = "/".join(object_id.split("/")[:-1]) if object_id else None
75
76 # If not allowed to delete/patch, and target object is missing, and
77 # allowed to read the parent, then view is permitted (will raise 404
78 # later anyway). See Kinto/kinto#918
79 is_record_unknown = not context.on_collection and context.current_record is None
80 if context.required_permission == "write" and is_record_unknown:
81 bound_perms = self._get_bound_permissions(parent_uri, "read")
82 allowed = context.check_permission(principals, bound_perms)
83
84 # If not allowed on this collection, but some records are shared with
85 # the current user, then authorize.
86 # The ShareableResource class will take care of the filtering.
87 is_list_operation = context.on_collection and not permission.endswith("create")
88 if not allowed and is_list_operation:
89 allowed = bool(
90 context.fetch_shared_records(permission, principals, self.get_bound_permissions)
91 )
92 if not allowed:
93 # If allowed to create this kind of object on parent,
94 # then allow to obtain the list.
95 if len(bound_perms) > 0:
96 bound_perms = [(parent_uri, create_permission)]
97 else:
98 bound_perms = [("", "create")] # Root object.
99 allowed = context.check_permission(principals, bound_perms)
100
101 if not allowed:
102 logger.warn(
103 "Permission %r on %r not granted to %r.",
104 permission,
105 object_id,
106 principals[0],
107 extra=dict(userid=principals[0], uri=object_id, perm=permission),
108 )
109
110 return allowed
111
112 def _get_bound_permissions(self, object_id, permission):
113 if self.get_bound_permissions is None:
114 return [(object_id, permission)]
115 return self.get_bound_permissions(object_id, permission)
116
117 def principals_allowed_by_permission(self, context, permission):
118 raise NotImplementedError() # PRAGMA NOCOVER
119
120
121 class RouteFactory:
122 resource_name = None
123 on_collection = False
124 required_permission = None
125 permission_object_id = None
126 current_record = None
127 shared_ids = None
128
129 method_permissions = {
130 "head": "read",
131 "get": "read",
132 "post": "create",
133 "delete": "write",
134 "patch": "write",
135 }
136
137 def __init__(self, request):
138 # Store some shortcuts.
139 permission = request.registry.permission
140 self._check_permission = permission.check_permission
141 self._get_accessible_objects = permission.get_accessible_objects
142
143 self.get_prefixed_principals = functools.partial(utils.prefixed_principals, request)
144
145 # Store current resource and required permission.
146 service = utils.current_service(request)
147 is_on_resource = (
148 service is not None and hasattr(service, "viewset") and hasattr(service, "resource")
149 )
150 if is_on_resource:
151 self.resource_name = request.current_resource_name
152 self.on_collection = getattr(service, "type", None) == "collection"
153
154 # Try to fetch the target object. Its existence will affect permissions checking.
155 if not self.on_collection and request.method.lower() in ("put", "delete", "patch"):
156 resource = service.resource(request=request, context=self)
157 try:
158 # Save a reference, to avoid refetching from storage in resource.
159 self.current_record = resource.model.get_record(resource.record_id)
160 except storage_exceptions.RecordNotFoundError:
161 pass
162
163 self.permission_object_id, self.required_permission = self._find_required_permission(
164 request, service
165 )
166
167 # To obtain shared records on a collection endpoint, use a match:
168 self._object_id_match = self.get_permission_object_id(request, "*")
169
170 self._settings = request.registry.settings
171
172 def check_permission(self, principals, bound_perms):
173 """Read allowed principals from settings, if not any, query the permission
174 backend to check if view is allowed.
175 """
176 if not bound_perms:
177 bound_perms = [(self.resource_name, self.required_permission)]
178 for (_, permission) in bound_perms:
179 setting = "{}_{}_principals".format(self.resource_name, permission)
180 allowed_principals = aslist(self._settings.get(setting, ""))
181 if allowed_principals:
182 if bool(set(allowed_principals) & set(principals)):
183 return True
184 return self._check_permission(principals, bound_perms)
185
186 def fetch_shared_records(self, perm, principals, get_bound_permissions):
187 """Fetch records that are readable or writable for the current
188 principals.
189
190 See :meth:`kinto.core.authorization.AuthorizationPolicy.permits`
191
192 If no record is shared, it returns None.
193
194 .. warning::
195 This sets the ``shared_ids`` attribute to the context with the
196 return value. The attribute is then read by
197 :class:`kinto.core.resource.ShareableResource`
198 """
199 if get_bound_permissions:
200 bound_perms = get_bound_permissions(self._object_id_match, perm)
201 else:
202 bound_perms = [(self._object_id_match, perm)]
203 by_obj_id = self._get_accessible_objects(principals, bound_perms, with_children=False)
204 ids = by_obj_id.keys()
205 # Store for later use in ``ShareableResource``.
206 self.shared_ids = [self._extract_object_id(id_) for id_ in ids]
207 return self.shared_ids
208
209 def get_permission_object_id(self, request, object_id=None):
210 """Returns the permission object id for the current request.
211 In the nominal case, it is just the current URI without version prefix.
212 For collections, it is the related record URI using the specified
213 `object_id`.
214
215 See :meth:`kinto.core.resource.model.SharableModel` and
216 :meth:`kinto.core.authorization.RouteFactory.__init__`
217 """
218 object_uri = utils.strip_uri_prefix(request.path)
219
220 if self.on_collection and object_id is not None:
221 # With the current request on a collection, the record URI must
222 # be found out by inspecting the collection service and its sibling
223 # record service.
224 matchdict = {**request.matchdict, "id": object_id}
225 try:
226 object_uri = utils.instance_uri(request, self.resource_name, **matchdict)
227 object_uri = object_uri.replace("%2A", "*")
228 except KeyError:
229 # Maybe the resource has no single record endpoint.
230 # We consider that object URIs in permissions backend will
231 # be stored naively:
232 object_uri = "{}/{}".format(object_uri, object_id)
233
234 return object_uri
235
236 def _extract_object_id(self, object_uri):
237 # XXX: Rewrite using kinto.core.utils.view_lookup() and matchdict['id']
238 return object_uri.split("/")[-1]
239
240 def _find_required_permission(self, request, service):
241 """Find out what is the permission object id and the required
242 permission.
243
244 .. note::
245 This method saves an attribute ``self.current_record`` used
246 in :class:`kinto.core.resource.UserResource`.
247 """
248 # By default, it's a URI a and permission associated to the method.
249 permission_object_id = self.get_permission_object_id(request)
250 method = request.method.lower()
251 required_permission = self.method_permissions.get(method)
252
253 # For create permission, the object id is the plural endpoint.
254 collection_path = str(service.collection_path)
255 collection_path = collection_path.format_map(request.matchdict)
256
257 # In the case of a "PUT", check if the targetted record already
258 # exists, return "write" if it does, "create" otherwise.
259 if request.method.lower() == "put":
260 if self.current_record is None:
261 # The record does not exist, the permission to create on
262 # the related collection is required.
263 permission_object_id = collection_path
264 required_permission = "create"
265 else:
266 # For safe creations, the user needs a create permission.
267 # See Kinto/kinto#792
268 if request.headers.get("If-None-Match") == "*":
269 permission_object_id = collection_path
270 required_permission = "create"
271 else:
272 required_permission = "write"
273
274 return (permission_object_id, required_permission)
275
[end of kinto/core/authorization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/authorization.py b/kinto/core/authorization.py
--- a/kinto/core/authorization.py
+++ b/kinto/core/authorization.py
@@ -176,7 +176,11 @@
if not bound_perms:
bound_perms = [(self.resource_name, self.required_permission)]
for (_, permission) in bound_perms:
- setting = "{}_{}_principals".format(self.resource_name, permission)
+ # With Kinto inheritance tree, we can have: `permission = "record:create"`
+ if self.resource_name and permission.startswith(self.resource_name):
+ setting = "{}_principals".format(permission.replace(":", "_"))
+ else:
+ setting = "{}_{}_principals".format(self.resource_name, permission)
allowed_principals = aslist(self._settings.get(setting, ""))
if allowed_principals:
if bool(set(allowed_principals) & set(principals)):
| {"golden_diff": "diff --git a/kinto/core/authorization.py b/kinto/core/authorization.py\n--- a/kinto/core/authorization.py\n+++ b/kinto/core/authorization.py\n@@ -176,7 +176,11 @@\n if not bound_perms:\n bound_perms = [(self.resource_name, self.required_permission)]\n for (_, permission) in bound_perms:\n- setting = \"{}_{}_principals\".format(self.resource_name, permission)\n+ # With Kinto inheritance tree, we can have: `permission = \"record:create\"`\n+ if self.resource_name and permission.startswith(self.resource_name):\n+ setting = \"{}_principals\".format(permission.replace(\":\", \"_\"))\n+ else:\n+ setting = \"{}_{}_principals\".format(self.resource_name, permission)\n allowed_principals = aslist(self._settings.get(setting, \"\"))\n if allowed_principals:\n if bool(set(allowed_principals) & set(principals)):\n", "issue": "Invalid account_create_principals key\nThe `account:create` check_permission code is looking at `account_account:create_principals` settings key rather than `account_create_principals`\n", "before_files": [{"content": "import functools\nimport logging\n\nfrom pyramid.settings import aslist\nfrom pyramid.security import IAuthorizationPolicy, Authenticated\nfrom zope.interface import implementer\n\nfrom kinto.core import utils\nfrom kinto.core.storage import exceptions as storage_exceptions\n\n\nlogger = logging.getLogger(__name__)\n\n# A permission is called \"dynamic\" when it's computed at request time.\nDYNAMIC = \"dynamic\"\n\n# When permission is set to \"private\", only the current user is allowed.\nPRIVATE = \"private\"\n\n\ndef groupfinder(userid, request):\n \"\"\"Fetch principals from permission backend for the specified `userid`.\n\n This is plugged by default using the ``multiauth.groupfinder`` setting.\n \"\"\"\n backend = getattr(request.registry, \"permission\", None)\n # Permission backend not configured. Ignore.\n if not backend:\n return []\n\n # Safety check when Kinto-Core is used without pyramid_multiauth.\n if request.prefixed_userid:\n userid = request.prefixed_userid\n\n # Query the permission backend only once per request (e.g. batch).\n reify_key = userid + \"_principals\"\n if reify_key not in request.bound_data:\n principals = backend.get_user_principals(userid)\n request.bound_data[reify_key] = principals\n\n return request.bound_data[reify_key]\n\n\n@implementer(IAuthorizationPolicy)\nclass AuthorizationPolicy:\n \"\"\"Default authorization class, that leverages the permission backend\n for shareable resources.\n \"\"\"\n\n get_bound_permissions = None\n \"\"\"Callable that takes an object id and a permission and returns\n a list of tuples (<object id>, <permission>). 
Useful when objects\n permission depend on others.\"\"\"\n\n def permits(self, context, principals, permission):\n if permission == PRIVATE:\n return Authenticated in principals\n\n principals = context.get_prefixed_principals()\n\n if permission == DYNAMIC:\n permission = context.required_permission\n\n create_permission = \"{}:create\".format(context.resource_name)\n if permission == \"create\":\n permission = create_permission\n\n object_id = context.permission_object_id\n bound_perms = self._get_bound_permissions(object_id, permission)\n\n allowed = context.check_permission(principals, bound_perms)\n\n # Here we consider that parent URI is one path level above.\n parent_uri = \"/\".join(object_id.split(\"/\")[:-1]) if object_id else None\n\n # If not allowed to delete/patch, and target object is missing, and\n # allowed to read the parent, then view is permitted (will raise 404\n # later anyway). See Kinto/kinto#918\n is_record_unknown = not context.on_collection and context.current_record is None\n if context.required_permission == \"write\" and is_record_unknown:\n bound_perms = self._get_bound_permissions(parent_uri, \"read\")\n allowed = context.check_permission(principals, bound_perms)\n\n # If not allowed on this collection, but some records are shared with\n # the current user, then authorize.\n # The ShareableResource class will take care of the filtering.\n is_list_operation = context.on_collection and not permission.endswith(\"create\")\n if not allowed and is_list_operation:\n allowed = bool(\n context.fetch_shared_records(permission, principals, self.get_bound_permissions)\n )\n if not allowed:\n # If allowed to create this kind of object on parent,\n # then allow to obtain the list.\n if len(bound_perms) > 0:\n bound_perms = [(parent_uri, create_permission)]\n else:\n bound_perms = [(\"\", \"create\")] # Root object.\n allowed = context.check_permission(principals, bound_perms)\n\n if not allowed:\n logger.warn(\n \"Permission %r on %r not granted to %r.\",\n permission,\n object_id,\n principals[0],\n extra=dict(userid=principals[0], uri=object_id, perm=permission),\n )\n\n return allowed\n\n def _get_bound_permissions(self, object_id, permission):\n if self.get_bound_permissions is None:\n return [(object_id, permission)]\n return self.get_bound_permissions(object_id, permission)\n\n def principals_allowed_by_permission(self, context, permission):\n raise NotImplementedError() # PRAGMA NOCOVER\n\n\nclass RouteFactory:\n resource_name = None\n on_collection = False\n required_permission = None\n permission_object_id = None\n current_record = None\n shared_ids = None\n\n method_permissions = {\n \"head\": \"read\",\n \"get\": \"read\",\n \"post\": \"create\",\n \"delete\": \"write\",\n \"patch\": \"write\",\n }\n\n def __init__(self, request):\n # Store some shortcuts.\n permission = request.registry.permission\n self._check_permission = permission.check_permission\n self._get_accessible_objects = permission.get_accessible_objects\n\n self.get_prefixed_principals = functools.partial(utils.prefixed_principals, request)\n\n # Store current resource and required permission.\n service = utils.current_service(request)\n is_on_resource = (\n service is not None and hasattr(service, \"viewset\") and hasattr(service, \"resource\")\n )\n if is_on_resource:\n self.resource_name = request.current_resource_name\n self.on_collection = getattr(service, \"type\", None) == \"collection\"\n\n # Try to fetch the target object. 
Its existence will affect permissions checking.\n if not self.on_collection and request.method.lower() in (\"put\", \"delete\", \"patch\"):\n resource = service.resource(request=request, context=self)\n try:\n # Save a reference, to avoid refetching from storage in resource.\n self.current_record = resource.model.get_record(resource.record_id)\n except storage_exceptions.RecordNotFoundError:\n pass\n\n self.permission_object_id, self.required_permission = self._find_required_permission(\n request, service\n )\n\n # To obtain shared records on a collection endpoint, use a match:\n self._object_id_match = self.get_permission_object_id(request, \"*\")\n\n self._settings = request.registry.settings\n\n def check_permission(self, principals, bound_perms):\n \"\"\"Read allowed principals from settings, if not any, query the permission\n backend to check if view is allowed.\n \"\"\"\n if not bound_perms:\n bound_perms = [(self.resource_name, self.required_permission)]\n for (_, permission) in bound_perms:\n setting = \"{}_{}_principals\".format(self.resource_name, permission)\n allowed_principals = aslist(self._settings.get(setting, \"\"))\n if allowed_principals:\n if bool(set(allowed_principals) & set(principals)):\n return True\n return self._check_permission(principals, bound_perms)\n\n def fetch_shared_records(self, perm, principals, get_bound_permissions):\n \"\"\"Fetch records that are readable or writable for the current\n principals.\n\n See :meth:`kinto.core.authorization.AuthorizationPolicy.permits`\n\n If no record is shared, it returns None.\n\n .. warning::\n This sets the ``shared_ids`` attribute to the context with the\n return value. The attribute is then read by\n :class:`kinto.core.resource.ShareableResource`\n \"\"\"\n if get_bound_permissions:\n bound_perms = get_bound_permissions(self._object_id_match, perm)\n else:\n bound_perms = [(self._object_id_match, perm)]\n by_obj_id = self._get_accessible_objects(principals, bound_perms, with_children=False)\n ids = by_obj_id.keys()\n # Store for later use in ``ShareableResource``.\n self.shared_ids = [self._extract_object_id(id_) for id_ in ids]\n return self.shared_ids\n\n def get_permission_object_id(self, request, object_id=None):\n \"\"\"Returns the permission object id for the current request.\n In the nominal case, it is just the current URI without version prefix.\n For collections, it is the related record URI using the specified\n `object_id`.\n\n See :meth:`kinto.core.resource.model.SharableModel` and\n :meth:`kinto.core.authorization.RouteFactory.__init__`\n \"\"\"\n object_uri = utils.strip_uri_prefix(request.path)\n\n if self.on_collection and object_id is not None:\n # With the current request on a collection, the record URI must\n # be found out by inspecting the collection service and its sibling\n # record service.\n matchdict = {**request.matchdict, \"id\": object_id}\n try:\n object_uri = utils.instance_uri(request, self.resource_name, **matchdict)\n object_uri = object_uri.replace(\"%2A\", \"*\")\n except KeyError:\n # Maybe the resource has no single record endpoint.\n # We consider that object URIs in permissions backend will\n # be stored naively:\n object_uri = \"{}/{}\".format(object_uri, object_id)\n\n return object_uri\n\n def _extract_object_id(self, object_uri):\n # XXX: Rewrite using kinto.core.utils.view_lookup() and matchdict['id']\n return object_uri.split(\"/\")[-1]\n\n def _find_required_permission(self, request, service):\n \"\"\"Find out what is the permission object id and the required\n 
permission.\n\n .. note::\n This method saves an attribute ``self.current_record`` used\n in :class:`kinto.core.resource.UserResource`.\n \"\"\"\n # By default, it's a URI a and permission associated to the method.\n permission_object_id = self.get_permission_object_id(request)\n method = request.method.lower()\n required_permission = self.method_permissions.get(method)\n\n # For create permission, the object id is the plural endpoint.\n collection_path = str(service.collection_path)\n collection_path = collection_path.format_map(request.matchdict)\n\n # In the case of a \"PUT\", check if the targetted record already\n # exists, return \"write\" if it does, \"create\" otherwise.\n if request.method.lower() == \"put\":\n if self.current_record is None:\n # The record does not exist, the permission to create on\n # the related collection is required.\n permission_object_id = collection_path\n required_permission = \"create\"\n else:\n # For safe creations, the user needs a create permission.\n # See Kinto/kinto#792\n if request.headers.get(\"If-None-Match\") == \"*\":\n permission_object_id = collection_path\n required_permission = \"create\"\n else:\n required_permission = \"write\"\n\n return (permission_object_id, required_permission)\n", "path": "kinto/core/authorization.py"}]} | 3,564 | 207 |
gh_patches_debug_18827 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-1879 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sanic resource names gets grouped incorrectly
Hi!
The Endpoints gets grouped incorrectly in the UI when using the Sanic integration because the path parameter values are used in the resource name instead of the parameter names and thus creating one unique Endpoint for every unique method + request path.
Example:

Is this by design? Other integrations (node express for example) groups them by the paramater names which imo seems to be the proper way to do it.
I have created a PR to solve this: #1879
</issue>
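Before the source listing, a small sketch of the grouping idea: substitute each matched parameter value in the request path with its parameter name, so distinct URLs collapse onto one resource name. The function below is illustrative only (its name is invented); the logic mirrors the `_get_path` helper added in the golden diff further down in this row.

```python
# Illustrative sketch (hypothetical helper): normalise a request path using
# the route's matched parameters so "/user/123" and "/user/456" group together.
def normalize_path(path: str, match_info: dict) -> str:
    for key, value in match_info.items():
        path = path.replace(value, f"<{key}>")
    return path


assert normalize_path("/user/123/details", {"user_id": "123"}) == "/user/<user_id>/details"
```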
<code>
[start of ddtrace/contrib/sanic/patch.py]
1 import asyncio
2 import ddtrace
3 import sanic
4 from ddtrace import config
5 from ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY
6 from ddtrace.ext import SpanTypes
7 from ddtrace.propagation.http import HTTPPropagator
8 from ddtrace.utils.wrappers import unwrap as _u
9 from ddtrace.vendor import wrapt
10 from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
11
12 from .. import trace_utils
13 from ...internal.logger import get_logger
14
15 log = get_logger(__name__)
16
17 config._add("sanic", dict(_default_service="sanic", distributed_tracing=True))
18
19
20 def _wrap_response_callback(span, callback):
21 # wrap response callbacks (either sync or async function) to set span tags
22 # based on response and finish span before returning response
23
24 def update_span(response):
25 if isinstance(response, sanic.response.BaseHTTPResponse):
26 status_code = response.status
27 response_headers = response.headers
28 else:
29 # invalid response causes ServerError exception which must be handled
30 status_code = 500
31 response_headers = None
32 trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)
33 span.finish()
34
35 @wrapt.function_wrapper
36 def wrap_sync(wrapped, instance, args, kwargs):
37 r = wrapped(*args, **kwargs)
38 response = args[0]
39 update_span(response)
40 return r
41
42 @wrapt.function_wrapper
43 async def wrap_async(wrapped, instance, args, kwargs):
44 r = await wrapped(*args, **kwargs)
45 response = args[0]
46 update_span(response)
47 return r
48
49 if asyncio.iscoroutinefunction(callback):
50 return wrap_async(callback)
51
52 return wrap_sync(callback)
53
54
55 def patch():
56 """Patch the instrumented methods."""
57 if getattr(sanic, "__datadog_patch", False):
58 return
59 setattr(sanic, "__datadog_patch", True)
60 _w("sanic", "Sanic.handle_request", patch_handle_request)
61
62
63 def unpatch():
64 """Unpatch the instrumented methods."""
65 _u(sanic.Sanic, "handle_request")
66 if not getattr(sanic, "__datadog_patch", False):
67 return
68 setattr(sanic, "__datadog_patch", False)
69
70
71 async def patch_handle_request(wrapped, instance, args, kwargs):
72 """Wrapper for Sanic.handle_request"""
73 request = kwargs.get("request", args[0])
74 write_callback = kwargs.get("write_callback", args[1])
75 stream_callback = kwargs.get("stream_callback", args[2])
76
77 if request.scheme not in ("http", "https"):
78 return await wrapped(request, write_callback, stream_callback, **kwargs)
79
80 resource = "{} {}".format(request.method, request.path)
81
82 headers = request.headers.copy()
83
84 if config.sanic.distributed_tracing:
85 propagator = HTTPPropagator()
86 context = propagator.extract(headers)
87 if context.trace_id:
88 ddtrace.tracer.context_provider.activate(context)
89
90 span = ddtrace.tracer.trace(
91 "sanic.request",
92 service=trace_utils.int_service(None, config.sanic),
93 resource=resource,
94 span_type=SpanTypes.WEB,
95 )
96 sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)
97 if sample_rate is not None:
98 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)
99
100 method = request.method
101 url = "{scheme}://{host}{path}".format(scheme=request.scheme, host=request.host, path=request.path)
102 query_string = request.query_string
103 if isinstance(query_string, bytes):
104 query_string = query_string.decode()
105 trace_utils.set_http_meta(span, config.sanic, method=method, url=url, query=query_string, request_headers=headers)
106
107 if write_callback is not None:
108 write_callback = _wrap_response_callback(span, write_callback)
109 if stream_callback is not None:
110 stream_callback = _wrap_response_callback(span, stream_callback)
111
112 return await wrapped(request, write_callback, stream_callback, **kwargs)
113
[end of ddtrace/contrib/sanic/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/sanic/patch.py b/ddtrace/contrib/sanic/patch.py
--- a/ddtrace/contrib/sanic/patch.py
+++ b/ddtrace/contrib/sanic/patch.py
@@ -52,6 +52,18 @@
return wrap_sync(callback)
+def _get_path(request):
+ """Get path and replace path parameter values with names if route exists."""
+ path = request.path
+ try:
+ match_info = request.match_info
+ except sanic.exceptions.SanicException:
+ return path
+ for key, value in match_info.items():
+ path = path.replace(value, f"<{key}>")
+ return path
+
+
def patch():
"""Patch the instrumented methods."""
if getattr(sanic, "__datadog_patch", False):
@@ -77,7 +89,7 @@
if request.scheme not in ("http", "https"):
return await wrapped(request, write_callback, stream_callback, **kwargs)
- resource = "{} {}".format(request.method, request.path)
+ resource = "{} {}".format(request.method, _get_path(request))
headers = request.headers.copy()
| {"golden_diff": "diff --git a/ddtrace/contrib/sanic/patch.py b/ddtrace/contrib/sanic/patch.py\n--- a/ddtrace/contrib/sanic/patch.py\n+++ b/ddtrace/contrib/sanic/patch.py\n@@ -52,6 +52,18 @@\n return wrap_sync(callback)\n \n \n+def _get_path(request):\n+ \"\"\"Get path and replace path parameter values with names if route exists.\"\"\"\n+ path = request.path\n+ try:\n+ match_info = request.match_info\n+ except sanic.exceptions.SanicException:\n+ return path\n+ for key, value in match_info.items():\n+ path = path.replace(value, f\"<{key}>\")\n+ return path\n+\n+\n def patch():\n \"\"\"Patch the instrumented methods.\"\"\"\n if getattr(sanic, \"__datadog_patch\", False):\n@@ -77,7 +89,7 @@\n if request.scheme not in (\"http\", \"https\"):\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n \n- resource = \"{} {}\".format(request.method, request.path)\n+ resource = \"{} {}\".format(request.method, _get_path(request))\n \n headers = request.headers.copy()\n", "issue": "Sanic resource names gets grouped incorrectly\nHi!\r\n\r\nThe Endpoints gets grouped incorrectly in the UI when using the Sanic integration because the path parameter values are used in the resource name instead of the parameter names and thus creating one unique Endpoint for every unique method + request path.\r\n\r\nExample:\r\n\r\n\r\nIs this by design? Other integrations (node express for example) groups them by the paramater names which imo seems to be the proper way to do it.\r\n\r\nI have created a PR to solve this: #1879\n", "before_files": [{"content": "import asyncio\nimport ddtrace\nimport sanic\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.propagation.http import HTTPPropagator\nfrom ddtrace.utils.wrappers import unwrap as _u\nfrom ddtrace.vendor import wrapt\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. 
import trace_utils\nfrom ...internal.logger import get_logger\n\nlog = get_logger(__name__)\n\nconfig._add(\"sanic\", dict(_default_service=\"sanic\", distributed_tracing=True))\n\n\ndef _wrap_response_callback(span, callback):\n # wrap response callbacks (either sync or async function) to set span tags\n # based on response and finish span before returning response\n\n def update_span(response):\n if isinstance(response, sanic.response.BaseHTTPResponse):\n status_code = response.status\n response_headers = response.headers\n else:\n # invalid response causes ServerError exception which must be handled\n status_code = 500\n response_headers = None\n trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)\n span.finish()\n\n @wrapt.function_wrapper\n def wrap_sync(wrapped, instance, args, kwargs):\n r = wrapped(*args, **kwargs)\n response = args[0]\n update_span(response)\n return r\n\n @wrapt.function_wrapper\n async def wrap_async(wrapped, instance, args, kwargs):\n r = await wrapped(*args, **kwargs)\n response = args[0]\n update_span(response)\n return r\n\n if asyncio.iscoroutinefunction(callback):\n return wrap_async(callback)\n\n return wrap_sync(callback)\n\n\ndef patch():\n \"\"\"Patch the instrumented methods.\"\"\"\n if getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", True)\n _w(\"sanic\", \"Sanic.handle_request\", patch_handle_request)\n\n\ndef unpatch():\n \"\"\"Unpatch the instrumented methods.\"\"\"\n _u(sanic.Sanic, \"handle_request\")\n if not getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", False)\n\n\nasync def patch_handle_request(wrapped, instance, args, kwargs):\n \"\"\"Wrapper for Sanic.handle_request\"\"\"\n request = kwargs.get(\"request\", args[0])\n write_callback = kwargs.get(\"write_callback\", args[1])\n stream_callback = kwargs.get(\"stream_callback\", args[2])\n\n if request.scheme not in (\"http\", \"https\"):\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n\n resource = \"{} {}\".format(request.method, request.path)\n\n headers = request.headers.copy()\n\n if config.sanic.distributed_tracing:\n propagator = HTTPPropagator()\n context = propagator.extract(headers)\n if context.trace_id:\n ddtrace.tracer.context_provider.activate(context)\n\n span = ddtrace.tracer.trace(\n \"sanic.request\",\n service=trace_utils.int_service(None, config.sanic),\n resource=resource,\n span_type=SpanTypes.WEB,\n )\n sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = request.method\n url = \"{scheme}://{host}{path}\".format(scheme=request.scheme, host=request.host, path=request.path)\n query_string = request.query_string\n if isinstance(query_string, bytes):\n query_string = query_string.decode()\n trace_utils.set_http_meta(span, config.sanic, method=method, url=url, query=query_string, request_headers=headers)\n\n if write_callback is not None:\n write_callback = _wrap_response_callback(span, write_callback)\n if stream_callback is not None:\n stream_callback = _wrap_response_callback(span, stream_callback)\n\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n", "path": "ddtrace/contrib/sanic/patch.py"}]} | 1,820 | 259 |
gh_patches_debug_32673 | rasdani/github-patches | git_diff | coreproject-moe__CoreProject-Monorepo-19 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `djangorestframework-simplejwt` and add Django endpoints. ( Easiest part NGL )
Gonna leave it as is till i finish other stuff
</issue>
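For reference, a minimal sketch of the wiring this asks for, based on the endpoints added in the golden diff further down in this row (the URL names and paths come from that diff; this is not a complete `urls.py`):

```python
# Sketch of the simplejwt endpoint wiring (assumes djangorestframework-simplejwt
# is installed and configured in REST_FRAMEWORK / INSTALLED_APPS).
from django.urls import path
from rest_framework_simplejwt.views import (
    TokenObtainPairView,
    TokenRefreshView,
    TokenBlacklistView,
)

urlpatterns = [
    path("api/v1/token/", TokenObtainPairView.as_view(), name="token_obtain_pair"),
    path("api/v1/token/refresh/", TokenRefreshView.as_view(), name="token_refresh"),
    path("api/v1/token/blacklist/", TokenBlacklistView.as_view(), name="token_blacklist"),
]
```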
<code>
[start of backend/core/urls.py]
1 """core URL Configuration
2
3 The `urlpatterns` list routes URLs to views. For more information please see:
4 https://docs.djangoproject.com/en/3.2/topics/http/urls/
5 Examples:
6 Function views
7 1. Add an import: from my_app import views
8 2. Add a URL to urlpatterns: path('', views.home, name='home')
9 Class-based views
10 1. Add an import: from other_app.views import Home
11 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
12 Including another URLconf
13 1. Import the include() function: from django.urls import include, path
14 2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
15 """
16 from django.contrib import admin
17 from django.urls import path
18 from django.urls import include
19 from django.conf.urls.static import static
20 from django.conf import settings
21
22 urlpatterns = [
23 path("admin/", admin.site.urls),
24 # Pages
25 path("user/", include("pages.users.urls")),
26 path("authentication/", include("pages.authentication.urls")),
27 # Api
28 path("api/v1/avatar/", include("api.v1.avatar.urls")),
29 # Rest endpoints
30 path("api/v1/users/", include("api.v1._user.urls")),
31 ]
32 if settings.DEBUG:
33 urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
34 urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
35
[end of backend/core/urls.py]
[start of backend/core/settings.py]
1 """
2 Django settings for core project.
3
4 Generated by 'django-admin startproject' using Django 3.2.7.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.2/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.2/ref/settings/
11 """
12
13 from pathlib import Path
14 import os
15
16 # Build paths inside the project like this: BASE_DIR / 'subdir'.
17 BASE_DIR = Path(__file__).resolve().parent.parent
18
19
20 # Quick-start development settings - unsuitable for production
21 # See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
22
23 # SECURITY WARNING: keep the secret key used in production secret!
24 SECRET_KEY = "django-insecure-mn19l@e%r^s&a^pa9%(bf173v-0c54^@3s(pb!ts_yuts0$+6p"
25
26 # SECURITY WARNING: don't run with debug turned on in production!
27 DEBUG = True
28
29 ALLOWED_HOSTS = []
30
31
32 # Application definition
33
34 INSTALLED_APPS = [
35 "django.contrib.admin",
36 "django.contrib.auth",
37 "django.contrib.contenttypes",
38 "django.contrib.sessions",
39 "django.contrib.messages",
40 "whitenoise.runserver_nostatic",
41 "django.contrib.staticfiles",
42 # Rest Framework
43 "rest_framework",
44 "rest_framework.authtoken",
45 "corsheaders",
46 # Custom Stuff
47 "custom.user",
48 # Pages
49 "pages.users",
50 "pages.authentication",
51 # Rest stuff
52 "api.v1.avatar",
53 "api.v1._user",
54 ]
55
56 MIDDLEWARE = [
57 "django.middleware.security.SecurityMiddleware",
58 "whitenoise.middleware.WhiteNoiseMiddleware",
59 "django.contrib.sessions.middleware.SessionMiddleware",
60 "corsheaders.middleware.CorsMiddleware",
61 "django.middleware.common.CommonMiddleware",
62 "django.middleware.csrf.CsrfViewMiddleware",
63 "django.contrib.auth.middleware.AuthenticationMiddleware",
64 "django.contrib.messages.middleware.MessageMiddleware",
65 "django.middleware.clickjacking.XFrameOptionsMiddleware",
66 ]
67
68 ROOT_URLCONF = "core.urls"
69
70 TEMPLATES = [
71 {
72 "BACKEND": "django.template.backends.django.DjangoTemplates",
73 "DIRS": [BASE_DIR / "templates"],
74 "APP_DIRS": True,
75 "OPTIONS": {
76 "context_processors": [
77 "django.template.context_processors.debug",
78 "django.template.context_processors.request",
79 "django.contrib.auth.context_processors.auth",
80 "django.contrib.messages.context_processors.messages",
81 ],
82 },
83 },
84 ]
85
86 WSGI_APPLICATION = "core.wsgi.application"
87
88
89 # Database
90 # https://docs.djangoproject.com/en/3.2/ref/settings/#databases
91
92 DATABASES = {
93 "default": {
94 "ENGINE": "django.db.backends.sqlite3",
95 "NAME": BASE_DIR / "db.sqlite3",
96 }
97 }
98
99
100 # Password validation
101 # https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
102
103 AUTH_PASSWORD_VALIDATORS = [
104 {
105 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
106 },
107 {
108 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
109 },
110 {
111 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
112 },
113 {
114 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
115 },
116 ]
117
118 # Custom user model
119 # https://testdriven.io/blog/django-custom-user-model/
120
121 AUTH_USER_MODEL = "user.CustomUser"
122
123 # Password hashers
124 # https://docs.djangoproject.com/en/3.2/topics/auth/passwords/#using-argon2-with-django
125
126 PASSWORD_HASHERS = [
127 "django.contrib.auth.hashers.Argon2PasswordHasher",
128 "django.contrib.auth.hashers.PBKDF2PasswordHasher",
129 "django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher",
130 "django.contrib.auth.hashers.BCryptSHA256PasswordHasher",
131 ]
132
133 # Internationalization
134 # https://docs.djangoproject.com/en/3.2/topics/i18n/
135
136 LANGUAGE_CODE = "en-us"
137
138 TIME_ZONE = "UTC"
139
140 USE_I18N = True
141
142 USE_L10N = True
143
144 USE_TZ = True
145
146
147 # Static files (CSS, JavaScript, Images)
148 # https://docs.djangoproject.com/en/3.2/howto/static-files/
149 STATIC_URL = "/static/"
150
151 STATICFILES_DIRS = [
152 os.path.join(BASE_DIR, "static"),
153 ]
154
155 STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
156 STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
157
158 MEDIA_URL = "/media/"
159 MEDIA_ROOT = os.path.join(BASE_DIR, "media")
160 # Default primary key field type
161 # https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
162
163 DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
164
165 # Rest framework auth
166 # https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html#installation
167 REST_FRAMEWORK = {
168 "DEFAULT_AUTHENTICATION_CLASSES": (
169 "rest_framework.authentication.BasicAuthentication",
170 "rest_framework.authentication.SessionAuthentication",
171 )
172 }
173
174 # Override the login url
175 # https://stackoverflow.com/questions/49532708/custom-login-url-in-django#49532812
176 LOGIN_URL = "login_page"
177
178
179 CORS_ALLOWED_ORIGINS = [
180 "http://127.0.0.1:3000",
181 "http://127.0.0.1:8000",
182 "http://localhost:3000",
183 "http://localhost:8000",
184 ]
185
[end of backend/core/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/core/settings.py b/backend/core/settings.py
--- a/backend/core/settings.py
+++ b/backend/core/settings.py
@@ -41,7 +41,8 @@
"django.contrib.staticfiles",
# Rest Framework
"rest_framework",
- "rest_framework.authtoken",
+ "rest_framework_simplejwt.token_blacklist",
+ # 3rd party stuff
"corsheaders",
# Custom Stuff
"custom.user",
@@ -166,6 +167,7 @@
# https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html#installation
REST_FRAMEWORK = {
"DEFAULT_AUTHENTICATION_CLASSES": (
+ "rest_framework_simplejwt.authentication.JWTAuthentication",
"rest_framework.authentication.BasicAuthentication",
"rest_framework.authentication.SessionAuthentication",
)
diff --git a/backend/core/urls.py b/backend/core/urls.py
--- a/backend/core/urls.py
+++ b/backend/core/urls.py
@@ -19,6 +19,12 @@
from django.conf.urls.static import static
from django.conf import settings
+from rest_framework_simplejwt.views import (
+ TokenObtainPairView,
+ TokenRefreshView,
+ TokenBlacklistView,
+)
+
urlpatterns = [
path("admin/", admin.site.urls),
# Pages
@@ -26,6 +32,12 @@
path("authentication/", include("pages.authentication.urls")),
# Api
path("api/v1/avatar/", include("api.v1.avatar.urls")),
+ # https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html#installation
+ path("api/v1/token/", TokenObtainPairView.as_view(), name="token_obtain_pair"),
+ path("api/v1/token/refresh/", TokenRefreshView.as_view(), name="token_refresh"),
+ path(
+ "api/v1/token/blacklist/", TokenBlacklistView.as_view(), name="token_blacklist"
+ ),
# Rest endpoints
path("api/v1/users/", include("api.v1._user.urls")),
]
| {"golden_diff": "diff --git a/backend/core/settings.py b/backend/core/settings.py\n--- a/backend/core/settings.py\n+++ b/backend/core/settings.py\n@@ -41,7 +41,8 @@\n \"django.contrib.staticfiles\",\n # Rest Framework\n \"rest_framework\",\n- \"rest_framework.authtoken\",\n+ \"rest_framework_simplejwt.token_blacklist\",\n+ # 3rd party stuff\n \"corsheaders\",\n # Custom Stuff\n \"custom.user\",\n@@ -166,6 +167,7 @@\n # https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html#installation\n REST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n+ \"rest_framework_simplejwt.authentication.JWTAuthentication\",\n \"rest_framework.authentication.BasicAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n )\ndiff --git a/backend/core/urls.py b/backend/core/urls.py\n--- a/backend/core/urls.py\n+++ b/backend/core/urls.py\n@@ -19,6 +19,12 @@\n from django.conf.urls.static import static\n from django.conf import settings\n \n+from rest_framework_simplejwt.views import (\n+ TokenObtainPairView,\n+ TokenRefreshView,\n+ TokenBlacklistView,\n+)\n+\n urlpatterns = [\n path(\"admin/\", admin.site.urls),\n # Pages\n@@ -26,6 +32,12 @@\n path(\"authentication/\", include(\"pages.authentication.urls\")),\n # Api\n path(\"api/v1/avatar/\", include(\"api.v1.avatar.urls\")),\n+ # https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html#installation\n+ path(\"api/v1/token/\", TokenObtainPairView.as_view(), name=\"token_obtain_pair\"),\n+ path(\"api/v1/token/refresh/\", TokenRefreshView.as_view(), name=\"token_refresh\"),\n+ path(\n+ \"api/v1/token/blacklist/\", TokenBlacklistView.as_view(), name=\"token_blacklist\"\n+ ),\n # Rest endpoints\n path(\"api/v1/users/\", include(\"api.v1._user.urls\")),\n ]\n", "issue": "Add `djangorestframework-simplejwt` and add Django endpoints. ( Easiest part NGL )\nGonna leave it as is till i finish other stuff\r\n\r\n\n", "before_files": [{"content": "\"\"\"core URL Configuration\n\nThe `urlpatterns` list routes URLs to views. For more information please see:\n https://docs.djangoproject.com/en/3.2/topics/http/urls/\nExamples:\nFunction views\n 1. Add an import: from my_app import views\n 2. Add a URL to urlpatterns: path('', views.home, name='home')\nClass-based views\n 1. Add an import: from other_app.views import Home\n 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')\nIncluding another URLconf\n 1. Import the include() function: from django.urls import include, path\n 2. 
Add a URL to urlpatterns: path('blog/', include('blog.urls'))\n\"\"\"\nfrom django.contrib import admin\nfrom django.urls import path\nfrom django.urls import include\nfrom django.conf.urls.static import static\nfrom django.conf import settings\n\nurlpatterns = [\n path(\"admin/\", admin.site.urls),\n # Pages\n path(\"user/\", include(\"pages.users.urls\")),\n path(\"authentication/\", include(\"pages.authentication.urls\")),\n # Api\n path(\"api/v1/avatar/\", include(\"api.v1.avatar.urls\")),\n # Rest endpoints\n path(\"api/v1/users/\", include(\"api.v1._user.urls\")),\n]\nif settings.DEBUG:\n urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)\n urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n", "path": "backend/core/urls.py"}, {"content": "\"\"\"\nDjango settings for core project.\n\nGenerated by 'django-admin startproject' using Django 3.2.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\n\nfrom pathlib import Path\nimport os\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = \"django-insecure-mn19l@e%r^s&a^pa9%(bf173v-0c54^@3s(pb!ts_yuts0$+6p\"\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"whitenoise.runserver_nostatic\",\n \"django.contrib.staticfiles\",\n # Rest Framework\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"corsheaders\",\n # Custom Stuff\n \"custom.user\",\n # Pages\n \"pages.users\",\n \"pages.authentication\",\n # Rest stuff\n \"api.v1.avatar\",\n \"api.v1._user\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"core.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [BASE_DIR / \"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"core.wsgi.application\"\n\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": BASE_DIR / \"db.sqlite3\",\n }\n}\n\n\n# Password validation\n# 
https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Custom user model\n# https://testdriven.io/blog/django-custom-user-model/\n\nAUTH_USER_MODEL = \"user.CustomUser\"\n\n# Password hashers\n# https://docs.djangoproject.com/en/3.2/topics/auth/passwords/#using-argon2-with-django\n\nPASSWORD_HASHERS = [\n \"django.contrib.auth.hashers.Argon2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptSHA256PasswordHasher\",\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\nSTATIC_URL = \"/static/\"\n\nSTATICFILES_DIRS = [\n os.path.join(BASE_DIR, \"static\"),\n]\n\nSTATIC_ROOT = os.path.join(BASE_DIR, \"staticfiles\")\nSTATICFILES_STORAGE = \"whitenoise.storage.CompressedManifestStaticFilesStorage\"\n\nMEDIA_URL = \"/media/\"\nMEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n\n# Rest framework auth\n# https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html#installation\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.BasicAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n )\n}\n\n# Override the login url\n# https://stackoverflow.com/questions/49532708/custom-login-url-in-django#49532812\nLOGIN_URL = \"login_page\"\n\n\nCORS_ALLOWED_ORIGINS = [\n \"http://127.0.0.1:3000\",\n \"http://127.0.0.1:8000\",\n \"http://localhost:3000\",\n \"http://localhost:8000\",\n]\n", "path": "backend/core/settings.py"}]} | 2,648 | 452 |
gh_patches_debug_28758 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1402 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[PORT] Add EndOfConversationCodes to EndOfConversation activity from Skill
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/4235
Fixes https://github.com/microsoft/botframework-sdk/issues/5852
# Changed projects
* Microsoft.Bot.Builder.Dialogs
* Microsoft.Bot.Builder.Dialogs.Tests
</issue>
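For context, the linked .NET change stamps a completion code onto the end-of-conversation activity that a skill returns to its parent bot. A rough sketch of the same idea in Python, reusing the schema names from this SDK (the wrapper function itself is made up for illustration):

```python
from botbuilder.schema import Activity, ActivityTypes, EndOfConversationCodes

def build_eoc_activity(result, cancelled: bool, locale: str) -> Activity:
    # The code field tells the parent bot why the skill's dialog ended.
    return Activity(
        type=ActivityTypes.end_of_conversation,
        value=result,
        locale=locale,
        code=EndOfConversationCodes.user_cancelled
        if cancelled
        else EndOfConversationCodes.completed_successfully,
    )
```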
<code>
[start of libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext
5 from botbuilder.core.skills import SkillHandler, SkillConversationReference
6
7 from botbuilder.dialogs import (
8 Dialog,
9 DialogEvents,
10 DialogSet,
11 DialogTurnStatus,
12 )
13 from botbuilder.schema import Activity, ActivityTypes
14 from botframework.connector.auth import (
15 ClaimsIdentity,
16 SkillValidation,
17 AuthenticationConstants,
18 GovernmentConstants,
19 )
20
21
22 class DialogExtensions:
23 @staticmethod
24 async def run_dialog(
25 dialog: Dialog, turn_context: TurnContext, accessor: StatePropertyAccessor
26 ):
27 """
28 Creates a dialog stack and starts a dialog, pushing it onto the stack.
29 """
30
31 dialog_set = DialogSet(accessor)
32 dialog_set.add(dialog)
33
34 dialog_context = await dialog_set.create_context(turn_context)
35
36 # Handle EoC and Reprompt event from a parent bot (can be root bot to skill or skill to skill)
37 if DialogExtensions.__is_from_parent_to_skill(turn_context):
38 # Handle remote cancellation request from parent.
39 if turn_context.activity.type == ActivityTypes.end_of_conversation:
40 if not dialog_context.stack:
41 # No dialogs to cancel, just return.
42 return
43
44 remote_cancel_text = "Skill was canceled through an EndOfConversation activity from the parent."
45 await turn_context.send_trace_activity(
46 f"Extension {Dialog.__name__}.run_dialog", label=remote_cancel_text,
47 )
48
49 # Send cancellation message to the dialog to ensure all the parents are canceled
50 # in the right order.
51 await dialog_context.cancel_all_dialogs()
52 return
53
54 # Handle a reprompt event sent from the parent.
55 if (
56 turn_context.activity.type == ActivityTypes.event
57 and turn_context.activity.name == DialogEvents.reprompt_dialog
58 ):
59 if not dialog_context.stack:
60 # No dialogs to reprompt, just return.
61 return
62
63 await dialog_context.reprompt_dialog()
64 return
65
66 # Continue or start the dialog.
67 result = await dialog_context.continue_dialog()
68 if result.status == DialogTurnStatus.Empty:
69 result = await dialog_context.begin_dialog(dialog.id)
70
71 # Skills should send EoC when the dialog completes.
72 if (
73 result.status == DialogTurnStatus.Complete
74 or result.status == DialogTurnStatus.Cancelled
75 ):
76 if DialogExtensions.__send_eoc_to_parent(turn_context):
77 end_message_text = (
78 f"Dialog {dialog.id} has **completed**. Sending EndOfConversation."
79 )
80 await turn_context.send_trace_activity(
81 f"Extension {Dialog.__name__}.run_dialog",
82 label=end_message_text,
83 value=result.result,
84 )
85
86 activity = Activity(
87 type=ActivityTypes.end_of_conversation,
88 value=result.result,
89 locale=turn_context.activity.locale,
90 )
91 await turn_context.send_activity(activity)
92
93 @staticmethod
94 def __is_from_parent_to_skill(turn_context: TurnContext) -> bool:
95 if turn_context.turn_state.get(SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY):
96 return False
97
98 claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
99 return isinstance(
100 claims_identity, ClaimsIdentity
101 ) and SkillValidation.is_skill_claim(claims_identity.claims)
102
103 @staticmethod
104 def __send_eoc_to_parent(turn_context: TurnContext) -> bool:
105 claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
106 if isinstance(
107 claims_identity, ClaimsIdentity
108 ) and SkillValidation.is_skill_claim(claims_identity.claims):
109 # EoC Activities returned by skills are bounced back to the bot by SkillHandler.
110 # In those cases we will have a SkillConversationReference instance in state.
111 skill_conversation_reference: SkillConversationReference = turn_context.turn_state.get(
112 SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY
113 )
114 if skill_conversation_reference:
115 # If the skillConversationReference.OAuthScope is for one of the supported channels,
116 # we are at the root and we should not send an EoC.
117 return (
118 skill_conversation_reference.oauth_scope
119 != AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
120 and skill_conversation_reference.oauth_scope
121 != GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
122 )
123 return True
124
125 return False
126
[end of libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py
--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py
+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py
@@ -1,22 +1,21 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
+from botframework.connector.auth import (
+ ClaimsIdentity,
+ SkillValidation,
+ AuthenticationConstants,
+ GovernmentConstants,
+)
from botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext
from botbuilder.core.skills import SkillHandler, SkillConversationReference
-
from botbuilder.dialogs import (
Dialog,
DialogEvents,
DialogSet,
DialogTurnStatus,
)
-from botbuilder.schema import Activity, ActivityTypes
-from botframework.connector.auth import (
- ClaimsIdentity,
- SkillValidation,
- AuthenticationConstants,
- GovernmentConstants,
-)
+from botbuilder.schema import Activity, ActivityTypes, EndOfConversationCodes
class DialogExtensions:
@@ -87,6 +86,9 @@
type=ActivityTypes.end_of_conversation,
value=result.result,
locale=turn_context.activity.locale,
+ code=EndOfConversationCodes.completed_successfully
+ if result.status == DialogTurnStatus.Complete
+ else EndOfConversationCodes.user_cancelled,
)
await turn_context.send_activity(activity)
| {"golden_diff": "diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py\n--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py\n+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py\n@@ -1,22 +1,21 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n \n+from botframework.connector.auth import (\n+ ClaimsIdentity,\n+ SkillValidation,\n+ AuthenticationConstants,\n+ GovernmentConstants,\n+)\n from botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext\n from botbuilder.core.skills import SkillHandler, SkillConversationReference\n-\n from botbuilder.dialogs import (\n Dialog,\n DialogEvents,\n DialogSet,\n DialogTurnStatus,\n )\n-from botbuilder.schema import Activity, ActivityTypes\n-from botframework.connector.auth import (\n- ClaimsIdentity,\n- SkillValidation,\n- AuthenticationConstants,\n- GovernmentConstants,\n-)\n+from botbuilder.schema import Activity, ActivityTypes, EndOfConversationCodes\n \n \n class DialogExtensions:\n@@ -87,6 +86,9 @@\n type=ActivityTypes.end_of_conversation,\n value=result.result,\n locale=turn_context.activity.locale,\n+ code=EndOfConversationCodes.completed_successfully\n+ if result.status == DialogTurnStatus.Complete\n+ else EndOfConversationCodes.user_cancelled,\n )\n await turn_context.send_activity(activity)\n", "issue": "[PORT] Add EndOfConversationCodes to EndOfConversation activity from Skill\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/4235\n\nFixes https://github.com/microsoft/botframework-sdk/issues/5852\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Builder.Dialogs\r\n* Microsoft.Bot.Builder.Dialogs.Tests\r\n\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext\nfrom botbuilder.core.skills import SkillHandler, SkillConversationReference\n\nfrom botbuilder.dialogs import (\n Dialog,\n DialogEvents,\n DialogSet,\n DialogTurnStatus,\n)\nfrom botbuilder.schema import Activity, ActivityTypes\nfrom botframework.connector.auth import (\n ClaimsIdentity,\n SkillValidation,\n AuthenticationConstants,\n GovernmentConstants,\n)\n\n\nclass DialogExtensions:\n @staticmethod\n async def run_dialog(\n dialog: Dialog, turn_context: TurnContext, accessor: StatePropertyAccessor\n ):\n \"\"\"\n Creates a dialog stack and starts a dialog, pushing it onto the stack.\n \"\"\"\n\n dialog_set = DialogSet(accessor)\n dialog_set.add(dialog)\n\n dialog_context = await dialog_set.create_context(turn_context)\n\n # Handle EoC and Reprompt event from a parent bot (can be root bot to skill or skill to skill)\n if DialogExtensions.__is_from_parent_to_skill(turn_context):\n # Handle remote cancellation request from parent.\n if turn_context.activity.type == ActivityTypes.end_of_conversation:\n if not dialog_context.stack:\n # No dialogs to cancel, just return.\n return\n\n remote_cancel_text = \"Skill was canceled through an EndOfConversation activity from the parent.\"\n await turn_context.send_trace_activity(\n f\"Extension {Dialog.__name__}.run_dialog\", label=remote_cancel_text,\n )\n\n # Send cancellation message to the dialog to ensure all the parents are canceled\n # in the right order.\n await dialog_context.cancel_all_dialogs()\n return\n\n # Handle a reprompt event sent from the parent.\n if (\n turn_context.activity.type == ActivityTypes.event\n and turn_context.activity.name == DialogEvents.reprompt_dialog\n ):\n if not dialog_context.stack:\n # No dialogs to reprompt, just return.\n return\n\n await dialog_context.reprompt_dialog()\n return\n\n # Continue or start the dialog.\n result = await dialog_context.continue_dialog()\n if result.status == DialogTurnStatus.Empty:\n result = await dialog_context.begin_dialog(dialog.id)\n\n # Skills should send EoC when the dialog completes.\n if (\n result.status == DialogTurnStatus.Complete\n or result.status == DialogTurnStatus.Cancelled\n ):\n if DialogExtensions.__send_eoc_to_parent(turn_context):\n end_message_text = (\n f\"Dialog {dialog.id} has **completed**. 
Sending EndOfConversation.\"\n )\n await turn_context.send_trace_activity(\n f\"Extension {Dialog.__name__}.run_dialog\",\n label=end_message_text,\n value=result.result,\n )\n\n activity = Activity(\n type=ActivityTypes.end_of_conversation,\n value=result.result,\n locale=turn_context.activity.locale,\n )\n await turn_context.send_activity(activity)\n\n @staticmethod\n def __is_from_parent_to_skill(turn_context: TurnContext) -> bool:\n if turn_context.turn_state.get(SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY):\n return False\n\n claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\n return isinstance(\n claims_identity, ClaimsIdentity\n ) and SkillValidation.is_skill_claim(claims_identity.claims)\n\n @staticmethod\n def __send_eoc_to_parent(turn_context: TurnContext) -> bool:\n claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\n if isinstance(\n claims_identity, ClaimsIdentity\n ) and SkillValidation.is_skill_claim(claims_identity.claims):\n # EoC Activities returned by skills are bounced back to the bot by SkillHandler.\n # In those cases we will have a SkillConversationReference instance in state.\n skill_conversation_reference: SkillConversationReference = turn_context.turn_state.get(\n SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY\n )\n if skill_conversation_reference:\n # If the skillConversationReference.OAuthScope is for one of the supported channels,\n # we are at the root and we should not send an EoC.\n return (\n skill_conversation_reference.oauth_scope\n != AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n and skill_conversation_reference.oauth_scope\n != GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n )\n return True\n\n return False\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py"}]} | 1,851 | 326 |
gh_patches_debug_9886 | rasdani/github-patches | git_diff | yt-dlp__yt-dlp-8144 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[media.ccc.de:lists] playlist_id should be case-sensitive
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I'm running yt-dlp version **2023.07.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Some playlists use uppercase `playlist_id`s, like `https://media.ccc.de/c/DS2023` → `https://media.ccc.de/public/conferences/DS2022` or `https://media.ccc.de/c/MCH2022` → `https://media.ccc.de/public/conferences/MCH2022`. So I guess removing `.lower()` in https://github.com/yt-dlp/yt-dlp/blob/master/yt_dlp/extractor/ccc.py#L96 should resolve this.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ python -m yt_dlp --verbose --ignore-config https://media.ccc.de/c/DS2023
[debug] Command-line config: ['--verbose', '--ignore-config', 'https://media.ccc.de/c/DS2023']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [b532a3481] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: 30ba233d4
[debug] Python 3.11.5 (CPython x86_64 64bit) - Linux-6.5.0-5-generic-x86_64-with-glibc2.38 (OpenSSL 3.0.10 1 Aug 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, pyxattr-0.8.1, secretstorage-3.3.3, sqlite3-2.6.0, websockets-10.4
[debug] Proxy map: {}
[debug] Loaded 1866 extractors
[media.ccc.de:lists] Extracting URL: https://media.ccc.de/c/DS2023
[media.ccc.de:lists] ds2023: Downloading JSON metadata
ERROR: [media.ccc.de:lists] DS2023: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: Not Found>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[…]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/yt-dlp/yt_dlp/extractor/common.py", line 847, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/yt-dlp/yt_dlp/YoutubeDL.py", line 4078, in urlopen
raise _CompatHTTPError(e) from e
yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 404: Not Found
```
</issue>
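The failure comes down to the conferences API treating the slug case-sensitively while the extractor lowercases it before building the request URL. A minimal sketch of that mismatch (the helper below is hypothetical and only mirrors the URL construction, not yt-dlp's actual code path):

```python
API_BASE = "https://media.ccc.de/public/conferences/"

def conference_api_url(playlist_id: str, force_lowercase: bool) -> str:
    # The extractor currently calls .lower() on the id matched from the page URL.
    slug = playlist_id.lower() if force_lowercase else playlist_id
    return API_BASE + slug

print(conference_api_url("DS2023", force_lowercase=True))   # .../conferences/ds2023 -> HTTP 404
print(conference_api_url("DS2023", force_lowercase=False))  # .../conferences/DS2023 -> resolves
```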
<code>
[start of yt_dlp/extractor/ccc.py]
1 from .common import InfoExtractor
2 from ..utils import (
3 int_or_none,
4 parse_iso8601,
5 try_get,
6 url_or_none,
7 )
8
9
10 class CCCIE(InfoExtractor):
11 IE_NAME = 'media.ccc.de'
12 _VALID_URL = r'https?://(?:www\.)?media\.ccc\.de/v/(?P<id>[^/?#&]+)'
13
14 _TESTS = [{
15 'url': 'https://media.ccc.de/v/30C3_-_5443_-_en_-_saal_g_-_201312281830_-_introduction_to_processor_design_-_byterazor#video',
16 'md5': '3a1eda8f3a29515d27f5adb967d7e740',
17 'info_dict': {
18 'id': '1839',
19 'ext': 'mp4',
20 'title': 'Introduction to Processor Design',
21 'creator': 'byterazor',
22 'description': 'md5:df55f6d073d4ceae55aae6f2fd98a0ac',
23 'thumbnail': r're:^https?://.*\.jpg$',
24 'upload_date': '20131228',
25 'timestamp': 1388188800,
26 'duration': 3710,
27 'tags': list,
28 }
29 }, {
30 'url': 'https://media.ccc.de/v/32c3-7368-shopshifting#download',
31 'only_matching': True,
32 }]
33
34 def _real_extract(self, url):
35 display_id = self._match_id(url)
36 webpage = self._download_webpage(url, display_id)
37 event_id = self._search_regex(r"data-id='(\d+)'", webpage, 'event id')
38 event_data = self._download_json('https://media.ccc.de/public/events/%s' % event_id, event_id)
39
40 formats = []
41 for recording in event_data.get('recordings', []):
42 recording_url = recording.get('recording_url')
43 if not recording_url:
44 continue
45 language = recording.get('language')
46 folder = recording.get('folder')
47 format_id = None
48 if language:
49 format_id = language
50 if folder:
51 if language:
52 format_id += '-' + folder
53 else:
54 format_id = folder
55 vcodec = 'h264' if 'h264' in folder else (
56 'none' if folder in ('mp3', 'opus') else None
57 )
58 formats.append({
59 'format_id': format_id,
60 'url': recording_url,
61 'width': int_or_none(recording.get('width')),
62 'height': int_or_none(recording.get('height')),
63 'filesize': int_or_none(recording.get('size'), invscale=1024 * 1024),
64 'language': language,
65 'vcodec': vcodec,
66 })
67
68 return {
69 'id': event_id,
70 'display_id': display_id,
71 'title': event_data['title'],
72 'creator': try_get(event_data, lambda x: ', '.join(x['persons'])),
73 'description': event_data.get('description'),
74 'thumbnail': event_data.get('thumb_url'),
75 'timestamp': parse_iso8601(event_data.get('date')),
76 'duration': int_or_none(event_data.get('length')),
77 'view_count': int_or_none(event_data.get('view_count')),
78 'tags': event_data.get('tags'),
79 'formats': formats,
80 }
81
82
83 class CCCPlaylistIE(InfoExtractor):
84 IE_NAME = 'media.ccc.de:lists'
85 _VALID_URL = r'https?://(?:www\.)?media\.ccc\.de/c/(?P<id>[^/?#&]+)'
86 _TESTS = [{
87 'url': 'https://media.ccc.de/c/30c3',
88 'info_dict': {
89 'title': '30C3',
90 'id': '30c3',
91 },
92 'playlist_count': 135,
93 }]
94
95 def _real_extract(self, url):
96 playlist_id = self._match_id(url).lower()
97
98 conf = self._download_json(
99 'https://media.ccc.de/public/conferences/' + playlist_id,
100 playlist_id)
101
102 entries = []
103 for e in conf['events']:
104 event_url = url_or_none(e.get('frontend_link'))
105 if event_url:
106 entries.append(self.url_result(event_url, ie=CCCIE.ie_key()))
107
108 return self.playlist_result(entries, playlist_id, conf.get('title'))
109
[end of yt_dlp/extractor/ccc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/yt_dlp/extractor/ccc.py b/yt_dlp/extractor/ccc.py
--- a/yt_dlp/extractor/ccc.py
+++ b/yt_dlp/extractor/ccc.py
@@ -90,10 +90,17 @@
'id': '30c3',
},
'playlist_count': 135,
+ }, {
+ 'url': 'https://media.ccc.de/c/DS2023',
+ 'info_dict': {
+ 'title': 'Datenspuren 2023',
+ 'id': 'DS2023',
+ },
+ 'playlist_count': 37
}]
def _real_extract(self, url):
- playlist_id = self._match_id(url).lower()
+ playlist_id = self._match_id(url)
conf = self._download_json(
'https://media.ccc.de/public/conferences/' + playlist_id,
| {"golden_diff": "diff --git a/yt_dlp/extractor/ccc.py b/yt_dlp/extractor/ccc.py\n--- a/yt_dlp/extractor/ccc.py\n+++ b/yt_dlp/extractor/ccc.py\n@@ -90,10 +90,17 @@\n 'id': '30c3',\n },\n 'playlist_count': 135,\n+ }, {\n+ 'url': 'https://media.ccc.de/c/DS2023',\n+ 'info_dict': {\n+ 'title': 'Datenspuren 2023',\n+ 'id': 'DS2023',\n+ },\n+ 'playlist_count': 37\n }]\n \n def _real_extract(self, url):\n- playlist_id = self._match_id(url).lower()\n+ playlist_id = self._match_id(url)\n \n conf = self._download_json(\n 'https://media.ccc.de/public/conferences/' + playlist_id,\n", "issue": "[media.ccc.de:lists] playlist_id should be case-sensitive\n### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I'm running yt-dlp version **2023.07.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Provide a description that is worded well enough to be understood\n\nSome playlists use uppercase `playlist_id`s, like `https://media.ccc.de/c/DS2023` \u2192 `https://media.ccc.de/public/conferences/DS2022` or `https://media.ccc.de/c/MCH2022` \u2192 `https://media.ccc.de/public/conferences/MCH2022`. 
So I guess removing `.lower()` in https://github.com/yt-dlp/yt-dlp/blob/master/yt_dlp/extractor/ccc.py#L96 should resolve this.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n$ python -m yt_dlp --verbose --ignore-config https://media.ccc.de/c/DS2023\r\n[debug] Command-line config: ['--verbose', '--ignore-config', 'https://media.ccc.de/c/DS2023']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version [email protected] [b532a3481] (source)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Git HEAD: 30ba233d4\r\n[debug] Python 3.11.5 (CPython x86_64 64bit) - Linux-6.5.0-5-generic-x86_64-with-glibc2.38 (OpenSSL 3.0.10 1 Aug 2023, glibc 2.38)\r\n[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, pyxattr-0.8.1, secretstorage-3.3.3, sqlite3-2.6.0, websockets-10.4\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1866 extractors\r\n[media.ccc.de:lists] Extracting URL: https://media.ccc.de/c/DS2023\r\n[media.ccc.de:lists] ds2023: Downloading JSON metadata\r\nERROR: [media.ccc.de:lists] DS2023: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: Not Found>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\n[\u2026]\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/tmp/yt-dlp/yt_dlp/extractor/common.py\", line 847, in _request_webpage\r\n return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/tmp/yt-dlp/yt_dlp/YoutubeDL.py\", line 4078, in urlopen\r\n raise _CompatHTTPError(e) from e\r\nyt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 404: Not Found\n```\n\n", "before_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n parse_iso8601,\n try_get,\n url_or_none,\n)\n\n\nclass CCCIE(InfoExtractor):\n IE_NAME = 'media.ccc.de'\n _VALID_URL = r'https?://(?:www\\.)?media\\.ccc\\.de/v/(?P<id>[^/?#&]+)'\n\n _TESTS = [{\n 'url': 'https://media.ccc.de/v/30C3_-_5443_-_en_-_saal_g_-_201312281830_-_introduction_to_processor_design_-_byterazor#video',\n 'md5': '3a1eda8f3a29515d27f5adb967d7e740',\n 'info_dict': {\n 'id': '1839',\n 'ext': 'mp4',\n 'title': 'Introduction to Processor Design',\n 'creator': 'byterazor',\n 'description': 'md5:df55f6d073d4ceae55aae6f2fd98a0ac',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'upload_date': '20131228',\n 'timestamp': 1388188800,\n 'duration': 3710,\n 'tags': list,\n }\n }, {\n 'url': 'https://media.ccc.de/v/32c3-7368-shopshifting#download',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n display_id = self._match_id(url)\n webpage = self._download_webpage(url, display_id)\n event_id = self._search_regex(r\"data-id='(\\d+)'\", webpage, 'event id')\n event_data = self._download_json('https://media.ccc.de/public/events/%s' % event_id, 
event_id)\n\n formats = []\n for recording in event_data.get('recordings', []):\n recording_url = recording.get('recording_url')\n if not recording_url:\n continue\n language = recording.get('language')\n folder = recording.get('folder')\n format_id = None\n if language:\n format_id = language\n if folder:\n if language:\n format_id += '-' + folder\n else:\n format_id = folder\n vcodec = 'h264' if 'h264' in folder else (\n 'none' if folder in ('mp3', 'opus') else None\n )\n formats.append({\n 'format_id': format_id,\n 'url': recording_url,\n 'width': int_or_none(recording.get('width')),\n 'height': int_or_none(recording.get('height')),\n 'filesize': int_or_none(recording.get('size'), invscale=1024 * 1024),\n 'language': language,\n 'vcodec': vcodec,\n })\n\n return {\n 'id': event_id,\n 'display_id': display_id,\n 'title': event_data['title'],\n 'creator': try_get(event_data, lambda x: ', '.join(x['persons'])),\n 'description': event_data.get('description'),\n 'thumbnail': event_data.get('thumb_url'),\n 'timestamp': parse_iso8601(event_data.get('date')),\n 'duration': int_or_none(event_data.get('length')),\n 'view_count': int_or_none(event_data.get('view_count')),\n 'tags': event_data.get('tags'),\n 'formats': formats,\n }\n\n\nclass CCCPlaylistIE(InfoExtractor):\n IE_NAME = 'media.ccc.de:lists'\n _VALID_URL = r'https?://(?:www\\.)?media\\.ccc\\.de/c/(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'https://media.ccc.de/c/30c3',\n 'info_dict': {\n 'title': '30C3',\n 'id': '30c3',\n },\n 'playlist_count': 135,\n }]\n\n def _real_extract(self, url):\n playlist_id = self._match_id(url).lower()\n\n conf = self._download_json(\n 'https://media.ccc.de/public/conferences/' + playlist_id,\n playlist_id)\n\n entries = []\n for e in conf['events']:\n event_url = url_or_none(e.get('frontend_link'))\n if event_url:\n entries.append(self.url_result(event_url, ie=CCCIE.ie_key()))\n\n return self.playlist_result(entries, playlist_id, conf.get('title'))\n", "path": "yt_dlp/extractor/ccc.py"}]} | 3,080 | 221 |
gh_patches_debug_40887 | rasdani/github-patches | git_diff | mozilla__pontoon-2853 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pretranslate access keys using the algorithm to extract candidate keys
Fluent Rich editor has a special UI for messages with access keys, which lists access key candidates.
We should use the same logic when pretranslating accesskeys and use the first candidate as the translation.
We should also take into account #2717.
</issue>
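A sketch of the candidate-extraction heuristic referred to above, assuming it mirrors the Fluent Rich editor UI: strip Fluent placeables from the related label, drop non-word characters, and keep the remaining characters in order without duplicates, so the first candidate can be used as the pretranslated access key. The exact attribute lookup rules in Pontoon may differ from this simplification:

```python
import re

def accesskey_candidates(label: str) -> list[str]:
    # Remove Fluent placeables such as "{ $count }" and any non-word characters,
    # then de-duplicate while preserving order.
    cleaned = re.sub(r"(?s){.*?}|[\W_]", "", label)
    return list(dict.fromkeys(cleaned))

print(accesskey_candidates("Save { $count } files"))  # ['S', 'a', 'v', 'e', 'f', 'i', 'l', 's']
```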
<code>
[start of pontoon/pretranslation/transformer.py]
1 from copy import deepcopy
2 from typing import Callable, Optional, cast
3
4 from fluent.syntax import ast as FTL
5 from fluent.syntax.serializer import serialize_expression
6 from fluent.syntax.visitor import Transformer
7
8 from pontoon.base.fluent import is_plural_expression
9 from pontoon.base.models import Locale
10
11
12 def flatten_select_expressions(pattern: FTL.Pattern):
13 """
14 If the pattern contains any select expressions,
15 flatten it to only contain select expressions.
16 Leading and trailing elements are copied into each variant,
17 and any single leading or trailing spaces are lifted out of the select expressions.
18 """
19
20 def isSelExp(el: FTL.PatternElement):
21 return isinstance(el, FTL.Placeable) and isinstance(
22 el.expression, FTL.SelectExpression
23 )
24
25 def patternStartsWithSpace(pat: list[FTL.PatternElement]):
26 return isinstance(pat[0], FTL.TextElement) and pat[0].value.startswith(" ")
27
28 def patternEndsWithSpace(pat: list[FTL.PatternElement]):
29 return isinstance(pat[-1], FTL.TextElement) and pat[-1].value.endswith(" ")
30
31 prev = -1
32 select = None
33 for idx, placeable in filter(lambda x: isSelExp(x[1]), enumerate(pattern.elements)):
34 before = pattern.elements[prev + 1 : idx]
35 if before:
36 select = cast(FTL.SelectExpression, placeable.expression)
37 for variant in select.variants:
38 variant.value.elements[0:0] = deepcopy(before)
39 prev = idx
40 if select:
41 after = pattern.elements[prev + 1 :]
42 if after:
43 for variant in select.variants:
44 variant.value.elements += deepcopy(after)
45
46 res: list[FTL.PatternElement] = []
47 for placeable in filter(isSelExp, pattern.elements):
48 patterns = tuple(
49 map(lambda var: var.value.elements, placeable.expression.variants)
50 )
51
52 # Collect leading spaces
53 if all(map(patternStartsWithSpace, patterns)):
54 res.append(FTL.Placeable(FTL.StringLiteral(" ")))
55 for pat in patterns:
56 pat[0].value = pat[0].value[1:]
57
58 res.append(placeable)
59
60 # Collect trailing spaces
61 if all(map(patternEndsWithSpace, patterns)):
62 res.append(FTL.Placeable(FTL.StringLiteral(" ")))
63 for pat in patterns:
64 pat[-1].value = pat[-1].value[:-1]
65 pattern.elements = res
66
67
68 def create_locale_plural_variants(node: FTL.SelectExpression, locale: Locale):
69 variants: list[FTL.Variant] = []
70 source_plurals: dict[str, FTL.Variant] = {}
71 default = cast(FTL.Variant, None)
72
73 for variant in node.variants:
74 key = variant.key
75 if isinstance(key, FTL.NumberLiteral):
76 variants.append(variant)
77 else:
78 source_plurals[key.name] = variant
79 if variant.default:
80 default = variant
81
82 for plural in locale.cldr_plurals_list():
83 if plural in source_plurals.keys():
84 variant = source_plurals[plural]
85 else:
86 variant = deepcopy(default)
87 variant.key.name = plural
88 variant.default = False
89 variants.append(variant)
90
91 variants[-1].default = True
92
93 node.variants = variants
94
95
96 class PreparePretranslation(Transformer):
97 """
98 Flattens the given Pattern, uplifting selectors to the highest possible level and
99 duplicating shared parts in the variants. Transforms plural variants to match the
100 locale.
101 """
102
103 def __init__(self, locale: Locale):
104 self.locale = locale
105
106 def visit_Attribute(self, node: FTL.Attribute):
107 flatten_select_expressions(node.value)
108 return self.generic_visit(node)
109
110 def visit_Message(self, node: FTL.Message):
111 if node.value:
112 flatten_select_expressions(node.value)
113 return self.generic_visit(node)
114
115 def visit_SelectExpression(self, node: FTL.SelectExpression):
116 if is_plural_expression(node):
117 create_locale_plural_variants(node, self.locale)
118 return self.generic_visit(node)
119
120
121 class ApplyPretranslation(Transformer):
122 """
123 During `visit()`, calls `callback(source, locale) -> (translation, service)` for each pattern.
124 """
125
126 def __init__(
127 self,
128 locale: Locale,
129 entry: FTL.EntryType,
130 callback: Callable[[str, str], tuple[Optional[str], str]],
131 ):
132 prep = PreparePretranslation(locale)
133 prep.visit(entry)
134 self.callback = callback
135 self.locale = locale
136 self.services: list[str] = []
137
138 def visit_Attribute(self, node):
139 if (
140 node.id.name.endswith("accesskey")
141 and not self.locale.accesskey_localization
142 ):
143 return node
144 return self.generic_visit(node)
145
146 def visit_Pattern(self, node: FTL.Pattern):
147 has_selects = False
148 source = ""
149 for el in node.elements:
150 if isinstance(el, FTL.TextElement):
151 source += el.value
152 elif isinstance(el.expression, FTL.SelectExpression):
153 self.generic_visit(el.expression)
154 has_selects = True
155 else:
156 source += serialize_expression(el)
157 if not has_selects and source != "":
158 # Machine translation treats each line as a separate sentence,
159 # hence we replace newline characters with spaces.
160 source = source.replace("\n", " ")
161
162 translation, service = self.callback(source, self.locale)
163 if translation is None:
164 raise ValueError(
165 f"Pretranslation for `{source}` to {self.locale.code} not available."
166 )
167 node.elements = [FTL.TextElement(translation)]
168 self.services.append(service)
169 return node
170
[end of pontoon/pretranslation/transformer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pontoon/pretranslation/transformer.py b/pontoon/pretranslation/transformer.py
--- a/pontoon/pretranslation/transformer.py
+++ b/pontoon/pretranslation/transformer.py
@@ -1,3 +1,5 @@
+import re
+
from copy import deepcopy
from typing import Callable, Optional, cast
@@ -93,6 +95,51 @@
node.variants = variants
+def extract_accesskey_candidates(message: FTL.Message, label: str, variant_name=None):
+ def get_source(names):
+ for attribute in message.attributes:
+ if attribute.id.name in names:
+ element = attribute.value.elements[0]
+
+ if isinstance(element, FTL.TextElement):
+ return element.value
+ elif isinstance(element.expression, FTL.SelectExpression):
+ variants = element.expression.variants
+ variant = next(
+ (v for v in variants if v.key.name == variant_name), variants[0]
+ )
+ variant_element = variant.value.elements[0]
+
+ if isinstance(variant_element, FTL.TextElement):
+ return variant_element.value
+
+ return None
+
+ prefix_end = label.index("accesskey")
+ prefix = label[0:prefix_end]
+
+ # Generate access key candidates:
+ if prefix:
+ # From a prefixed "label" attribute
+ name = f"{prefix}label"
+ source = get_source([name])
+ else:
+ # From a pre-defined list of attribute names
+ source = get_source(["label", "value", "aria-label"])
+ # From a message value
+ if not source and message.value:
+ source = message.value.elements[0].value
+
+ if not source:
+ return []
+
+ # Exclude placeables (message is flat). See bug 1447103 for details.
+ keys = re.sub(r"(?s){.*?}|[\W_]", "", source)
+
+ # Extract unique candidates
+ return list(dict.fromkeys(keys))
+
+
class PreparePretranslation(Transformer):
"""
Flattens the given Pattern, uplifting selectors to the highest possible level and
@@ -132,15 +179,43 @@
prep = PreparePretranslation(locale)
prep.visit(entry)
self.callback = callback
+ self.entry = entry
self.locale = locale
self.services: list[str] = []
- def visit_Attribute(self, node):
- if (
- node.id.name.endswith("accesskey")
- and not self.locale.accesskey_localization
- ):
- return node
+ def visit_Attribute(self, node: FTL.Pattern):
+ name = node.id.name
+
+ def set_accesskey(element, variant_name=None):
+ if isinstance(element, FTL.TextElement) and len(element.value) <= 1:
+ candidates = extract_accesskey_candidates(
+ self.entry, name, variant_name
+ )
+ if candidates:
+ element.value = candidates[0]
+ return True
+
+ if name.endswith("accesskey"):
+ if self.locale.accesskey_localization:
+ element = node.value.elements[0]
+
+ if set_accesskey(element):
+ return node
+ elif isinstance(element, FTL.Placeable) and isinstance(
+ element.expression, FTL.SelectExpression
+ ):
+ variants = element.expression.variants
+ processed_variants = 0
+ for variant in variants:
+ variant_element = variant.value.elements[0]
+ if set_accesskey(variant_element, variant.key.name):
+ processed_variants += 1
+ if processed_variants == len(variants):
+ return node
+
+ else:
+ return node
+
return self.generic_visit(node)
def visit_Pattern(self, node: FTL.Pattern):
| {"golden_diff": "diff --git a/pontoon/pretranslation/transformer.py b/pontoon/pretranslation/transformer.py\n--- a/pontoon/pretranslation/transformer.py\n+++ b/pontoon/pretranslation/transformer.py\n@@ -1,3 +1,5 @@\n+import re\n+\n from copy import deepcopy\n from typing import Callable, Optional, cast\n \n@@ -93,6 +95,51 @@\n node.variants = variants\n \n \n+def extract_accesskey_candidates(message: FTL.Message, label: str, variant_name=None):\n+ def get_source(names):\n+ for attribute in message.attributes:\n+ if attribute.id.name in names:\n+ element = attribute.value.elements[0]\n+\n+ if isinstance(element, FTL.TextElement):\n+ return element.value\n+ elif isinstance(element.expression, FTL.SelectExpression):\n+ variants = element.expression.variants\n+ variant = next(\n+ (v for v in variants if v.key.name == variant_name), variants[0]\n+ )\n+ variant_element = variant.value.elements[0]\n+\n+ if isinstance(variant_element, FTL.TextElement):\n+ return variant_element.value\n+\n+ return None\n+\n+ prefix_end = label.index(\"accesskey\")\n+ prefix = label[0:prefix_end]\n+\n+ # Generate access key candidates:\n+ if prefix:\n+ # From a prefixed \"label\" attribute\n+ name = f\"{prefix}label\"\n+ source = get_source([name])\n+ else:\n+ # From a pre-defined list of attribute names\n+ source = get_source([\"label\", \"value\", \"aria-label\"])\n+ # From a message value\n+ if not source and message.value:\n+ source = message.value.elements[0].value\n+\n+ if not source:\n+ return []\n+\n+ # Exclude placeables (message is flat). See bug 1447103 for details.\n+ keys = re.sub(r\"(?s){.*?}|[\\W_]\", \"\", source)\n+\n+ # Extract unique candidates\n+ return list(dict.fromkeys(keys))\n+\n+\n class PreparePretranslation(Transformer):\n \"\"\"\n Flattens the given Pattern, uplifting selectors to the highest possible level and\n@@ -132,15 +179,43 @@\n prep = PreparePretranslation(locale)\n prep.visit(entry)\n self.callback = callback\n+ self.entry = entry\n self.locale = locale\n self.services: list[str] = []\n \n- def visit_Attribute(self, node):\n- if (\n- node.id.name.endswith(\"accesskey\")\n- and not self.locale.accesskey_localization\n- ):\n- return node\n+ def visit_Attribute(self, node: FTL.Pattern):\n+ name = node.id.name\n+\n+ def set_accesskey(element, variant_name=None):\n+ if isinstance(element, FTL.TextElement) and len(element.value) <= 1:\n+ candidates = extract_accesskey_candidates(\n+ self.entry, name, variant_name\n+ )\n+ if candidates:\n+ element.value = candidates[0]\n+ return True\n+\n+ if name.endswith(\"accesskey\"):\n+ if self.locale.accesskey_localization:\n+ element = node.value.elements[0]\n+\n+ if set_accesskey(element):\n+ return node\n+ elif isinstance(element, FTL.Placeable) and isinstance(\n+ element.expression, FTL.SelectExpression\n+ ):\n+ variants = element.expression.variants\n+ processed_variants = 0\n+ for variant in variants:\n+ variant_element = variant.value.elements[0]\n+ if set_accesskey(variant_element, variant.key.name):\n+ processed_variants += 1\n+ if processed_variants == len(variants):\n+ return node\n+\n+ else:\n+ return node\n+\n return self.generic_visit(node)\n \n def visit_Pattern(self, node: FTL.Pattern):\n", "issue": "Pretranslate access keys using the algorithm to extract candidate keys\nFluent Rich editor has a special UI for messages with access keys, which lists access key candidates.\r\n\r\nWe should use the same logic when pretranslating accesskeys and use the first candidate as the translation.\r\n\r\nWe should also take into account 
#2717.\n", "before_files": [{"content": "from copy import deepcopy\nfrom typing import Callable, Optional, cast\n\nfrom fluent.syntax import ast as FTL\nfrom fluent.syntax.serializer import serialize_expression\nfrom fluent.syntax.visitor import Transformer\n\nfrom pontoon.base.fluent import is_plural_expression\nfrom pontoon.base.models import Locale\n\n\ndef flatten_select_expressions(pattern: FTL.Pattern):\n \"\"\"\n If the pattern contains any select expressions,\n flatten it to only contain select expressions.\n Leading and trailing elements are copied into each variant,\n and any single leading or trailing spaces are lifted out of the select expressions.\n \"\"\"\n\n def isSelExp(el: FTL.PatternElement):\n return isinstance(el, FTL.Placeable) and isinstance(\n el.expression, FTL.SelectExpression\n )\n\n def patternStartsWithSpace(pat: list[FTL.PatternElement]):\n return isinstance(pat[0], FTL.TextElement) and pat[0].value.startswith(\" \")\n\n def patternEndsWithSpace(pat: list[FTL.PatternElement]):\n return isinstance(pat[-1], FTL.TextElement) and pat[-1].value.endswith(\" \")\n\n prev = -1\n select = None\n for idx, placeable in filter(lambda x: isSelExp(x[1]), enumerate(pattern.elements)):\n before = pattern.elements[prev + 1 : idx]\n if before:\n select = cast(FTL.SelectExpression, placeable.expression)\n for variant in select.variants:\n variant.value.elements[0:0] = deepcopy(before)\n prev = idx\n if select:\n after = pattern.elements[prev + 1 :]\n if after:\n for variant in select.variants:\n variant.value.elements += deepcopy(after)\n\n res: list[FTL.PatternElement] = []\n for placeable in filter(isSelExp, pattern.elements):\n patterns = tuple(\n map(lambda var: var.value.elements, placeable.expression.variants)\n )\n\n # Collect leading spaces\n if all(map(patternStartsWithSpace, patterns)):\n res.append(FTL.Placeable(FTL.StringLiteral(\" \")))\n for pat in patterns:\n pat[0].value = pat[0].value[1:]\n\n res.append(placeable)\n\n # Collect trailing spaces\n if all(map(patternEndsWithSpace, patterns)):\n res.append(FTL.Placeable(FTL.StringLiteral(\" \")))\n for pat in patterns:\n pat[-1].value = pat[-1].value[:-1]\n pattern.elements = res\n\n\ndef create_locale_plural_variants(node: FTL.SelectExpression, locale: Locale):\n variants: list[FTL.Variant] = []\n source_plurals: dict[str, FTL.Variant] = {}\n default = cast(FTL.Variant, None)\n\n for variant in node.variants:\n key = variant.key\n if isinstance(key, FTL.NumberLiteral):\n variants.append(variant)\n else:\n source_plurals[key.name] = variant\n if variant.default:\n default = variant\n\n for plural in locale.cldr_plurals_list():\n if plural in source_plurals.keys():\n variant = source_plurals[plural]\n else:\n variant = deepcopy(default)\n variant.key.name = plural\n variant.default = False\n variants.append(variant)\n\n variants[-1].default = True\n\n node.variants = variants\n\n\nclass PreparePretranslation(Transformer):\n \"\"\"\n Flattens the given Pattern, uplifting selectors to the highest possible level and\n duplicating shared parts in the variants. 
Transforms plural variants to match the\n locale.\n \"\"\"\n\n def __init__(self, locale: Locale):\n self.locale = locale\n\n def visit_Attribute(self, node: FTL.Attribute):\n flatten_select_expressions(node.value)\n return self.generic_visit(node)\n\n def visit_Message(self, node: FTL.Message):\n if node.value:\n flatten_select_expressions(node.value)\n return self.generic_visit(node)\n\n def visit_SelectExpression(self, node: FTL.SelectExpression):\n if is_plural_expression(node):\n create_locale_plural_variants(node, self.locale)\n return self.generic_visit(node)\n\n\nclass ApplyPretranslation(Transformer):\n \"\"\"\n During `visit()`, calls `callback(source, locale) -> (translation, service)` for each pattern.\n \"\"\"\n\n def __init__(\n self,\n locale: Locale,\n entry: FTL.EntryType,\n callback: Callable[[str, str], tuple[Optional[str], str]],\n ):\n prep = PreparePretranslation(locale)\n prep.visit(entry)\n self.callback = callback\n self.locale = locale\n self.services: list[str] = []\n\n def visit_Attribute(self, node):\n if (\n node.id.name.endswith(\"accesskey\")\n and not self.locale.accesskey_localization\n ):\n return node\n return self.generic_visit(node)\n\n def visit_Pattern(self, node: FTL.Pattern):\n has_selects = False\n source = \"\"\n for el in node.elements:\n if isinstance(el, FTL.TextElement):\n source += el.value\n elif isinstance(el.expression, FTL.SelectExpression):\n self.generic_visit(el.expression)\n has_selects = True\n else:\n source += serialize_expression(el)\n if not has_selects and source != \"\":\n # Machine translation treats each line as a separate sentence,\n # hence we replace newline characters with spaces.\n source = source.replace(\"\\n\", \" \")\n\n translation, service = self.callback(source, self.locale)\n if translation is None:\n raise ValueError(\n f\"Pretranslation for `{source}` to {self.locale.code} not available.\"\n )\n node.elements = [FTL.TextElement(translation)]\n self.services.append(service)\n return node\n", "path": "pontoon/pretranslation/transformer.py"}]} | 2,247 | 863 |
gh_patches_debug_39375 | rasdani/github-patches | git_diff | opendatacube__datacube-core-694 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
write_geotiff helper function fails if CRS is string, not object
### Expected behaviour
The write_geotiff helper function assumes that you will pass the function a datacube crs object. If you are writing out a geotiff from data that was not loaded using `dc.load`, this crs object is not present.
E.g. I read a Geotiff file produced by datacube-stats into a Notebook, ran some analysis on it, and wanted to write it back out to Geotiff. I have a crs string from the attributes of my original Geotiff, but no datacube crs object, so the write_geotiff function fails.
### Error
```
AttributeError Traceback (most recent call last)
<ipython-input-41-736bab55bae5> in <module>()
3 Differenceds.attrs['crs'] = (GeotiffData.crs)
4
----> 5 write_geotiff(PercentileConfidence, Differenceds)
/g/data/v10/public/modules/dea/20180515/lib/python3.6/site-packages/datacube/helpers.py in write_geotiff(filename, dataset, profile_override, time_index)
44 profile = DEFAULT_PROFILE.copy()
45 profile.update({
---> 46 'width': dataset.dims[dataset.crs.dimensions[1]],
47 'height': dataset.dims[dataset.crs.dimensions[0]],
48 'transform': dataset.affine,
AttributeError: 'str' object has no attribute 'dimensions'
```
</issue>
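Until the helper itself accepts plain strings, a user-side workaround is to wrap the CRS string in a `datacube.utils.geometry.CRS` object before attaching it to the xarray object, which restores the `.dimensions` attribute the helper relies on. The snippet below is only a sketch: the toy dataset stands in for real analysis output, and it assumes the datacube xarray extensions are active so `.affine` resolves.

```python
import numpy as np
import xarray as xr
from datacube.utils import geometry
from datacube.helpers import write_geotiff

# Stand-in for an analysis result; real data would come from your own processing.
result = xr.Dataset(
    {"band1": (("y", "x"), np.ones((64, 64), dtype="float32"))},
    coords={"y": np.arange(64) * -25.0, "x": np.arange(64) * 25.0},
)

# write_geotiff needs a CRS *object* (with .dimensions / .crs_str), not a bare string.
result.attrs["crs"] = geometry.CRS("EPSG:3577")

write_geotiff("difference.tif", result)
```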
<code>
[start of datacube/helpers.py]
1 """
2 Useful functions for Datacube users
3
4 Not used internally, those should go in `utils.py`
5 """
6
7 import numpy as np
8 import rasterio
9
10 DEFAULT_PROFILE = {
11 'blockxsize': 256,
12 'blockysize': 256,
13 'compress': 'lzw',
14 'driver': 'GTiff',
15 'interleave': 'band',
16 'nodata': 0.0,
17 'tiled': True}
18
19
20 def write_geotiff(filename, dataset, profile_override=None, time_index=None):
21 """
22 Write an ODC style xarray.Dataset to a GeoTIFF file.
23
24 :param filename: Output filename
25 :param dataset: xarray dataset containing one or more bands to write to a file.
26 :param profile_override: option dict, overrides rasterio file creation options.
27 :param time_index: DEPRECATED
28 """
29 profile_override = profile_override or {}
30
31 if time_index is not None:
32 raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.
33 The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing
34 in your dataset. It was removed because it made the function much less useful for more advanced cases.''')
35
36 try:
37 dtypes = {val.dtype for val in dataset.data_vars.values()}
38 assert len(dtypes) == 1 # Check for multiple dtypes
39 except AttributeError:
40 dtypes = [dataset.dtype]
41
42 profile = DEFAULT_PROFILE.copy()
43 profile.update({
44 'width': dataset.dims[dataset.crs.dimensions[1]],
45 'height': dataset.dims[dataset.crs.dimensions[0]],
46 'transform': dataset.affine,
47 'crs': dataset.crs.crs_str,
48 'count': len(dataset.data_vars),
49 'dtype': str(dtypes.pop())
50 })
51 profile.update(profile_override)
52
53 _calculate_blocksize(profile)
54
55 with rasterio.open(str(filename), 'w', **profile) as dest:
56 if hasattr(dataset, 'data_vars'):
57 for bandnum, data in enumerate(dataset.data_vars.values(), start=1):
58 dest.write(data.data, bandnum)
59
60
61 def _calculate_blocksize(profile):
62 # Block size must be smaller than the image size, and for geotiffs must be divisible by 16
63 # Fix for small images.
64 if profile['blockxsize'] > profile['width']:
65 if profile['width'] % 16 == 0 or profile['width'] < 16:
66 profile['blockxsize'] = profile['width']
67 else:
68 profile['blockxsize'] = 16
69
70 if profile['blockysize'] > profile['height']:
71 if profile['height'] % 16 == 0 or profile['height'] < 16:
72 profile['blockysize'] = profile['height']
73 else:
74 profile['blockysize'] = 16
75
76
77 def ga_pq_fuser(dest, src):
78 """
79 Fuse two Geoscience Australia Pixel Quality ndarrays
80
81 To be used as a `fuse_func` when loaded `grouped` data, for example when grouping
82 by solar day to avoid duplicate data from scene overlaps.
83 """
84 valid_bit = 8
85 valid_val = (1 << valid_bit)
86
87 no_data_dest_mask = ~(dest & valid_val).astype(bool)
88 np.copyto(dest, src, where=no_data_dest_mask)
89
90 both_data_mask = (valid_val & dest & src).astype(bool)
91 np.copyto(dest, src & dest, where=both_data_mask)
92
[end of datacube/helpers.py]
[start of datacube/utils/xarray_geoextensions.py]
1 """
2 Add geometric extensions to :class:`xarray.Dataset` and :class:`xarray.DataArray` for use
3 with Data Cube by Monkey Patching those classes.
4
5 This extension is reliant on an `xarray` object having a `.crs` property of type
6 :class:`datacube.utils.geometry.CRS`. This is used to inspect the spatial dimensions of the
7 :class:`Dataset` or :class:`DataArray`, and provide new attributes for accessing a
8 :class:`datacube.utils.geometry.GeoBox`, affine transform and extent for the dataset as
9 `.geobox`, `.affine` and `.extent` respectively.
10
11 """
12
13 import xarray
14 from affine import Affine
15
16 from datacube.utils import data_resolution_and_offset, geometry
17
18
19 def _xarray_affine(obj):
20 dims = obj.crs.dimensions
21 xres, xoff = data_resolution_and_offset(obj[dims[1]].values)
22 yres, yoff = data_resolution_and_offset(obj[dims[0]].values)
23 return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)
24
25
26 def _xarray_extent(obj):
27 return obj.geobox.extent
28
29
30 def _xarray_geobox(obj):
31 dims = obj.crs.dimensions
32 return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)
33
34
35 xarray.Dataset.geobox = property(_xarray_geobox)
36 xarray.Dataset.affine = property(_xarray_affine)
37 xarray.Dataset.extent = property(_xarray_extent)
38 xarray.DataArray.geobox = property(_xarray_geobox)
39 xarray.DataArray.affine = property(_xarray_affine)
40 xarray.DataArray.extent = property(_xarray_extent)
41
[end of datacube/utils/xarray_geoextensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datacube/helpers.py b/datacube/helpers.py
--- a/datacube/helpers.py
+++ b/datacube/helpers.py
@@ -26,12 +26,17 @@
:param profile_override: option dict, overrides rasterio file creation options.
:param time_index: DEPRECATED
"""
- profile_override = profile_override or {}
-
if time_index is not None:
raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.
The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing
- in your dataset. It was removed because it made the function much less useful for more advanced cases.''')
+ in your dataset. It was removed because it made the function much less useful for more advanced cases.''')
+
+ profile_override = profile_override or {}
+
+ geobox = getattr(dataset, 'geobox', None)
+
+ if geobox is None:
+ raise ValueError('Can only write datasets with specified `crs` attribute')
try:
dtypes = {val.dtype for val in dataset.data_vars.values()}
@@ -40,11 +45,13 @@
dtypes = [dataset.dtype]
profile = DEFAULT_PROFILE.copy()
+ height, width = geobox.shape
+
profile.update({
- 'width': dataset.dims[dataset.crs.dimensions[1]],
- 'height': dataset.dims[dataset.crs.dimensions[0]],
- 'transform': dataset.affine,
- 'crs': dataset.crs.crs_str,
+ 'width': width,
+ 'height': height,
+ 'transform': geobox.affine,
+ 'crs': geobox.crs.crs_str,
'count': len(dataset.data_vars),
'dtype': str(dtypes.pop())
})
diff --git a/datacube/utils/xarray_geoextensions.py b/datacube/utils/xarray_geoextensions.py
--- a/datacube/utils/xarray_geoextensions.py
+++ b/datacube/utils/xarray_geoextensions.py
@@ -16,20 +16,38 @@
from datacube.utils import data_resolution_and_offset, geometry
+def _norm_crs(crs):
+ if crs is None or isinstance(crs, geometry.CRS):
+ return crs
+ elif isinstance(crs, str):
+ return geometry.CRS(crs)
+ else:
+ raise ValueError('Can not interpret {} as CRS'.format(type(crs)))
+
+
def _xarray_affine(obj):
- dims = obj.crs.dimensions
+ crs = _norm_crs(obj.crs)
+ if crs is None:
+ return None
+
+ dims = crs.dimensions
xres, xoff = data_resolution_and_offset(obj[dims[1]].values)
yres, yoff = data_resolution_and_offset(obj[dims[0]].values)
return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)
def _xarray_extent(obj):
- return obj.geobox.extent
+ geobox = obj.geobox
+ return None if geobox is None else geobox.extent
def _xarray_geobox(obj):
- dims = obj.crs.dimensions
- return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)
+ crs = _norm_crs(obj.crs)
+ if crs is None:
+ return None
+
+ dims = crs.dimensions
+ return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, crs)
xarray.Dataset.geobox = property(_xarray_geobox)
| {"golden_diff": "diff --git a/datacube/helpers.py b/datacube/helpers.py\n--- a/datacube/helpers.py\n+++ b/datacube/helpers.py\n@@ -26,12 +26,17 @@\n :param profile_override: option dict, overrides rasterio file creation options.\n :param time_index: DEPRECATED\n \"\"\"\n- profile_override = profile_override or {}\n-\n if time_index is not None:\n raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.\n The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing\n- in your dataset. It was removed because it made the function much less useful for more advanced cases.''')\n+ in your dataset. It was removed because it made the function much less useful for more advanced cases.''')\n+\n+ profile_override = profile_override or {}\n+\n+ geobox = getattr(dataset, 'geobox', None)\n+\n+ if geobox is None:\n+ raise ValueError('Can only write datasets with specified `crs` attribute')\n \n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n@@ -40,11 +45,13 @@\n dtypes = [dataset.dtype]\n \n profile = DEFAULT_PROFILE.copy()\n+ height, width = geobox.shape\n+\n profile.update({\n- 'width': dataset.dims[dataset.crs.dimensions[1]],\n- 'height': dataset.dims[dataset.crs.dimensions[0]],\n- 'transform': dataset.affine,\n- 'crs': dataset.crs.crs_str,\n+ 'width': width,\n+ 'height': height,\n+ 'transform': geobox.affine,\n+ 'crs': geobox.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\ndiff --git a/datacube/utils/xarray_geoextensions.py b/datacube/utils/xarray_geoextensions.py\n--- a/datacube/utils/xarray_geoextensions.py\n+++ b/datacube/utils/xarray_geoextensions.py\n@@ -16,20 +16,38 @@\n from datacube.utils import data_resolution_and_offset, geometry\n \n \n+def _norm_crs(crs):\n+ if crs is None or isinstance(crs, geometry.CRS):\n+ return crs\n+ elif isinstance(crs, str):\n+ return geometry.CRS(crs)\n+ else:\n+ raise ValueError('Can not interpret {} as CRS'.format(type(crs)))\n+\n+\n def _xarray_affine(obj):\n- dims = obj.crs.dimensions\n+ crs = _norm_crs(obj.crs)\n+ if crs is None:\n+ return None\n+\n+ dims = crs.dimensions\n xres, xoff = data_resolution_and_offset(obj[dims[1]].values)\n yres, yoff = data_resolution_and_offset(obj[dims[0]].values)\n return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)\n \n \n def _xarray_extent(obj):\n- return obj.geobox.extent\n+ geobox = obj.geobox\n+ return None if geobox is None else geobox.extent\n \n \n def _xarray_geobox(obj):\n- dims = obj.crs.dimensions\n- return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)\n+ crs = _norm_crs(obj.crs)\n+ if crs is None:\n+ return None\n+\n+ dims = crs.dimensions\n+ return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, crs)\n \n \n xarray.Dataset.geobox = property(_xarray_geobox)\n", "issue": "write_geotiff helper function fails if CRS is string, not object\n### Expected behaviour\r\nThe write_geotiff helper function assumes that you will pass the function a datacube crs object. If you are writing out a geotiff from data that was not loaded using `dc.load`, this crs object is not present. \r\nE.g. I read a Geotiff file produced by datacube-stats into a Notebook, ran some analysis on it, and wanted to write it back out to Geotiff. 
I have a crs string from the attributes of my original Geotiff, but no datacube crs object, so the write_geotiff function fails.\r\n\r\n### Error\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-41-736bab55bae5> in <module>()\r\n 3 Differenceds.attrs['crs'] = (GeotiffData.crs)\r\n 4 \r\n----> 5 write_geotiff(PercentileConfidence, Differenceds)\r\n\r\n/g/data/v10/public/modules/dea/20180515/lib/python3.6/site-packages/datacube/helpers.py in write_geotiff(filename, dataset, profile_override, time_index)\r\n 44 profile = DEFAULT_PROFILE.copy()\r\n 45 profile.update({\r\n---> 46 'width': dataset.dims[dataset.crs.dimensions[1]],\r\n 47 'height': dataset.dims[dataset.crs.dimensions[0]],\r\n 48 'transform': dataset.affine,\r\n\r\nAttributeError: 'str' object has no attribute 'dimensions'\r\n```\n", "before_files": [{"content": "\"\"\"\nUseful functions for Datacube users\n\nNot used internally, those should go in `utils.py`\n\"\"\"\n\nimport numpy as np\nimport rasterio\n\nDEFAULT_PROFILE = {\n 'blockxsize': 256,\n 'blockysize': 256,\n 'compress': 'lzw',\n 'driver': 'GTiff',\n 'interleave': 'band',\n 'nodata': 0.0,\n 'tiled': True}\n\n\ndef write_geotiff(filename, dataset, profile_override=None, time_index=None):\n \"\"\"\n Write an ODC style xarray.Dataset to a GeoTIFF file.\n\n :param filename: Output filename\n :param dataset: xarray dataset containing one or more bands to write to a file.\n :param profile_override: option dict, overrides rasterio file creation options.\n :param time_index: DEPRECATED\n \"\"\"\n profile_override = profile_override or {}\n\n if time_index is not None:\n raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.\n The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing\n in your dataset. 
It was removed because it made the function much less useful for more advanced cases.''')\n\n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n assert len(dtypes) == 1 # Check for multiple dtypes\n except AttributeError:\n dtypes = [dataset.dtype]\n\n profile = DEFAULT_PROFILE.copy()\n profile.update({\n 'width': dataset.dims[dataset.crs.dimensions[1]],\n 'height': dataset.dims[dataset.crs.dimensions[0]],\n 'transform': dataset.affine,\n 'crs': dataset.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\n profile.update(profile_override)\n\n _calculate_blocksize(profile)\n\n with rasterio.open(str(filename), 'w', **profile) as dest:\n if hasattr(dataset, 'data_vars'):\n for bandnum, data in enumerate(dataset.data_vars.values(), start=1):\n dest.write(data.data, bandnum)\n\n\ndef _calculate_blocksize(profile):\n # Block size must be smaller than the image size, and for geotiffs must be divisible by 16\n # Fix for small images.\n if profile['blockxsize'] > profile['width']:\n if profile['width'] % 16 == 0 or profile['width'] < 16:\n profile['blockxsize'] = profile['width']\n else:\n profile['blockxsize'] = 16\n\n if profile['blockysize'] > profile['height']:\n if profile['height'] % 16 == 0 or profile['height'] < 16:\n profile['blockysize'] = profile['height']\n else:\n profile['blockysize'] = 16\n\n\ndef ga_pq_fuser(dest, src):\n \"\"\"\n Fuse two Geoscience Australia Pixel Quality ndarrays\n\n To be used as a `fuse_func` when loaded `grouped` data, for example when grouping\n by solar day to avoid duplicate data from scene overlaps.\n \"\"\"\n valid_bit = 8\n valid_val = (1 << valid_bit)\n\n no_data_dest_mask = ~(dest & valid_val).astype(bool)\n np.copyto(dest, src, where=no_data_dest_mask)\n\n both_data_mask = (valid_val & dest & src).astype(bool)\n np.copyto(dest, src & dest, where=both_data_mask)\n", "path": "datacube/helpers.py"}, {"content": "\"\"\"\nAdd geometric extensions to :class:`xarray.Dataset` and :class:`xarray.DataArray` for use\nwith Data Cube by Monkey Patching those classes.\n\nThis extension is reliant on an `xarray` object having a `.crs` property of type\n:class:`datacube.utils.geometry.CRS`. This is used to inspect the spatial dimensions of the\n:class:`Dataset` or :class:`DataArray`, and provide new attributes for accessing a\n:class:`datacube.utils.geometry.GeoBox`, affine transform and extent for the dataset as\n`.geobox`, `.affine` and `.extent` respectively.\n\n\"\"\"\n\nimport xarray\nfrom affine import Affine\n\nfrom datacube.utils import data_resolution_and_offset, geometry\n\n\ndef _xarray_affine(obj):\n dims = obj.crs.dimensions\n xres, xoff = data_resolution_and_offset(obj[dims[1]].values)\n yres, yoff = data_resolution_and_offset(obj[dims[0]].values)\n return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)\n\n\ndef _xarray_extent(obj):\n return obj.geobox.extent\n\n\ndef _xarray_geobox(obj):\n dims = obj.crs.dimensions\n return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)\n\n\nxarray.Dataset.geobox = property(_xarray_geobox)\nxarray.Dataset.affine = property(_xarray_affine)\nxarray.Dataset.extent = property(_xarray_extent)\nxarray.DataArray.geobox = property(_xarray_geobox)\nxarray.DataArray.affine = property(_xarray_affine)\nxarray.DataArray.extent = property(_xarray_extent)\n", "path": "datacube/utils/xarray_geoextensions.py"}]} | 2,316 | 825 |
gh_patches_debug_41475 | rasdani/github-patches | git_diff | automl__auto-sklearn-1407 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
run_wrapper() got an unexpected keyword argument 'pure'
While running a fit on a classifier (*output[0]* with data *output[1]* to *output[4]*), I get the following error; I think it is a packaging issue:
> output[0].fit(output[1], output[2], output[3], output[4])
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py", line 1045, in fit
super().fit(
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py", line 375, in fit
self.automl_.fit(load_models=self.load_models, **kwargs)
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py", line 2056, in fit
return super().fit(
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py", line 931, in fit
_proc_smac.run_smbo()
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/smbo.py", line 498, in run_smbo
smac.optimize()
> File "/usr/local/lib/python3.8/dist-packages/smac/facade/smac_ac_facade.py", line 720, in optimize
incumbent = self.solver.run()
> File "/usr/local/lib/python3.8/dist-packages/smac/optimizer/smbo.py", line 287, in run
self.tae_runner.submit_run(run_info=run_info)
> File "/usr/local/lib/python3.8/dist-packages/smac/tae/dask_runner.py", line 166, in submit_run
self.client.submit(
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/util/single_thread_client.py", line 59, in submit
return DummyFuture(func(*args, **kwargs))
> **TypeError: run_wrapper() got an unexpected keyword argument 'pure'**
It seems like an error with Dask. Here are the installed packages on Ubuntu 18.04:
pandas==1.3.0
scikit-learn==0.24
dask==2021.12.0
auto-sklearn==0.14.5 #AutoML
tensorflow==2.8.0
I've tried all versions of dask from 2021.12.0 to 2022.02.0 (Current) and nothing seems to work. Downgrading to auto-sklearn 0.14.4 and lower didn't solve the problem.
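For what it's worth, the failure can be reproduced without SMAC at all, because the mock client forwards every extra keyword straight into the target function. A minimal sketch (the `run_wrapper` stand-in below is hypothetical; only `SingleThreadedClient` comes from auto-sklearn):

```python
from autosklearn.util.single_thread_client import SingleThreadedClient

def run_wrapper(run_info):
    # stand-in for SMAC's target-algorithm wrapper; it accepts no dask-specific keywords
    return run_info

client = SingleThreadedClient()

# SMAC's dask_runner calls Client.submit() with scheduler hints such as pure=False.
# The mock's **kwargs catch-all passes them on to func(*args, **kwargs), so
# run_wrapper() receives an unexpected `pure` keyword and raises the TypeError above.
client.submit(run_wrapper, {"config": 1}, pure=False)
```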
</issue>
<code>
[start of autosklearn/util/single_thread_client.py]
1 import typing
2 from pathlib import Path
3
4 import dask.distributed
5
6
7 class DummyFuture(dask.distributed.Future):
8 """
9 A class that mimics a distributed Future, the outcome of
10 performing submit on a distributed client.
11 """
12 def __init__(self, result: typing.Any) -> None:
13 self._result = result # type: typing.Any
14
15 def result(self, timeout: typing.Optional[int] = None) -> typing.Any:
16 return self._result
17
18 def cancel(self) -> None:
19 pass
20
21 def done(self) -> bool:
22 return True
23
24 def __repr__(self) -> str:
25 return "DummyFuture: {}".format(self._result)
26
27 def __del__(self) -> None:
28 pass
29
30
31 class SingleThreadedClient(dask.distributed.Client):
32 """
33 A class to Mock the Distributed Client class, in case
34 Auto-Sklearn is meant to run in the current Thread.
35 """
36 def __init__(self) -> None:
37
38 # Raise a not implemented error if using a method from Client
39 implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',
40 '_get_scheduler_info', 'nthreads']
41 method_list = [func for func in dir(dask.distributed.Client) if callable(
42 getattr(dask.distributed.Client, func)) and not func.startswith('__')]
43 for method in method_list:
44 if method in implemented_methods:
45 continue
46 setattr(self, method, self._unsupported_method)
47 pass
48
49 def _unsupported_method(self) -> None:
50 raise NotImplementedError()
51
52 def submit(
53 self,
54 func: typing.Callable,
55 *args: typing.List,
56 priority: int = 0,
57 **kwargs: typing.Dict,
58 ) -> typing.Any:
59 return DummyFuture(func(*args, **kwargs))
60
61 def close(self) -> None:
62 pass
63
64 def shutdown(self) -> None:
65 pass
66
67 def write_scheduler_file(self, scheduler_file: str) -> None:
68 Path(scheduler_file).touch()
69 return
70
71 def _get_scheduler_info(self) -> typing.Dict:
72 return {
73 'workers': ['127.0.0.1'],
74 'type': 'Scheduler',
75 }
76
77 def nthreads(self) -> typing.Dict:
78 return {
79 '127.0.0.1': 1,
80 }
81
82 def __repr__(self) -> str:
83 return 'SingleThreadedClient()'
84
85 def __del__(self) -> None:
86 pass
87
[end of autosklearn/util/single_thread_client.py]
[start of autosklearn/__version__.py]
1 """Version information."""
2
3 # The following line *must* be the last in the module, exactly as formatted:
4 __version__ = "0.14.4"
5
[end of autosklearn/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/autosklearn/__version__.py b/autosklearn/__version__.py
--- a/autosklearn/__version__.py
+++ b/autosklearn/__version__.py
@@ -1,4 +1,4 @@
"""Version information."""
# The following line *must* be the last in the module, exactly as formatted:
-__version__ = "0.14.4"
+__version__ = "0.14.6"
diff --git a/autosklearn/util/single_thread_client.py b/autosklearn/util/single_thread_client.py
--- a/autosklearn/util/single_thread_client.py
+++ b/autosklearn/util/single_thread_client.py
@@ -1,5 +1,6 @@
import typing
from pathlib import Path
+from typing import Any
import dask.distributed
@@ -9,6 +10,7 @@
A class that mimics a distributed Future, the outcome of
performing submit on a distributed client.
"""
+
def __init__(self, result: typing.Any) -> None:
self._result = result # type: typing.Any
@@ -33,13 +35,24 @@
A class to Mock the Distributed Client class, in case
Auto-Sklearn is meant to run in the current Thread.
"""
+
def __init__(self) -> None:
# Raise a not implemented error if using a method from Client
- implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',
- '_get_scheduler_info', 'nthreads']
- method_list = [func for func in dir(dask.distributed.Client) if callable(
- getattr(dask.distributed.Client, func)) and not func.startswith('__')]
+ implemented_methods = [
+ "submit",
+ "close",
+ "shutdown",
+ "write_scheduler_file",
+ "_get_scheduler_info",
+ "nthreads",
+ ]
+ method_list = [
+ func
+ for func in dir(dask.distributed.Client)
+ if callable(getattr(dask.distributed.Client, func))
+ and not func.startswith("__")
+ ]
for method in method_list:
if method in implemented_methods:
continue
@@ -54,8 +67,24 @@
func: typing.Callable,
*args: typing.List,
priority: int = 0,
- **kwargs: typing.Dict,
+ key: Any = None,
+ workers: Any = None,
+ resources: Any = None,
+ retries: Any = None,
+ fifo_timeout: Any = "100 ms",
+ allow_other_workers: Any = False,
+ actor: Any = False,
+ actors: Any = False,
+ pure: Any = None,
+ **kwargs: Any,
) -> typing.Any:
+ """
+ Note
+ ----
+ The keyword arguments caught in `dask.distributed.Client` need to
+ be specified here so they don't get passed in as ``**kwargs`` to the
+ ``func``.
+ """
return DummyFuture(func(*args, **kwargs))
def close(self) -> None:
@@ -70,17 +99,17 @@
def _get_scheduler_info(self) -> typing.Dict:
return {
- 'workers': ['127.0.0.1'],
- 'type': 'Scheduler',
+ "workers": ["127.0.0.1"],
+ "type": "Scheduler",
}
def nthreads(self) -> typing.Dict:
return {
- '127.0.0.1': 1,
+ "127.0.0.1": 1,
}
def __repr__(self) -> str:
- return 'SingleThreadedClient()'
+ return "SingleThreadedClient()"
def __del__(self) -> None:
pass
| {"golden_diff": "diff --git a/autosklearn/__version__.py b/autosklearn/__version__.py\n--- a/autosklearn/__version__.py\n+++ b/autosklearn/__version__.py\n@@ -1,4 +1,4 @@\n \"\"\"Version information.\"\"\"\n \n # The following line *must* be the last in the module, exactly as formatted:\n-__version__ = \"0.14.4\"\n+__version__ = \"0.14.6\"\ndiff --git a/autosklearn/util/single_thread_client.py b/autosklearn/util/single_thread_client.py\n--- a/autosklearn/util/single_thread_client.py\n+++ b/autosklearn/util/single_thread_client.py\n@@ -1,5 +1,6 @@\n import typing\n from pathlib import Path\n+from typing import Any\n \n import dask.distributed\n \n@@ -9,6 +10,7 @@\n A class that mimics a distributed Future, the outcome of\n performing submit on a distributed client.\n \"\"\"\n+\n def __init__(self, result: typing.Any) -> None:\n self._result = result # type: typing.Any\n \n@@ -33,13 +35,24 @@\n A class to Mock the Distributed Client class, in case\n Auto-Sklearn is meant to run in the current Thread.\n \"\"\"\n+\n def __init__(self) -> None:\n \n # Raise a not implemented error if using a method from Client\n- implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',\n- '_get_scheduler_info', 'nthreads']\n- method_list = [func for func in dir(dask.distributed.Client) if callable(\n- getattr(dask.distributed.Client, func)) and not func.startswith('__')]\n+ implemented_methods = [\n+ \"submit\",\n+ \"close\",\n+ \"shutdown\",\n+ \"write_scheduler_file\",\n+ \"_get_scheduler_info\",\n+ \"nthreads\",\n+ ]\n+ method_list = [\n+ func\n+ for func in dir(dask.distributed.Client)\n+ if callable(getattr(dask.distributed.Client, func))\n+ and not func.startswith(\"__\")\n+ ]\n for method in method_list:\n if method in implemented_methods:\n continue\n@@ -54,8 +67,24 @@\n func: typing.Callable,\n *args: typing.List,\n priority: int = 0,\n- **kwargs: typing.Dict,\n+ key: Any = None,\n+ workers: Any = None,\n+ resources: Any = None,\n+ retries: Any = None,\n+ fifo_timeout: Any = \"100 ms\",\n+ allow_other_workers: Any = False,\n+ actor: Any = False,\n+ actors: Any = False,\n+ pure: Any = None,\n+ **kwargs: Any,\n ) -> typing.Any:\n+ \"\"\"\n+ Note\n+ ----\n+ The keyword arguments caught in `dask.distributed.Client` need to\n+ be specified here so they don't get passed in as ``**kwargs`` to the\n+ ``func``.\n+ \"\"\"\n return DummyFuture(func(*args, **kwargs))\n \n def close(self) -> None:\n@@ -70,17 +99,17 @@\n \n def _get_scheduler_info(self) -> typing.Dict:\n return {\n- 'workers': ['127.0.0.1'],\n- 'type': 'Scheduler',\n+ \"workers\": [\"127.0.0.1\"],\n+ \"type\": \"Scheduler\",\n }\n \n def nthreads(self) -> typing.Dict:\n return {\n- '127.0.0.1': 1,\n+ \"127.0.0.1\": 1,\n }\n \n def __repr__(self) -> str:\n- return 'SingleThreadedClient()'\n+ return \"SingleThreadedClient()\"\n \n def __del__(self) -> None:\n pass\n", "issue": "run_wrapper() got an unexpected keyword argument 'pure'\nWhile running a fitting on a classifier (*output[0]* with data *output[1]* to *output[4]*), I get the following error; I think it's packages issue;\r\n\r\n > output[0].fit(output[1], output[2], output[3], output[4])\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py\", line 1045, in fit\r\n super().fit(\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py\", line 375, in fit\r\n self.automl_.fit(load_models=self.load_models, **kwargs)\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py\", line 2056, in fit\r\n return 
super().fit(\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py\", line 931, in fit\r\n _proc_smac.run_smbo()\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/smbo.py\", line 498, in run_smbo\r\n smac.optimize()\r\n > File \"/usr/local/lib/python3.8/dist-packages/smac/facade/smac_ac_facade.py\", line 720, in optimize\r\n incumbent = self.solver.run()\r\n > File \"/usr/local/lib/python3.8/dist-packages/smac/optimizer/smbo.py\", line 287, in run\r\n self.tae_runner.submit_run(run_info=run_info)\r\n > File \"/usr/local/lib/python3.8/dist-packages/smac/tae/dask_runner.py\", line 166, in submit_run\r\n self.client.submit(\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/util/single_thread_client.py\", line 59, in submit\r\n return DummyFuture(func(*args, **kwargs))\r\n > **TypeError: run_wrapper() got an unexpected keyword argument 'pure'**\r\n\r\nIt seems like it's an error with Dask. Here are the installed packages on ubuntu 18.04\r\n\r\npandas==1.3.0\r\nscikit-learn==0.24\r\ndask==2021.12.0\r\nauto-sklearn==0.14.5 #AutoML\r\ntensorflow==2.8.0\r\n\r\nI've tried all versions of dask from 2021.12.0 to 2022.02.0 (Current) and nothing seems to work. Downgrading to auto-sklearn 0.14.4 and lower didn't solve the problem.\n", "before_files": [{"content": "import typing\nfrom pathlib import Path\n\nimport dask.distributed\n\n\nclass DummyFuture(dask.distributed.Future):\n \"\"\"\n A class that mimics a distributed Future, the outcome of\n performing submit on a distributed client.\n \"\"\"\n def __init__(self, result: typing.Any) -> None:\n self._result = result # type: typing.Any\n\n def result(self, timeout: typing.Optional[int] = None) -> typing.Any:\n return self._result\n\n def cancel(self) -> None:\n pass\n\n def done(self) -> bool:\n return True\n\n def __repr__(self) -> str:\n return \"DummyFuture: {}\".format(self._result)\n\n def __del__(self) -> None:\n pass\n\n\nclass SingleThreadedClient(dask.distributed.Client):\n \"\"\"\n A class to Mock the Distributed Client class, in case\n Auto-Sklearn is meant to run in the current Thread.\n \"\"\"\n def __init__(self) -> None:\n\n # Raise a not implemented error if using a method from Client\n implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',\n '_get_scheduler_info', 'nthreads']\n method_list = [func for func in dir(dask.distributed.Client) if callable(\n getattr(dask.distributed.Client, func)) and not func.startswith('__')]\n for method in method_list:\n if method in implemented_methods:\n continue\n setattr(self, method, self._unsupported_method)\n pass\n\n def _unsupported_method(self) -> None:\n raise NotImplementedError()\n\n def submit(\n self,\n func: typing.Callable,\n *args: typing.List,\n priority: int = 0,\n **kwargs: typing.Dict,\n ) -> typing.Any:\n return DummyFuture(func(*args, **kwargs))\n\n def close(self) -> None:\n pass\n\n def shutdown(self) -> None:\n pass\n\n def write_scheduler_file(self, scheduler_file: str) -> None:\n Path(scheduler_file).touch()\n return\n\n def _get_scheduler_info(self) -> typing.Dict:\n return {\n 'workers': ['127.0.0.1'],\n 'type': 'Scheduler',\n }\n\n def nthreads(self) -> typing.Dict:\n return {\n '127.0.0.1': 1,\n }\n\n def __repr__(self) -> str:\n return 'SingleThreadedClient()'\n\n def __del__(self) -> None:\n pass\n", "path": "autosklearn/util/single_thread_client.py"}, {"content": "\"\"\"Version information.\"\"\"\n\n# The following line *must* be the last in the module, exactly as formatted:\n__version__ = 
\"0.14.4\"\n", "path": "autosklearn/__version__.py"}]} | 1,916 | 873 |
gh_patches_debug_12360 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2517 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UnicodeEncodeError with ProjectCustomField
```python
File "django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "django/views/decorators/csrf.py", line 57, in wrapped_view
return view_func(*args, **kwargs)
File "django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "rest_framework/views.py", line 466, in dispatch
response = self.handle_exception(exc)
File "rest_framework/views.py", line 463, in dispatch
response = handler(request, *args, **kwargs)
File "rest_framework/decorators.py", line 53, in handler
return func(*args, **kwargs)
File "akvo/rest/views/project_editor.py", line 576, in project_editor
'changes': log_changes(changes, user, project),
File "akvo/rest/views/project_editor.py", line 92, in log_changes
object_repr=obj.__unicode__(),
File "akvo/rsr/models/custom_field.py", line 77, in __unicode__
return u'%s' % str(self.value)
```
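The trigger is the `str()` call on line 77: under Python 2, `str()` on a `unicode` value implicitly encodes it as ASCII, so any non-ASCII character in the custom field's value raises before the `u'%s'` formatting even runs. A minimal sketch with an illustrative value:

```python
# Python 2 semantics
value = u'r\xe9sum\xe9'   # a custom-field value containing non-ASCII text

u'%s' % value             # fine: the value stays unicode throughout
u'%s' % str(value)        # UnicodeEncodeError: 'ascii' codec can't encode character
```

Interpolating `self.value` directly, without the `str()` round-trip, avoids the implicit encode step.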
</issue>
<code>
[start of akvo/rsr/models/custom_field.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from django.db import models
9 from django.utils.translation import ugettext_lazy as _
10
11 from ..fields import ValidXMLCharField, ValidXMLTextField
12
13
14 class ProjectCustomField(models.Model):
15 """
16 Custom fields make it possible for partner to specify additional fields. When specified for a
17 project, the fields will appear in the admin (under the specified section) and can then be
18 filled in.
19
20 Custom fields for a project, linking the project to its' custom fields.
21
22 Name: name of the custom field (label in the admin)
23 Section: the section in the admin where the field should be added
24 Maxlength: the maximum number of characters of the field
25 Help text: the help text belonging to the field
26 Value: the value which can be filled in the project admin.
27 """
28 SECTIONS = (
29 (1, _(u'01 - General information')),
30 (2, _(u'02 - Contact information')),
31 (3, _(u'03 - Project partners')),
32 (4, _(u'04 - Project descriptions')),
33 (5, _(u'05 - Results and indicators')),
34 (6, _(u'06 - Finance')),
35 (7, _(u'07 - Project locations')),
36 (8, _(u'08 - Project focus')),
37 (9, _(u'09 - Links and documents')),
38 (10, _(u'10 - Project comments')),
39 )
40
41 TYPES = (
42 ('text', _(u'Text')),
43 ('boolean', _(u'Checkbox')),
44 )
45
46 project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='custom_fields')
47 name = ValidXMLCharField(_(u'name'), max_length=255, help_text=_(u'(max 255 characters)'))
48 section = models.IntegerField(
49 _(u'admin section'), choices=SECTIONS,
50 help_text=_(u'Select the section of the admin where the custom field should be displayed')
51 )
52 max_characters = models.IntegerField(
53 _(u'maximum characters'), blank=True, null=True,
54 help_text=_(u'Set the maximum amount of characters that the user is allowed to fill in. '
55 u'Leave empty or fill in 0 if there is no character limit.')
56 )
57 help_text = ValidXMLTextField(
58 _(u'help text'), max_length=1000, blank=True,
59 help_text=_(u'The help text to be displayed with the field in the admin. Leave empty if '
60 u'there is no need for a help text. (max 1000 characters)')
61 )
62 value = ValidXMLTextField(_(u'value'), blank=True)
63 mandatory = models.BooleanField(_(u'mandatory'), default=False,
64 help_text=_(u'Indicate whether this field is mandatory or not'))
65 order = models.PositiveSmallIntegerField(
66 _(u'order'), help_text=_(u'The order of the fields as they will be displayed in the '
67 u'project editor. Must be a positive number, and the lowest '
68 u'number will be shown on top.')
69 )
70 type = ValidXMLCharField(
71 _(u'type'), max_length=20, choices=TYPES, default='text',
72 help_text=_(u'Select the type of custom field. Text will show a text area in the project '
73 u'editor, and checkbox will show a checkbox.')
74 )
75
76 def __unicode__(self):
77 return u'%s' % str(self.value)
78
79
80 class OrganisationCustomField(models.Model):
81 """
82 Custom fields make it possible for partner to specify additional fields. When specified for a
83 project, the fields will appear in the admin (under the specified section) and can then be
84 filled in.
85
86 Custom fields for an organisation, linking the organisation to its' custom fields.
87
88 These custom fields will be used for the projects whenever a user of the organisation
89 creates a new project.
90
91 Name: name of the custom field (label in the admin)
92 Section: the section in the admin where the field should be added
93 Maxlength: the maximum number of characters of the field
94 Help text: the help text belonging to the field
95 """
96 SECTIONS = (
97 (1, _(u'01 - General information')),
98 (2, _(u'02 - Contact information')),
99 (3, _(u'03 - Project partners')),
100 (4, _(u'04 - Project descriptions')),
101 (5, _(u'05 - Results and indicators')),
102 (6, _(u'06 - Finance')),
103 (7, _(u'07 - Project locations')),
104 (8, _(u'08 - Project focus')),
105 (9, _(u'09 - Links and documents')),
106 (10, _(u'10 - Project comments')),
107 )
108
109 TYPES = (
110 ('text', _(u'Text')),
111 ('boolean', _(u'Checkbox')),
112 )
113
114 organisation = models.ForeignKey(
115 'Organisation', verbose_name=_(u'organisation'), related_name='custom_fields'
116 )
117 name = ValidXMLCharField(_(u'name'), max_length=255, help_text=_(u'(max 255 characters)'))
118 section = models.IntegerField(
119 _(u'admin section'), choices=SECTIONS,
120 help_text=_(u'Select the section of the admin where the custom field should be displayed')
121 )
122 max_characters = models.IntegerField(
123 _(u'maximum characters'), blank=True, null=True,
124 help_text=_(u'Set the maximum amount of characters that the user is allowed to fill in. '
125 u'Leave empty or fill in 0 if there is no character limit.')
126 )
127 help_text = ValidXMLTextField(
128 _(u'help text'), max_length=1000, blank=True,
129 help_text=_(u'The help text to be displayed with the field in the admin. Leave empty if '
130 u'there is no need for a help text. (max 1000 characters)')
131 )
132 mandatory = models.BooleanField(_(u'mandatory'), default=False,
133 help_text=_(u'Indicate whether this field is mandatory or not'))
134 order = models.PositiveSmallIntegerField(
135 _(u'order'), help_text=_(u'The order of the fields as they will be displayed in the '
136 u'project editor. Must be a positive number, and the lowest '
137 u'number will be shown on top.')
138 )
139 type = ValidXMLCharField(
140 _(u'type'), max_length=20, choices=TYPES, default='text',
141 help_text=_(u'Select the type of custom field. Text will show a text area in the project '
142 u'editor, and checkbox will show a checkbox.')
143 )
[end of akvo/rsr/models/custom_field.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rsr/models/custom_field.py b/akvo/rsr/models/custom_field.py
--- a/akvo/rsr/models/custom_field.py
+++ b/akvo/rsr/models/custom_field.py
@@ -74,7 +74,7 @@
)
def __unicode__(self):
- return u'%s' % str(self.value)
+ return u'%s' % self.value
class OrganisationCustomField(models.Model):
@@ -140,4 +140,4 @@
_(u'type'), max_length=20, choices=TYPES, default='text',
help_text=_(u'Select the type of custom field. Text will show a text area in the project '
u'editor, and checkbox will show a checkbox.')
- )
\ No newline at end of file
+ )
| {"golden_diff": "diff --git a/akvo/rsr/models/custom_field.py b/akvo/rsr/models/custom_field.py\n--- a/akvo/rsr/models/custom_field.py\n+++ b/akvo/rsr/models/custom_field.py\n@@ -74,7 +74,7 @@\n )\n \n def __unicode__(self):\n- return u'%s' % str(self.value)\n+ return u'%s' % self.value\n \n \n class OrganisationCustomField(models.Model):\n@@ -140,4 +140,4 @@\n _(u'type'), max_length=20, choices=TYPES, default='text',\n help_text=_(u'Select the type of custom field. Text will show a text area in the project '\n u'editor, and checkbox will show a checkbox.')\n- )\n\\ No newline at end of file\n+ )\n", "issue": "UnicodeEncodeError with ProjectCustomField\n```python\r\n\r\n File \"django/core/handlers/base.py\", line 111, in get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"django/views/decorators/csrf.py\", line 57, in wrapped_view\r\n return view_func(*args, **kwargs)\r\n File \"django/views/generic/base.py\", line 69, in view\r\n return self.dispatch(request, *args, **kwargs)\r\n File \"rest_framework/views.py\", line 466, in dispatch\r\n response = self.handle_exception(exc)\r\n File \"rest_framework/views.py\", line 463, in dispatch\r\n response = handler(request, *args, **kwargs)\r\n File \"rest_framework/decorators.py\", line 53, in handler\r\n return func(*args, **kwargs)\r\n File \"akvo/rest/views/project_editor.py\", line 576, in project_editor\r\n 'changes': log_changes(changes, user, project),\r\n File \"akvo/rest/views/project_editor.py\", line 92, in log_changes\r\n object_repr=obj.__unicode__(),\r\n File \"akvo/rsr/models/custom_field.py\", line 77, in __unicode__\r\n return u'%s' % str(self.value)\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField, ValidXMLTextField\n\n\nclass ProjectCustomField(models.Model):\n \"\"\"\n Custom fields make it possible for partner to specify additional fields. 
When specified for a\n project, the fields will appear in the admin (under the specified section) and can then be\n filled in.\n\n Custom fields for a project, linking the project to its' custom fields.\n\n Name: name of the custom field (label in the admin)\n Section: the section in the admin where the field should be added\n Maxlength: the maximum number of characters of the field\n Help text: the help text belonging to the field\n Value: the value which can be filled in the project admin.\n \"\"\"\n SECTIONS = (\n (1, _(u'01 - General information')),\n (2, _(u'02 - Contact information')),\n (3, _(u'03 - Project partners')),\n (4, _(u'04 - Project descriptions')),\n (5, _(u'05 - Results and indicators')),\n (6, _(u'06 - Finance')),\n (7, _(u'07 - Project locations')),\n (8, _(u'08 - Project focus')),\n (9, _(u'09 - Links and documents')),\n (10, _(u'10 - Project comments')),\n )\n\n TYPES = (\n ('text', _(u'Text')),\n ('boolean', _(u'Checkbox')),\n )\n\n project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='custom_fields')\n name = ValidXMLCharField(_(u'name'), max_length=255, help_text=_(u'(max 255 characters)'))\n section = models.IntegerField(\n _(u'admin section'), choices=SECTIONS,\n help_text=_(u'Select the section of the admin where the custom field should be displayed')\n )\n max_characters = models.IntegerField(\n _(u'maximum characters'), blank=True, null=True,\n help_text=_(u'Set the maximum amount of characters that the user is allowed to fill in. '\n u'Leave empty or fill in 0 if there is no character limit.')\n )\n help_text = ValidXMLTextField(\n _(u'help text'), max_length=1000, blank=True,\n help_text=_(u'The help text to be displayed with the field in the admin. Leave empty if '\n u'there is no need for a help text. (max 1000 characters)')\n )\n value = ValidXMLTextField(_(u'value'), blank=True)\n mandatory = models.BooleanField(_(u'mandatory'), default=False,\n help_text=_(u'Indicate whether this field is mandatory or not'))\n order = models.PositiveSmallIntegerField(\n _(u'order'), help_text=_(u'The order of the fields as they will be displayed in the '\n u'project editor. Must be a positive number, and the lowest '\n u'number will be shown on top.')\n )\n type = ValidXMLCharField(\n _(u'type'), max_length=20, choices=TYPES, default='text',\n help_text=_(u'Select the type of custom field. Text will show a text area in the project '\n u'editor, and checkbox will show a checkbox.')\n )\n\n def __unicode__(self):\n return u'%s' % str(self.value)\n\n\nclass OrganisationCustomField(models.Model):\n \"\"\"\n Custom fields make it possible for partner to specify additional fields. 
When specified for a\n project, the fields will appear in the admin (under the specified section) and can then be\n filled in.\n\n Custom fields for an organisation, linking the organisation to its' custom fields.\n\n These custom fields will be used for the projects whenever a user of the organisation\n creates a new project.\n\n Name: name of the custom field (label in the admin)\n Section: the section in the admin where the field should be added\n Maxlength: the maximum number of characters of the field\n Help text: the help text belonging to the field\n \"\"\"\n SECTIONS = (\n (1, _(u'01 - General information')),\n (2, _(u'02 - Contact information')),\n (3, _(u'03 - Project partners')),\n (4, _(u'04 - Project descriptions')),\n (5, _(u'05 - Results and indicators')),\n (6, _(u'06 - Finance')),\n (7, _(u'07 - Project locations')),\n (8, _(u'08 - Project focus')),\n (9, _(u'09 - Links and documents')),\n (10, _(u'10 - Project comments')),\n )\n\n TYPES = (\n ('text', _(u'Text')),\n ('boolean', _(u'Checkbox')),\n )\n\n organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'organisation'), related_name='custom_fields'\n )\n name = ValidXMLCharField(_(u'name'), max_length=255, help_text=_(u'(max 255 characters)'))\n section = models.IntegerField(\n _(u'admin section'), choices=SECTIONS,\n help_text=_(u'Select the section of the admin where the custom field should be displayed')\n )\n max_characters = models.IntegerField(\n _(u'maximum characters'), blank=True, null=True,\n help_text=_(u'Set the maximum amount of characters that the user is allowed to fill in. '\n u'Leave empty or fill in 0 if there is no character limit.')\n )\n help_text = ValidXMLTextField(\n _(u'help text'), max_length=1000, blank=True,\n help_text=_(u'The help text to be displayed with the field in the admin. Leave empty if '\n u'there is no need for a help text. (max 1000 characters)')\n )\n mandatory = models.BooleanField(_(u'mandatory'), default=False,\n help_text=_(u'Indicate whether this field is mandatory or not'))\n order = models.PositiveSmallIntegerField(\n _(u'order'), help_text=_(u'The order of the fields as they will be displayed in the '\n u'project editor. Must be a positive number, and the lowest '\n u'number will be shown on top.')\n )\n type = ValidXMLCharField(\n _(u'type'), max_length=20, choices=TYPES, default='text',\n help_text=_(u'Select the type of custom field. Text will show a text area in the project '\n u'editor, and checkbox will show a checkbox.')\n )", "path": "akvo/rsr/models/custom_field.py"}]} | 2,685 | 185 |
gh_patches_debug_16694 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Entering Palette options crashes mitmproxy
##### Steps to reproduce the problem:
1. Press 'O' for options
2. Select 'Palette'
3. mitmproxy will crash
##### Any other comments? What have you tried so far?
```
Traceback (most recent call last):
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/master.py", line 281, in run
self.loop.run()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 682, in run
self._loop()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 719, in _loop
self._watch_files[fd]()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py", line 393, in <lambda>
event_loop, callback, self.get_available_raw_input())
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py", line 493, in parse_input
callback(processed, processed_codes)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 403, in _update
self.process_input(keys)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/window.py", line 84, in keypress
k = super().keypress(size, k)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/select.py", line 114, in keypress
self.get_focus()[0].option.activate()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/palettepicker.py", line 46, in <lambda>
lambda: setattr(self.master.options, "palette", name)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py", line 114, in __setattr__
self.update(**{attr: value})
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py", line 141, in update
raise KeyError("No such option: %s" % k)
KeyError: 'No such option: palette'
```
The option names in mitmproxy/options.py were prefixed with 'console_', but lines 46 and 62 of mitmproxy/tools/console/palettepicker.py were not updated to use the new prefix.
This appears to have been broken by commit [35aff3b](https://github.com/mitmproxy/mitmproxy/commit/35aff3b7838f8df718cc574d2643f1355849fa8e)
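Put differently, the picker reads the renamed option but still writes the old name, so every selection goes through `OptManager.update()` with a key that no longer exists. A minimal sketch of the mismatch, assuming mitmproxy 2.0.0 (the palette value is illustrative):

```python
from mitmproxy import options

opts = options.Options()

opts.console_palette              # the renamed option exists (this is what line 45 reads)...
setattr(opts, "palette", "dark")  # ...but line 46 writes the old name, which raises
                                  # KeyError: No such option: palette

# Line 62 has the same problem: it builds a toggler for "palette_transparent"
# instead of "console_palette_transparent".
```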
##### System information
Mitmproxy version: 2.0.0 (release version)
Python version: 3.6.0
Platform: Darwin-16.4.0-x86_64-i386-64bit
SSL version: OpenSSL 1.1.0e 16 Feb 2017
Mac version: 10.12.3 ('', '', '') x86_64
</issue>
<code>
[start of mitmproxy/tools/console/palettepicker.py]
1 import urwid
2
3 from mitmproxy.tools.console import common
4 from mitmproxy.tools.console import palettes
5 from mitmproxy.tools.console import select
6
7 footer = [
8 ('heading_key', "enter/space"), ":select",
9 ]
10
11
12 def _mkhelp():
13 text = []
14 keys = [
15 ("enter/space", "select"),
16 ]
17 text.extend(common.format_keyvals(keys, key="key", val="text", indent=4))
18 return text
19
20
21 help_context = _mkhelp()
22
23
24 class PalettePicker(urwid.WidgetWrap):
25
26 def __init__(self, master):
27 self.master = master
28 low, high = [], []
29 for k, v in palettes.palettes.items():
30 if v.high:
31 high.append(k)
32 else:
33 low.append(k)
34 high.sort()
35 low.sort()
36
37 options = [
38 select.Heading("High Colour")
39 ]
40
41 def mkopt(name):
42 return select.Option(
43 i,
44 None,
45 lambda: self.master.options.console_palette == name,
46 lambda: setattr(self.master.options, "palette", name)
47 )
48
49 for i in high:
50 options.append(mkopt(i))
51 options.append(select.Heading("Low Colour"))
52 for i in low:
53 options.append(mkopt(i))
54
55 options.extend(
56 [
57 select.Heading("Options"),
58 select.Option(
59 "Transparent",
60 "T",
61 lambda: master.options.console_palette_transparent,
62 master.options.toggler("palette_transparent")
63 )
64 ]
65 )
66
67 self.lb = select.Select(options)
68 title = urwid.Text("Palettes")
69 title = urwid.Padding(title, align="left", width=("relative", 100))
70 title = urwid.AttrWrap(title, "heading")
71 self._w = urwid.Frame(
72 self.lb,
73 header = title
74 )
75 master.options.changed.connect(self.sig_options_changed)
76
77 def sig_options_changed(self, options, updated):
78 self.lb.walker._modified()
79
[end of mitmproxy/tools/console/palettepicker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/tools/console/palettepicker.py b/mitmproxy/tools/console/palettepicker.py
--- a/mitmproxy/tools/console/palettepicker.py
+++ b/mitmproxy/tools/console/palettepicker.py
@@ -43,7 +43,7 @@
i,
None,
lambda: self.master.options.console_palette == name,
- lambda: setattr(self.master.options, "palette", name)
+ lambda: setattr(self.master.options, "console_palette", name)
)
for i in high:
@@ -59,7 +59,7 @@
"Transparent",
"T",
lambda: master.options.console_palette_transparent,
- master.options.toggler("palette_transparent")
+ master.options.toggler("console_palette_transparent")
)
]
)
| {"golden_diff": "diff --git a/mitmproxy/tools/console/palettepicker.py b/mitmproxy/tools/console/palettepicker.py\n--- a/mitmproxy/tools/console/palettepicker.py\n+++ b/mitmproxy/tools/console/palettepicker.py\n@@ -43,7 +43,7 @@\n i,\n None,\n lambda: self.master.options.console_palette == name,\n- lambda: setattr(self.master.options, \"palette\", name)\n+ lambda: setattr(self.master.options, \"console_palette\", name)\n )\n \n for i in high:\n@@ -59,7 +59,7 @@\n \"Transparent\",\n \"T\",\n lambda: master.options.console_palette_transparent,\n- master.options.toggler(\"palette_transparent\")\n+ master.options.toggler(\"console_palette_transparent\")\n )\n ]\n )\n", "issue": "Entering Palette options crashes mitmproxy\n##### Steps to reproduce the problem:\r\n\r\n1. Press 'O' for options\r\n2. Select 'Palette'\r\n3. mitmproxy will crash\r\n\r\n\r\n##### Any other comments? What have you tried so far?\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/master.py\", line 281, in run\r\n self.loop.run()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 278, in run\r\n self._run()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 376, in _run\r\n self.event_loop.run()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 682, in run\r\n self._loop()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 719, in _loop\r\n self._watch_files[fd]()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py\", line 393, in <lambda>\r\n event_loop, callback, self.get_available_raw_input())\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py\", line 493, in parse_input\r\n callback(processed, processed_codes)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 403, in _update\r\n self.process_input(keys)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 503, in process_input\r\n k = self._topmost_widget.keypress(self.screen_size, k)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/window.py\", line 84, in keypress\r\n k = super().keypress(size, k)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/select.py\", line 114, in keypress\r\n self.get_focus()[0].option.activate()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/palettepicker.py\", line 46, in <lambda>\r\n lambda: setattr(self.master.options, \"palette\", name)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py\", line 114, in __setattr__\r\n self.update(**{attr: value})\r\n File 
\"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py\", line 141, in update\r\n raise KeyError(\"No such option: %s\" % k)\r\nKeyError: 'No such option: palette'\r\n\r\n```\r\nThe option names in mitmproxy/options.py were prefixed with 'console_', but line 46 and line 62 of mitmproxy/tools/console/palettepicker.py were not updated to include this prefix. \r\n\r\nThis appears to have been broken by commit [35aff3b](https://github.com/mitmproxy/mitmproxy/commit/35aff3b7838f8df718cc574d2643f1355849fa8e)\r\n\r\n##### System information\r\n\r\nMitmproxy version: 2.0.0 (release version) \r\nPython version: 3.6.0\r\nPlatform: Darwin-16.4.0-x86_64-i386-64bit\r\nSSL version: OpenSSL 1.1.0e 16 Feb 2017\r\nMac version: 10.12.3 ('', '', '') x86_64\r\n\n", "before_files": [{"content": "import urwid\n\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import palettes\nfrom mitmproxy.tools.console import select\n\nfooter = [\n ('heading_key', \"enter/space\"), \":select\",\n]\n\n\ndef _mkhelp():\n text = []\n keys = [\n (\"enter/space\", \"select\"),\n ]\n text.extend(common.format_keyvals(keys, key=\"key\", val=\"text\", indent=4))\n return text\n\n\nhelp_context = _mkhelp()\n\n\nclass PalettePicker(urwid.WidgetWrap):\n\n def __init__(self, master):\n self.master = master\n low, high = [], []\n for k, v in palettes.palettes.items():\n if v.high:\n high.append(k)\n else:\n low.append(k)\n high.sort()\n low.sort()\n\n options = [\n select.Heading(\"High Colour\")\n ]\n\n def mkopt(name):\n return select.Option(\n i,\n None,\n lambda: self.master.options.console_palette == name,\n lambda: setattr(self.master.options, \"palette\", name)\n )\n\n for i in high:\n options.append(mkopt(i))\n options.append(select.Heading(\"Low Colour\"))\n for i in low:\n options.append(mkopt(i))\n\n options.extend(\n [\n select.Heading(\"Options\"),\n select.Option(\n \"Transparent\",\n \"T\",\n lambda: master.options.console_palette_transparent,\n master.options.toggler(\"palette_transparent\")\n )\n ]\n )\n\n self.lb = select.Select(options)\n title = urwid.Text(\"Palettes\")\n title = urwid.Padding(title, align=\"left\", width=(\"relative\", 100))\n title = urwid.AttrWrap(title, \"heading\")\n self._w = urwid.Frame(\n self.lb,\n header = title\n )\n master.options.changed.connect(self.sig_options_changed)\n\n def sig_options_changed(self, options, updated):\n self.lb.walker._modified()\n", "path": "mitmproxy/tools/console/palettepicker.py"}]} | 2,254 | 174 |
gh_patches_debug_10877 | rasdani/github-patches | git_diff | ivy-llc__ivy-20857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fftshift
This subtask is to add the `fftshift` function to the Jax frontend.
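For context, `fftshift` rotates each selected axis by half its length so that the zero-frequency term lands in the centre of the spectrum, matching `numpy.fft.fftshift`. A rough sketch of the expected semantics (NumPy is used purely for illustration; this is not the frontend code itself):

```python
import numpy as np

freqs = np.fft.fftfreq(6)        # [ 0.    0.167  0.333 -0.5  -0.333 -0.167]
np.fft.fftshift(freqs)           # [-0.5  -0.333 -0.167  0.    0.167  0.333]

# equivalent roll: shift each selected axis by dim // 2
np.roll(freqs, len(freqs) // 2)  # same result for this 1-D input
```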
</issue>
<code>
[start of ivy/functional/frontends/jax/numpy/fft.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
4
5
6 @to_ivy_arrays_and_back
7 def fft(a, n=None, axis=-1, norm=None):
8 if norm is None:
9 norm = "backward"
10 return ivy.fft(a, axis, norm=norm, n=n)
11
[end of ivy/functional/frontends/jax/numpy/fft.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py
--- a/ivy/functional/frontends/jax/numpy/fft.py
+++ b/ivy/functional/frontends/jax/numpy/fft.py
@@ -1,6 +1,25 @@
# local
import ivy
from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
+from ivy.func_wrapper import with_unsupported_dtypes
+
+
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+def fftshift(x, axes=None, name=None):
+ shape = x.shape
+
+ if axes is None:
+ axes = tuple(range(x.ndim))
+ shifts = [(dim // 2) for dim in shape]
+ elif isinstance(axes, int):
+ shifts = shape[axes] // 2
+ else:
+ shifts = [shape[ax] // 2 for ax in axes]
+
+ roll = ivy.roll(x, shifts, axis=axes)
+
+ return roll
@to_ivy_arrays_and_back
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py\n--- a/ivy/functional/frontends/jax/numpy/fft.py\n+++ b/ivy/functional/frontends/jax/numpy/fft.py\n@@ -1,6 +1,25 @@\n # local\n import ivy\n from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\n+from ivy.func_wrapper import with_unsupported_dtypes\n+\n+\n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+def fftshift(x, axes=None, name=None):\n+ shape = x.shape\n+\n+ if axes is None:\n+ axes = tuple(range(x.ndim))\n+ shifts = [(dim // 2) for dim in shape]\n+ elif isinstance(axes, int):\n+ shifts = shape[axes] // 2\n+ else:\n+ shifts = [shape[ax] // 2 for ax in axes]\n+\n+ roll = ivy.roll(x, shifts, axis=axes)\n+\n+ return roll\n \n \n @to_ivy_arrays_and_back\n", "issue": "fftshift\nThis subtask is to add the `fftshift` function to the Jax frontend\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.fft(a, axis, norm=norm, n=n)\n", "path": "ivy/functional/frontends/jax/numpy/fft.py"}]} | 662 | 280 |
gh_patches_debug_963 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-289 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Utilize autosummary Sphinx directive in API Reference
## Proposal
A consolidated list of functionality would go a long way in [our API Reference](https://pyjanitor.readthedocs.io/api.html) section.
Other libraries have leveraged the [autosummary](http://www.sphinx-doc.org/en/master/usage/extensions/autosummary.html#directive-autosummary) Sphinx directive to achieve this to great effect. For instance:
* Pandas: [Docs](https://pandas.pydata.org/pandas-docs/stable/reference/indexing.html), [Raw](https://raw.githubusercontent.com/pandas-dev/pandas/master/doc/source/reference/indexing.rst)
* Matplotlib: [Docs](https://matplotlib.org/api/axes_api.html), [Raw](https://matplotlib.org/_sources/api/axes_api.rst.txt)
## Implementation Details
Apart from rolling `sphinx.ext.autosummary` into the `conf.py` this would also involve going through and enumerating the different functions in the `api.rst` documentation.
A concern here, though-- this would mean that all future feature introductions would have to get appended to the lists in these files, **which necessitates adding this step to the PR checklist**... Until someone figures out a more programmatic way to do this, anyhow 😉
</issue>
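To make the proposal concrete, the Sphinx side of the change is small; the sketch below shows the two pieces involved. The extension name comes straight from the issue, while the `api.rst` directive content (kept inside a Python string so the example stays in one language) and the listed function names are illustrative only.

```python
# docs/conf.py -- add the extension next to the ones already registered
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.autosummary",  # new
    # ... the remaining extensions stay as they are
]

# What an api.rst block would then look like (illustrative function names):
AUTOSUMMARY_EXAMPLE = """
.. autosummary::
   :toctree: generated

   janitor.clean_names
   janitor.remove_empty
"""
```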
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17 from pathlib import Path
18
19 sys.path.insert(0, os.path.abspath("."))
20 sys.path.insert(0, os.path.abspath("../examples"))
21
22 # Make a symlink in our sphinx source directory to the top-level
23 # examples/notebooks directory so we can include notebooks in the doc
24 notebooks = Path("./notebooks")
25 if not notebooks.exists():
26 print("Making symlink to ../examples/notebooks")
27 notebooks.symlink_to("../examples/notebooks")
28
29
30 # -- Project information -----------------------------------------------------
31
32 project = "pyjanitor"
33 copyright = "2018, Eric J. Ma"
34 author = "Eric J. Ma"
35
36 # The short X.Y version
37 version = "0.1.0"
38 # The full version, including alpha/beta/rc tags
39 release = ""
40
41
42 # -- General configuration ---------------------------------------------------
43
44 # If your documentation needs a minimal Sphinx version, state it here.
45 #
46 # needs_sphinx = '1.0'
47
48 # Add any Sphinx extension module names here, as strings. They can be
49 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
50 # ones.
51 extensions = [
52 "sphinx.ext.autodoc",
53 "sphinx.ext.doctest",
54 "sphinx.ext.intersphinx",
55 "sphinx.ext.todo",
56 "sphinx.ext.coverage",
57 "sphinx.ext.viewcode",
58 "sphinx.ext.githubpages",
59 "sphinxcontrib.fulltoc",
60 "nbsphinx",
61 ]
62
63 # Add any paths that contain templates here, relative to this directory.
64 templates_path = ["_templates"]
65
66 # The suffix(es) of source filenames.
67 # You can specify multiple suffix as a list of string:
68 #
69 # source_suffix = ['.rst', '.md']
70 source_suffix = [".md", ".rst", ".ipynb"]
71
72 # The master toctree document.
73 master_doc = "index"
74
75 # The language for content autogenerated by Sphinx. Refer to documentation
76 # for a list of supported languages.
77 #
78 # This is also used if you do content translation via gettext catalogs.
79 # Usually you set "language" from the command line for these cases.
80 language = None
81
82 # List of patterns, relative to source directory, that match files and
83 # directories to ignore when looking for source files.
84 # This pattern also affects html_static_path and html_extra_path .
85 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "**.ipynb_checkpoints"]
86
87 # The name of the Pygments (syntax highlighting) style to use.
88 pygments_style = "sphinx"
89
90
91 # -- Options for HTML output -------------------------------------------------
92
93 # The theme to use for HTML and HTML Help pages. See the documentation for
94 # a list of builtin themes.
95 #
96 html_theme = "alabaster"
97
98 # Theme options are theme-specific and customize the look and feel of a theme
99 # further. For a list of options available for each theme, see the
100 # documentation.
101 #
102 html_theme_options = {"logo": "logo_title.svg"}
103
104 # Add any paths that contain custom static files (such as style sheets) here,
105 # relative to this directory. They are copied after the builtin static files,
106 # so a file named "default.css" will overwrite the builtin "default.css".
107 html_static_path = ["_static"]
108
109 # Custom sidebar templates, must be a dictionary that maps document names
110 # to template names.
111 #
112 # The default sidebars (for documents that don't match any pattern) are
113 # defined by theme itself. Builtin themes are using these templates by
114 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
115 # 'searchbox.html']``.
116 #
117 html_sidebars = {
118 "**": ["about.html", "navigation.html", "relations.html", "searchbox.html"]
119 }
120
121
122 # -- Options for HTMLHelp output ---------------------------------------------
123
124 # Output file base name for HTML help builder.
125 htmlhelp_basename = "pyjanitordoc"
126
127
128 # -- Options for LaTeX output ------------------------------------------------
129
130 latex_elements = {
131 # The paper size ('letterpaper' or 'a4paper').
132 #
133 # 'papersize': 'letterpaper',
134 # The font size ('10pt', '11pt' or '12pt').
135 #
136 # 'pointsize': '10pt',
137 # Additional stuff for the LaTeX preamble.
138 #
139 # 'preamble': '',
140 # Latex figure (float) alignment
141 #
142 # 'figure_align': 'htbp',
143 }
144
145 # Grouping the document tree into LaTeX files. List of tuples
146 # (source start file, target name, title,
147 # author, documentclass [howto, manual, or own class]).
148 latex_documents = [
149 (
150 master_doc,
151 "pyjanitor.tex",
152 "pyjanitor Documentation",
153 "Eric J. Ma",
154 "manual",
155 )
156 ]
157
158
159 # -- Options for manual page output ------------------------------------------
160
161 # One entry per manual page. List of tuples
162 # (source start file, name, description, authors, manual section).
163 man_pages = [(master_doc, "pyjanitor", "pyjanitor Documentation", [author], 1)]
164
165
166 # -- Options for Texinfo output ----------------------------------------------
167
168 # Grouping the document tree into Texinfo files. List of tuples
169 # (source start file, target name, title, author,
170 # dir menu entry, description, category)
171 texinfo_documents = [
172 (
173 master_doc,
174 "pyjanitor",
175 "pyjanitor Documentation",
176 author,
177 "pyjanitor",
178 "One line description of project.",
179 "Miscellaneous",
180 )
181 ]
182
183
184 # -- Extension configuration -------------------------------------------------
185
186 # -- Options for intersphinx extension ---------------------------------------
187
188 # Example configuration for intersphinx: refer to the Python standard library.
189 intersphinx_mapping = {
190 "https://docs.python.org/": None,
191 "https://pandas.pydata.org/pandas-docs/stable": None,
192 }
193
194 # -- Options for todo extension ----------------------------------------------
195
196 # If true, `todo` and `todoList` produce output, else they produce nothing.
197 todo_include_todos = True
198
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -58,6 +58,7 @@
"sphinx.ext.githubpages",
"sphinxcontrib.fulltoc",
"nbsphinx",
+ "sphinx.ext.autosummary",
]
# Add any paths that contain templates here, relative to this directory.
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -58,6 +58,7 @@\n \"sphinx.ext.githubpages\",\n \"sphinxcontrib.fulltoc\",\n \"nbsphinx\",\n+ \"sphinx.ext.autosummary\",\n ]\n \n # Add any paths that contain templates here, relative to this directory.\n", "issue": "Utilize autosummary Sphinx directive in API Reference\n## Proposal\r\n\r\nA consolidated list of functionality would go a long way in [our API Reference](https://pyjanitor.readthedocs.io/api.html) section.\r\n\r\nOther libraries have leveraged the [autosummary](http://www.sphinx-doc.org/en/master/usage/extensions/autosummary.html#directive-autosummary) Sphinx directive to achieve this to great effect. For instance:\r\n\r\n* Pandas: [Docs](https://pandas.pydata.org/pandas-docs/stable/reference/indexing.html), [Raw](https://raw.githubusercontent.com/pandas-dev/pandas/master/doc/source/reference/indexing.rst)\r\n* Matplotlib: [Docs](https://matplotlib.org/api/axes_api.html), [Raw](https://matplotlib.org/_sources/api/axes_api.rst.txt)\r\n\r\n## Implementation Details\r\n\r\nApart from rolling `sphinx.ext.autosummary` into the `conf.py` this would also involve going through and enumerating the different functions in the `api.rst` documentation.\r\n\r\nA concern here, though-- this would mean that all future feature introductions would have to get appended to the lists in these files, **which necessitates adding this step to the PR checklist**... Until someone figures out a more programmatic way to do this, anyhow \ud83d\ude09 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nfrom pathlib import Path\n\nsys.path.insert(0, os.path.abspath(\".\"))\nsys.path.insert(0, os.path.abspath(\"../examples\"))\n\n# Make a symlink in our sphinx source directory to the top-level\n# examples/notebooks directory so we can include notebooks in the doc\nnotebooks = Path(\"./notebooks\")\nif not notebooks.exists():\n print(\"Making symlink to ../examples/notebooks\")\n notebooks.symlink_to(\"../examples/notebooks\")\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"pyjanitor\"\ncopyright = \"2018, Eric J. Ma\"\nauthor = \"Eric J. Ma\"\n\n# The short X.Y version\nversion = \"0.1.0\"\n# The full version, including alpha/beta/rc tags\nrelease = \"\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.githubpages\",\n \"sphinxcontrib.fulltoc\",\n \"nbsphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = [\".md\", \".rst\", \".ipynb\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\", \"**.ipynb_checkpoints\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"alabaster\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\"logo\": \"logo_title.svg\"}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\nhtml_sidebars = {\n \"**\": [\"about.html\", \"navigation.html\", \"relations.html\", \"searchbox.html\"]\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"pyjanitordoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"pyjanitor.tex\",\n \"pyjanitor Documentation\",\n \"Eric J. Ma\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"pyjanitor\", \"pyjanitor Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"pyjanitor\",\n \"pyjanitor Documentation\",\n author,\n \"pyjanitor\",\n \"One line description of project.\",\n \"Miscellaneous\",\n )\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"https://docs.python.org/\": None,\n \"https://pandas.pydata.org/pandas-docs/stable\": None,\n}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/conf.py"}]} | 2,707 | 85 |
gh_patches_debug_21747 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-2770 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FineWeb: Unexpected end of stream: Page was smaller (1862094) than expected (2055611)
The config-parquet-metadata job succeeds but the split-first-rows job fails when using `compute_first_rows_from_parquet_response`.
In the meantime I set the error code in the config-parquet-metadata response as `CachedResponseNotFound` to make the split-first-rows succeed
This workaround causes `ResponseNotFound` when opening page 2 in the viewer unfortunately (can't do random access in the parquet data without a valid config-parquet-metadata response)
</issue>
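For readers skimming the record: the accepted diff further down derives the split subdirectory from the parquet file URL instead of the split name, so that datasets whose shards live under `train-part0`, `train-part1`, ... (used when a split has more than 10k parquet files) resolve to the right metadata file. The core of that idea is a one-liner; the URL below is illustrative, not a real dataset path.

```python
# Illustrative URL -- real ones point at refs/convert/parquet on the Hub.
url = "https://huggingface.co/datasets/org/name/resolve/refs%2Fconvert%2Fparquet/default/train-part0/0000.parquet"
split_subdirectory = url.split("/")[-2]
print(split_subdirectory)  # -> "train-part0"
```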
<code>
[start of services/worker/src/worker/job_runners/config/parquet_metadata.py]
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 import functools
5 import logging
6 from typing import Optional
7
8 from fsspec.implementations.http import HTTPFileSystem
9 from libcommon.dtos import JobInfo, SplitHubFile
10 from libcommon.exceptions import (
11 FileSystemError,
12 ParquetResponseEmptyError,
13 PreviousStepFormatError,
14 )
15 from libcommon.simple_cache import get_previous_step_or_raise
16 from libcommon.storage import StrPath
17 from libcommon.viewer_utils.parquet_metadata import create_parquet_metadata_file
18 from tqdm.contrib.concurrent import thread_map
19
20 from worker.config import AppConfig
21 from worker.dtos import (
22 CompleteJobResult,
23 ConfigParquetMetadataResponse,
24 ParquetFileMetadataItem,
25 )
26 from worker.job_runners.config.config_job_runner import ConfigJobRunner
27 from worker.utils import get_parquet_file
28
29
30 def create_parquet_metadata_file_from_remote_parquet(
31 parquet_file_item: SplitHubFile, fs: HTTPFileSystem, hf_token: Optional[str], parquet_metadata_directory: StrPath
32 ) -> ParquetFileMetadataItem:
33 try:
34 parquet_file = get_parquet_file(url=parquet_file_item["url"], fs=fs, hf_token=hf_token)
35 except Exception as e:
36 raise FileSystemError(f"Could not read the parquet files: {e}") from e
37 parquet_metadata_subpath = create_parquet_metadata_file(
38 dataset=parquet_file_item["dataset"],
39 config=parquet_file_item["config"],
40 split=parquet_file_item["split"],
41 parquet_file_metadata=parquet_file.metadata,
42 filename=parquet_file_item["filename"],
43 parquet_metadata_directory=parquet_metadata_directory,
44 )
45 return ParquetFileMetadataItem(
46 dataset=parquet_file_item["dataset"],
47 config=parquet_file_item["config"],
48 split=parquet_file_item["split"],
49 url=parquet_file_item["url"],
50 filename=parquet_file_item["filename"],
51 size=parquet_file_item["size"],
52 num_rows=parquet_file.metadata.num_rows,
53 parquet_metadata_subpath=parquet_metadata_subpath,
54 )
55
56
57 def compute_parquet_metadata_response(
58 dataset: str, config: str, hf_token: Optional[str], parquet_metadata_directory: StrPath
59 ) -> ConfigParquetMetadataResponse:
60 """
61 Get the response of 'config-parquet-metadata' for one specific dataset and config on huggingface.co.
62 Store the config's parquet metadata on the disk and return the list of local metadata files.
63
64 Args:
65 dataset (`str`):
66 A namespace (user or an organization) and a repo name separated
67 by a `/`.
68 config (`str`):
69 A configuration name.
70 hf_token (`str`, *optional*):
71 An authentication token (See https://huggingface.co/settings/token)
72 parquet_metadata_directory (`str` or `pathlib.Path`):
73 The directory where the parquet metadata files are stored.
74
75 Raises:
76 [~`libcommon.simple_cache.CachedArtifactError`]:
77 If the previous step gave an error.
78 [~`libcommon.exceptions.PreviousStepFormatError`]:
79 If the content of the previous step has not the expected format
80 [~`libcommon.exceptions.ParquetResponseEmptyError`]:
81 If the previous step provided an empty list of parquet files.
82 [~`libcommon.exceptions.FileSystemError`]:
83 If the HfFileSystem couldn't access the parquet files.
84
85 Returns:
86 `ConfigParquetMetadataResponse`: An object with the list of parquet metadata files.
87 """
88 logging.info(f"compute 'config-parquet-metadata' for {dataset=} {config=}")
89
90 config_parquet_response = get_previous_step_or_raise(kind="config-parquet", dataset=dataset, config=config)
91 try:
92 parquet_files_content = config_parquet_response["content"]["parquet_files"]
93 parquet_file_items: list[SplitHubFile] = [
94 parquet_file_item for parquet_file_item in parquet_files_content if parquet_file_item["config"] == config
95 ]
96 if not parquet_file_items:
97 raise ParquetResponseEmptyError("No parquet files found.")
98 content = config_parquet_response["content"]
99 if "features" in content and isinstance(content["features"], dict):
100 features = content["features"] # config-parquet version<6 didn't have features
101 else:
102 # (July 23) we can remove this later and raise an error instead (can be None for backward compatibility)
103 features = None
104 partial = config_parquet_response["content"]["partial"]
105 except Exception as e:
106 raise PreviousStepFormatError("Previous step did not return the expected content.") from e
107
108 fs = HTTPFileSystem()
109 desc = f"{dataset}/{config}"
110 parquet_files_metadata: list[ParquetFileMetadataItem] = thread_map(
111 functools.partial(
112 create_parquet_metadata_file_from_remote_parquet,
113 fs=fs,
114 hf_token=hf_token,
115 parquet_metadata_directory=parquet_metadata_directory,
116 ),
117 parquet_file_items,
118 desc=desc,
119 unit="pq",
120 disable=True,
121 )
122 return ConfigParquetMetadataResponse(
123 parquet_files_metadata=parquet_files_metadata, features=features, partial=partial
124 )
125
126
127 class ConfigParquetMetadataJobRunner(ConfigJobRunner):
128 parquet_metadata_directory: StrPath
129
130 @staticmethod
131 def get_job_type() -> str:
132 return "config-parquet-metadata"
133
134 def __init__(
135 self,
136 job_info: JobInfo,
137 app_config: AppConfig,
138 parquet_metadata_directory: StrPath,
139 ) -> None:
140 super().__init__(
141 job_info=job_info,
142 app_config=app_config,
143 )
144 self.parquet_metadata_directory = parquet_metadata_directory
145
146 def compute(self) -> CompleteJobResult:
147 return CompleteJobResult(
148 compute_parquet_metadata_response(
149 dataset=self.dataset,
150 config=self.config,
151 hf_token=self.app_config.common.hf_token,
152 parquet_metadata_directory=self.parquet_metadata_directory,
153 )
154 )
155
[end of services/worker/src/worker/job_runners/config/parquet_metadata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/services/worker/src/worker/job_runners/config/parquet_metadata.py b/services/worker/src/worker/job_runners/config/parquet_metadata.py
--- a/services/worker/src/worker/job_runners/config/parquet_metadata.py
+++ b/services/worker/src/worker/job_runners/config/parquet_metadata.py
@@ -34,10 +34,14 @@
parquet_file = get_parquet_file(url=parquet_file_item["url"], fs=fs, hf_token=hf_token)
except Exception as e:
raise FileSystemError(f"Could not read the parquet files: {e}") from e
+ split = parquet_file_item["url"].split("/")[-2]
+ # ^ https://github.com/huggingface/dataset-viewer/issues/2768
+ # to support more than 10k parquet files, in which case, instead of "train" for example,
+ # the subdirectories are "train-part0", "train-part1", "train-part2", etc.
parquet_metadata_subpath = create_parquet_metadata_file(
dataset=parquet_file_item["dataset"],
config=parquet_file_item["config"],
- split=parquet_file_item["split"],
+ split=split,
parquet_file_metadata=parquet_file.metadata,
filename=parquet_file_item["filename"],
parquet_metadata_directory=parquet_metadata_directory,
| {"golden_diff": "diff --git a/services/worker/src/worker/job_runners/config/parquet_metadata.py b/services/worker/src/worker/job_runners/config/parquet_metadata.py\n--- a/services/worker/src/worker/job_runners/config/parquet_metadata.py\n+++ b/services/worker/src/worker/job_runners/config/parquet_metadata.py\n@@ -34,10 +34,14 @@\n parquet_file = get_parquet_file(url=parquet_file_item[\"url\"], fs=fs, hf_token=hf_token)\n except Exception as e:\n raise FileSystemError(f\"Could not read the parquet files: {e}\") from e\n+ split = parquet_file_item[\"url\"].split(\"/\")[-2]\n+ # ^ https://github.com/huggingface/dataset-viewer/issues/2768\n+ # to support more than 10k parquet files, in which case, instead of \"train\" for example,\n+ # the subdirectories are \"train-part0\", \"train-part1\", \"train-part2\", etc.\n parquet_metadata_subpath = create_parquet_metadata_file(\n dataset=parquet_file_item[\"dataset\"],\n config=parquet_file_item[\"config\"],\n- split=parquet_file_item[\"split\"],\n+ split=split,\n parquet_file_metadata=parquet_file.metadata,\n filename=parquet_file_item[\"filename\"],\n parquet_metadata_directory=parquet_metadata_directory,\n", "issue": "FineWeb: Unexpected end of stream: Page was smaller (1862094) than expected (2055611)\nThe config-parquet-metadata job succeeds but the split-first-rows job fails when using `compute_first_rows_from_parquet_response`.\r\n\r\nIn the meantime I set the error code in the config-parquet-metadata response as `CachedResponseNotFound` to make the split-first-rows succeed\r\n\r\nThis workaround causes `ResponseNotFound` when opening page 2 in the viewer unfortunately (can't do random access in the parquet data without a valid config-parquet-metadata response)\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport functools\nimport logging\nfrom typing import Optional\n\nfrom fsspec.implementations.http import HTTPFileSystem\nfrom libcommon.dtos import JobInfo, SplitHubFile\nfrom libcommon.exceptions import (\n FileSystemError,\n ParquetResponseEmptyError,\n PreviousStepFormatError,\n)\nfrom libcommon.simple_cache import get_previous_step_or_raise\nfrom libcommon.storage import StrPath\nfrom libcommon.viewer_utils.parquet_metadata import create_parquet_metadata_file\nfrom tqdm.contrib.concurrent import thread_map\n\nfrom worker.config import AppConfig\nfrom worker.dtos import (\n CompleteJobResult,\n ConfigParquetMetadataResponse,\n ParquetFileMetadataItem,\n)\nfrom worker.job_runners.config.config_job_runner import ConfigJobRunner\nfrom worker.utils import get_parquet_file\n\n\ndef create_parquet_metadata_file_from_remote_parquet(\n parquet_file_item: SplitHubFile, fs: HTTPFileSystem, hf_token: Optional[str], parquet_metadata_directory: StrPath\n) -> ParquetFileMetadataItem:\n try:\n parquet_file = get_parquet_file(url=parquet_file_item[\"url\"], fs=fs, hf_token=hf_token)\n except Exception as e:\n raise FileSystemError(f\"Could not read the parquet files: {e}\") from e\n parquet_metadata_subpath = create_parquet_metadata_file(\n dataset=parquet_file_item[\"dataset\"],\n config=parquet_file_item[\"config\"],\n split=parquet_file_item[\"split\"],\n parquet_file_metadata=parquet_file.metadata,\n filename=parquet_file_item[\"filename\"],\n parquet_metadata_directory=parquet_metadata_directory,\n )\n return ParquetFileMetadataItem(\n dataset=parquet_file_item[\"dataset\"],\n config=parquet_file_item[\"config\"],\n split=parquet_file_item[\"split\"],\n 
url=parquet_file_item[\"url\"],\n filename=parquet_file_item[\"filename\"],\n size=parquet_file_item[\"size\"],\n num_rows=parquet_file.metadata.num_rows,\n parquet_metadata_subpath=parquet_metadata_subpath,\n )\n\n\ndef compute_parquet_metadata_response(\n dataset: str, config: str, hf_token: Optional[str], parquet_metadata_directory: StrPath\n) -> ConfigParquetMetadataResponse:\n \"\"\"\n Get the response of 'config-parquet-metadata' for one specific dataset and config on huggingface.co.\n Store the config's parquet metadata on the disk and return the list of local metadata files.\n\n Args:\n dataset (`str`):\n A namespace (user or an organization) and a repo name separated\n by a `/`.\n config (`str`):\n A configuration name.\n hf_token (`str`, *optional*):\n An authentication token (See https://huggingface.co/settings/token)\n parquet_metadata_directory (`str` or `pathlib.Path`):\n The directory where the parquet metadata files are stored.\n\n Raises:\n [~`libcommon.simple_cache.CachedArtifactError`]:\n If the previous step gave an error.\n [~`libcommon.exceptions.PreviousStepFormatError`]:\n If the content of the previous step has not the expected format\n [~`libcommon.exceptions.ParquetResponseEmptyError`]:\n If the previous step provided an empty list of parquet files.\n [~`libcommon.exceptions.FileSystemError`]:\n If the HfFileSystem couldn't access the parquet files.\n\n Returns:\n `ConfigParquetMetadataResponse`: An object with the list of parquet metadata files.\n \"\"\"\n logging.info(f\"compute 'config-parquet-metadata' for {dataset=} {config=}\")\n\n config_parquet_response = get_previous_step_or_raise(kind=\"config-parquet\", dataset=dataset, config=config)\n try:\n parquet_files_content = config_parquet_response[\"content\"][\"parquet_files\"]\n parquet_file_items: list[SplitHubFile] = [\n parquet_file_item for parquet_file_item in parquet_files_content if parquet_file_item[\"config\"] == config\n ]\n if not parquet_file_items:\n raise ParquetResponseEmptyError(\"No parquet files found.\")\n content = config_parquet_response[\"content\"]\n if \"features\" in content and isinstance(content[\"features\"], dict):\n features = content[\"features\"] # config-parquet version<6 didn't have features\n else:\n # (July 23) we can remove this later and raise an error instead (can be None for backward compatibility)\n features = None\n partial = config_parquet_response[\"content\"][\"partial\"]\n except Exception as e:\n raise PreviousStepFormatError(\"Previous step did not return the expected content.\") from e\n\n fs = HTTPFileSystem()\n desc = f\"{dataset}/{config}\"\n parquet_files_metadata: list[ParquetFileMetadataItem] = thread_map(\n functools.partial(\n create_parquet_metadata_file_from_remote_parquet,\n fs=fs,\n hf_token=hf_token,\n parquet_metadata_directory=parquet_metadata_directory,\n ),\n parquet_file_items,\n desc=desc,\n unit=\"pq\",\n disable=True,\n )\n return ConfigParquetMetadataResponse(\n parquet_files_metadata=parquet_files_metadata, features=features, partial=partial\n )\n\n\nclass ConfigParquetMetadataJobRunner(ConfigJobRunner):\n parquet_metadata_directory: StrPath\n\n @staticmethod\n def get_job_type() -> str:\n return \"config-parquet-metadata\"\n\n def __init__(\n self,\n job_info: JobInfo,\n app_config: AppConfig,\n parquet_metadata_directory: StrPath,\n ) -> None:\n super().__init__(\n job_info=job_info,\n app_config=app_config,\n )\n self.parquet_metadata_directory = parquet_metadata_directory\n\n def compute(self) -> CompleteJobResult:\n return 
CompleteJobResult(\n compute_parquet_metadata_response(\n dataset=self.dataset,\n config=self.config,\n hf_token=self.app_config.common.hf_token,\n parquet_metadata_directory=self.parquet_metadata_directory,\n )\n )\n", "path": "services/worker/src/worker/job_runners/config/parquet_metadata.py"}]} | 2,353 | 311 |
gh_patches_debug_20482 | rasdani/github-patches | git_diff | crytic__slither-546 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
function-id not zero-padding function signature
```
ERC20:
+---------------------------------------+------------+
| Name | ID |
+---------------------------------------+------------+
| totalSupply() | 0x18160ddd |
| balanceOf(address) | 0x70a08231 |
| allowance(address,address) | 0xdd62ed3e |
| transfer(address,uint256) | 0xa9059cbb |
| transferFrom(address,address,uint256) | 0x23b872dd |
| approve(address,uint256) | 0x95ea7b3 |
+---------------------------------------+------------+
```
It's a minor annoyance, but for approve it outputs `0x95ea7b3` instead of `0x095ea7b3`. It is the same numerically, of course, but the function signature is more of an opaque 4-byte identifier than something numerically quantified.
function-id not zero-padding function signature
```
ERC20:
+---------------------------------------+------------+
| Name | ID |
+---------------------------------------+------------+
| totalSupply() | 0x18160ddd |
| balanceOf(address) | 0x70a08231 |
| allowance(address,address) | 0xdd62ed3e |
| transfer(address,uint256) | 0xa9059cbb |
| transferFrom(address,address,uint256) | 0x23b872dd |
| approve(address,uint256) | 0x95ea7b3 |
+---------------------------------------+------------+
```
It's a minor annoyance, but for approve it outputs `0x95ea7b3` instead of `0x095ea7b3`. It is the same numerically, of course, but the function signature is more of an opaque 4-byte identifier than something numerically quantified.
</issue>
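The underlying Python detail, for reference: `hex()` never zero-pads, so any selector whose leading byte is below `0x10` loses its leading zero; a width-10 format spec (8 hex digits plus the `0x` prefix) restores the full 4-byte form. A minimal check:

```python
function_id = 0x095EA7B3          # selector of approve(address,uint256)
print(hex(function_id))           # 0x95ea7b3  -- leading zero dropped
print(f"{function_id:#010x}")     # 0x095ea7b3 -- padded to 4 bytes
```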
<code>
[start of slither/printers/summary/function_ids.py]
1 """
2 Module printing summary of the contract
3 """
4 from slither.printers.abstract_printer import AbstractPrinter
5 from slither.utils.function import get_function_id
6 from slither.utils.myprettytable import MyPrettyTable
7
8
9 class FunctionIds(AbstractPrinter):
10
11 ARGUMENT = 'function-id'
12 HELP = 'Print the keccack256 signature of the functions'
13
14 WIKI = 'https://github.com/trailofbits/slither/wiki/Printer-documentation#function-id'
15
16 def output(self, _filename):
17 """
18 _filename is not used
19 Args:
20 _filename(string)
21 """
22
23 txt = ''
24 all_tables = []
25 for contract in self.slither.contracts_derived:
26 txt += '\n{}:\n'.format(contract.name)
27 table = MyPrettyTable(['Name', 'ID'])
28 for function in contract.functions:
29 if function.visibility in ['public', 'external']:
30 table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])
31 for variable in contract.state_variables:
32 if variable.visibility in ['public']:
33 sig = variable.function_name
34 table.add_row([sig, hex(get_function_id(sig))])
35 txt += str(table) + '\n'
36 all_tables.append((contract.name, table))
37
38 self.info(txt)
39
40 res = self.generate_output(txt)
41 for name, table in all_tables:
42 res.add_pretty_table(table, name)
43
44 return res
[end of slither/printers/summary/function_ids.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/printers/summary/function_ids.py b/slither/printers/summary/function_ids.py
--- a/slither/printers/summary/function_ids.py
+++ b/slither/printers/summary/function_ids.py
@@ -27,11 +27,13 @@
table = MyPrettyTable(['Name', 'ID'])
for function in contract.functions:
if function.visibility in ['public', 'external']:
- table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])
+ function_id = get_function_id(function.solidity_signature)
+ table.add_row([function.solidity_signature, f"{function_id:#0{10}x}"])
for variable in contract.state_variables:
if variable.visibility in ['public']:
sig = variable.function_name
- table.add_row([sig, hex(get_function_id(sig))])
+ function_id = get_function_id(sig)
+ table.add_row([sig, f"{function_id:#0{10}x}"])
txt += str(table) + '\n'
all_tables.append((contract.name, table))
| {"golden_diff": "diff --git a/slither/printers/summary/function_ids.py b/slither/printers/summary/function_ids.py\n--- a/slither/printers/summary/function_ids.py\n+++ b/slither/printers/summary/function_ids.py\n@@ -27,11 +27,13 @@\n table = MyPrettyTable(['Name', 'ID'])\n for function in contract.functions:\n if function.visibility in ['public', 'external']:\n- table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])\n+ function_id = get_function_id(function.solidity_signature)\n+ table.add_row([function.solidity_signature, f\"{function_id:#0{10}x}\"])\n for variable in contract.state_variables:\n if variable.visibility in ['public']:\n sig = variable.function_name\n- table.add_row([sig, hex(get_function_id(sig))])\n+ function_id = get_function_id(sig)\n+ table.add_row([sig, f\"{function_id:#0{10}x}\"])\n txt += str(table) + '\\n'\n all_tables.append((contract.name, table))\n", "issue": "function-id not zero-padding function signature \n```\r\nERC20:\r\n+---------------------------------------+------------+\r\n| Name | ID |\r\n+---------------------------------------+------------+\r\n| totalSupply() | 0x18160ddd |\r\n| balanceOf(address) | 0x70a08231 |\r\n| allowance(address,address) | 0xdd62ed3e |\r\n| transfer(address,uint256) | 0xa9059cbb |\r\n| transferFrom(address,address,uint256) | 0x23b872dd |\r\n| approve(address,uint256) | 0x95ea7b3 |\r\n+---------------------------------------+------------+\r\n\r\n```\r\n\r\nIt's a minor annoyance, but for approve it outputs `0x95ea7b3` instead of `0x095ea7b3`. It is the same numerically, of course, but the function signature is more of an opaque 4-byte identifier than something numerically quantified.\r\n\r\n\nfunction-id not zero-padding function signature \n```\r\nERC20:\r\n+---------------------------------------+------------+\r\n| Name | ID |\r\n+---------------------------------------+------------+\r\n| totalSupply() | 0x18160ddd |\r\n| balanceOf(address) | 0x70a08231 |\r\n| allowance(address,address) | 0xdd62ed3e |\r\n| transfer(address,uint256) | 0xa9059cbb |\r\n| transferFrom(address,address,uint256) | 0x23b872dd |\r\n| approve(address,uint256) | 0x95ea7b3 |\r\n+---------------------------------------+------------+\r\n\r\n```\r\n\r\nIt's a minor annoyance, but for approve it outputs `0x95ea7b3` instead of `0x095ea7b3`. 
It is the same numerically, of course, but the function signature is more of an opaque 4-byte identifier than something numerically quantified.\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n Module printing summary of the contract\n\"\"\"\nfrom slither.printers.abstract_printer import AbstractPrinter\nfrom slither.utils.function import get_function_id\nfrom slither.utils.myprettytable import MyPrettyTable\n\n\nclass FunctionIds(AbstractPrinter):\n\n ARGUMENT = 'function-id'\n HELP = 'Print the keccack256 signature of the functions'\n\n WIKI = 'https://github.com/trailofbits/slither/wiki/Printer-documentation#function-id'\n\n def output(self, _filename):\n \"\"\"\n _filename is not used\n Args:\n _filename(string)\n \"\"\"\n\n txt = ''\n all_tables = []\n for contract in self.slither.contracts_derived:\n txt += '\\n{}:\\n'.format(contract.name)\n table = MyPrettyTable(['Name', 'ID'])\n for function in contract.functions:\n if function.visibility in ['public', 'external']:\n table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])\n for variable in contract.state_variables:\n if variable.visibility in ['public']:\n sig = variable.function_name\n table.add_row([sig, hex(get_function_id(sig))])\n txt += str(table) + '\\n'\n all_tables.append((contract.name, table))\n\n self.info(txt)\n\n res = self.generate_output(txt)\n for name, table in all_tables:\n res.add_pretty_table(table, name)\n\n return res", "path": "slither/printers/summary/function_ids.py"}]} | 1,397 | 243 |
gh_patches_debug_34752 | rasdani/github-patches | git_diff | litestar-org__litestar-288 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Partial doesn't work with inherited fields
```python
from starlite import Partial, get
from pydantic import BaseModel
class Parent(BaseModel):
foo: int
class Child(Parent):
bar: int
@get("/test")
def example(obj: Partial[Child]) -> None:
print(obj)
```
In the above example, `Partial[Child]` only accepts the field `bar: Optional[int]` and ignores all fields from the superclass. I couldn't find this behaviour documented anywhere so I assume this isn't intended?
```python
Python 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from starlite import Partial
>>> from pydantic import BaseModel
>>> class Parent(BaseModel):
... foo: int
...
>>> class Child(Parent):
... bar: int
...
>>> PartialChild = Partial[Child]
>>> PartialChild.__annotations__
{'bar': typing.Optional[int]}
>>>
```
This behaviour can also be seen above
</issue>
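The root cause is ordinary Python behaviour rather than anything pydantic-specific: `__annotations__` only lists fields declared directly on the class, whereas `typing.get_type_hints` walks the MRO and merges inherited annotations in — which is the approach the accepted diff below takes. A small self-contained check:

```python
from typing import get_type_hints

from pydantic import BaseModel


class Parent(BaseModel):
    foo: int


class Child(Parent):
    bar: int


print(Child.__annotations__)  # only the locally declared 'bar'
print(get_type_hints(Child))  # includes the inherited 'foo' as well
```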
<code>
[start of starlite/types.py]
1 from typing import (
2 TYPE_CHECKING,
3 Any,
4 Awaitable,
5 Callable,
6 Dict,
7 Generic,
8 Optional,
9 Tuple,
10 Type,
11 TypeVar,
12 Union,
13 cast,
14 )
15
16 from openapi_schema_pydantic.v3.v3_1_0.header import Header
17 from pydantic import BaseModel, create_model
18 from pydantic.typing import AnyCallable
19 from starlette.exceptions import HTTPException as StarletteHTTPException
20 from starlette.middleware import Middleware as StarletteMiddleware
21 from starlette.middleware.base import BaseHTTPMiddleware
22 from starlette.requests import HTTPConnection
23 from starlette.responses import Response as StarletteResponse
24 from typing_extensions import Literal, Protocol, runtime_checkable
25
26 from starlite.exceptions import HTTPException
27 from starlite.response import Response
28
29 try:
30 # python 3.9 changed these variable
31 from typing import _UnionGenericAlias as GenericAlias # type: ignore
32 except ImportError: # pragma: no cover
33 from typing import _GenericAlias as GenericAlias # type: ignore
34
35 if TYPE_CHECKING:
36 from starlette.types import ASGIApp, Receive, Scope, Send
37
38 from starlite.connection import Request # noqa: TC004
39 from starlite.controller import Controller # noqa: TC004
40 from starlite.datastructures import State # noqa: TC004
41 from starlite.handlers import BaseRouteHandler # noqa: TC004
42 from starlite.router import Router # noqa: TC004
43 else:
44 Request = Any
45 WebSocket = Any
46 BaseRouteHandler = Any
47 Controller = Any
48 Router = Any
49 State = Any
50
51 T = TypeVar("T", bound=BaseModel)
52 H = TypeVar("H", bound=HTTPConnection)
53
54 ExceptionHandler = Callable[
55 [Request, Union[Exception, HTTPException, StarletteHTTPException]], Union[Response, StarletteResponse]
56 ]
57 LifeCycleHandler = Union[
58 Callable[[], Any],
59 Callable[[State], Any],
60 Callable[[], Awaitable[Any]],
61 Callable[[State], Awaitable[Any]],
62 ]
63 Guard = Union[Callable[[H, BaseRouteHandler], Awaitable[None]], Callable[[H, BaseRouteHandler], None]]
64 Method = Union[Literal["GET"], Literal["POST"], Literal["DELETE"], Literal["PATCH"], Literal["PUT"], Literal["HEAD"]]
65 ReservedKwargs = Union[
66 Literal["request"],
67 Literal["socket"],
68 Literal["headers"],
69 Literal["query"],
70 Literal["cookies"],
71 Literal["state"],
72 Literal["data"],
73 ]
74 ControllerRouterHandler = Union[Type[Controller], BaseRouteHandler, Router, AnyCallable]
75
76 # connection-lifecycle hook handlers
77 BeforeRequestHandler = Union[Callable[[Request], Any], Callable[[Request], Awaitable[Any]]]
78 AfterRequestHandler = Union[
79 Callable[[Response], Response],
80 Callable[[Response], Awaitable[Response]],
81 Callable[[StarletteResponse], StarletteResponse],
82 Callable[[StarletteResponse], Awaitable[StarletteResponse]],
83 ]
84 AfterResponseHandler = Union[Callable[[Request], None], Callable[[Request], Awaitable[None]]]
85
86 AsyncAnyCallable = Callable[..., Awaitable[Any]]
87 CacheKeyBuilder = Callable[[Request], str]
88
89
90 @runtime_checkable
91 class MiddlewareProtocol(Protocol):
92 def __init__(self, app: "ASGIApp"): # pragma: no cover
93 ...
94
95 async def __call__(self, scope: "Scope", receive: "Receive", send: "Send") -> None: # pragma: no cover
96 ...
97
98
99 class Partial(Generic[T]):
100 _models: Dict[Type[T], Any] = {}
101
102 def __class_getitem__(cls, item: Type[T]) -> Type[T]:
103 """
104 Modifies a given T subclass of BaseModel to be all optional
105 """
106 if not cls._models.get(item):
107 field_definitions: Dict[str, Tuple[Any, None]] = {}
108 for field_name, field_type in item.__annotations__.items():
109 # we modify the field annotations to make it optional
110 if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:
111 field_definitions[field_name] = (Optional[field_type], None)
112 else:
113 field_definitions[field_name] = (field_type, None)
114 cls._models[item] = create_model("Partial" + item.__name__, **field_definitions) # type: ignore
115 return cast(Type[T], cls._models.get(item))
116
117
118 class ResponseHeader(Header):
119 value: Any = ...
120
121
122 Middleware = Union[StarletteMiddleware, Type[BaseHTTPMiddleware], Type[MiddlewareProtocol]]
123
[end of starlite/types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlite/types.py b/starlite/types.py
--- a/starlite/types.py
+++ b/starlite/types.py
@@ -11,6 +11,7 @@
TypeVar,
Union,
cast,
+ get_type_hints,
)
from openapi_schema_pydantic.v3.v3_1_0.header import Header
@@ -23,7 +24,7 @@
from starlette.responses import Response as StarletteResponse
from typing_extensions import Literal, Protocol, runtime_checkable
-from starlite.exceptions import HTTPException
+from starlite.exceptions import HTTPException, ImproperlyConfiguredException
from starlite.response import Response
try:
@@ -103,15 +104,23 @@
"""
Modifies a given T subclass of BaseModel to be all optional
"""
+ if not issubclass(item, BaseModel):
+ raise ImproperlyConfiguredException(f"Partial[{item}] must be a subclass of BaseModel")
if not cls._models.get(item):
field_definitions: Dict[str, Tuple[Any, None]] = {}
- for field_name, field_type in item.__annotations__.items():
- # we modify the field annotations to make it optional
- if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:
- field_definitions[field_name] = (Optional[field_type], None)
+ # traverse the object's mro and get all annotations
+ # until we find a BaseModel.
+ for obj in item.mro():
+ if issubclass(obj, BaseModel):
+ for field_name, field_type in get_type_hints(obj).items():
+ # we modify the field annotations to make it optional
+ if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:
+ field_definitions[field_name] = (Optional[field_type], None)
+ else:
+ field_definitions[field_name] = (field_type, None)
else:
- field_definitions[field_name] = (field_type, None)
- cls._models[item] = create_model("Partial" + item.__name__, **field_definitions) # type: ignore
+ break
+ cls._models[item] = create_model(f"Partial{item.__name__}", **field_definitions) # type: ignore
return cast(Type[T], cls._models.get(item))
| {"golden_diff": "diff --git a/starlite/types.py b/starlite/types.py\n--- a/starlite/types.py\n+++ b/starlite/types.py\n@@ -11,6 +11,7 @@\n TypeVar,\n Union,\n cast,\n+ get_type_hints,\n )\n \n from openapi_schema_pydantic.v3.v3_1_0.header import Header\n@@ -23,7 +24,7 @@\n from starlette.responses import Response as StarletteResponse\n from typing_extensions import Literal, Protocol, runtime_checkable\n \n-from starlite.exceptions import HTTPException\n+from starlite.exceptions import HTTPException, ImproperlyConfiguredException\n from starlite.response import Response\n \n try:\n@@ -103,15 +104,23 @@\n \"\"\"\n Modifies a given T subclass of BaseModel to be all optional\n \"\"\"\n+ if not issubclass(item, BaseModel):\n+ raise ImproperlyConfiguredException(f\"Partial[{item}] must be a subclass of BaseModel\")\n if not cls._models.get(item):\n field_definitions: Dict[str, Tuple[Any, None]] = {}\n- for field_name, field_type in item.__annotations__.items():\n- # we modify the field annotations to make it optional\n- if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:\n- field_definitions[field_name] = (Optional[field_type], None)\n+ # traverse the object's mro and get all annotations\n+ # until we find a BaseModel.\n+ for obj in item.mro():\n+ if issubclass(obj, BaseModel):\n+ for field_name, field_type in get_type_hints(obj).items():\n+ # we modify the field annotations to make it optional\n+ if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:\n+ field_definitions[field_name] = (Optional[field_type], None)\n+ else:\n+ field_definitions[field_name] = (field_type, None)\n else:\n- field_definitions[field_name] = (field_type, None)\n- cls._models[item] = create_model(\"Partial\" + item.__name__, **field_definitions) # type: ignore\n+ break\n+ cls._models[item] = create_model(f\"Partial{item.__name__}\", **field_definitions) # type: ignore\n return cast(Type[T], cls._models.get(item))\n", "issue": "Partial doesn't work with inherited fields\n```python\r\nfrom starlite import Partial, get\r\nfrom pydantic import BaseModel\r\n\r\nclass Parent(BaseModel):\r\n foo: int\r\n\r\nclass Child(Parent):\r\n bar: int\r\n\r\n@get(\"/test\")\r\ndef example(obj: Partial[Child]) -> None:\r\n print(obj)\r\n```\r\n\r\nIn the above example, `Partial[Child]` only accepts the field `bar: Optional[int]` and ignores all fields from the superclass. I couldn't find this behaviour documented anywhere so I assume this isn't intended?\r\n\r\n```python\r\nPython 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from starlite import Partial\r\n>>> from pydantic import BaseModel\r\n>>> class Parent(BaseModel):\r\n... foo: int\r\n...\r\n>>> class Child(Parent):\r\n... 
bar: int\r\n...\r\n>>> PartialChild = Partial[Child]\r\n>>> PartialChild.__annotations__\r\n{'bar': typing.Optional[int]}\r\n>>>\r\n```\r\n\r\nThis behaviour can also be seen above\r\n\n", "before_files": [{"content": "from typing import (\n TYPE_CHECKING,\n Any,\n Awaitable,\n Callable,\n Dict,\n Generic,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n cast,\n)\n\nfrom openapi_schema_pydantic.v3.v3_1_0.header import Header\nfrom pydantic import BaseModel, create_model\nfrom pydantic.typing import AnyCallable\nfrom starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.middleware import Middleware as StarletteMiddleware\nfrom starlette.middleware.base import BaseHTTPMiddleware\nfrom starlette.requests import HTTPConnection\nfrom starlette.responses import Response as StarletteResponse\nfrom typing_extensions import Literal, Protocol, runtime_checkable\n\nfrom starlite.exceptions import HTTPException\nfrom starlite.response import Response\n\ntry:\n # python 3.9 changed these variable\n from typing import _UnionGenericAlias as GenericAlias # type: ignore\nexcept ImportError: # pragma: no cover\n from typing import _GenericAlias as GenericAlias # type: ignore\n\nif TYPE_CHECKING:\n from starlette.types import ASGIApp, Receive, Scope, Send\n\n from starlite.connection import Request # noqa: TC004\n from starlite.controller import Controller # noqa: TC004\n from starlite.datastructures import State # noqa: TC004\n from starlite.handlers import BaseRouteHandler # noqa: TC004\n from starlite.router import Router # noqa: TC004\nelse:\n Request = Any\n WebSocket = Any\n BaseRouteHandler = Any\n Controller = Any\n Router = Any\n State = Any\n\nT = TypeVar(\"T\", bound=BaseModel)\nH = TypeVar(\"H\", bound=HTTPConnection)\n\nExceptionHandler = Callable[\n [Request, Union[Exception, HTTPException, StarletteHTTPException]], Union[Response, StarletteResponse]\n]\nLifeCycleHandler = Union[\n Callable[[], Any],\n Callable[[State], Any],\n Callable[[], Awaitable[Any]],\n Callable[[State], Awaitable[Any]],\n]\nGuard = Union[Callable[[H, BaseRouteHandler], Awaitable[None]], Callable[[H, BaseRouteHandler], None]]\nMethod = Union[Literal[\"GET\"], Literal[\"POST\"], Literal[\"DELETE\"], Literal[\"PATCH\"], Literal[\"PUT\"], Literal[\"HEAD\"]]\nReservedKwargs = Union[\n Literal[\"request\"],\n Literal[\"socket\"],\n Literal[\"headers\"],\n Literal[\"query\"],\n Literal[\"cookies\"],\n Literal[\"state\"],\n Literal[\"data\"],\n]\nControllerRouterHandler = Union[Type[Controller], BaseRouteHandler, Router, AnyCallable]\n\n# connection-lifecycle hook handlers\nBeforeRequestHandler = Union[Callable[[Request], Any], Callable[[Request], Awaitable[Any]]]\nAfterRequestHandler = Union[\n Callable[[Response], Response],\n Callable[[Response], Awaitable[Response]],\n Callable[[StarletteResponse], StarletteResponse],\n Callable[[StarletteResponse], Awaitable[StarletteResponse]],\n]\nAfterResponseHandler = Union[Callable[[Request], None], Callable[[Request], Awaitable[None]]]\n\nAsyncAnyCallable = Callable[..., Awaitable[Any]]\nCacheKeyBuilder = Callable[[Request], str]\n\n\n@runtime_checkable\nclass MiddlewareProtocol(Protocol):\n def __init__(self, app: \"ASGIApp\"): # pragma: no cover\n ...\n\n async def __call__(self, scope: \"Scope\", receive: \"Receive\", send: \"Send\") -> None: # pragma: no cover\n ...\n\n\nclass Partial(Generic[T]):\n _models: Dict[Type[T], Any] = {}\n\n def __class_getitem__(cls, item: Type[T]) -> Type[T]:\n \"\"\"\n Modifies a given T subclass of BaseModel to be all 
optional\n \"\"\"\n if not cls._models.get(item):\n field_definitions: Dict[str, Tuple[Any, None]] = {}\n for field_name, field_type in item.__annotations__.items():\n # we modify the field annotations to make it optional\n if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:\n field_definitions[field_name] = (Optional[field_type], None)\n else:\n field_definitions[field_name] = (field_type, None)\n cls._models[item] = create_model(\"Partial\" + item.__name__, **field_definitions) # type: ignore\n return cast(Type[T], cls._models.get(item))\n\n\nclass ResponseHeader(Header):\n value: Any = ...\n\n\nMiddleware = Union[StarletteMiddleware, Type[BaseHTTPMiddleware], Type[MiddlewareProtocol]]\n", "path": "starlite/types.py"}]} | 2,066 | 518 |
gh_patches_debug_64987 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-937 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
new cachetools version (5.0.0+) can't work with google-auth
`cachetools` has released a new version (5.0.0) which conflicts with google-auth's requirement that it be <5. This prevents updates to the `cachetools` package and poses a potential security concern (as updates to it are no longer possible).
```
The conflict is caused by:
The user requested cachetools==5.0.0
google-auth 2.3.3 depends on cachetools<5.0 and >=2.0.0
```
The issue seems to be in https://github.com/googleapis/google-auth-library-python/blob/3c3fbf40b07e090f2be7fac5b304dbf438b5cd6c/setup.py#L23
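A quick way to confirm the pin locally is sketched below — it assumes Python 3.8+ with `google-auth` installed, and only reads the installed distribution metadata via `importlib.metadata`:
```python
# Print the installed cachetools version and google-auth's declared requirement on it.
from importlib.metadata import requires, version

print("cachetools installed:", version("cachetools"))
print("google-auth requires:", [r for r in (requires("google-auth") or []) if r.startswith("cachetools")])
```
If the version you need falls outside the declared range, pip's resolver refuses the combination, which is exactly the conflict shown above.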
#### Environment details
- OS: alpine3.11
- Python version: python:3.8.6
- pip version: 20.3.3
- `google-auth` version: 2.3.3 (latest at time of writing)
#### Steps to reproduce
1. try pip install using latest `cachetools` with latest `google-auth`
2. pip fails
</issue>
<code>
[start of setup.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 from setuptools import find_packages
19 from setuptools import setup
20
21
22 DEPENDENCIES = (
23 "cachetools>=2.0.0,<5.0",
24 "pyasn1-modules>=0.2.1",
25 # rsa==4.5 is the last version to support 2.7
26 # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233
27 'rsa<4.6; python_version < "3.6"',
28 'rsa>=3.1.4,<5; python_version >= "3.6"',
29 # install enum34 to support 2.7. enum34 only works up to python version 3.3.
30 'enum34>=1.1.10; python_version < "3.4"',
31 "six>=1.9.0",
32 )
33
34 extras = {
35 "aiohttp": [
36 "aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'",
37 "requests >= 2.20.0, < 3.0.0dev",
38 ],
39 "pyopenssl": "pyopenssl>=20.0.0",
40 "reauth": "pyu2f>=0.1.5",
41 }
42
43 with io.open("README.rst", "r") as fh:
44 long_description = fh.read()
45
46 package_root = os.path.abspath(os.path.dirname(__file__))
47
48 version = {}
49 with open(os.path.join(package_root, "google/auth/version.py")) as fp:
50 exec(fp.read(), version)
51 version = version["__version__"]
52
53 setup(
54 name="google-auth",
55 version=version,
56 author="Google Cloud Platform",
57 author_email="[email protected]",
58 description="Google Authentication Library",
59 long_description=long_description,
60 url="https://github.com/googleapis/google-auth-library-python",
61 packages=find_packages(exclude=("tests*", "system_tests*")),
62 namespace_packages=("google",),
63 install_requires=DEPENDENCIES,
64 extras_require=extras,
65 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*",
66 license="Apache 2.0",
67 keywords="google auth oauth client",
68 classifiers=[
69 "Programming Language :: Python :: 3",
70 "Programming Language :: Python :: 3.6",
71 "Programming Language :: Python :: 3.7",
72 "Programming Language :: Python :: 3.8",
73 "Programming Language :: Python :: 3.9",
74 "Programming Language :: Python :: 3.10",
75 "Development Status :: 5 - Production/Stable",
76 "Intended Audience :: Developers",
77 "License :: OSI Approved :: Apache Software License",
78 "Operating System :: POSIX",
79 "Operating System :: Microsoft :: Windows",
80 "Operating System :: MacOS :: MacOS X",
81 "Operating System :: OS Independent",
82 "Topic :: Internet :: WWW/HTTP",
83 ],
84 )
85
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,7 @@
DEPENDENCIES = (
- "cachetools>=2.0.0,<5.0",
+ "cachetools>=2.0.0,<6.0",
"pyasn1-modules>=0.2.1",
# rsa==4.5 is the last version to support 2.7
# https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,7 +20,7 @@\n \n \n DEPENDENCIES = (\n- \"cachetools>=2.0.0,<5.0\",\n+ \"cachetools>=2.0.0,<6.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n", "issue": "new cachetools version (5.0.0+) can't work with google-auth\n`cachetools` has released a new version (5.0.0) which conflicts with google-auth requirements of it being <5, this prevents updates to the `cachetools` package and pose a potential security concern (as updates are no longer possible to it)\r\n\r\n```\r\nThe conflict is caused by:\r\n The user requested cachetools==5.0.0\r\n google-auth 2.3.3 depends on cachetools<5.0 and >=2.0.0\r\n```\r\n\r\nissue seems in https://github.com/googleapis/google-auth-library-python/blob/3c3fbf40b07e090f2be7fac5b304dbf438b5cd6c/setup.py#L23 \r\n\r\n#### Environment details\r\n\r\n - OS: alpine3.11\r\n - Python version: python:3.8.6\r\n - pip version: 20.3.3\r\n - `google-auth` version: 2.3.3 (latest at time of writing)\r\n\r\n#### Steps to reproduce\r\n\r\n 1. try pip install using latest `cachetools` with latest `google-auth`\r\n 2. pip fails\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n \"cachetools>=2.0.0,<5.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n 'rsa<4.6; python_version < \"3.6\"',\n 'rsa>=3.1.4,<5; python_version >= \"3.6\"',\n # install enum34 to support 2.7. 
enum34 only works up to python version 3.3.\n 'enum34>=1.1.10; python_version < \"3.4\"',\n \"six>=1.9.0\",\n)\n\nextras = {\n \"aiohttp\": [\n \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\",\n \"requests >= 2.20.0, < 3.0.0dev\",\n ],\n \"pyopenssl\": \"pyopenssl>=20.0.0\",\n \"reauth\": \"pyu2f>=0.1.5\",\n}\n\nwith io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"google/auth/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-auth\",\n version=version,\n author=\"Google Cloud Platform\",\n author_email=\"[email protected]\",\n description=\"Google Authentication Library\",\n long_description=long_description,\n url=\"https://github.com/googleapis/google-auth-library-python\",\n packages=find_packages(exclude=(\"tests*\", \"system_tests*\")),\n namespace_packages=(\"google\",),\n install_requires=DEPENDENCIES,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*\",\n license=\"Apache 2.0\",\n keywords=\"google auth oauth client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]} | 1,773 | 138 |
gh_patches_debug_20415 | rasdani/github-patches | git_diff | ansible__awx-12803 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Token and Session Expiration never run after the first time
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Bug Summary
Looks like, when we implemented token and session cleanup way back in https://github.com/ansible/awx/pull/3856, we populated the recurrence rule incorrectly:
https://github.com/ansible/awx/blob/8a06ffbe15c9f8e68b1da86e5ca7daf5ecfd6da4/awx/main/migrations/_create_system_jobs.py#L39
This schedule will only ever run once due to `COUNT=1`; we should omit that so that it will run periodically.
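To see the effect of `COUNT=1` concretely, here is a small hedged sketch using `python-dateutil` (the start date is a placeholder):
```python
from datetime import datetime
from dateutil.rrule import WEEKLY, rrule

start = datetime(2022, 1, 1)

once = rrule(WEEKLY, interval=1, count=1, dtstart=start)
print(list(once))       # a single occurrence -- the schedule never fires again

repeating = rrule(WEEKLY, interval=1, dtstart=start)
print(repeating[:3])    # without COUNT the weekly occurrences keep coming
```
Dropping `;COUNT=1` from the stored rrule string should give the second behaviour.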
### AWX version
latest
### Select the relevant components
- [ ] UI
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
minishift
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
install awx
### Expected results
periodic running of these management jobs
### Actual results
the jobs only run once
### Additional information
_No response_
</issue>
<code>
[start of awx/main/migrations/_create_system_jobs.py]
1 import logging
2
3 from django.utils.timezone import now
4
5 logger = logging.getLogger('awx.main.migrations')
6
7 __all__ = ['create_collection_jt', 'create_clearsessions_jt', 'create_cleartokens_jt']
8
9 '''
10 These methods are called by migrations to create various system job templates
11
12 Create default system job templates if not present. Create default schedules
13 only if new system job templates were created (i.e. new database).
14 '''
15
16
17 def create_clearsessions_jt(apps, schema_editor):
18
19 SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')
20 Schedule = apps.get_model('main', 'Schedule')
21 ContentType = apps.get_model('contenttypes', 'ContentType')
22 sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)
23 now_dt = now()
24 schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')
25
26 sjt, created = SystemJobTemplate.objects.get_or_create(
27 job_type='cleanup_sessions',
28 defaults=dict(
29 name='Cleanup Expired Sessions',
30 description='Cleans out expired browser sessions',
31 polymorphic_ctype=sjt_ct,
32 created=now_dt,
33 modified=now_dt,
34 ),
35 )
36 if created:
37 sched = Schedule(
38 name='Cleanup Expired Sessions',
39 rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
40 description='Cleans out expired browser sessions',
41 enabled=True,
42 created=now_dt,
43 modified=now_dt,
44 extra_data={},
45 )
46 sched.unified_job_template = sjt
47 sched.save()
48
49
50 def create_cleartokens_jt(apps, schema_editor):
51
52 SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')
53 Schedule = apps.get_model('main', 'Schedule')
54 ContentType = apps.get_model('contenttypes', 'ContentType')
55 sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)
56 now_dt = now()
57 schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')
58
59 sjt, created = SystemJobTemplate.objects.get_or_create(
60 job_type='cleanup_tokens',
61 defaults=dict(
62 name='Cleanup Expired OAuth 2 Tokens',
63 description='Cleanup expired OAuth 2 access and refresh tokens',
64 polymorphic_ctype=sjt_ct,
65 created=now_dt,
66 modified=now_dt,
67 ),
68 )
69 if created:
70 sched = Schedule(
71 name='Cleanup Expired OAuth 2 Tokens',
72 rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
73 description='Removes expired OAuth 2 access and refresh tokens',
74 enabled=True,
75 created=now_dt,
76 modified=now_dt,
77 extra_data={},
78 )
79 sched.unified_job_template = sjt
80 sched.save()
81
[end of awx/main/migrations/_create_system_jobs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/awx/main/migrations/_create_system_jobs.py b/awx/main/migrations/_create_system_jobs.py
--- a/awx/main/migrations/_create_system_jobs.py
+++ b/awx/main/migrations/_create_system_jobs.py
@@ -36,7 +36,7 @@
if created:
sched = Schedule(
name='Cleanup Expired Sessions',
- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,
description='Cleans out expired browser sessions',
enabled=True,
created=now_dt,
@@ -69,7 +69,7 @@
if created:
sched = Schedule(
name='Cleanup Expired OAuth 2 Tokens',
- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,
description='Removes expired OAuth 2 access and refresh tokens',
enabled=True,
created=now_dt,
| {"golden_diff": "diff --git a/awx/main/migrations/_create_system_jobs.py b/awx/main/migrations/_create_system_jobs.py\n--- a/awx/main/migrations/_create_system_jobs.py\n+++ b/awx/main/migrations/_create_system_jobs.py\n@@ -36,7 +36,7 @@\n if created:\n sched = Schedule(\n name='Cleanup Expired Sessions',\n- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,\n description='Cleans out expired browser sessions',\n enabled=True,\n created=now_dt,\n@@ -69,7 +69,7 @@\n if created:\n sched = Schedule(\n name='Cleanup Expired OAuth 2 Tokens',\n- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,\n description='Removes expired OAuth 2 access and refresh tokens',\n enabled=True,\n created=now_dt,\n", "issue": "Token and Session Expiration never run after the first time\n### Please confirm the following\n\n- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).\n- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.\n- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.\n\n### Bug Summary\n\nLooks like when we implemented token and session cleanup way back in https://github.com/ansible/awx/pull/3856\r\n\r\nWe populated the recurrence rule incorrectly:\r\n\r\nhttps://github.com/ansible/awx/blob/8a06ffbe15c9f8e68b1da86e5ca7daf5ecfd6da4/awx/main/migrations/_create_system_jobs.py#L39\r\n\r\nThis schedule will only ever run once due to `COUNT=1`.... we should omit that so that it will periodically run.\n\n### AWX version\n\nlatest\n\n### Select the relevant components\n\n- [ ] UI\n- [X] API\n- [ ] Docs\n- [ ] Collection\n- [ ] CLI\n- [ ] Other\n\n### Installation method\n\nminishift\n\n### Modifications\n\nno\n\n### Ansible version\n\n_No response_\n\n### Operating system\n\n_No response_\n\n### Web browser\n\n_No response_\n\n### Steps to reproduce\n\ninstall awx\n\n### Expected results\n\nperiodic running of these management jobs\n\n### Actual results\n\nthe jobs only run once\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import logging\n\nfrom django.utils.timezone import now\n\nlogger = logging.getLogger('awx.main.migrations')\n\n__all__ = ['create_collection_jt', 'create_clearsessions_jt', 'create_cleartokens_jt']\n\n'''\nThese methods are called by migrations to create various system job templates\n\nCreate default system job templates if not present. Create default schedules\nonly if new system job templates were created (i.e. 
new database).\n'''\n\n\ndef create_clearsessions_jt(apps, schema_editor):\n\n SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')\n Schedule = apps.get_model('main', 'Schedule')\n ContentType = apps.get_model('contenttypes', 'ContentType')\n sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)\n now_dt = now()\n schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')\n\n sjt, created = SystemJobTemplate.objects.get_or_create(\n job_type='cleanup_sessions',\n defaults=dict(\n name='Cleanup Expired Sessions',\n description='Cleans out expired browser sessions',\n polymorphic_ctype=sjt_ct,\n created=now_dt,\n modified=now_dt,\n ),\n )\n if created:\n sched = Schedule(\n name='Cleanup Expired Sessions',\n rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n description='Cleans out expired browser sessions',\n enabled=True,\n created=now_dt,\n modified=now_dt,\n extra_data={},\n )\n sched.unified_job_template = sjt\n sched.save()\n\n\ndef create_cleartokens_jt(apps, schema_editor):\n\n SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')\n Schedule = apps.get_model('main', 'Schedule')\n ContentType = apps.get_model('contenttypes', 'ContentType')\n sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)\n now_dt = now()\n schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')\n\n sjt, created = SystemJobTemplate.objects.get_or_create(\n job_type='cleanup_tokens',\n defaults=dict(\n name='Cleanup Expired OAuth 2 Tokens',\n description='Cleanup expired OAuth 2 access and refresh tokens',\n polymorphic_ctype=sjt_ct,\n created=now_dt,\n modified=now_dt,\n ),\n )\n if created:\n sched = Schedule(\n name='Cleanup Expired OAuth 2 Tokens',\n rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n description='Removes expired OAuth 2 access and refresh tokens',\n enabled=True,\n created=now_dt,\n modified=now_dt,\n extra_data={},\n )\n sched.unified_job_template = sjt\n sched.save()\n", "path": "awx/main/migrations/_create_system_jobs.py"}]} | 1,661 | 273 |
gh_patches_debug_8777 | rasdani/github-patches | git_diff | searx__searx-2385 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Startpage not returning any results
**Version:**
I tried it with `v0.17.0` and `820b468bfe96f693d60ce06f1e78af51f00deefc`
**Installation-Method:**
Manually according to wiki (with uwsgi)
**What happened?**
The startpage engine is not returning any results
**How To Reproduce**
Execute a query with `!sp test`
**Expected behavior**
Results from startpage
**Additional context**
I added some log lines to the startpage engine file and it seems like it gets some
response back from startpage.
Maybe startpage changed their formatting?
I haven't taken a closer look at the way results are parsed yet.
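For anyone picking this up, a hedged debugging sketch: it assumes you saved a raw Startpage response to `startpage.html` while adding those log lines (the file name is illustrative, and the second XPath is just one candidate replacement to compare against):
```python
# Count how many nodes each candidate results XPath matches in the saved page.
from lxml import html

with open("startpage.html", encoding="utf-8") as f:
    dom = html.fromstring(f.read())

for xpath in ('//div[@class="w-gl__result"]', '//div[@class="w-gl__result__main"]'):
    print(xpath, "->", len(dom.xpath(xpath)), "matches")
```
If the old class matches nothing while another does, the parsing side is the culprit rather than the request.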
</issue>
<code>
[start of searx/engines/startpage.py]
1 # Startpage (Web)
2 #
3 # @website https://startpage.com
4 # @provide-api no (nothing found)
5 #
6 # @using-api no
7 # @results HTML
8 # @stable no (HTML can change)
9 # @parse url, title, content
10 #
11 # @todo paging
12
13 from lxml import html
14 from dateutil import parser
15 from datetime import datetime, timedelta
16 import re
17 from unicodedata import normalize, combining
18 from babel import Locale
19 from babel.localedata import locale_identifiers
20 from searx.utils import extract_text, eval_xpath, match_language
21
22 # engine dependent config
23 categories = ['general']
24 # there is a mechanism to block "bot" search
25 # (probably the parameter qid), require
26 # storing of qid's between mulitble search-calls
27
28 paging = True
29 language_support = True
30 supported_languages_url = 'https://www.startpage.com/do/settings'
31
32 # search-url
33 base_url = 'https://startpage.com/'
34 search_url = base_url + 'do/search'
35
36 # specific xpath variables
37 # ads xpath //div[@id="results"]/div[@id="sponsored"]//div[@class="result"]
38 # not ads: div[@class="result"] are the direct childs of div[@id="results"]
39 results_xpath = '//div[@class="w-gl__result"]'
40 link_xpath = './/a[@class="w-gl__result-title"]'
41 content_xpath = './/p[@class="w-gl__description"]'
42
43
44 # do search-request
45 def request(query, params):
46
47 params['url'] = search_url
48 params['method'] = 'POST'
49 params['data'] = {
50 'query': query,
51 'page': params['pageno'],
52 'cat': 'web',
53 'cmd': 'process_search',
54 'engine0': 'v1all',
55 }
56
57 # set language if specified
58 if params['language'] != 'all':
59 lang_code = match_language(params['language'], supported_languages, fallback=None)
60 if lang_code:
61 language_name = supported_languages[lang_code]['alias']
62 params['data']['language'] = language_name
63 params['data']['lui'] = language_name
64
65 return params
66
67
68 # get response from search-request
69 def response(resp):
70 results = []
71
72 dom = html.fromstring(resp.text)
73
74 # parse results
75 for result in eval_xpath(dom, results_xpath):
76 links = eval_xpath(result, link_xpath)
77 if not links:
78 continue
79 link = links[0]
80 url = link.attrib.get('href')
81
82 # block google-ad url's
83 if re.match(r"^http(s|)://(www\.)?google\.[a-z]+/aclk.*$", url):
84 continue
85
86 # block startpage search url's
87 if re.match(r"^http(s|)://(www\.)?startpage\.com/do/search\?.*$", url):
88 continue
89
90 title = extract_text(link)
91
92 if eval_xpath(result, content_xpath):
93 content = extract_text(eval_xpath(result, content_xpath))
94 else:
95 content = ''
96
97 published_date = None
98
99 # check if search result starts with something like: "2 Sep 2014 ... "
100 if re.match(r"^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \.\.\. ", content):
101 date_pos = content.find('...') + 4
102 date_string = content[0:date_pos - 5]
103 # fix content string
104 content = content[date_pos:]
105
106 try:
107 published_date = parser.parse(date_string, dayfirst=True)
108 except ValueError:
109 pass
110
111 # check if search result starts with something like: "5 days ago ... "
112 elif re.match(r"^[0-9]+ days? ago \.\.\. ", content):
113 date_pos = content.find('...') + 4
114 date_string = content[0:date_pos - 5]
115
116 # calculate datetime
117 published_date = datetime.now() - timedelta(days=int(re.match(r'\d+', date_string).group()))
118
119 # fix content string
120 content = content[date_pos:]
121
122 if published_date:
123 # append result
124 results.append({'url': url,
125 'title': title,
126 'content': content,
127 'publishedDate': published_date})
128 else:
129 # append result
130 results.append({'url': url,
131 'title': title,
132 'content': content})
133
134 # return results
135 return results
136
137
138 # get supported languages from their site
139 def _fetch_supported_languages(resp):
140 # startpage's language selector is a mess
141 # each option has a displayed name and a value, either of which may represent the language name
142 # in the native script, the language name in English, an English transliteration of the native name,
143 # the English name of the writing script used by the language, or occasionally something else entirely.
144
145 # this cases are so special they need to be hardcoded, a couple of them are mispellings
146 language_names = {
147 'english_uk': 'en-GB',
148 'fantizhengwen': ['zh-TW', 'zh-HK'],
149 'hangul': 'ko',
150 'malayam': 'ml',
151 'norsk': 'nb',
152 'sinhalese': 'si',
153 'sudanese': 'su'
154 }
155
156 # get the English name of every language known by babel
157 language_names.update({name.lower(): lang_code for lang_code, name in Locale('en')._data['languages'].items()})
158
159 # get the native name of every language known by babel
160 for lang_code in filter(lambda lang_code: lang_code.find('_') == -1, locale_identifiers()):
161 native_name = Locale(lang_code).get_language_name().lower()
162 # add native name exactly as it is
163 language_names[native_name] = lang_code
164
165 # add "normalized" language name (i.e. français becomes francais and español becomes espanol)
166 unaccented_name = ''.join(filter(lambda c: not combining(c), normalize('NFKD', native_name)))
167 if len(unaccented_name) == len(unaccented_name.encode()):
168 # add only if result is ascii (otherwise "normalization" didn't work)
169 language_names[unaccented_name] = lang_code
170
171 dom = html.fromstring(resp.text)
172 sp_lang_names = []
173 for option in dom.xpath('//form[@id="settings-form"]//select[@name="language"]/option'):
174 sp_lang_names.append((option.get('value'), extract_text(option).lower()))
175
176 supported_languages = {}
177 for sp_option_value, sp_option_text in sp_lang_names:
178 lang_code = language_names.get(sp_option_value) or language_names.get(sp_option_text)
179 if isinstance(lang_code, str):
180 supported_languages[lang_code] = {'alias': sp_option_value}
181 elif isinstance(lang_code, list):
182 for lc in lang_code:
183 supported_languages[lc] = {'alias': sp_option_value}
184 else:
185 print('Unknown language option in Startpage: {} ({})'.format(sp_option_value, sp_option_text))
186
187 return supported_languages
188
[end of searx/engines/startpage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/startpage.py b/searx/engines/startpage.py
--- a/searx/engines/startpage.py
+++ b/searx/engines/startpage.py
@@ -36,8 +36,8 @@
# specific xpath variables
# ads xpath //div[@id="results"]/div[@id="sponsored"]//div[@class="result"]
# not ads: div[@class="result"] are the direct childs of div[@id="results"]
-results_xpath = '//div[@class="w-gl__result"]'
-link_xpath = './/a[@class="w-gl__result-title"]'
+results_xpath = '//div[@class="w-gl__result__main"]'
+link_xpath = './/a[@class="w-gl__result-url result-link"]'
content_xpath = './/p[@class="w-gl__description"]'
| {"golden_diff": "diff --git a/searx/engines/startpage.py b/searx/engines/startpage.py\n--- a/searx/engines/startpage.py\n+++ b/searx/engines/startpage.py\n@@ -36,8 +36,8 @@\n # specific xpath variables\n # ads xpath //div[@id=\"results\"]/div[@id=\"sponsored\"]//div[@class=\"result\"]\n # not ads: div[@class=\"result\"] are the direct childs of div[@id=\"results\"]\n-results_xpath = '//div[@class=\"w-gl__result\"]'\n-link_xpath = './/a[@class=\"w-gl__result-title\"]'\n+results_xpath = '//div[@class=\"w-gl__result__main\"]'\n+link_xpath = './/a[@class=\"w-gl__result-url result-link\"]'\n content_xpath = './/p[@class=\"w-gl__description\"]'\n", "issue": "Startpage not returning any results\n**Version:**\r\nI tried it with `v0.17.0` and `820b468bfe96f693d60ce06f1e78af51f00deefc`\r\n\r\n**Installation-Method:** \r\nManually according to wiki (with uwsgi)\r\n\r\n**What happened?**\r\nThe startpage engine is not returning any results\r\n\r\n**How To Reproduce**\r\nExecute a query with `!sp test`\r\n\r\n**Expected behavior**\r\nResults from startpage\r\n\r\n**Additional context**\r\nI added some log lines to the startpage engine file and it seems like it gets some\r\nresponse back from startpage. \r\n\r\nMaybe startpage changed their formatting? \r\nI didn't have a closer look on the way results are parsed yet. \n", "before_files": [{"content": "# Startpage (Web)\n#\n# @website https://startpage.com\n# @provide-api no (nothing found)\n#\n# @using-api no\n# @results HTML\n# @stable no (HTML can change)\n# @parse url, title, content\n#\n# @todo paging\n\nfrom lxml import html\nfrom dateutil import parser\nfrom datetime import datetime, timedelta\nimport re\nfrom unicodedata import normalize, combining\nfrom babel import Locale\nfrom babel.localedata import locale_identifiers\nfrom searx.utils import extract_text, eval_xpath, match_language\n\n# engine dependent config\ncategories = ['general']\n# there is a mechanism to block \"bot\" search\n# (probably the parameter qid), require\n# storing of qid's between mulitble search-calls\n\npaging = True\nlanguage_support = True\nsupported_languages_url = 'https://www.startpage.com/do/settings'\n\n# search-url\nbase_url = 'https://startpage.com/'\nsearch_url = base_url + 'do/search'\n\n# specific xpath variables\n# ads xpath //div[@id=\"results\"]/div[@id=\"sponsored\"]//div[@class=\"result\"]\n# not ads: div[@class=\"result\"] are the direct childs of div[@id=\"results\"]\nresults_xpath = '//div[@class=\"w-gl__result\"]'\nlink_xpath = './/a[@class=\"w-gl__result-title\"]'\ncontent_xpath = './/p[@class=\"w-gl__description\"]'\n\n\n# do search-request\ndef request(query, params):\n\n params['url'] = search_url\n params['method'] = 'POST'\n params['data'] = {\n 'query': query,\n 'page': params['pageno'],\n 'cat': 'web',\n 'cmd': 'process_search',\n 'engine0': 'v1all',\n }\n\n # set language if specified\n if params['language'] != 'all':\n lang_code = match_language(params['language'], supported_languages, fallback=None)\n if lang_code:\n language_name = supported_languages[lang_code]['alias']\n params['data']['language'] = language_name\n params['data']['lui'] = language_name\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n # parse results\n for result in eval_xpath(dom, results_xpath):\n links = eval_xpath(result, link_xpath)\n if not links:\n continue\n link = links[0]\n url = link.attrib.get('href')\n\n # block google-ad url's\n if 
re.match(r\"^http(s|)://(www\\.)?google\\.[a-z]+/aclk.*$\", url):\n continue\n\n # block startpage search url's\n if re.match(r\"^http(s|)://(www\\.)?startpage\\.com/do/search\\?.*$\", url):\n continue\n\n title = extract_text(link)\n\n if eval_xpath(result, content_xpath):\n content = extract_text(eval_xpath(result, content_xpath))\n else:\n content = ''\n\n published_date = None\n\n # check if search result starts with something like: \"2 Sep 2014 ... \"\n if re.match(r\"^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \\.\\.\\. \", content):\n date_pos = content.find('...') + 4\n date_string = content[0:date_pos - 5]\n # fix content string\n content = content[date_pos:]\n\n try:\n published_date = parser.parse(date_string, dayfirst=True)\n except ValueError:\n pass\n\n # check if search result starts with something like: \"5 days ago ... \"\n elif re.match(r\"^[0-9]+ days? ago \\.\\.\\. \", content):\n date_pos = content.find('...') + 4\n date_string = content[0:date_pos - 5]\n\n # calculate datetime\n published_date = datetime.now() - timedelta(days=int(re.match(r'\\d+', date_string).group()))\n\n # fix content string\n content = content[date_pos:]\n\n if published_date:\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content,\n 'publishedDate': published_date})\n else:\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content})\n\n # return results\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n # startpage's language selector is a mess\n # each option has a displayed name and a value, either of which may represent the language name\n # in the native script, the language name in English, an English transliteration of the native name,\n # the English name of the writing script used by the language, or occasionally something else entirely.\n\n # this cases are so special they need to be hardcoded, a couple of them are mispellings\n language_names = {\n 'english_uk': 'en-GB',\n 'fantizhengwen': ['zh-TW', 'zh-HK'],\n 'hangul': 'ko',\n 'malayam': 'ml',\n 'norsk': 'nb',\n 'sinhalese': 'si',\n 'sudanese': 'su'\n }\n\n # get the English name of every language known by babel\n language_names.update({name.lower(): lang_code for lang_code, name in Locale('en')._data['languages'].items()})\n\n # get the native name of every language known by babel\n for lang_code in filter(lambda lang_code: lang_code.find('_') == -1, locale_identifiers()):\n native_name = Locale(lang_code).get_language_name().lower()\n # add native name exactly as it is\n language_names[native_name] = lang_code\n\n # add \"normalized\" language name (i.e. 
fran\u00e7ais becomes francais and espa\u00f1ol becomes espanol)\n unaccented_name = ''.join(filter(lambda c: not combining(c), normalize('NFKD', native_name)))\n if len(unaccented_name) == len(unaccented_name.encode()):\n # add only if result is ascii (otherwise \"normalization\" didn't work)\n language_names[unaccented_name] = lang_code\n\n dom = html.fromstring(resp.text)\n sp_lang_names = []\n for option in dom.xpath('//form[@id=\"settings-form\"]//select[@name=\"language\"]/option'):\n sp_lang_names.append((option.get('value'), extract_text(option).lower()))\n\n supported_languages = {}\n for sp_option_value, sp_option_text in sp_lang_names:\n lang_code = language_names.get(sp_option_value) or language_names.get(sp_option_text)\n if isinstance(lang_code, str):\n supported_languages[lang_code] = {'alias': sp_option_value}\n elif isinstance(lang_code, list):\n for lc in lang_code:\n supported_languages[lc] = {'alias': sp_option_value}\n else:\n print('Unknown language option in Startpage: {} ({})'.format(sp_option_value, sp_option_text))\n\n return supported_languages\n", "path": "searx/engines/startpage.py"}]} | 2,754 | 191 |
gh_patches_debug_20553 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-1605 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] permission error if no consume share is mapped
### Description
starting with Paperless-ngx 1.8.0, Paperless on docker does not start when no consume share is mapped:
```
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxr-xr-x /usr/src/paperless/consume
to be writeable by the user running the Paperless services
```
I have some containers running without a mapped share, as I don't need a consumption folder. This was not an issue in previous versions; I assume the permissions of the folder in the Docker container have changed.
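To see exactly what the failing check sees, here is a hedged sketch to run inside the container (the path is the one from the error message; `pwd`/`grp` only resolve names that exist in the container):
```python
# Show mode, owner and group of the consume directory the startup check complains about.
import grp
import os
import pwd
import stat

st = os.stat("/usr/src/paperless/consume")
print(stat.filemode(st.st_mode),
      pwd.getpwuid(st.st_uid).pw_name,
      grp.getgrgid(st.st_gid).gr_name)
```
If the directory is owned by root and the services run as a non-root user, the write test in `path_check` raises `PermissionError` even though the directory exists.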
### Steps to reproduce
1. create docker-compose without mapped consumption folder
2. look at the error message
### Webserver logs
```bash
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxr-xr-x /usr/src/paperless/consume
to be writeable by the user running the Paperless services
```
### Paperless-ngx version
1.8,0
### Host OS
docker
### Installation method
Docker - official image
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
</issue>
<code>
[start of src/paperless/checks.py]
1 import os
2 import shutil
3 import stat
4
5 from django.conf import settings
6 from django.core.checks import Error
7 from django.core.checks import register
8 from django.core.checks import Warning
9
10 exists_message = "{} is set but doesn't exist."
11 exists_hint = "Create a directory at {}"
12 writeable_message = "{} is not writeable"
13 writeable_hint = (
14 "Set the permissions of {} to be writeable by the user running the "
15 "Paperless services"
16 )
17
18
19 def path_check(var, directory):
20 messages = []
21 if directory:
22 if not os.path.isdir(directory):
23 messages.append(
24 Error(exists_message.format(var), exists_hint.format(directory)),
25 )
26 else:
27 test_file = os.path.join(
28 directory,
29 f"__paperless_write_test_{os.getpid()}__",
30 )
31 try:
32 with open(test_file, "w"):
33 pass
34 except PermissionError:
35 messages.append(
36 Error(
37 writeable_message.format(var),
38 writeable_hint.format(
39 f"\n{stat.filemode(os.stat(directory).st_mode)} "
40 f"{directory}\n",
41 ),
42 ),
43 )
44 finally:
45 if os.path.isfile(test_file):
46 os.remove(test_file)
47
48 return messages
49
50
51 @register()
52 def paths_check(app_configs, **kwargs):
53 """
54 Check the various paths for existence, readability and writeability
55 """
56
57 return (
58 path_check("PAPERLESS_DATA_DIR", settings.DATA_DIR)
59 + path_check("PAPERLESS_TRASH_DIR", settings.TRASH_DIR)
60 + path_check("PAPERLESS_MEDIA_ROOT", settings.MEDIA_ROOT)
61 + path_check("PAPERLESS_CONSUMPTION_DIR", settings.CONSUMPTION_DIR)
62 )
63
64
65 @register()
66 def binaries_check(app_configs, **kwargs):
67 """
68 Paperless requires the existence of a few binaries, so we do some checks
69 for those here.
70 """
71
72 error = "Paperless can't find {}. Without it, consumption is impossible."
73 hint = "Either it's not in your ${PATH} or it's not installed."
74
75 binaries = (settings.CONVERT_BINARY, "tesseract")
76
77 check_messages = []
78 for binary in binaries:
79 if shutil.which(binary) is None:
80 check_messages.append(Warning(error.format(binary), hint))
81
82 return check_messages
83
84
85 @register()
86 def debug_mode_check(app_configs, **kwargs):
87 if settings.DEBUG:
88 return [
89 Warning(
90 "DEBUG mode is enabled. Disable Debug mode. This is a serious "
91 "security issue, since it puts security overides in place which "
92 "are meant to be only used during development. This "
93 "also means that paperless will tell anyone various "
94 "debugging information when something goes wrong.",
95 ),
96 ]
97 else:
98 return []
99
100
101 @register()
102 def settings_values_check(app_configs, **kwargs):
103 """
104 Validates at least some of the user provided settings
105 """
106
107 def _ocrmypdf_settings_check():
108 """
109 Validates some of the arguments which will be provided to ocrmypdf
110 against the valid options. Use "ocrmypdf --help" to see the valid
111 inputs
112 """
113 msgs = []
114 if settings.OCR_OUTPUT_TYPE not in {
115 "pdfa",
116 "pdf",
117 "pdfa-1",
118 "pdfa-2",
119 "pdfa-3",
120 }:
121 msgs.append(
122 Error(f'OCR output type "{settings.OCR_OUTPUT_TYPE}" is not valid'),
123 )
124
125 if settings.OCR_MODE not in {"force", "skip", "redo_ocr"}:
126 msgs.append(Error(f'OCR output mode "{settings.OCR_MODE}" is not valid'))
127
128 if settings.OCR_CLEAN not in {"clean", "clean_final"}:
129 msgs.append(Error(f'OCR clean mode "{settings.OCR_CLEAN}" is not valid'))
130 return msgs
131
132 def _timezone_validate():
133 """
134 Validates the user provided timezone is a valid timezone
135 """
136 try:
137 import zoneinfo
138 except ImportError: # pragma: nocover
139 import backports.zoneinfo as zoneinfo
140 msgs = []
141 if settings.TIME_ZONE not in zoneinfo.available_timezones():
142 msgs.append(
143 Error(f'Timezone "{settings.TIME_ZONE}" is not a valid timezone'),
144 )
145 return msgs
146
147 return _ocrmypdf_settings_check() + _timezone_validate()
148
[end of src/paperless/checks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/paperless/checks.py b/src/paperless/checks.py
--- a/src/paperless/checks.py
+++ b/src/paperless/checks.py
@@ -1,4 +1,6 @@
+import grp
import os
+import pwd
import shutil
import stat
@@ -32,12 +34,15 @@
with open(test_file, "w"):
pass
except PermissionError:
+ dir_stat = os.stat(directory)
+ dir_mode = stat.filemode(dir_stat.st_mode)
+ dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name
+ dir_group = grp.getgrgid(dir_stat.st_gid).gr_name
messages.append(
Error(
writeable_message.format(var),
writeable_hint.format(
- f"\n{stat.filemode(os.stat(directory).st_mode)} "
- f"{directory}\n",
+ f"\n{dir_mode} {dir_owner} {dir_group} " f"{directory}\n",
),
),
)
| {"golden_diff": "diff --git a/src/paperless/checks.py b/src/paperless/checks.py\n--- a/src/paperless/checks.py\n+++ b/src/paperless/checks.py\n@@ -1,4 +1,6 @@\n+import grp\n import os\n+import pwd\n import shutil\n import stat\n \n@@ -32,12 +34,15 @@\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n+ dir_stat = os.stat(directory)\n+ dir_mode = stat.filemode(dir_stat.st_mode)\n+ dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name\n+ dir_group = grp.getgrgid(dir_stat.st_gid).gr_name\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n- f\"\\n{stat.filemode(os.stat(directory).st_mode)} \"\n- f\"{directory}\\n\",\n+ f\"\\n{dir_mode} {dir_owner} {dir_group} \" f\"{directory}\\n\",\n ),\n ),\n )\n", "issue": "[BUG] permission error if no consume share is mapped\n### Description\r\n\r\nstarting with Paperless-ngx 1.8.0, Paperless on docker does not start when no consume share is mapped:\r\n\r\n`\r\n SystemCheckError: System check identified some issues:\r\n ERRORS:\r\n ?: PAPERLESS_CONSUMPTION_DIR is not writeable\r\n\tHINT: Set the permissions of \r\n drwxr-xr-x /usr/src/paperless/consume\r\n to be writeable by the user running the Paperless services\r\n\r\n`\r\n\r\nI've some containers running without a mapped share, as I don't need a consumption folder. Was no issue in previous versions; I assume the permissions of the folder in the docker container has changed.\r\n\r\n### Steps to reproduce\r\n\r\n1. create docker-compose without mapped consumption folder\r\n2. look at the error message\r\n\r\n### Webserver logs\r\n\r\n```bash\r\nSystemCheckError: System check identified some issues:\r\nERRORS:\r\n?: PAPERLESS_CONSUMPTION_DIR is not writeable\r\n\tHINT: Set the permissions of \r\ndrwxr-xr-x /usr/src/paperless/consume\r\n to be writeable by the user running the Paperless services\r\n```\r\n\r\n\r\n### Paperless-ngx version\r\n\r\n1.8,0\r\n\r\n### Host OS\r\n\r\ndocker\r\n\r\n### Installation method\r\n\r\nDocker - official image\r\n\r\n### Browser\r\n\r\n_No response_\r\n\r\n### Configuration changes\r\n\r\n_No response_\r\n\r\n### Other\r\n\r\n_No response_\n", "before_files": [{"content": "import os\nimport shutil\nimport stat\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\nexists_message = \"{} is set but doesn't exist.\"\nexists_hint = \"Create a directory at {}\"\nwriteable_message = \"{} is not writeable\"\nwriteable_hint = (\n \"Set the permissions of {} to be writeable by the user running the \"\n \"Paperless services\"\n)\n\n\ndef path_check(var, directory):\n messages = []\n if directory:\n if not os.path.isdir(directory):\n messages.append(\n Error(exists_message.format(var), exists_hint.format(directory)),\n )\n else:\n test_file = os.path.join(\n directory,\n f\"__paperless_write_test_{os.getpid()}__\",\n )\n try:\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n f\"\\n{stat.filemode(os.stat(directory).st_mode)} \"\n f\"{directory}\\n\",\n ),\n ),\n )\n finally:\n if os.path.isfile(test_file):\n os.remove(test_file)\n\n return messages\n\n\n@register()\ndef paths_check(app_configs, **kwargs):\n \"\"\"\n Check the various paths for existence, readability and writeability\n \"\"\"\n\n return (\n path_check(\"PAPERLESS_DATA_DIR\", settings.DATA_DIR)\n + path_check(\"PAPERLESS_TRASH_DIR\", settings.TRASH_DIR)\n + 
path_check(\"PAPERLESS_MEDIA_ROOT\", settings.MEDIA_ROOT)\n + path_check(\"PAPERLESS_CONSUMPTION_DIR\", settings.CONSUMPTION_DIR)\n )\n\n\n@register()\ndef binaries_check(app_configs, **kwargs):\n \"\"\"\n Paperless requires the existence of a few binaries, so we do some checks\n for those here.\n \"\"\"\n\n error = \"Paperless can't find {}. Without it, consumption is impossible.\"\n hint = \"Either it's not in your ${PATH} or it's not installed.\"\n\n binaries = (settings.CONVERT_BINARY, \"tesseract\")\n\n check_messages = []\n for binary in binaries:\n if shutil.which(binary) is None:\n check_messages.append(Warning(error.format(binary), hint))\n\n return check_messages\n\n\n@register()\ndef debug_mode_check(app_configs, **kwargs):\n if settings.DEBUG:\n return [\n Warning(\n \"DEBUG mode is enabled. Disable Debug mode. This is a serious \"\n \"security issue, since it puts security overides in place which \"\n \"are meant to be only used during development. This \"\n \"also means that paperless will tell anyone various \"\n \"debugging information when something goes wrong.\",\n ),\n ]\n else:\n return []\n\n\n@register()\ndef settings_values_check(app_configs, **kwargs):\n \"\"\"\n Validates at least some of the user provided settings\n \"\"\"\n\n def _ocrmypdf_settings_check():\n \"\"\"\n Validates some of the arguments which will be provided to ocrmypdf\n against the valid options. Use \"ocrmypdf --help\" to see the valid\n inputs\n \"\"\"\n msgs = []\n if settings.OCR_OUTPUT_TYPE not in {\n \"pdfa\",\n \"pdf\",\n \"pdfa-1\",\n \"pdfa-2\",\n \"pdfa-3\",\n }:\n msgs.append(\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo_ocr\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n\n if settings.OCR_CLEAN not in {\"clean\", \"clean_final\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n\n def _timezone_validate():\n \"\"\"\n Validates the user provided timezone is a valid timezone\n \"\"\"\n try:\n import zoneinfo\n except ImportError: # pragma: nocover\n import backports.zoneinfo as zoneinfo\n msgs = []\n if settings.TIME_ZONE not in zoneinfo.available_timezones():\n msgs.append(\n Error(f'Timezone \"{settings.TIME_ZONE}\" is not a valid timezone'),\n )\n return msgs\n\n return _ocrmypdf_settings_check() + _timezone_validate()\n", "path": "src/paperless/checks.py"}]} | 2,115 | 229 |
gh_patches_debug_3959 | rasdani/github-patches | git_diff | great-expectations__great_expectations-5468 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
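A minimal sketch of the difference (the variable names are arbitrary):
```python
# With the __future__ import, / is true division even on Python 2,
# so the 1.*x/y idiom becomes unnecessary noise.
from __future__ import division

x, y = 1, 3
print(x / y)        # 0.333... instead of the truncated 0 that Python 2 would give
print(1. * x / y)   # the workaround this issue proposes to retire
```
On Python 3 the import is a no-op, so it is safe to add everywhere.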
</issue>
<code>
[start of contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py]
1 # Make sure to include any Expectations your want exported below!
2
3 from .expect_column_values_confidence_for_data_label_to_be_greater_than_or_equal_to_threshold import (
4 ExpectColumnValuesConfidenceForDataLabelToBeGreaterThanOrEqualToThreshold,
5 )
6 from .expect_column_values_confidence_for_data_label_to_be_less_than_or_equal_to_threshold import (
7 ExpectColumnValuesConfidenceForDataLabelToBeLessThanOrEqualToThreshold,
8 )
9 from .expect_column_values_to_be_equal_to_or_greater_than_profile_min import (
10 ExpectColumnValuesToBeEqualToOrGreaterThanProfileMin,
11 )
12 from .expect_column_values_to_be_equal_to_or_less_than_profile_max import (
13 ExpectColumnValuesToBeEqualToOrLessThanProfileMax,
14 )
15 from .expect_column_values_to_be_probabilistically_greater_than_or_equal_to_threshold import (
16 ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,
17 )
18 from .expect_profile_numeric_columns_diff_between_threshold_range import (
19 ExpectProfileNumericColumnsDiffBetweenThresholdRange
20 )
21
[end of contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py
--- a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py
+++ b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py
@@ -16,5 +16,5 @@
ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,
)
from .expect_profile_numeric_columns_diff_between_threshold_range import (
- ExpectProfileNumericColumnsDiffBetweenThresholdRange
+ ExpectProfileNumericColumnsDiffBetweenThresholdRange,
)
| {"golden_diff": "diff --git a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py\n--- a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py\n+++ b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py\n@@ -16,5 +16,5 @@\n ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,\n )\n from .expect_profile_numeric_columns_diff_between_threshold_range import (\n- ExpectProfileNumericColumnsDiffBetweenThresholdRange\n+ ExpectProfileNumericColumnsDiffBetweenThresholdRange,\n )\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "# Make sure to include any Expectations your want exported below!\n\nfrom .expect_column_values_confidence_for_data_label_to_be_greater_than_or_equal_to_threshold import (\n ExpectColumnValuesConfidenceForDataLabelToBeGreaterThanOrEqualToThreshold,\n)\nfrom .expect_column_values_confidence_for_data_label_to_be_less_than_or_equal_to_threshold import (\n ExpectColumnValuesConfidenceForDataLabelToBeLessThanOrEqualToThreshold,\n)\nfrom .expect_column_values_to_be_equal_to_or_greater_than_profile_min import (\n ExpectColumnValuesToBeEqualToOrGreaterThanProfileMin,\n)\nfrom .expect_column_values_to_be_equal_to_or_less_than_profile_max import (\n ExpectColumnValuesToBeEqualToOrLessThanProfileMax,\n)\nfrom .expect_column_values_to_be_probabilistically_greater_than_or_equal_to_threshold import (\n ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,\n)\nfrom .expect_profile_numeric_columns_diff_between_threshold_range import (\n ExpectProfileNumericColumnsDiffBetweenThresholdRange\n)\n", "path": "contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py"}]} | 841 | 181 |
gh_patches_debug_25951 | rasdani/github-patches | git_diff | napari__napari-6475 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Weird docs build error on vortex.py example
### 🐛 Bug Report
It looks like we are missing the gallery example for [vortex.py](https://github.com/napari/napari/blob/1b510bab020aae485565f000ddab842ab17ef608/examples/vortex.py). That file runs fine locally and *I think* ~~it ran fine in the PR~~ (**Edit:** nope, it was [broken](https://output.circle-artifacts.com/output/job/1b750fb4-4df5-462b-84ca-cdddeede41ff/artifacts/0/napari-docs/docs/_build/gallery.html#gallery), but gallery errors aren't errors. I don't know what the right answer is there but either we should turn them into errors or we should turn them into errors *when the contribution is a gallery example*?) But the error suggests some form of race condition during addition of the layer, which makes me think this is an async issue. Here's the error message from [this build](https://github.com/napari/docs/actions/runs/6658042739/job/18094063805#step:9:238):
```pytb
Downloading file 'data/pivchallenge-B-B001_1.tif' from 'https://gitlab.com/scikit-image/data/-/raw/2cdc5ce89b334d28f06a58c9f0ca21aa6992a5ba/pivchallenge/B/B001_1.tif' to '/home/runner/.cache/scikit-image/0.22.0'.
Downloading file 'data/pivchallenge-B-B001_2.tif' from 'https://gitlab.com/scikit-image/data/-/raw/2cdc5ce89b334d28f06a58c9f0ca21aa6992a5ba/pivchallenge/B/B001_2.tif' to '/home/runner/.cache/scikit-image/0.22.0'.
WARNING: /home/runner/work/docs/docs/docs/examples/vortex.py failed to execute correctly: Traceback (most recent call last):
File "/home/runner/work/docs/docs/docs/examples/vortex.py", line 59, in <module>
flow_layer = viewer.add_vectors(
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/components/viewer_model.py", line 5, in add_vectors
import os
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/_collections_abc.py", line 1128, in append
self.insert(len(self), value)
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/components/layerlist.py", line 194, in insert
super().insert(index, new_layer)
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/containers/_selectable_list.py", line 71, in insert
self.selection.active = value
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/containers/_selection.py", line 108, in active
self.events.active(value=value)
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/event.py", line 771, in __call__
self._invoke_callback(cb, event if pass_event else None)
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/event.py", line 809, in _invoke_callback
_handle_exception(
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/event.py", line 796, in _invoke_callback
cb(event)
File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/_qt/layer_controls/qt_layer_controls_container.py", line 130, in _display
controls = self.widgets[layer]
KeyError: <Vectors layer 'optical flow' at 0x7f6f55be4be0>
```
### 💡 Steps to Reproduce
I don't have a working docs build at the moment so I don't know whether this reproduces in local docs builds, but the example runs fine locally. So this is either a problem with sphinx gallery or with sphinx gallery on CI.
### 💡 Expected Behavior
Example should run fine on GHA.
### 🌎 Environment
napari main on CI 😬
(Note: should we echo `napari --info` on all our CI?)
### 💡 Additional Context
_No response_
</issue>
<code>
[start of examples/vortex.py]
1 """Visualizing optical flow in napari.
2
3 Adapted from the scikit-image gallery [1]_.
4
5 In napari, we can show the flowing vortex as an additional dimension in the
6 image, visible by moving the slider.
7
8 .. tags:: visualization-advanced, layers
9
10 .. [1] https://scikit-image.org/docs/stable/auto_examples/registration/plot_opticalflow.html
11 """
12 import numpy as np
13 from skimage.data import vortex
14 from skimage.registration import optical_flow_ilk
15
16 import napari
17
18 #######################################################################
19 # First, we load the vortex image as a 3D array. (time, row, column)
20
21 vortex_im = np.asarray(vortex())
22
23 #######################################################################
24 # We compute the optical flow using scikit-image. (Note: as of
25 # scikit-image 0.21, there seems to be a transposition of the image in
26 # the output, which we account for later.)
27
28 u, v = optical_flow_ilk(vortex_im[0], vortex_im[1], radius=15)
29
30 #######################################################################
31 # Compute the flow magnitude, for visualization.
32
33 magnitude = np.sqrt(u ** 2 + v ** 2)
34
35 #######################################################################
36 # Create a viewer, add the vortex frames, and overlay the flow
37 # magnitude.
38
39 viewer, vortex_layer = napari.imshow(vortex_im)
40 mag_layer = viewer.add_image(magnitude, colormap='magma', opacity=0.3)
41
42 #######################################################################
43 # Finally, we subsample the vector field to display it — it's too
44 # messy otherwise! And we transpose the rows/columns axes to match the
45 # current scikit-image output.
46
47 nvec = 21
48 nr, nc = magnitude.shape
49 step = max(nr//nvec, nc//nvec)
50 offset = step // 2
51 usub = u[offset::step, offset::step]
52 vsub = v[offset::step, offset::step]
53
54 vectors_field = np.transpose( # transpose required — skimage bug?
55 np.stack([usub, vsub], axis=-1),
56 (1, 0, 2),
57 )
58
59 flow_layer = viewer.add_vectors(
60 vectors_field,
61 name='optical flow',
62 scale=[step, step],
63 translate=[offset, offset],
64 edge_width=0.3,
65 length=0.3,
66 )
67
68 if __name__ == '__main__':
69 napari.run()
70
[end of examples/vortex.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/vortex.py b/examples/vortex.py
--- a/examples/vortex.py
+++ b/examples/vortex.py
@@ -1,4 +1,6 @@
-"""Visualizing optical flow in napari.
+"""
+Visualizing optical flow in napari
+==================================
Adapted from the scikit-image gallery [1]_.
@@ -33,14 +35,7 @@
magnitude = np.sqrt(u ** 2 + v ** 2)
#######################################################################
-# Create a viewer, add the vortex frames, and overlay the flow
-# magnitude.
-
-viewer, vortex_layer = napari.imshow(vortex_im)
-mag_layer = viewer.add_image(magnitude, colormap='magma', opacity=0.3)
-
-#######################################################################
-# Finally, we subsample the vector field to display it — it's too
+# We subsample the vector field to display it — it's too
# messy otherwise! And we transpose the rows/columns axes to match the
# current scikit-image output.
@@ -56,6 +51,12 @@
(1, 0, 2),
)
+#######################################################################
+# Finally, we create a viewer, and add the vortex frames, the flow
+# magnitude, and the vector field.
+
+viewer, vortex_layer = napari.imshow(vortex_im)
+mag_layer = viewer.add_image(magnitude, colormap='magma', opacity=0.3)
flow_layer = viewer.add_vectors(
vectors_field,
name='optical flow',
| {"golden_diff": "diff --git a/examples/vortex.py b/examples/vortex.py\n--- a/examples/vortex.py\n+++ b/examples/vortex.py\n@@ -1,4 +1,6 @@\n-\"\"\"Visualizing optical flow in napari.\n+\"\"\"\n+Visualizing optical flow in napari\n+==================================\n \n Adapted from the scikit-image gallery [1]_.\n \n@@ -33,14 +35,7 @@\n magnitude = np.sqrt(u ** 2 + v ** 2)\n \n #######################################################################\n-# Create a viewer, add the vortex frames, and overlay the flow\n-# magnitude.\n-\n-viewer, vortex_layer = napari.imshow(vortex_im)\n-mag_layer = viewer.add_image(magnitude, colormap='magma', opacity=0.3)\n-\n-#######################################################################\n-# Finally, we subsample the vector field to display it \u2014 it's too\n+# We subsample the vector field to display it \u2014 it's too\n # messy otherwise! And we transpose the rows/columns axes to match the\n # current scikit-image output.\n \n@@ -56,6 +51,12 @@\n (1, 0, 2),\n )\n \n+#######################################################################\n+# Finally, we create a viewer, and add the vortex frames, the flow\n+# magnitude, and the vector field.\n+\n+viewer, vortex_layer = napari.imshow(vortex_im)\n+mag_layer = viewer.add_image(magnitude, colormap='magma', opacity=0.3)\n flow_layer = viewer.add_vectors(\n vectors_field,\n name='optical flow',\n", "issue": "Weird docs build error on vortex.py example\n### \ud83d\udc1b Bug Report\n\nIt looks like we are missing the gallery example for [vortex.py](https://github.com/napari/napari/blob/1b510bab020aae485565f000ddab842ab17ef608/examples/vortex.py). That file runs fine locally and *I think* ~~it ran fine in the PR~~ (**Edit:** nope, it was [broken](https://output.circle-artifacts.com/output/job/1b750fb4-4df5-462b-84ca-cdddeede41ff/artifacts/0/napari-docs/docs/_build/gallery.html#gallery), but gallery errors aren't errors. I don't know what the right answer is there but either we should turn them into errors or we should turn them into errors *when the contribution is a gallery example*?) But the error suggests some form of race condition during addition of the layer, which makes me think this is an async issue. 
Here's the error message from [this build](https://github.com/napari/docs/actions/runs/6658042739/job/18094063805#step:9:238):\r\n\r\n```pytb\r\nDownloading file 'data/pivchallenge-B-B001_1.tif' from 'https://gitlab.com/scikit-image/data/-/raw/2cdc5ce89b334d28f06a58c9f0ca21aa6992a5ba/pivchallenge/B/B001_1.tif' to '/home/runner/.cache/scikit-image/0.22.0'.\r\nDownloading file 'data/pivchallenge-B-B001_2.tif' from 'https://gitlab.com/scikit-image/data/-/raw/2cdc5ce89b334d28f06a58c9f0ca21aa6992a5ba/pivchallenge/B/B001_2.tif' to '/home/runner/.cache/scikit-image/0.22.0'.\r\nWARNING: /home/runner/work/docs/docs/docs/examples/vortex.py failed to execute correctly: Traceback (most recent call last):\r\n File \"/home/runner/work/docs/docs/docs/examples/vortex.py\", line 59, in <module>\r\n flow_layer = viewer.add_vectors(\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/components/viewer_model.py\", line 5, in add_vectors\r\n import os\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/_collections_abc.py\", line 1128, in append\r\n self.insert(len(self), value)\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/components/layerlist.py\", line 194, in insert\r\n super().insert(index, new_layer)\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/containers/_selectable_list.py\", line 71, in insert\r\n self.selection.active = value\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/containers/_selection.py\", line 108, in active\r\n self.events.active(value=value)\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/event.py\", line 771, in __call__\r\n self._invoke_callback(cb, event if pass_event else None)\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/event.py\", line 809, in _invoke_callback\r\n _handle_exception(\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/utils/events/event.py\", line 796, in _invoke_callback\r\n cb(event)\r\n File \"/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/napari/_qt/layer_controls/qt_layer_controls_container.py\", line 130, in _display\r\n controls = self.widgets[layer]\r\nKeyError: <Vectors layer 'optical flow' at 0x7f6f55be4be0>\r\n```\n\n### \ud83d\udca1 Steps to Reproduce\n\nI don't have a working docs build at the moment so I don't know whether this reproduces in local docs builds, but the example runs fine locally. So this is either a problem with sphinx gallery or with sphinx gallery on CI.\n\n### \ud83d\udca1 Expected Behavior\n\nExample should run fine on GHA.\n\n### \ud83c\udf0e Environment\n\nnapari main on CI \ud83d\ude2c \r\n\r\n(Note: should we echo `napari --info` on all our CI?)\n\n### \ud83d\udca1 Additional Context\n\n_No response_\n", "before_files": [{"content": "\"\"\"Visualizing optical flow in napari.\n\nAdapted from the scikit-image gallery [1]_.\n\nIn napari, we can show the flowing vortex as an additional dimension in the\nimage, visible by moving the slider.\n\n.. tags:: visualization-advanced, layers\n\n.. 
[1] https://scikit-image.org/docs/stable/auto_examples/registration/plot_opticalflow.html\n\"\"\"\nimport numpy as np\nfrom skimage.data import vortex\nfrom skimage.registration import optical_flow_ilk\n\nimport napari\n\n#######################################################################\n# First, we load the vortex image as a 3D array. (time, row, column)\n\nvortex_im = np.asarray(vortex())\n\n#######################################################################\n# We compute the optical flow using scikit-image. (Note: as of\n# scikit-image 0.21, there seems to be a transposition of the image in\n# the output, which we account for later.)\n\nu, v = optical_flow_ilk(vortex_im[0], vortex_im[1], radius=15)\n\n#######################################################################\n# Compute the flow magnitude, for visualization.\n\nmagnitude = np.sqrt(u ** 2 + v ** 2)\n\n#######################################################################\n# Create a viewer, add the vortex frames, and overlay the flow\n# magnitude.\n\nviewer, vortex_layer = napari.imshow(vortex_im)\nmag_layer = viewer.add_image(magnitude, colormap='magma', opacity=0.3)\n\n#######################################################################\n# Finally, we subsample the vector field to display it \u2014 it's too\n# messy otherwise! And we transpose the rows/columns axes to match the\n# current scikit-image output.\n\nnvec = 21\nnr, nc = magnitude.shape\nstep = max(nr//nvec, nc//nvec)\noffset = step // 2\nusub = u[offset::step, offset::step]\nvsub = v[offset::step, offset::step]\n\nvectors_field = np.transpose( # transpose required \u2014 skimage bug?\n np.stack([usub, vsub], axis=-1),\n (1, 0, 2),\n )\n\nflow_layer = viewer.add_vectors(\n vectors_field,\n name='optical flow',\n scale=[step, step],\n translate=[offset, offset],\n edge_width=0.3,\n length=0.3,\n )\n\nif __name__ == '__main__':\n napari.run()\n", "path": "examples/vortex.py"}]} | 2,349 | 330 |
gh_patches_debug_29254 | rasdani/github-patches | git_diff | web2py__web2py-1907 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Streamer.py handles IOError exception in non-python3 compatible way
Under Python 3 and the latest repository, browsing to `http://127.0.0.1:8000/welcome/favicon.ico` causes `streamer.py` to crash because it handles the `IOError` exception in a non-Python 3 compatible way. The `IOError` occurs because `favicon.ico` is not found at `/`. No error ticket is generated.
except IOError as e:
if e[0] == errno.EISDIR:
raise HTTP(403, error_message, web2py_error='file is a directory')
elif e[0] == errno.EACCES:
raise HTTP(403, error_message, web2py_error='inaccessible file')
else:
raise HTTP(404, error_message, web2py_error='invalid file')
This works in Python 2, but `e[0]` should be accessed as `e.errno` under Python 3.
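A minimal sketch of handling that should behave the same under Python 2 and 3 (only the attribute access changes; `e.errno` exists on both) would be:

    except IOError as e:
        if e.errno == errno.EISDIR:
            raise HTTP(403, error_message, web2py_error='file is a directory')
        elif e.errno == errno.EACCES:
            raise HTTP(403, error_message, web2py_error='inaccessible file')
        else:
            raise HTTP(404, error_message, web2py_error='invalid file')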
Partial stack trace:
Traceback (most recent call last):
File "C:\web2py\gluon\main.py", line 329, in wsgibase
response.stream(static_file, request=request)
File "C:\web2py\gluon\globals.py", line 617, in stream
status=self.status)
File "C:\web2py\gluon\streamer.py", line 66, in stream_file_or_304_or_206
if e[0] == errno.EISDIR:
TypeError: 'FileNotFoundError' object is not subscriptable
</issue>
<code>
[start of gluon/streamer.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 | This file is part of the web2py Web Framework
6 | Copyrighted by Massimo Di Pierro <[email protected]>
7 | License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)
8
9 Facilities to handle file streaming
10 ------------------------------------
11 """
12
13 import os
14 import stat
15 import time
16 import re
17 import errno
18 from gluon.http import HTTP
19 from gluon.contenttype import contenttype
20 from gluon._compat import PY2
21
22
23 regex_start_range = re.compile('\d+(?=\-)')
24 regex_stop_range = re.compile('(?<=\-)\d+')
25
26 DEFAULT_CHUNK_SIZE = 64 * 1024
27
28 def streamer(stream, chunk_size=DEFAULT_CHUNK_SIZE, bytes=None, callback=None):
29 try:
30 offset = 0
31 while bytes is None or offset < bytes:
32 if not bytes is None and bytes - offset < chunk_size:
33 chunk_size = bytes - offset
34 data = stream.read(chunk_size)
35 length = len(data)
36 if not length:
37 break
38 else:
39 yield data
40 if length < chunk_size:
41 break
42 offset += length
43 finally:
44 stream.close()
45 if callback:
46 callback()
47
48 def stream_file_or_304_or_206(
49 static_file,
50 chunk_size=DEFAULT_CHUNK_SIZE,
51 request=None,
52 headers={},
53 status=200,
54 error_message=None
55 ):
56 # FIX THIS
57 # if error_message is None:
58 # error_message = rewrite.THREAD_LOCAL.routes.error_message % 'invalid request'
59 try:
60 if PY2:
61 open_f = file # this makes no sense but without it GAE cannot open files
62 else:
63 open_f = open
64 fp = open_f(static_file,'rb')
65 except IOError as e:
66 if e[0] == errno.EISDIR:
67 raise HTTP(403, error_message, web2py_error='file is a directory')
68 elif e[0] == errno.EACCES:
69 raise HTTP(403, error_message, web2py_error='inaccessible file')
70 else:
71 raise HTTP(404, error_message, web2py_error='invalid file')
72 else:
73 fp.close()
74 stat_file = os.stat(static_file)
75 fsize = stat_file[stat.ST_SIZE]
76 modified = stat_file[stat.ST_MTIME]
77 mtime = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime(modified))
78 headers.setdefault('Content-Type', contenttype(static_file))
79 headers.setdefault('Last-Modified', mtime)
80 headers.setdefault('Pragma', 'cache')
81 headers.setdefault('Cache-Control', 'private')
82
83 # if this is a normal response and not a respnse to an error page
84 if status == 200:
85 if request and request.env.http_if_modified_since == mtime:
86 raise HTTP(304, **{'Content-Type': headers['Content-Type']})
87
88 elif request and request.env.http_range:
89 start_items = regex_start_range.findall(request.env.http_range)
90 if not start_items:
91 start_items = [0]
92 stop_items = regex_stop_range.findall(request.env.http_range)
93 if not stop_items or int(stop_items[0]) > fsize - 1:
94 stop_items = [fsize - 1]
95 part = (int(start_items[0]), int(stop_items[0]), fsize)
96 bytes = part[1] - part[0] + 1
97 try:
98 stream = open(static_file, 'rb')
99 except IOError as e:
100 if e[0] in (errno.EISDIR, errno.EACCES):
101 raise HTTP(403)
102 else:
103 raise HTTP(404)
104 stream.seek(part[0])
105 headers['Content-Range'] = 'bytes %i-%i/%i' % part
106 headers['Content-Length'] = '%i' % bytes
107 status = 206
108 # in all the other cases (not 304, not 206, but 200 or error page)
109 if status != 206:
110 enc = request.env.http_accept_encoding
111 if enc and 'gzip' in enc and not 'Content-Encoding' in headers:
112 gzipped = static_file + '.gz'
113 if os.path.isfile(gzipped) and os.path.getmtime(gzipped) >= modified:
114 static_file = gzipped
115 fsize = os.path.getsize(gzipped)
116 headers['Content-Encoding'] = 'gzip'
117 headers['Vary'] = 'Accept-Encoding'
118 try:
119 stream = open(static_file, 'rb')
120 except IOError as e:
121 # this better does not happer when returning an error page ;-)
122 if e[0] in (errno.EISDIR, errno.EACCES):
123 raise HTTP(403)
124 else:
125 raise HTTP(404)
126 headers['Content-Length'] = fsize
127 bytes = None
128 if request and request.env.web2py_use_wsgi_file_wrapper:
129 wrapped = request.env.wsgi_file_wrapper(stream, chunk_size)
130 else:
131 wrapped = streamer(stream, chunk_size=chunk_size, bytes=bytes)
132 raise HTTP(status, wrapped, **headers)
133
[end of gluon/streamer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gluon/streamer.py b/gluon/streamer.py
--- a/gluon/streamer.py
+++ b/gluon/streamer.py
@@ -63,9 +63,9 @@
open_f = open
fp = open_f(static_file,'rb')
except IOError as e:
- if e[0] == errno.EISDIR:
+ if e.errno == errno.EISDIR:
raise HTTP(403, error_message, web2py_error='file is a directory')
- elif e[0] == errno.EACCES:
+ elif e.errno == errno.EACCES:
raise HTTP(403, error_message, web2py_error='inaccessible file')
else:
raise HTTP(404, error_message, web2py_error='invalid file')
@@ -97,7 +97,7 @@
try:
stream = open(static_file, 'rb')
except IOError as e:
- if e[0] in (errno.EISDIR, errno.EACCES):
+ if e.errno in (errno.EISDIR, errno.EACCES):
raise HTTP(403)
else:
raise HTTP(404)
@@ -118,8 +118,8 @@
try:
stream = open(static_file, 'rb')
except IOError as e:
- # this better does not happer when returning an error page ;-)
- if e[0] in (errno.EISDIR, errno.EACCES):
+ # this better not happen when returning an error page ;-)
+ if e.errno in (errno.EISDIR, errno.EACCES):
raise HTTP(403)
else:
raise HTTP(404)
| {"golden_diff": "diff --git a/gluon/streamer.py b/gluon/streamer.py\n--- a/gluon/streamer.py\n+++ b/gluon/streamer.py\n@@ -63,9 +63,9 @@\n open_f = open\n fp = open_f(static_file,'rb')\n except IOError as e:\n- if e[0] == errno.EISDIR:\n+ if e.errno == errno.EISDIR:\n raise HTTP(403, error_message, web2py_error='file is a directory')\n- elif e[0] == errno.EACCES:\n+ elif e.errno == errno.EACCES:\n raise HTTP(403, error_message, web2py_error='inaccessible file')\n else:\n raise HTTP(404, error_message, web2py_error='invalid file')\n@@ -97,7 +97,7 @@\n try:\n stream = open(static_file, 'rb')\n except IOError as e:\n- if e[0] in (errno.EISDIR, errno.EACCES):\n+ if e.errno in (errno.EISDIR, errno.EACCES):\n raise HTTP(403)\n else:\n raise HTTP(404)\n@@ -118,8 +118,8 @@\n try:\n stream = open(static_file, 'rb')\n except IOError as e:\n- # this better does not happer when returning an error page ;-)\n- if e[0] in (errno.EISDIR, errno.EACCES):\n+ # this better not happen when returning an error page ;-)\n+ if e.errno in (errno.EISDIR, errno.EACCES):\n raise HTTP(403)\n else:\n raise HTTP(404)\n", "issue": "Streamer.py handles IOError exception in non-python3 compatible way\nUnder python 3 and the latest repository, browsing to `http://127.0.0.1:8000/welcome/favicon.ico` causes `streamer.py` to crash because it treats the `IOError` exception in a non-python3 compatible way. The IOError exception occurs because `favicon.ico` is not found at `/ `. No error ticket is generated.\r\n\r\n except IOError as e:\r\n if e[0] == errno.EISDIR:\r\n raise HTTP(403, error_message, web2py_error='file is a directory')\r\n elif e[0] == errno.EACCES:\r\n raise HTTP(403, error_message, web2py_error='inaccessible file')\r\n else:\r\n raise HTTP(404, error_message, web2py_error='invalid file')\r\n\r\nThis works in python 2, but `e[0]` should be accesed as `e.errno` under python 3\r\n\r\nPartial stack trace:\r\n\r\n Traceback (most recent call last):\r\n File \"C:\\web2py\\gluon\\main.py\", line 329, in wsgibase\r\n response.stream(static_file, request=request)\r\n File \"C:\\web2py\\gluon\\globals.py\", line 617, in stream\r\n status=self.status)\r\n File \"C:\\web2py\\gluon\\streamer.py\", line 66, in stream_file_or_304_or_206\r\n if e[0] == errno.EISDIR:\r\n TypeError: 'FileNotFoundError' object is not subscriptable\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\n| This file is part of the web2py Web Framework\n| Copyrighted by Massimo Di Pierro <[email protected]>\n| License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n\nFacilities to handle file streaming\n------------------------------------\n\"\"\"\n\nimport os\nimport stat\nimport time\nimport re\nimport errno\nfrom gluon.http import HTTP\nfrom gluon.contenttype import contenttype\nfrom gluon._compat import PY2\n\n\nregex_start_range = re.compile('\\d+(?=\\-)')\nregex_stop_range = re.compile('(?<=\\-)\\d+')\n\nDEFAULT_CHUNK_SIZE = 64 * 1024\n\ndef streamer(stream, chunk_size=DEFAULT_CHUNK_SIZE, bytes=None, callback=None):\n try:\n offset = 0\n while bytes is None or offset < bytes:\n if not bytes is None and bytes - offset < chunk_size:\n chunk_size = bytes - offset\n data = stream.read(chunk_size)\n length = len(data)\n if not length:\n break\n else:\n yield data\n if length < chunk_size:\n break\n offset += length\n finally:\n stream.close()\n if callback:\n callback()\n\ndef stream_file_or_304_or_206(\n static_file,\n chunk_size=DEFAULT_CHUNK_SIZE,\n request=None,\n headers={},\n status=200,\n 
error_message=None\n ):\n # FIX THIS\n # if error_message is None:\n # error_message = rewrite.THREAD_LOCAL.routes.error_message % 'invalid request'\n try:\n if PY2:\n open_f = file # this makes no sense but without it GAE cannot open files\n else:\n open_f = open\n fp = open_f(static_file,'rb')\n except IOError as e:\n if e[0] == errno.EISDIR:\n raise HTTP(403, error_message, web2py_error='file is a directory')\n elif e[0] == errno.EACCES:\n raise HTTP(403, error_message, web2py_error='inaccessible file')\n else:\n raise HTTP(404, error_message, web2py_error='invalid file')\n else:\n fp.close()\n stat_file = os.stat(static_file)\n fsize = stat_file[stat.ST_SIZE]\n modified = stat_file[stat.ST_MTIME]\n mtime = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime(modified))\n headers.setdefault('Content-Type', contenttype(static_file))\n headers.setdefault('Last-Modified', mtime)\n headers.setdefault('Pragma', 'cache')\n headers.setdefault('Cache-Control', 'private')\n\n # if this is a normal response and not a respnse to an error page\n if status == 200:\n if request and request.env.http_if_modified_since == mtime:\n raise HTTP(304, **{'Content-Type': headers['Content-Type']})\n\n elif request and request.env.http_range:\n start_items = regex_start_range.findall(request.env.http_range)\n if not start_items:\n start_items = [0]\n stop_items = regex_stop_range.findall(request.env.http_range)\n if not stop_items or int(stop_items[0]) > fsize - 1:\n stop_items = [fsize - 1]\n part = (int(start_items[0]), int(stop_items[0]), fsize)\n bytes = part[1] - part[0] + 1\n try:\n stream = open(static_file, 'rb')\n except IOError as e:\n if e[0] in (errno.EISDIR, errno.EACCES):\n raise HTTP(403)\n else:\n raise HTTP(404)\n stream.seek(part[0])\n headers['Content-Range'] = 'bytes %i-%i/%i' % part\n headers['Content-Length'] = '%i' % bytes\n status = 206\n # in all the other cases (not 304, not 206, but 200 or error page)\n if status != 206:\n enc = request.env.http_accept_encoding\n if enc and 'gzip' in enc and not 'Content-Encoding' in headers:\n gzipped = static_file + '.gz'\n if os.path.isfile(gzipped) and os.path.getmtime(gzipped) >= modified:\n static_file = gzipped\n fsize = os.path.getsize(gzipped)\n headers['Content-Encoding'] = 'gzip'\n headers['Vary'] = 'Accept-Encoding'\n try:\n stream = open(static_file, 'rb')\n except IOError as e:\n # this better does not happer when returning an error page ;-)\n if e[0] in (errno.EISDIR, errno.EACCES):\n raise HTTP(403)\n else:\n raise HTTP(404)\n headers['Content-Length'] = fsize\n bytes = None\n if request and request.env.web2py_use_wsgi_file_wrapper:\n wrapped = request.env.wsgi_file_wrapper(stream, chunk_size)\n else:\n wrapped = streamer(stream, chunk_size=chunk_size, bytes=bytes)\n raise HTTP(status, wrapped, **headers)\n", "path": "gluon/streamer.py"}]} | 2,347 | 389 |
gh_patches_debug_5751 | rasdani/github-patches | git_diff | ansible__ansible-lint-1128 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[701] No 'galaxy_info' found results in meta/requirements.yml file
### Summary
ansible-lint reports `[701] No 'galaxy_info' found` in my `meta/requirements.yml`, a file that, unlike `meta/main.yml`, does not (to my knowledge) support a `galaxy_info` field.
##### Issue Type
- Bug Report
##### Ansible and Ansible Lint details
<!--- Paste verbatim output between triple backticks -->
```console (paste below)
$ ansible --version
ansible 2.10.1
$ ansible-lint --version
ansible-lint 4.3.5
```
- ansible installation method: pipenv (pip)
- ansible-lint installation method: pipenv (pip)
##### OS / ENVIRONMENT
MacOS 10.15.7 (Catalina Latest)
##### STEPS TO REPRODUCE
Using this `meta/requirements.yml`:
```yaml
---
# insert third party deps here. download with:
# ansible-galaxy install -r requirements.yml
# https://docs.ansible.com/ansible/galaxy.html
- name: singleplatform-eng.users
version: v1.2.6
- name: weareinteractive.sudo
version: 1.14.1
- name: geerlingguy.fluentd
version: 1.1.0
```
Note that `meta/main.yml` does include `galaxy_info`, but does not include, as dependencies, the roles listed in `requirements.yml`. This is purposeful: I'm choosing `meta/requirements.yml` instead of `meta/main.yml` because I prefer that workflow and do not want the roles running first, as they do when listed in `meta/main.yml`. I'm following the previously linked user-guide on this topic.
To reproduce, I simply run ansible-lint directly or via molecule.
##### Desired Behaviour
I would expect ansible-lint not to flag these as issues... unless I'm completely misunderstanding the finding and misreading the documentation associated with this rule.
##### Actual Behaviour
Below are the ansible-lint results when run on my role.
```bash
$ ansible-lint
[701] No 'galaxy_info' found
meta/requirements.yml:7
{'meta/main.yml': {'name': 'singleplatform-eng.users', 'version': 'v1.2.6', '__line__': 7, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml', 'skipped_rules': []}}
[701] No 'galaxy_info' found
meta/requirements.yml:10
{'meta/main.yml': {'name': 'weareinteractive.sudo', 'version': '1.14.1', '__line__': 10, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}
[701] No 'galaxy_info' found
meta/requirements.yml:13
{'meta/main.yml': {'name': 'geerlingguy.fluentd', 'version': '1.1.0', '__line__': 13, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}
```
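##### Possible Approach
One way to address this (a sketch only, based on the rule's `matchplay` shown below; I'm assuming the file descriptor exposes its path under `file['path']`) would be for the rule to skip any meta file that is not `main.yml`:
```python
def matchplay(self, file, data):
    if file['type'] != 'meta':
        return False
    # meta/requirements.yml is also typed as 'meta', but only meta/main.yml
    # is expected to carry galaxy_info
    if not file['path'].endswith('/main.yml'):
        return False
    ...
```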
</issue>
<code>
[start of lib/ansiblelint/rules/MetaMainHasInfoRule.py]
1 # Copyright (c) 2016, Will Thames and contributors
2 # Copyright (c) 2018, Ansible Project
3
4 from ansiblelint.rules import AnsibleLintRule
5
6 META_STR_INFO = (
7 'author',
8 'description'
9 )
10 META_INFO = tuple(list(META_STR_INFO) + [
11 'license',
12 'min_ansible_version',
13 'platforms',
14 ])
15
16
17 def _platform_info_errors_itr(platforms):
18 if not isinstance(platforms, list):
19 yield 'Platforms should be a list of dictionaries'
20 return
21
22 for platform in platforms:
23 if not isinstance(platform, dict):
24 yield 'Platforms should be a list of dictionaries'
25 elif 'name' not in platform:
26 yield 'Platform should contain name'
27
28
29 def _galaxy_info_errors_itr(galaxy_info,
30 info_list=META_INFO,
31 str_info_list=META_STR_INFO):
32 for info in info_list:
33 ginfo = galaxy_info.get(info, False)
34 if ginfo:
35 if info in str_info_list and not isinstance(ginfo, str):
36 yield '{info} should be a string'.format(info=info)
37 elif info == 'platforms':
38 for err in _platform_info_errors_itr(ginfo):
39 yield err
40 else:
41 yield 'Role info should contain {info}'.format(info=info)
42
43
44 class MetaMainHasInfoRule(AnsibleLintRule):
45 id = '701'
46 shortdesc = 'meta/main.yml should contain relevant info'
47 str_info = META_STR_INFO
48 info = META_INFO
49 description = (
50 'meta/main.yml should contain: ``{}``'.format(', '.join(info))
51 )
52 severity = 'HIGH'
53 tags = ['metadata']
54 version_added = 'v4.0.0'
55
56 def matchplay(self, file, data):
57 if file['type'] != 'meta':
58 return False
59
60 meta = {'meta/main.yml': data}
61 galaxy_info = data.get('galaxy_info', False)
62 if galaxy_info:
63 return [(meta, err) for err
64 in _galaxy_info_errors_itr(galaxy_info)]
65
66 return [(meta, "No 'galaxy_info' found")]
67
[end of lib/ansiblelint/rules/MetaMainHasInfoRule.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansiblelint/rules/MetaMainHasInfoRule.py b/lib/ansiblelint/rules/MetaMainHasInfoRule.py
--- a/lib/ansiblelint/rules/MetaMainHasInfoRule.py
+++ b/lib/ansiblelint/rules/MetaMainHasInfoRule.py
@@ -57,6 +57,11 @@
if file['type'] != 'meta':
return False
+ # since Ansible 2.10 we can add a meta/requirements.yml but
+ # we only want to match on meta/main.yml
+ if not file['path'].endswith('/main.yml'):
+ return False
+
meta = {'meta/main.yml': data}
galaxy_info = data.get('galaxy_info', False)
if galaxy_info:
| {"golden_diff": "diff --git a/lib/ansiblelint/rules/MetaMainHasInfoRule.py b/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n--- a/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n+++ b/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n@@ -57,6 +57,11 @@\n if file['type'] != 'meta':\n return False\n \n+ # since Ansible 2.10 we can add a meta/requirements.yml but\n+ # we only want to match on meta/main.yml\n+ if not file['path'].endswith('/main.yml'):\n+ return False\n+\n meta = {'meta/main.yml': data}\n galaxy_info = data.get('galaxy_info', False)\n if galaxy_info:\n", "issue": "[701] No 'galaxy_info' found results in meta/requirements.yml file\n### Summary\r\n\r\nansible-lint reporting `[701] No 'galaxy_info' found` in my `meta/requirements.yml`, a file that unlike `meta/main.yml` does not (to my knowledge) support a `galaxy_info` field.\r\n\r\n##### Issue Type\r\n\r\n- Bug Report\r\n\r\n##### Ansible and Ansible Lint details\r\n<!--- Paste verbatim output between tripple backticks -->\r\n```console (paste below)\r\n$ ansible --version\r\nansible 2.10.1\r\n\r\n$ ansible-lint --version\r\nansible-lint 4.3.5\r\n```\r\n\r\n- ansible installation method: pipenv (pip)\r\n- ansible-lint installation method: pipenv (pip)\r\n\r\n##### OS / ENVIRONMENT\r\nMacOS 10.15.7 (Catalina Latest)\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nUsing this `meta/requirements.yml`\r\n```bash\r\n---\r\n\r\n# insert third party deps here. download with:\r\n# ansible-galaxy install -r requirements.yml\r\n# https://docs.ansible.com/ansible/galaxy.html\r\n\r\n- name: singleplatform-eng.users\r\n version: v1.2.6\r\n\r\n- name: weareinteractive.sudo\r\n version: 1.14.1\r\n\r\n- name: geerlingguy.fluentd\r\n version: 1.1.0\r\n```\r\n\r\nNote that `meta/main.yml` does include `galaxy_info`, but does not include as dependencies, the roles listed in requirements.yml. This is purposeful because I'm choosing `meta/requirements.yml` instead of `meta/main.yml` because I prefer the workflow and do not want the roles running first, as they do when in `meta/main.yml`. I'm following the previously linked user-guide on this topic.\r\n\r\nTo reproduce, I simply run ansible-lint directly or via molecule.\r\n\r\n##### Desired Behaviour\r\n\r\nI would expect ansible-lint not to flag these as issues... 
unless I'm completely misunderstanding the finding and misreading the documentation associated with this rule.\r\n\r\n##### Actual Behaviour\r\n\r\n\r\nBelow are the ansible-lint results when run on my role.\r\n```bash\r\n$ ansible-lint\r\n[701] No 'galaxy_info' found\r\nmeta/requirements.yml:7\r\n{'meta/main.yml': {'name': 'singleplatform-eng.users', 'version': 'v1.2.6', '__line__': 7, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml', 'skipped_rules': []}}\r\n\r\n[701] No 'galaxy_info' found\r\nmeta/requirements.yml:10\r\n{'meta/main.yml': {'name': 'weareinteractive.sudo', 'version': '1.14.1', '__line__': 10, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}\r\n\r\n[701] No 'galaxy_info' found\r\nmeta/requirements.yml:13\r\n{'meta/main.yml': {'name': 'geerlingguy.fluentd', 'version': '1.1.0', '__line__': 13, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}\r\n```\n", "before_files": [{"content": "# Copyright (c) 2016, Will Thames and contributors\n# Copyright (c) 2018, Ansible Project\n\nfrom ansiblelint.rules import AnsibleLintRule\n\nMETA_STR_INFO = (\n 'author',\n 'description'\n)\nMETA_INFO = tuple(list(META_STR_INFO) + [\n 'license',\n 'min_ansible_version',\n 'platforms',\n])\n\n\ndef _platform_info_errors_itr(platforms):\n if not isinstance(platforms, list):\n yield 'Platforms should be a list of dictionaries'\n return\n\n for platform in platforms:\n if not isinstance(platform, dict):\n yield 'Platforms should be a list of dictionaries'\n elif 'name' not in platform:\n yield 'Platform should contain name'\n\n\ndef _galaxy_info_errors_itr(galaxy_info,\n info_list=META_INFO,\n str_info_list=META_STR_INFO):\n for info in info_list:\n ginfo = galaxy_info.get(info, False)\n if ginfo:\n if info in str_info_list and not isinstance(ginfo, str):\n yield '{info} should be a string'.format(info=info)\n elif info == 'platforms':\n for err in _platform_info_errors_itr(ginfo):\n yield err\n else:\n yield 'Role info should contain {info}'.format(info=info)\n\n\nclass MetaMainHasInfoRule(AnsibleLintRule):\n id = '701'\n shortdesc = 'meta/main.yml should contain relevant info'\n str_info = META_STR_INFO\n info = META_INFO\n description = (\n 'meta/main.yml should contain: ``{}``'.format(', '.join(info))\n )\n severity = 'HIGH'\n tags = ['metadata']\n version_added = 'v4.0.0'\n\n def matchplay(self, file, data):\n if file['type'] != 'meta':\n return False\n\n meta = {'meta/main.yml': data}\n galaxy_info = data.get('galaxy_info', False)\n if galaxy_info:\n return [(meta, err) for err\n in _galaxy_info_errors_itr(galaxy_info)]\n\n return [(meta, \"No 'galaxy_info' found\")]\n", "path": "lib/ansiblelint/rules/MetaMainHasInfoRule.py"}]} | 1,859 | 169 |
gh_patches_debug_32180 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3459 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Demo mode results in 5XX
## Description
<!-- A clear and concise description of what the bug is. -->
Mathesar is broken (as of 0.1.4) for Demo Mode. It doesn't load, and just says "Server Error (500)" instead.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Demo mode should work.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
Change the `.env` file according to the comment to use the demo mode settings, then try to build and start Mathesar (the dev environment is fine).
</issue>
<code>
[start of mathesar/install.py]
1 """
2 This script installs functions and types for Mathesar onto the configured DB.
3 """
4 import getopt
5 import os
6 import sys
7
8 import django
9 from django.core import management
10 from decouple import config as decouple_config
11 from django.conf import settings
12 from django.db.utils import IntegrityError
13 from sqlalchemy.exc import OperationalError
14 from db import install
15
16
17 def main(skip_static_collection=False):
18 # skip_confirm is temporarily enabled by default as we don't have any use
19 # for interactive prompts with docker only deployments
20 skip_confirm = True
21 (opts, _) = getopt.getopt(sys.argv[1:], ":s", ["skip-confirm"])
22 for (opt, value) in opts:
23 if (opt == "-s") or (opt == "--skip-confirm"):
24 skip_confirm = True
25 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")
26 django.setup()
27 management.call_command('migrate')
28 debug_mode = decouple_config('DEBUG', default=False, cast=bool)
29 #
30 if not debug_mode and not skip_static_collection:
31 management.call_command('collectstatic', '--noinput', '--clear')
32 print("------------Setting up User Databases------------")
33 django_db_key = decouple_config('DJANGO_DATABASE_KEY', default="default")
34 user_databases = [key for key in settings.DATABASES if key != django_db_key]
35 for database_key in user_databases:
36 try:
37 install_on_db_with_key(database_key, skip_confirm)
38 except IntegrityError:
39 continue
40
41
42 def install_on_db_with_key(database_key, skip_confirm):
43 from mathesar.models.base import Database
44 db_model = Database.create_from_settings_key(database_key)
45 db_model.save()
46 try:
47 install.install_mathesar(
48 database_name=db_model.db_name,
49 hostname=db_model.host,
50 username=db_model.username,
51 password=db_model.password,
52 port=db_model.port,
53 skip_confirm=skip_confirm
54 )
55 except OperationalError as e:
56 db_model.delete()
57 raise e
58
59
60 if __name__ == "__main__":
61 main()
62
[end of mathesar/install.py]
[start of demo/settings.py]
1 from config.settings.production import * # noqa
2 from config.settings import * # noqa
3 from decouple import config as decouple_config
4
5 INSTALLED_APPS += [ # noqa
6 "demo"
7 ]
8
9 MIDDLEWARE += [ # noqa
10 "demo.middleware.LiveDemoModeMiddleware",
11 ]
12
13 MATHESAR_LIVE_DEMO = True
14 MATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)
15 MATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)
16
17 MATHESAR_DEMO_TEMPLATE = 'mathesar_demo_template'
18 MATHESAR_DEMO_ARXIV_LOG_PATH = decouple_config(
19 'MATHESAR_DEMO_ARXIV_LOG_PATH',
20 default='/var/lib/mathesar/demo/arxiv_db_schema_log'
21 )
22 BASE_TEMPLATE_ADDITIONAL_SCRIPT_TEMPLATES += ['demo/analytics.html'] # noqa
23 ROOT_URLCONF = "demo.urls"
24
[end of demo/settings.py]
[start of demo/management/commands/setup_demo_template_db.py]
1 from sqlalchemy import text
2 from sqlalchemy.exc import OperationalError
3 from django.conf import settings
4 from django.core.management import BaseCommand
5
6 from db.install import install_mathesar
7 from demo.install.datasets import load_datasets
8 from mathesar.database.base import create_mathesar_engine
9 from mathesar.models.base import Database
10
11
12 class Command(BaseCommand):
13 help = 'Initialize the demo template database.'
14
15 def handle(self, *args, **options):
16 _setup_demo_template_db()
17
18
19 def _setup_demo_template_db():
20 print("Initializing demo template database...")
21
22 template_db_name = settings.MATHESAR_DEMO_TEMPLATE
23 django_model = Database.current_objects.get(name=settings.DATABASES["default"]["NAME"])
24 root_engine = create_mathesar_engine(django_model)
25 with root_engine.connect() as conn:
26 conn.execution_options(isolation_level="AUTOCOMMIT")
27 conn.execute(text(f"DROP DATABASE IF EXISTS {template_db_name} WITH (FORCE)"))
28 root_engine.dispose()
29 db_model, _ = Database.current_objects.get_or_create(
30 name=template_db_name,
31 defaults={
32 'db_name': template_db_name,
33 'username': django_model.username,
34 'password': django_model.password,
35 'host': django_model.host,
36 'port': django_model.port
37 }
38 )
39 try:
40 install_mathesar(
41 database_name=template_db_name,
42 hostname=db_model.host,
43 username=db_model.username,
44 password=db_model.password,
45 port=db_model.port,
46 skip_confirm=True
47 )
48 except OperationalError as e:
49 db_model.delete()
50 raise e
51 user_engine = create_mathesar_engine(db_model)
52 load_datasets(user_engine)
53 user_engine.dispose()
54
[end of demo/management/commands/setup_demo_template_db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/demo/management/commands/setup_demo_template_db.py b/demo/management/commands/setup_demo_template_db.py
--- a/demo/management/commands/setup_demo_template_db.py
+++ b/demo/management/commands/setup_demo_template_db.py
@@ -20,7 +20,7 @@
print("Initializing demo template database...")
template_db_name = settings.MATHESAR_DEMO_TEMPLATE
- django_model = Database.current_objects.get(name=settings.DATABASES["default"]["NAME"])
+ django_model = Database.create_from_settings_key("default")
root_engine = create_mathesar_engine(django_model)
with root_engine.connect() as conn:
conn.execution_options(isolation_level="AUTOCOMMIT")
diff --git a/demo/settings.py b/demo/settings.py
--- a/demo/settings.py
+++ b/demo/settings.py
@@ -1,5 +1,4 @@
-from config.settings.production import * # noqa
-from config.settings import * # noqa
+from config.settings.common_settings import * # noqa
from decouple import config as decouple_config
INSTALLED_APPS += [ # noqa
@@ -10,6 +9,7 @@
"demo.middleware.LiveDemoModeMiddleware",
]
+MATHESAR_MODE = 'PRODUCTION'
MATHESAR_LIVE_DEMO = True
MATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)
MATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)
diff --git a/mathesar/install.py b/mathesar/install.py
--- a/mathesar/install.py
+++ b/mathesar/install.py
@@ -37,6 +37,14 @@
install_on_db_with_key(database_key, skip_confirm)
except IntegrityError:
continue
+ if getattr(settings, 'MATHESAR_LIVE_DEMO', False) is True:
+ management.call_command(
+ 'createsuperuser',
+ '--no-input',
+ '--username', 'demo',
+ '--email', '[email protected]',
+ )
+ management.call_command('setup_demo_template_db')
def install_on_db_with_key(database_key, skip_confirm):
| {"golden_diff": "diff --git a/demo/management/commands/setup_demo_template_db.py b/demo/management/commands/setup_demo_template_db.py\n--- a/demo/management/commands/setup_demo_template_db.py\n+++ b/demo/management/commands/setup_demo_template_db.py\n@@ -20,7 +20,7 @@\n print(\"Initializing demo template database...\")\n \n template_db_name = settings.MATHESAR_DEMO_TEMPLATE\n- django_model = Database.current_objects.get(name=settings.DATABASES[\"default\"][\"NAME\"])\n+ django_model = Database.create_from_settings_key(\"default\")\n root_engine = create_mathesar_engine(django_model)\n with root_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\ndiff --git a/demo/settings.py b/demo/settings.py\n--- a/demo/settings.py\n+++ b/demo/settings.py\n@@ -1,5 +1,4 @@\n-from config.settings.production import * # noqa\n-from config.settings import * # noqa\n+from config.settings.common_settings import * # noqa\n from decouple import config as decouple_config\n \n INSTALLED_APPS += [ # noqa\n@@ -10,6 +9,7 @@\n \"demo.middleware.LiveDemoModeMiddleware\",\n ]\n \n+MATHESAR_MODE = 'PRODUCTION'\n MATHESAR_LIVE_DEMO = True\n MATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)\n MATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)\ndiff --git a/mathesar/install.py b/mathesar/install.py\n--- a/mathesar/install.py\n+++ b/mathesar/install.py\n@@ -37,6 +37,14 @@\n install_on_db_with_key(database_key, skip_confirm)\n except IntegrityError:\n continue\n+ if getattr(settings, 'MATHESAR_LIVE_DEMO', False) is True:\n+ management.call_command(\n+ 'createsuperuser',\n+ '--no-input',\n+ '--username', 'demo',\n+ '--email', '[email protected]',\n+ )\n+ management.call_command('setup_demo_template_db')\n \n \n def install_on_db_with_key(database_key, skip_confirm):\n", "issue": "Demo mode results in 5XX\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nMathesar is broken (as of 0.1.4) for Demo Mode. It doesn't load, and just says \"Server Error (500)\" instead.\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nDemo mode should work.\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. 
-->\r\n\r\nChange the `.env` file according to the comment to use the demo mode settings, try to build and start mathesar (dev environment is fine)\n", "before_files": [{"content": "\"\"\"\nThis script installs functions and types for Mathesar onto the configured DB.\n\"\"\"\nimport getopt\nimport os\nimport sys\n\nimport django\nfrom django.core import management\nfrom decouple import config as decouple_config\nfrom django.conf import settings\nfrom django.db.utils import IntegrityError\nfrom sqlalchemy.exc import OperationalError\nfrom db import install\n\n\ndef main(skip_static_collection=False):\n # skip_confirm is temporarily enabled by default as we don't have any use\n # for interactive prompts with docker only deployments\n skip_confirm = True\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"config.settings.production\")\n django.setup()\n management.call_command('migrate')\n debug_mode = decouple_config('DEBUG', default=False, cast=bool)\n #\n if not debug_mode and not skip_static_collection:\n management.call_command('collectstatic', '--noinput', '--clear')\n print(\"------------Setting up User Databases------------\")\n django_db_key = decouple_config('DJANGO_DATABASE_KEY', default=\"default\")\n user_databases = [key for key in settings.DATABASES if key != django_db_key]\n for database_key in user_databases:\n try:\n install_on_db_with_key(database_key, skip_confirm)\n except IntegrityError:\n continue\n\n\ndef install_on_db_with_key(database_key, skip_confirm):\n from mathesar.models.base import Database\n db_model = Database.create_from_settings_key(database_key)\n db_model.save()\n try:\n install.install_mathesar(\n database_name=db_model.db_name,\n hostname=db_model.host,\n username=db_model.username,\n password=db_model.password,\n port=db_model.port,\n skip_confirm=skip_confirm\n )\n except OperationalError as e:\n db_model.delete()\n raise e\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "mathesar/install.py"}, {"content": "from config.settings.production import * # noqa\nfrom config.settings import * # noqa\nfrom decouple import config as decouple_config\n\nINSTALLED_APPS += [ # noqa\n \"demo\"\n]\n\nMIDDLEWARE += [ # noqa\n \"demo.middleware.LiveDemoModeMiddleware\",\n]\n\nMATHESAR_LIVE_DEMO = True\nMATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)\nMATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)\n\nMATHESAR_DEMO_TEMPLATE = 'mathesar_demo_template'\nMATHESAR_DEMO_ARXIV_LOG_PATH = decouple_config(\n 'MATHESAR_DEMO_ARXIV_LOG_PATH',\n default='/var/lib/mathesar/demo/arxiv_db_schema_log'\n)\nBASE_TEMPLATE_ADDITIONAL_SCRIPT_TEMPLATES += ['demo/analytics.html'] # noqa\nROOT_URLCONF = \"demo.urls\"\n", "path": "demo/settings.py"}, {"content": "from sqlalchemy import text\nfrom sqlalchemy.exc import OperationalError\nfrom django.conf import settings\nfrom django.core.management import BaseCommand\n\nfrom db.install import install_mathesar\nfrom demo.install.datasets import load_datasets\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.models.base import Database\n\n\nclass Command(BaseCommand):\n help = 'Initialize the demo template database.'\n\n def handle(self, *args, **options):\n _setup_demo_template_db()\n\n\ndef _setup_demo_template_db():\n print(\"Initializing demo template 
database...\")\n\n template_db_name = settings.MATHESAR_DEMO_TEMPLATE\n django_model = Database.current_objects.get(name=settings.DATABASES[\"default\"][\"NAME\"])\n root_engine = create_mathesar_engine(django_model)\n with root_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE IF EXISTS {template_db_name} WITH (FORCE)\"))\n root_engine.dispose()\n db_model, _ = Database.current_objects.get_or_create(\n name=template_db_name,\n defaults={\n 'db_name': template_db_name,\n 'username': django_model.username,\n 'password': django_model.password,\n 'host': django_model.host,\n 'port': django_model.port\n }\n )\n try:\n install_mathesar(\n database_name=template_db_name,\n hostname=db_model.host,\n username=db_model.username,\n password=db_model.password,\n port=db_model.port,\n skip_confirm=True\n )\n except OperationalError as e:\n db_model.delete()\n raise e\n user_engine = create_mathesar_engine(db_model)\n load_datasets(user_engine)\n user_engine.dispose()\n", "path": "demo/management/commands/setup_demo_template_db.py"}]} | 1,994 | 481 |
gh_patches_debug_35751 | rasdani/github-patches | git_diff | litestar-org__litestar-2810 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Exception in event listener breaks listener
### Description
In a separate production code base, we encountered what I believe to be an issue/behavior that could be fixed/improved and did not find anything other references of it when searching the issues here.
Basically, if an exception occurs inside of an event listener you will then see
```log
INFO: ASGI 'lifespan' protocol appears unsupported.
```
and the listener stream is closed which then prevents any additional events from being picked up. Any time `request.app.emit` is then called you will get `ClosedResourceError`.
I don't think a single exception inside an event execution should prevent all future events from getting picked up or at least the error should be a little more obvious as to what is going on.
I've created an example repo that recreates the issue consistently.
### URL to code causing the issue
https://github.com/bnjmn/litestar-emit-fail-example
### MCVE
See [full example here](https://github.com/bnjmn/litestar-emit-fail-example/tree/main)
```python
from litestar import Litestar, get, Request
from litestar.events import listener
import logging
logger = logging.getLogger(__name__)
@listener("raise_exception")
async def raise_exception_if_odd(value) -> None:
"""Raise an exception to test Emit error."""
if value is not None and value % 2 != 0:
raise ValueError(f"{value} is odd")
else:
return "The value is even. No exception raised."
@get("/")
async def index() -> str:
return "Hello, world!"
@get("/check-value/{value:int}")
async def check_value(request: Request, value: int) -> str:
try:
request.app.emit("raise_exception", value)
return f"Checked {value}: No exception raised."
except ValueError as e:
return str(e)
app = Litestar([index, check_value], listeners=[raise_exception_if_odd])
```
### Steps to reproduce
1. Follow the steps outlined in the [README](https://github.com/bnjmn/litestar-emit-fail-example/blob/main/README.md#run-it)
```bash
pipenv install
litestar run -d
```
- Go to http://localhost:8000/check-value/2 to see a successful request
- Go to http://localhost:8000/check-value/3 to see an exception raised in the listener
- Attempt to go to http://localhost:8000/check-value/2 again, but you should get ClosedResourceError
### Litestar Version
Recreated on both 2.3.2 and 2.4.1
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [x] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/2809">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/2809/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/2809/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of litestar/events/emitter.py]
1 from __future__ import annotations
2
3 import math
4 import sys
5 from abc import ABC, abstractmethod
6 from collections import defaultdict
7 from contextlib import AsyncExitStack
8 from functools import partial
9 from typing import TYPE_CHECKING, Any, Sequence
10
11 if sys.version_info < (3, 9):
12 from typing import AsyncContextManager
13 else:
14 from contextlib import AbstractAsyncContextManager as AsyncContextManager
15
16 import anyio
17
18 from litestar.exceptions import ImproperlyConfiguredException
19
20 __all__ = ("BaseEventEmitterBackend", "SimpleEventEmitter")
21
22
23 if TYPE_CHECKING:
24 from types import TracebackType
25
26 from anyio.streams.memory import MemoryObjectReceiveStream, MemoryObjectSendStream
27
28 from litestar.events.listener import EventListener
29
30
31 class BaseEventEmitterBackend(AsyncContextManager["BaseEventEmitterBackend"], ABC):
32 """Abstract class used to define event emitter backends."""
33
34 __slots__ = ("listeners",)
35
36 listeners: defaultdict[str, set[EventListener]]
37
38 def __init__(self, listeners: Sequence[EventListener]) -> None:
39 """Create an event emitter instance.
40
41 Args:
42 listeners: A list of listeners.
43 """
44 self.listeners = defaultdict(set)
45 for listener in listeners:
46 for event_id in listener.event_ids:
47 self.listeners[event_id].add(listener)
48
49 @abstractmethod
50 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:
51 """Emit an event to all attached listeners.
52
53 Args:
54 event_id: The ID of the event to emit, e.g 'my_event'.
55 *args: args to pass to the listener(s).
56 **kwargs: kwargs to pass to the listener(s)
57
58 Returns:
59 None
60 """
61 raise NotImplementedError("not implemented")
62
63
64 class SimpleEventEmitter(BaseEventEmitterBackend):
65 """Event emitter the works only in the current process"""
66
67 __slots__ = ("_queue", "_exit_stack", "_receive_stream", "_send_stream")
68
69 def __init__(self, listeners: Sequence[EventListener]) -> None:
70 """Create an event emitter instance.
71
72 Args:
73 listeners: A list of listeners.
74 """
75 super().__init__(listeners=listeners)
76 self._receive_stream: MemoryObjectReceiveStream | None = None
77 self._send_stream: MemoryObjectSendStream | None = None
78 self._exit_stack: AsyncExitStack | None = None
79
80 @staticmethod
81 async def _worker(receive_stream: MemoryObjectReceiveStream) -> None:
82 """Run items from ``receive_stream`` in a task group.
83
84 Returns:
85 None
86 """
87 async with receive_stream, anyio.create_task_group() as task_group:
88 async for item in receive_stream:
89 fn, args, kwargs = item
90 if kwargs:
91 fn = partial(fn, **kwargs)
92 task_group.start_soon(fn, *args)
93
94 async def __aenter__(self) -> SimpleEventEmitter:
95 self._exit_stack = AsyncExitStack()
96 send_stream, receive_stream = anyio.create_memory_object_stream(math.inf) # type: ignore[var-annotated]
97 self._send_stream = send_stream
98 task_group = anyio.create_task_group()
99
100 await self._exit_stack.enter_async_context(task_group)
101 await self._exit_stack.enter_async_context(send_stream)
102 task_group.start_soon(self._worker, receive_stream)
103
104 return self
105
106 async def __aexit__(
107 self,
108 exc_type: type[BaseException] | None,
109 exc_val: BaseException | None,
110 exc_tb: TracebackType | None,
111 ) -> None:
112 if self._exit_stack:
113 await self._exit_stack.__aexit__(exc_type, exc_val, exc_tb)
114
115 self._exit_stack = None
116 self._send_stream = None
117
118 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:
119 """Emit an event to all attached listeners.
120
121 Args:
122 event_id: The ID of the event to emit, e.g 'my_event'.
123 *args: args to pass to the listener(s).
124 **kwargs: kwargs to pass to the listener(s)
125
126 Returns:
127 None
128 """
129 if not (self._send_stream and self._exit_stack):
130 raise RuntimeError("Emitter not initialized")
131
132 if listeners := self.listeners.get(event_id):
133 for listener in listeners:
134 self._send_stream.send_nowait((listener.fn, args, kwargs))
135 return
136 raise ImproperlyConfiguredException(f"no event listeners are registered for event ID: {event_id}")
137
[end of litestar/events/emitter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/events/emitter.py b/litestar/events/emitter.py
--- a/litestar/events/emitter.py
+++ b/litestar/events/emitter.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import logging
import math
import sys
from abc import ABC, abstractmethod
@@ -17,9 +18,6 @@
from litestar.exceptions import ImproperlyConfiguredException
-__all__ = ("BaseEventEmitterBackend", "SimpleEventEmitter")
-
-
if TYPE_CHECKING:
from types import TracebackType
@@ -27,6 +25,10 @@
from litestar.events.listener import EventListener
+__all__ = ("BaseEventEmitterBackend", "SimpleEventEmitter")
+
+logger = logging.getLogger(__name__)
+
class BaseEventEmitterBackend(AsyncContextManager["BaseEventEmitterBackend"], ABC):
"""Abstract class used to define event emitter backends."""
@@ -77,19 +79,25 @@
self._send_stream: MemoryObjectSendStream | None = None
self._exit_stack: AsyncExitStack | None = None
- @staticmethod
- async def _worker(receive_stream: MemoryObjectReceiveStream) -> None:
+ async def _worker(self, receive_stream: MemoryObjectReceiveStream) -> None:
"""Run items from ``receive_stream`` in a task group.
Returns:
None
"""
- async with receive_stream, anyio.create_task_group() as task_group:
+ async with receive_stream:
async for item in receive_stream:
- fn, args, kwargs = item
+ await self._run_listener_in_task_group(*item)
+
+ @staticmethod
+ async def _run_listener_in_task_group(fn: Any, args: tuple[Any], kwargs: dict[str, Any]) -> None:
+ try:
+ async with anyio.create_task_group() as task_group:
if kwargs:
fn = partial(fn, **kwargs)
task_group.start_soon(fn, *args)
+ except Exception as exc:
+ logger.exception("Error in event listener: %s", exc)
async def __aenter__(self) -> SimpleEventEmitter:
self._exit_stack = AsyncExitStack()
| {"golden_diff": "diff --git a/litestar/events/emitter.py b/litestar/events/emitter.py\n--- a/litestar/events/emitter.py\n+++ b/litestar/events/emitter.py\n@@ -1,5 +1,6 @@\n from __future__ import annotations\n \n+import logging\n import math\n import sys\n from abc import ABC, abstractmethod\n@@ -17,9 +18,6 @@\n \n from litestar.exceptions import ImproperlyConfiguredException\n \n-__all__ = (\"BaseEventEmitterBackend\", \"SimpleEventEmitter\")\n-\n-\n if TYPE_CHECKING:\n from types import TracebackType\n \n@@ -27,6 +25,10 @@\n \n from litestar.events.listener import EventListener\n \n+__all__ = (\"BaseEventEmitterBackend\", \"SimpleEventEmitter\")\n+\n+logger = logging.getLogger(__name__)\n+\n \n class BaseEventEmitterBackend(AsyncContextManager[\"BaseEventEmitterBackend\"], ABC):\n \"\"\"Abstract class used to define event emitter backends.\"\"\"\n@@ -77,19 +79,25 @@\n self._send_stream: MemoryObjectSendStream | None = None\n self._exit_stack: AsyncExitStack | None = None\n \n- @staticmethod\n- async def _worker(receive_stream: MemoryObjectReceiveStream) -> None:\n+ async def _worker(self, receive_stream: MemoryObjectReceiveStream) -> None:\n \"\"\"Run items from ``receive_stream`` in a task group.\n \n Returns:\n None\n \"\"\"\n- async with receive_stream, anyio.create_task_group() as task_group:\n+ async with receive_stream:\n async for item in receive_stream:\n- fn, args, kwargs = item\n+ await self._run_listener_in_task_group(*item)\n+\n+ @staticmethod\n+ async def _run_listener_in_task_group(fn: Any, args: tuple[Any], kwargs: dict[str, Any]) -> None:\n+ try:\n+ async with anyio.create_task_group() as task_group:\n if kwargs:\n fn = partial(fn, **kwargs)\n task_group.start_soon(fn, *args)\n+ except Exception as exc:\n+ logger.exception(\"Error in event listener: %s\", exc)\n \n async def __aenter__(self) -> SimpleEventEmitter:\n self._exit_stack = AsyncExitStack()\n", "issue": "Bug: Exception in event listener breaks listener\n### Description\r\n\r\nIn a separate production code base, we encountered what I believe to be an issue/behavior that could be fixed/improved and did not find anything other references of it when searching the issues here. \r\n\r\nBasically, if an exception occurs inside of an event listener you will then see\r\n```log\r\nINFO: ASGI 'lifespan' protocol appears unsupported.\r\n```\r\n and the listener stream is closed which then prevents any additional events from being picked up. Any time `request.app.emit` is then called you will get `ClosedResourceError`.\r\n\r\nI don't think a single exception inside an event execution should prevent all future events from getting picked up or at least the error should be a little more obvious as to what is going on.\r\n\r\nI've created an example repo that recreates the issue consistently.\r\n\r\n### URL to code causing the issue\r\n\r\nhttps://github.com/bnjmn/litestar-emit-fail-example\r\n\r\n### MCVE\r\n\r\nSee [full example here](https://github.com/bnjmn/litestar-emit-fail-example/tree/main)\r\n```python\r\nfrom litestar import Litestar, get, Request\r\nfrom litestar.events import listener\r\n\r\nimport logging\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\n@listener(\"raise_exception\")\r\nasync def raise_exception_if_odd(value) -> None:\r\n \"\"\"Raise an exception to test Emit error.\"\"\"\r\n if value is not None and value % 2 != 0:\r\n raise ValueError(f\"{value} is odd\")\r\n else:\r\n return \"The value is even. 
No exception raised.\"\r\n\r\n@get(\"/\")\r\nasync def index() -> str:\r\n return \"Hello, world!\"\r\n\r\n@get(\"/check-value/{value:int}\")\r\nasync def check_value(request: Request, value: int) -> str:\r\n try:\r\n request.app.emit(\"raise_exception\", value)\r\n return f\"Checked {value}: No exception raised.\"\r\n except ValueError as e:\r\n return str(e)\r\n\r\napp = Litestar([index, check_value], listeners=[raise_exception_if_odd])\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. Follow the steps outlined in the [README](https://github.com/bnjmn/litestar-emit-fail-example/blob/main/README.md#run-it)\r\n```bash\r\npipenv install\r\nlitestar run -d\r\n```\r\n- Go to http://localhost:8000/check-value/2 to see a successful request\r\n- Go to http://localhost:8000/check-value/3 to see an exception raised in the listener\r\n- Attempt to go to http://localhost:8000/check-value/2 again, but you should get ClosedResourceError\r\n\r\n### Litestar Version\r\n\r\nRecreated on both 2.3.2 and 2.4.1\r\n\r\n### Platform\r\n\r\n- [ ] Linux\r\n- [X] Mac\r\n- [ ] Windows\r\n- [x] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n---\r\n> [!NOTE] \r\n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \r\n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\r\n>\r\n> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2809\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2809/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2809/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport math\nimport sys\nfrom abc import ABC, abstractmethod\nfrom collections import defaultdict\nfrom contextlib import AsyncExitStack\nfrom functools import partial\nfrom typing import TYPE_CHECKING, Any, Sequence\n\nif sys.version_info < (3, 9):\n from typing import AsyncContextManager\nelse:\n from contextlib import AbstractAsyncContextManager as AsyncContextManager\n\nimport anyio\n\nfrom litestar.exceptions import ImproperlyConfiguredException\n\n__all__ = (\"BaseEventEmitterBackend\", \"SimpleEventEmitter\")\n\n\nif TYPE_CHECKING:\n from types import TracebackType\n\n from anyio.streams.memory import MemoryObjectReceiveStream, MemoryObjectSendStream\n\n from litestar.events.listener import EventListener\n\n\nclass BaseEventEmitterBackend(AsyncContextManager[\"BaseEventEmitterBackend\"], ABC):\n \"\"\"Abstract class used to define event emitter backends.\"\"\"\n\n __slots__ = (\"listeners\",)\n\n listeners: defaultdict[str, set[EventListener]]\n\n def __init__(self, listeners: Sequence[EventListener]) -> None:\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n self.listeners = defaultdict(set)\n for listener in listeners:\n for event_id in listener.event_ids:\n 
self.listeners[event_id].add(listener)\n\n @abstractmethod\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n\nclass SimpleEventEmitter(BaseEventEmitterBackend):\n \"\"\"Event emitter the works only in the current process\"\"\"\n\n __slots__ = (\"_queue\", \"_exit_stack\", \"_receive_stream\", \"_send_stream\")\n\n def __init__(self, listeners: Sequence[EventListener]) -> None:\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n super().__init__(listeners=listeners)\n self._receive_stream: MemoryObjectReceiveStream | None = None\n self._send_stream: MemoryObjectSendStream | None = None\n self._exit_stack: AsyncExitStack | None = None\n\n @staticmethod\n async def _worker(receive_stream: MemoryObjectReceiveStream) -> None:\n \"\"\"Run items from ``receive_stream`` in a task group.\n\n Returns:\n None\n \"\"\"\n async with receive_stream, anyio.create_task_group() as task_group:\n async for item in receive_stream:\n fn, args, kwargs = item\n if kwargs:\n fn = partial(fn, **kwargs)\n task_group.start_soon(fn, *args)\n\n async def __aenter__(self) -> SimpleEventEmitter:\n self._exit_stack = AsyncExitStack()\n send_stream, receive_stream = anyio.create_memory_object_stream(math.inf) # type: ignore[var-annotated]\n self._send_stream = send_stream\n task_group = anyio.create_task_group()\n\n await self._exit_stack.enter_async_context(task_group)\n await self._exit_stack.enter_async_context(send_stream)\n task_group.start_soon(self._worker, receive_stream)\n\n return self\n\n async def __aexit__(\n self,\n exc_type: type[BaseException] | None,\n exc_val: BaseException | None,\n exc_tb: TracebackType | None,\n ) -> None:\n if self._exit_stack:\n await self._exit_stack.__aexit__(exc_type, exc_val, exc_tb)\n\n self._exit_stack = None\n self._send_stream = None\n\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n if not (self._send_stream and self._exit_stack):\n raise RuntimeError(\"Emitter not initialized\")\n\n if listeners := self.listeners.get(event_id):\n for listener in listeners:\n self._send_stream.send_nowait((listener.fn, args, kwargs))\n return\n raise ImproperlyConfiguredException(f\"no event listeners are registered for event ID: {event_id}\")\n", "path": "litestar/events/emitter.py"}]} | 2,741 | 498 |
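The core of the fix above is to run each listener invocation inside its own task group and catch its exceptions, so a single failing listener can no longer tear down the shared receive stream. Below is a minimal standalone sketch of that isolation pattern using anyio; the function names are illustrative, not Litestar's public API.

```python
import logging
from functools import partial

import anyio

logger = logging.getLogger(__name__)


async def run_listener(fn, args, kwargs):
    # Give each listener its own task group so an exception is contained
    # and logged instead of cancelling the worker loop.
    try:
        async with anyio.create_task_group() as task_group:
            if kwargs:
                fn = partial(fn, **kwargs)
            task_group.start_soon(fn, *args)
    except Exception as exc:
        logger.exception("Error in event listener: %s", exc)


async def worker(receive_stream):
    # The worker keeps consuming events even when individual listeners fail.
    async with receive_stream:
        async for fn, args, kwargs in receive_stream:
            await run_listener(fn, args, kwargs)
```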
gh_patches_debug_38209 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-445 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve documentation of cms rules
Explain the rules module and how it interacts with our permission management. Add docstrings of the following format:
```
"""
[Summary]
:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
```
Improve documentation of cms rules
Explain the rules module and how it interacts with our permission management. Add docstrings of the following format:
```
"""
[Summary]
:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
```
</issue>
<code>
[start of src/cms/rules.py]
1 from rules import add_perm, predicate
2
3
4 # Predicates
5
6 @predicate
7 def is_page_editor(user, page):
8 if not page:
9 return False
10 return user in page.editors.all()
11
12 @predicate
13 def is_page_publisher(user, page):
14 if not page:
15 return False
16 return user in page.publishers.all()
17
18 @predicate
19 # pylint: disable=unused-argument
20 def can_edit_all_pages(user, page):
21 return user.has_perm('cms.edit_pages')
22
23 @predicate
24 # pylint: disable=unused-argument
25 def can_publish_all_pages(user, page):
26 return user.has_perm('cms.publish_pages')
27
28
29 # Permissions
30
31 add_perm('cms.edit_page', can_edit_all_pages | is_page_editor | can_publish_all_pages | is_page_publisher)
32 add_perm('cms.publish_page', can_publish_all_pages | is_page_publisher)
33
[end of src/cms/rules.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cms/rules.py b/src/cms/rules.py
--- a/src/cms/rules.py
+++ b/src/cms/rules.py
@@ -1,3 +1,22 @@
+"""
+We use `django-rules <https://pypi.org/project/rules/>`_ to add custom permissions for specific pages.
+
+For a given user and page, the following permissions are added:
+
+* ``cms.edit_page`` if one of the following predicates return true:
+
+ * :func:`~cms.rules.can_edit_all_pages`
+ * :func:`~cms.rules.is_page_editor`
+ * :func:`~cms.rules.can_publish_all_pages`
+ * :func:`~cms.rules.is_page_publisher`
+
+* ``cms.publish_page`` if one of the following predicates return true:
+
+ * :func:`~cms.rules.can_publish_all_pages`
+ * :func:`~cms.rules.is_page_publisher`
+
+See the project's `README <https://github.com/dfunckt/django-rules/blob/master/README.rst>`_ to learn more.
+"""
from rules import add_perm, predicate
@@ -5,12 +24,36 @@
@predicate
def is_page_editor(user, page):
+ """
+ This predicate checks whether the given user is one of the editors of the given page.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: The requested page
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` is an editor of ``page``
+ :rtype: bool
+ """
if not page:
return False
return user in page.editors.all()
@predicate
def is_page_publisher(user, page):
+ """
+ This predicate checks whether the given user is one of the publishers of the given page.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: The requested page
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` is a publisher of ``page``
+ :rtype: bool
+ """
if not page:
return False
return user in page.publishers.all()
@@ -18,11 +61,35 @@
@predicate
# pylint: disable=unused-argument
def can_edit_all_pages(user, page):
+ """
+ This predicate checks whether the given user can edit all pages.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: Unused page parameter (the function signature must match the other predicates)
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` can edit all pages
+ :rtype: bool
+ """
return user.has_perm('cms.edit_pages')
@predicate
# pylint: disable=unused-argument
def can_publish_all_pages(user, page):
+ """
+ This predicate checks whether the given user can publish all pages.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: Unused page parameter (the function signature must match the other predicates)
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` can publish all pages
+ :rtype: bool
+ """
return user.has_perm('cms.publish_pages')
| {"golden_diff": "diff --git a/src/cms/rules.py b/src/cms/rules.py\n--- a/src/cms/rules.py\n+++ b/src/cms/rules.py\n@@ -1,3 +1,22 @@\n+\"\"\"\n+We use `django-rules <https://pypi.org/project/rules/>`_ to add custom permissions for specific pages.\n+\n+For a given user and page, the following permissions are added:\n+\n+* ``cms.edit_page`` if one of the following predicates return true:\n+\n+ * :func:`~cms.rules.can_edit_all_pages`\n+ * :func:`~cms.rules.is_page_editor`\n+ * :func:`~cms.rules.can_publish_all_pages`\n+ * :func:`~cms.rules.is_page_publisher`\n+\n+* ``cms.publish_page`` if one of the following predicates return true:\n+\n+ * :func:`~cms.rules.can_publish_all_pages`\n+ * :func:`~cms.rules.is_page_publisher`\n+\n+See the project's `README <https://github.com/dfunckt/django-rules/blob/master/README.rst>`_ to learn more.\n+\"\"\"\n from rules import add_perm, predicate\n \n \n@@ -5,12 +24,36 @@\n \n @predicate\n def is_page_editor(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user is one of the editors of the given page.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: The requested page\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` is an editor of ``page``\n+ :rtype: bool\n+ \"\"\"\n if not page:\n return False\n return user in page.editors.all()\n \n @predicate\n def is_page_publisher(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user is one of the publishers of the given page.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: The requested page\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` is a publisher of ``page``\n+ :rtype: bool\n+ \"\"\"\n if not page:\n return False\n return user in page.publishers.all()\n@@ -18,11 +61,35 @@\n @predicate\n # pylint: disable=unused-argument\n def can_edit_all_pages(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user can edit all pages.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: Unused page parameter (the function signature must match the other predicates)\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` can edit all pages\n+ :rtype: bool\n+ \"\"\"\n return user.has_perm('cms.edit_pages')\n \n @predicate\n # pylint: disable=unused-argument\n def can_publish_all_pages(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user can publish all pages.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: Unused page parameter (the function signature must match the other predicates)\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` can publish all pages\n+ :rtype: bool\n+ \"\"\"\n return user.has_perm('cms.publish_pages')\n", "issue": "Improve documentation of cms rules\nExplain the rules module and how it interacts with our permission management. 
Add docstrings of the following format:\r\n```\r\n\"\"\"\r\n[Summary]\r\n\r\n:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]\r\n:type [ParamName]: [ParamType](, optional)\r\n...\r\n:raises [ErrorType]: [ErrorDescription]\r\n...\r\n:return: [ReturnDescription]\r\n:rtype: [ReturnType]\r\n\"\"\"\r\n```\nImprove documentation of cms rules\nExplain the rules module and how it interacts with our permission management. Add docstrings of the following format:\r\n```\r\n\"\"\"\r\n[Summary]\r\n\r\n:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]\r\n:type [ParamName]: [ParamType](, optional)\r\n...\r\n:raises [ErrorType]: [ErrorDescription]\r\n...\r\n:return: [ReturnDescription]\r\n:rtype: [ReturnType]\r\n\"\"\"\r\n```\n", "before_files": [{"content": "from rules import add_perm, predicate\n\n\n# Predicates\n\n@predicate\ndef is_page_editor(user, page):\n if not page:\n return False\n return user in page.editors.all()\n\n@predicate\ndef is_page_publisher(user, page):\n if not page:\n return False\n return user in page.publishers.all()\n\n@predicate\n# pylint: disable=unused-argument\ndef can_edit_all_pages(user, page):\n return user.has_perm('cms.edit_pages')\n\n@predicate\n# pylint: disable=unused-argument\ndef can_publish_all_pages(user, page):\n return user.has_perm('cms.publish_pages')\n\n\n# Permissions\n\nadd_perm('cms.edit_page', can_edit_all_pages | is_page_editor | can_publish_all_pages | is_page_publisher)\nadd_perm('cms.publish_page', can_publish_all_pages | is_page_publisher)\n", "path": "src/cms/rules.py"}]} | 955 | 807 |
gh_patches_debug_28434 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-2400 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NE failing since at least 2018-06-28
NE has been failing since 2018-06-28
Based on automated runs it appears that NE has not run successfully in 2 days (2018-06-28).
```
02:30:54 INFO pupa: save membership 5b4c2cb2-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "legislature"} as membership_5b4c30ea-7b6e-11e8-9e19-02e29baaa692.json
02:30:54 INFO pupa: save membership 5b4c2cb2-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "party", "name": "Nonpartisan"} as membership_5b4c3400-7b6e-11e8-9e19-02e29baaa692.json
02:30:54 INFO scrapelib: GET - http://news.legislature.ne.gov/dist44
02:30:55 INFO pupa: save person Dan Hughes as person_5bea39ca-7b6e-11e8-9e19-02e29baaa692.json
02:30:55 INFO pupa: save membership 5bea39ca-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "legislature"} as membership_5bea3dda-7b6e-11e8-9e19-02e29baaa692.json
02:30:55 INFO pupa: save membership 5bea39ca-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "party", "name": "Nonpartisan"} as membership_5bea4028-7b6e-11e8-9e19-02e29baaa692.json
02:30:55 INFO scrapelib: GET - http://news.legislature.ne.gov/dist45
02:30:56 INFO pupa: save person Sue Crawford as person_5c938c46-7b6e-11e8-9e19-02e29baaa692.json
02:30:56 INFO pupa: save membership 5c938c46-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "legislature"} as membership_5c93909c-7b6e-11e8-9e19-02e29baaa692.json
02:30:56 INFO pupa: save membership 5c938c46-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "party", "name": "Nonpartisan"} as membership_5c93939e-7b6e-11e8-9e19-02e29baaa692.json
02:30:56 INFO scrapelib: GET - http://news.legislature.ne.gov/dist46
02:30:57 INFO pupa: save person Adam Morfeld as person_5d0a16ea-7b6e-11e8-9e19-02e29baaa692.json
02:30:57 INFO pupa: save membership 5d0a16ea-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "legislature"} as membership_5d0a1ae6-7b6e-11e8-9e19-02e29baaa692.json
02:30:57 INFO pupa: save membership 5d0a16ea-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "party", "name": "Nonpartisan"} as membership_5d0a1d2a-7b6e-11e8-9e19-02e29baaa692.json
02:30:57 INFO scrapelib: GET - http://news.legislature.ne.gov/dist47
02:30:58 INFO pupa: save person Steve Erdman as person_5dc322ac-7b6e-11e8-9e19-02e29baaa692.json
02:30:58 INFO pupa: save membership 5dc322ac-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "legislature"} as membership_5dc32694-7b6e-11e8-9e19-02e29baaa692.json
02:30:58 INFO pupa: save membership 5dc322ac-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "party", "name": "Nonpartisan"} as membership_5dc32afe-7b6e-11e8-9e19-02e29baaa692.json
02:30:58 INFO scrapelib: GET - http://news.legislature.ne.gov/dist48
02:30:59 INFO pupa: save person John Stinner as person_5e89a800-7b6e-11e8-9e19-02e29baaa692.json
02:30:59 INFO pupa: save membership 5e89a800-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "legislature"} as membership_5e89ac4c-7b6e-11e8-9e19-02e29baaa692.json
02:30:59 INFO pupa: save membership 5e89a800-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "party", "name": "Nonpartisan"} as membership_5e89aed6-7b6e-11e8-9e19-02e29baaa692.json
02:30:59 INFO scrapelib: GET - http://news.legislature.ne.gov/dist49
02:31:00 INFO pupa: save person John Murante as person_5ed7988a-7b6e-11e8-9e19-02e29baaa692.json
02:31:00 INFO pupa: save membership 5ed7988a-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "legislature"} as membership_5ed79c90-7b6e-11e8-9e19-02e29baaa692.json
02:31:00 INFO pupa: save membership 5ed7988a-7b6e-11e8-9e19-02e29baaa692 membership in ~{"classification": "party", "name": "Nonpartisan"} as membership_5ed79ede-7b6e-11e8-9e19-02e29baaa692.json
02:31:00 INFO scrapelib: GET - http://www.nebraskalegislature.gov/committees/standing-committees.php
02:31:00 INFO scrapelib: GET - http://www.nebraskalegislature.gov/committees/select-committees.php
02:31:01 WARNING pupa: No members found in Select committees committee.
02:31:01 WARNING pupa: No members found in Special committees committee.
Traceback (most recent call last):
File "/opt/openstates/venv-pupa//bin/pupa", line 11, in <module>
load_entry_point('pupa', 'console_scripts', 'pupa')()
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py", line 68, in main
subcommands[args.subcommand].handle(args, other)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 120, in do_scrape
raise ScrapeError('no objects returned from {} scrape'.format(self.__class__.__name__))
pupa.exceptions.ScrapeError: no objects returned from NECommitteeScraper scrape
loaded Open States pupa settings...
ne (scrape, import)
bills: {}
votes: {}
people: {}
committees: {}
```
Visit http://bobsled.openstates.org for more info.
</issue>
<code>
[start of openstates/ne/committees.py]
1 import re
2
3 from pupa.scrape import Scraper, Organization
4
5 from openstates.utils import LXMLMixin
6
7
8 class NECommitteeScraper(Scraper, LXMLMixin):
9 def _scrape_standing_committees(self):
10 """Scrapes the Standing Committees page of the Nebraska state
11 legislature."""
12 main_url = 'http://www.nebraskalegislature.gov/committees/standing-committees.php'
13 page = self.lxmlize(main_url)
14
15 committee_nodes = self.get_nodes(
16 page,
17 '//div[@class="main-content"]/div[@class="panel panel-leg"][1]/'
18 'div[@class="list-group"]/a[@class="list-group-item"]')
19
20 for committee_node in committee_nodes:
21 committee_page_url = committee_node.attrib['href']
22 committee_page = self.lxmlize(committee_page_url)
23
24 name_text = self.get_node(
25 committee_page,
26 '//div[@class="container view-front"]/div[@class="row"]/'
27 'div[@class="col-sm-6 col-md-7"]/h1/text()[normalize-space()]')
28 name = name_text.split()[0:-1]
29
30 committee_name = ''
31 for x in range(len(name)):
32 committee_name += name[x] + ' '
33 committee_name = committee_name[0: -1]
34
35 org = Organization(name=committee_name, chamber='legislature',
36 classification='committee')
37
38 members = self.get_nodes(
39 committee_page,
40 '//div[@class="col-sm-4 col-md-3 ltc-col-right"][1]/'
41 'div[@class="block-box"][1]/ul[@class="list-unstyled '
42 'feature-content"]/li/a/text()[normalize-space()]')
43
44 for member in members:
45 member_name = re.sub(r'\Sen\.\s+', '', member)
46 member_name = re.sub(r', Chairperson', '', member_name).strip()
47 if 'Chairperson' in member:
48 member_role = 'Chairperson'
49 else:
50 member_role = 'member'
51 org.add_member(member_name, member_role)
52
53 org.add_source(main_url)
54 org.add_source(committee_page_url)
55
56 yield org
57
58 def _scrape_select_special_committees(self):
59 """Scrapes the Select and Special Committees page of the
60 Nebraska state legislature."""
61 main_url = 'http://www.nebraskalegislature.gov/committees/select-committees.php'
62 page = self.lxmlize(main_url)
63
64 committee_nodes = self.get_nodes(
65 page,
66 '//div[@class="main-content"]/div[@class="panel panel-leg"]')
67
68 for committee_node in committee_nodes:
69 committee_name = self.get_node(
70 committee_node,
71 './/h2[@class="panel-title"]/text()[normalize-space()]')
72
73 if committee_name is None:
74 committee_name = self.get_node(
75 committee_node,
76 './/h2[@class="panel-title"]/a/text()[normalize-space()]')
77
78 org = Organization(name=committee_name, chamber='legislature',
79 classification='committee')
80 org.add_source(main_url)
81
82 members = self.get_nodes(
83 committee_node,
84 './div[@class="list-group"]/a[@class="list-group-item"]/'
85 'text()[normalize-space()]')
86
87 for member in members:
88 member_name = re.sub(r'\Sen\.\s+', '', member)
89 member_name = re.sub(r', Chairperson', '', member_name).strip()
90 if 'Chairperson' in member:
91 member_role = 'Chairperson'
92 else:
93 member_role = 'member'
94 org.add_member(member_name, member_role)
95
96 if not org._related:
97 self.warning('No members found in {} committee.'.format(
98 org.name))
99 else:
100 yield org
101
102 def scrape(self):
103 yield from self._scrape_standing_committees()
104 yield from self._scrape_select_special_committees()
105
[end of openstates/ne/committees.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openstates/ne/committees.py b/openstates/ne/committees.py
--- a/openstates/ne/committees.py
+++ b/openstates/ne/committees.py
@@ -14,8 +14,9 @@
committee_nodes = self.get_nodes(
page,
- '//div[@class="main-content"]/div[@class="panel panel-leg"][1]/'
- 'div[@class="list-group"]/a[@class="list-group-item"]')
+ '//a[@class="accordion-switch"][contains(text(), "Standing Committees")]'
+ '/ancestor::div[@class="panel panel-leg"]//div[@class="list-group"]'
+ '/a[@class="list-group-item"]')
for committee_node in committee_nodes:
committee_page_url = committee_node.attrib['href']
@@ -63,7 +64,8 @@
committee_nodes = self.get_nodes(
page,
- '//div[@class="main-content"]/div[@class="panel panel-leg"]')
+ '//a[contains(@class, "accordion-switch")]'
+ '/ancestor::div[@class="panel panel-leg"]')
for committee_node in committee_nodes:
committee_name = self.get_node(
@@ -81,8 +83,8 @@
members = self.get_nodes(
committee_node,
- './div[@class="list-group"]/a[@class="list-group-item"]/'
- 'text()[normalize-space()]')
+ './/a[@class="list-group-item"]'
+ '/text()[normalize-space()]')
for member in members:
member_name = re.sub(r'\Sen\.\s+', '', member)
| {"golden_diff": "diff --git a/openstates/ne/committees.py b/openstates/ne/committees.py\n--- a/openstates/ne/committees.py\n+++ b/openstates/ne/committees.py\n@@ -14,8 +14,9 @@\n \n committee_nodes = self.get_nodes(\n page,\n- '//div[@class=\"main-content\"]/div[@class=\"panel panel-leg\"][1]/'\n- 'div[@class=\"list-group\"]/a[@class=\"list-group-item\"]')\n+ '//a[@class=\"accordion-switch\"][contains(text(), \"Standing Committees\")]'\n+ '/ancestor::div[@class=\"panel panel-leg\"]//div[@class=\"list-group\"]'\n+ '/a[@class=\"list-group-item\"]')\n \n for committee_node in committee_nodes:\n committee_page_url = committee_node.attrib['href']\n@@ -63,7 +64,8 @@\n \n committee_nodes = self.get_nodes(\n page,\n- '//div[@class=\"main-content\"]/div[@class=\"panel panel-leg\"]')\n+ '//a[contains(@class, \"accordion-switch\")]'\n+ '/ancestor::div[@class=\"panel panel-leg\"]')\n \n for committee_node in committee_nodes:\n committee_name = self.get_node(\n@@ -81,8 +83,8 @@\n \n members = self.get_nodes(\n committee_node,\n- './div[@class=\"list-group\"]/a[@class=\"list-group-item\"]/'\n- 'text()[normalize-space()]')\n+ './/a[@class=\"list-group-item\"]'\n+ '/text()[normalize-space()]')\n \n for member in members:\n member_name = re.sub(r'\\Sen\\.\\s+', '', member)\n", "issue": "NE failing since at least 2018-06-28\nNE has been failing since 2018-06-28\n\nBased on automated runs it appears that NE has not run successfully in 2 days (2018-06-28).\n\n\n```\n 02:30:54 INFO pupa: save membership 5b4c2cb2-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"legislature\"} as membership_5b4c30ea-7b6e-11e8-9e19-02e29baaa692.json\n02:30:54 INFO pupa: save membership 5b4c2cb2-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"party\", \"name\": \"Nonpartisan\"} as membership_5b4c3400-7b6e-11e8-9e19-02e29baaa692.json\n02:30:54 INFO scrapelib: GET - http://news.legislature.ne.gov/dist44\n02:30:55 INFO pupa: save person Dan Hughes as person_5bea39ca-7b6e-11e8-9e19-02e29baaa692.json\n02:30:55 INFO pupa: save membership 5bea39ca-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"legislature\"} as membership_5bea3dda-7b6e-11e8-9e19-02e29baaa692.json\n02:30:55 INFO pupa: save membership 5bea39ca-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"party\", \"name\": \"Nonpartisan\"} as membership_5bea4028-7b6e-11e8-9e19-02e29baaa692.json\n02:30:55 INFO scrapelib: GET - http://news.legislature.ne.gov/dist45\n02:30:56 INFO pupa: save person Sue Crawford as person_5c938c46-7b6e-11e8-9e19-02e29baaa692.json\n02:30:56 INFO pupa: save membership 5c938c46-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"legislature\"} as membership_5c93909c-7b6e-11e8-9e19-02e29baaa692.json\n02:30:56 INFO pupa: save membership 5c938c46-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"party\", \"name\": \"Nonpartisan\"} as membership_5c93939e-7b6e-11e8-9e19-02e29baaa692.json\n02:30:56 INFO scrapelib: GET - http://news.legislature.ne.gov/dist46\n02:30:57 INFO pupa: save person Adam Morfeld as person_5d0a16ea-7b6e-11e8-9e19-02e29baaa692.json\n02:30:57 INFO pupa: save membership 5d0a16ea-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"legislature\"} as membership_5d0a1ae6-7b6e-11e8-9e19-02e29baaa692.json\n02:30:57 INFO pupa: save membership 5d0a16ea-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"party\", \"name\": \"Nonpartisan\"} as membership_5d0a1d2a-7b6e-11e8-9e19-02e29baaa692.json\n02:30:57 INFO scrapelib: GET - 
http://news.legislature.ne.gov/dist47\n02:30:58 INFO pupa: save person Steve Erdman as person_5dc322ac-7b6e-11e8-9e19-02e29baaa692.json\n02:30:58 INFO pupa: save membership 5dc322ac-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"legislature\"} as membership_5dc32694-7b6e-11e8-9e19-02e29baaa692.json\n02:30:58 INFO pupa: save membership 5dc322ac-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"party\", \"name\": \"Nonpartisan\"} as membership_5dc32afe-7b6e-11e8-9e19-02e29baaa692.json\n02:30:58 INFO scrapelib: GET - http://news.legislature.ne.gov/dist48\n02:30:59 INFO pupa: save person John Stinner as person_5e89a800-7b6e-11e8-9e19-02e29baaa692.json\n02:30:59 INFO pupa: save membership 5e89a800-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"legislature\"} as membership_5e89ac4c-7b6e-11e8-9e19-02e29baaa692.json\n02:30:59 INFO pupa: save membership 5e89a800-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"party\", \"name\": \"Nonpartisan\"} as membership_5e89aed6-7b6e-11e8-9e19-02e29baaa692.json\n02:30:59 INFO scrapelib: GET - http://news.legislature.ne.gov/dist49\n02:31:00 INFO pupa: save person John Murante as person_5ed7988a-7b6e-11e8-9e19-02e29baaa692.json\n02:31:00 INFO pupa: save membership 5ed7988a-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"legislature\"} as membership_5ed79c90-7b6e-11e8-9e19-02e29baaa692.json\n02:31:00 INFO pupa: save membership 5ed7988a-7b6e-11e8-9e19-02e29baaa692 membership in ~{\"classification\": \"party\", \"name\": \"Nonpartisan\"} as membership_5ed79ede-7b6e-11e8-9e19-02e29baaa692.json\n02:31:00 INFO scrapelib: GET - http://www.nebraskalegislature.gov/committees/standing-committees.php\n02:31:00 INFO scrapelib: GET - http://www.nebraskalegislature.gov/committees/select-committees.php\n02:31:01 WARNING pupa: No members found in Select committees committee.\n02:31:01 WARNING pupa: No members found in Special committees committee.\nTraceback (most recent call last):\n File \"/opt/openstates/venv-pupa//bin/pupa\", line 11, in <module>\n load_entry_point('pupa', 'console_scripts', 'pupa')()\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py\", line 68, in main\n subcommands[args.subcommand].handle(args, other)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 260, in handle\n return self.do_handle(args, other, juris)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 305, in do_handle\n report['scrape'] = self.do_scrape(juris, args, scrapers)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 173, in do_scrape\n report[scraper_name] = scraper.do_scrape(**scrape_args)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 120, in do_scrape\n raise ScrapeError('no objects returned from {} scrape'.format(self.__class__.__name__))\npupa.exceptions.ScrapeError: no objects returned from NECommitteeScraper scrape\nloaded Open States pupa settings...\nne (scrape, import)\n bills: {}\n votes: {}\n people: {}\n committees: {}\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "import re\n\nfrom pupa.scrape import Scraper, Organization\n\nfrom openstates.utils import LXMLMixin\n\n\nclass NECommitteeScraper(Scraper, LXMLMixin):\n def _scrape_standing_committees(self):\n \"\"\"Scrapes the Standing Committees page of the Nebraska state\n legislature.\"\"\"\n main_url = 
'http://www.nebraskalegislature.gov/committees/standing-committees.php'\n page = self.lxmlize(main_url)\n\n committee_nodes = self.get_nodes(\n page,\n '//div[@class=\"main-content\"]/div[@class=\"panel panel-leg\"][1]/'\n 'div[@class=\"list-group\"]/a[@class=\"list-group-item\"]')\n\n for committee_node in committee_nodes:\n committee_page_url = committee_node.attrib['href']\n committee_page = self.lxmlize(committee_page_url)\n\n name_text = self.get_node(\n committee_page,\n '//div[@class=\"container view-front\"]/div[@class=\"row\"]/'\n 'div[@class=\"col-sm-6 col-md-7\"]/h1/text()[normalize-space()]')\n name = name_text.split()[0:-1]\n\n committee_name = ''\n for x in range(len(name)):\n committee_name += name[x] + ' '\n committee_name = committee_name[0: -1]\n\n org = Organization(name=committee_name, chamber='legislature',\n classification='committee')\n\n members = self.get_nodes(\n committee_page,\n '//div[@class=\"col-sm-4 col-md-3 ltc-col-right\"][1]/'\n 'div[@class=\"block-box\"][1]/ul[@class=\"list-unstyled '\n 'feature-content\"]/li/a/text()[normalize-space()]')\n\n for member in members:\n member_name = re.sub(r'\\Sen\\.\\s+', '', member)\n member_name = re.sub(r', Chairperson', '', member_name).strip()\n if 'Chairperson' in member:\n member_role = 'Chairperson'\n else:\n member_role = 'member'\n org.add_member(member_name, member_role)\n\n org.add_source(main_url)\n org.add_source(committee_page_url)\n\n yield org\n\n def _scrape_select_special_committees(self):\n \"\"\"Scrapes the Select and Special Committees page of the\n Nebraska state legislature.\"\"\"\n main_url = 'http://www.nebraskalegislature.gov/committees/select-committees.php'\n page = self.lxmlize(main_url)\n\n committee_nodes = self.get_nodes(\n page,\n '//div[@class=\"main-content\"]/div[@class=\"panel panel-leg\"]')\n\n for committee_node in committee_nodes:\n committee_name = self.get_node(\n committee_node,\n './/h2[@class=\"panel-title\"]/text()[normalize-space()]')\n\n if committee_name is None:\n committee_name = self.get_node(\n committee_node,\n './/h2[@class=\"panel-title\"]/a/text()[normalize-space()]')\n\n org = Organization(name=committee_name, chamber='legislature',\n classification='committee')\n org.add_source(main_url)\n\n members = self.get_nodes(\n committee_node,\n './div[@class=\"list-group\"]/a[@class=\"list-group-item\"]/'\n 'text()[normalize-space()]')\n\n for member in members:\n member_name = re.sub(r'\\Sen\\.\\s+', '', member)\n member_name = re.sub(r', Chairperson', '', member_name).strip()\n if 'Chairperson' in member:\n member_role = 'Chairperson'\n else:\n member_role = 'member'\n org.add_member(member_name, member_role)\n\n if not org._related:\n self.warning('No members found in {} committee.'.format(\n org.name))\n else:\n yield org\n\n def scrape(self):\n yield from self._scrape_standing_committees()\n yield from self._scrape_select_special_committees()\n", "path": "openstates/ne/committees.py"}]} | 4,005 | 360 |
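The fix above adapts the scraper to changed page markup by anchoring on the accordion heading and walking back up with XPath's `ancestor::` axis. Here is a small self-contained lxml example of that selection pattern (the HTML snippet is a made-up stand-in for the real committee page):

```python
from lxml import html

# Hypothetical markup that mimics the structure the new XPath targets.
PAGE = """
<div class="panel panel-leg">
  <a class="accordion-switch">Standing Committees</a>
  <div class="list-group">
    <a class="list-group-item" href="/committees/agriculture">Agriculture</a>
    <a class="list-group-item" href="/committees/judiciary">Judiciary</a>
  </div>
</div>
"""

doc = html.fromstring(PAGE)

# Anchor on the accordion heading, climb to the enclosing panel,
# then select the committee links inside it.
links = doc.xpath(
    '//a[@class="accordion-switch"][contains(text(), "Standing Committees")]'
    '/ancestor::div[@class="panel panel-leg"]//div[@class="list-group"]'
    '/a[@class="list-group-item"]'
)

for link in links:
    print(link.text, link.get("href"))
```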
gh_patches_debug_10614 | rasdani/github-patches | git_diff | getredash__redash-2134 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User can't download dataset before saving query
Because the query results url contains the query id, before saving the user can't download the dataset.
We need to allow addressing query results without query id.
</issue>
<code>
[start of redash/handlers/api.py]
1 from flask_restful import Api
2 from werkzeug.wrappers import Response
3 from flask import make_response
4
5 from redash.utils import json_dumps
6 from redash.handlers.base import org_scoped_rule
7 from redash.handlers.permissions import ObjectPermissionsListResource, CheckPermissionResource
8 from redash.handlers.alerts import AlertResource, AlertListResource, AlertSubscriptionListResource, AlertSubscriptionResource
9 from redash.handlers.dashboards import DashboardListResource, RecentDashboardsResource, DashboardResource, DashboardShareResource, PublicDashboardResource
10 from redash.handlers.data_sources import DataSourceTypeListResource, DataSourceListResource, DataSourceSchemaResource, DataSourceResource, DataSourcePauseResource, DataSourceTestResource
11 from redash.handlers.events import EventResource
12 from redash.handlers.queries import QueryForkResource, QueryRefreshResource, QueryListResource, QueryRecentResource, QuerySearchResource, QueryResource, MyQueriesResource
13 from redash.handlers.query_results import QueryResultListResource, QueryResultResource, JobResource
14 from redash.handlers.users import UserResource, UserListResource, UserInviteResource, UserResetPasswordResource
15 from redash.handlers.visualizations import VisualizationListResource
16 from redash.handlers.visualizations import VisualizationResource
17 from redash.handlers.widgets import WidgetResource, WidgetListResource
18 from redash.handlers.groups import GroupListResource, GroupResource, GroupMemberListResource, GroupMemberResource, \
19 GroupDataSourceListResource, GroupDataSourceResource
20 from redash.handlers.destinations import DestinationTypeListResource, DestinationResource, DestinationListResource
21 from redash.handlers.query_snippets import QuerySnippetListResource, QuerySnippetResource
22
23
24 class ApiExt(Api):
25 def add_org_resource(self, resource, *urls, **kwargs):
26 urls = [org_scoped_rule(url) for url in urls]
27 return self.add_resource(resource, *urls, **kwargs)
28
29 api = ApiExt()
30
31
32 @api.representation('application/json')
33 def json_representation(data, code, headers=None):
34 # Flask-Restful checks only for flask.Response but flask-login uses werkzeug.wrappers.Response
35 if isinstance(data, Response):
36 return data
37 resp = make_response(json_dumps(data), code)
38 resp.headers.extend(headers or {})
39 return resp
40
41
42 api.add_org_resource(AlertResource, '/api/alerts/<alert_id>', endpoint='alert')
43 api.add_org_resource(AlertSubscriptionListResource, '/api/alerts/<alert_id>/subscriptions', endpoint='alert_subscriptions')
44 api.add_org_resource(AlertSubscriptionResource, '/api/alerts/<alert_id>/subscriptions/<subscriber_id>', endpoint='alert_subscription')
45 api.add_org_resource(AlertListResource, '/api/alerts', endpoint='alerts')
46
47 api.add_org_resource(DashboardListResource, '/api/dashboards', endpoint='dashboards')
48 api.add_org_resource(RecentDashboardsResource, '/api/dashboards/recent', endpoint='recent_dashboards')
49 api.add_org_resource(DashboardResource, '/api/dashboards/<dashboard_slug>', endpoint='dashboard')
50 api.add_org_resource(PublicDashboardResource, '/api/dashboards/public/<token>', endpoint='public_dashboard')
51 api.add_org_resource(DashboardShareResource, '/api/dashboards/<dashboard_id>/share', endpoint='dashboard_share')
52
53 api.add_org_resource(DataSourceTypeListResource, '/api/data_sources/types', endpoint='data_source_types')
54 api.add_org_resource(DataSourceListResource, '/api/data_sources', endpoint='data_sources')
55 api.add_org_resource(DataSourceSchemaResource, '/api/data_sources/<data_source_id>/schema')
56 api.add_org_resource(DataSourcePauseResource, '/api/data_sources/<data_source_id>/pause')
57 api.add_org_resource(DataSourceTestResource, '/api/data_sources/<data_source_id>/test')
58 api.add_org_resource(DataSourceResource, '/api/data_sources/<data_source_id>', endpoint='data_source')
59
60 api.add_org_resource(GroupListResource, '/api/groups', endpoint='groups')
61 api.add_org_resource(GroupResource, '/api/groups/<group_id>', endpoint='group')
62 api.add_org_resource(GroupMemberListResource, '/api/groups/<group_id>/members', endpoint='group_members')
63 api.add_org_resource(GroupMemberResource, '/api/groups/<group_id>/members/<user_id>', endpoint='group_member')
64 api.add_org_resource(GroupDataSourceListResource, '/api/groups/<group_id>/data_sources', endpoint='group_data_sources')
65 api.add_org_resource(GroupDataSourceResource, '/api/groups/<group_id>/data_sources/<data_source_id>', endpoint='group_data_source')
66
67 api.add_org_resource(EventResource, '/api/events', endpoint='events')
68
69 api.add_org_resource(QuerySearchResource, '/api/queries/search', endpoint='queries_search')
70 api.add_org_resource(QueryRecentResource, '/api/queries/recent', endpoint='recent_queries')
71 api.add_org_resource(QueryListResource, '/api/queries', endpoint='queries')
72 api.add_org_resource(MyQueriesResource, '/api/queries/my', endpoint='my_queries')
73 api.add_org_resource(QueryRefreshResource, '/api/queries/<query_id>/refresh', endpoint='query_refresh')
74 api.add_org_resource(QueryResource, '/api/queries/<query_id>', endpoint='query')
75 api.add_org_resource(QueryForkResource, '/api/queries/<query_id>/fork', endpoint='query_fork')
76
77 api.add_org_resource(ObjectPermissionsListResource, '/api/<object_type>/<object_id>/acl', endpoint='object_permissions')
78 api.add_org_resource(CheckPermissionResource, '/api/<object_type>/<object_id>/acl/<access_type>', endpoint='check_permissions')
79
80 api.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')
81 api.add_org_resource(QueryResultResource,
82 '/api/query_results/<query_result_id>',
83 '/api/queries/<query_id>/results.<filetype>',
84 '/api/queries/<query_id>/results/<query_result_id>.<filetype>',
85 endpoint='query_result')
86 api.add_org_resource(JobResource, '/api/jobs/<job_id>', endpoint='job')
87
88 api.add_org_resource(UserListResource, '/api/users', endpoint='users')
89 api.add_org_resource(UserResource, '/api/users/<user_id>', endpoint='user')
90 api.add_org_resource(UserInviteResource, '/api/users/<user_id>/invite', endpoint='user_invite')
91 api.add_org_resource(UserResetPasswordResource, '/api/users/<user_id>/reset_password', endpoint='user_reset_password')
92
93 api.add_org_resource(VisualizationListResource, '/api/visualizations', endpoint='visualizations')
94 api.add_org_resource(VisualizationResource, '/api/visualizations/<visualization_id>', endpoint='visualization')
95
96 api.add_org_resource(WidgetListResource, '/api/widgets', endpoint='widgets')
97 api.add_org_resource(WidgetResource, '/api/widgets/<int:widget_id>', endpoint='widget')
98
99 api.add_org_resource(DestinationTypeListResource, '/api/destinations/types', endpoint='destination_types')
100 api.add_org_resource(DestinationResource, '/api/destinations/<destination_id>', endpoint='destination')
101 api.add_org_resource(DestinationListResource, '/api/destinations', endpoint='destinations')
102
103 api.add_org_resource(QuerySnippetResource, '/api/query_snippets/<snippet_id>', endpoint='query_snippet')
104 api.add_org_resource(QuerySnippetListResource, '/api/query_snippets', endpoint='query_snippets')
105
[end of redash/handlers/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/handlers/api.py b/redash/handlers/api.py
--- a/redash/handlers/api.py
+++ b/redash/handlers/api.py
@@ -79,6 +79,7 @@
api.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')
api.add_org_resource(QueryResultResource,
+ '/api/query_results/<query_result_id>.<filetype>',
'/api/query_results/<query_result_id>',
'/api/queries/<query_id>/results.<filetype>',
'/api/queries/<query_id>/results/<query_result_id>.<filetype>',
| {"golden_diff": "diff --git a/redash/handlers/api.py b/redash/handlers/api.py\n--- a/redash/handlers/api.py\n+++ b/redash/handlers/api.py\n@@ -79,6 +79,7 @@\n \n api.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')\n api.add_org_resource(QueryResultResource,\n+ '/api/query_results/<query_result_id>.<filetype>',\n '/api/query_results/<query_result_id>',\n '/api/queries/<query_id>/results.<filetype>',\n '/api/queries/<query_id>/results/<query_result_id>.<filetype>',\n", "issue": "Use can't download dataset before saving query\nBecause the query results url contains the query id, before saving the user can't download the dataset. \n\nWe need to allow addressing query results without query id.\n\n", "before_files": [{"content": "from flask_restful import Api\nfrom werkzeug.wrappers import Response\nfrom flask import make_response\n\nfrom redash.utils import json_dumps\nfrom redash.handlers.base import org_scoped_rule\nfrom redash.handlers.permissions import ObjectPermissionsListResource, CheckPermissionResource\nfrom redash.handlers.alerts import AlertResource, AlertListResource, AlertSubscriptionListResource, AlertSubscriptionResource\nfrom redash.handlers.dashboards import DashboardListResource, RecentDashboardsResource, DashboardResource, DashboardShareResource, PublicDashboardResource \nfrom redash.handlers.data_sources import DataSourceTypeListResource, DataSourceListResource, DataSourceSchemaResource, DataSourceResource, DataSourcePauseResource, DataSourceTestResource\nfrom redash.handlers.events import EventResource\nfrom redash.handlers.queries import QueryForkResource, QueryRefreshResource, QueryListResource, QueryRecentResource, QuerySearchResource, QueryResource, MyQueriesResource\nfrom redash.handlers.query_results import QueryResultListResource, QueryResultResource, JobResource\nfrom redash.handlers.users import UserResource, UserListResource, UserInviteResource, UserResetPasswordResource\nfrom redash.handlers.visualizations import VisualizationListResource\nfrom redash.handlers.visualizations import VisualizationResource\nfrom redash.handlers.widgets import WidgetResource, WidgetListResource\nfrom redash.handlers.groups import GroupListResource, GroupResource, GroupMemberListResource, GroupMemberResource, \\\n GroupDataSourceListResource, GroupDataSourceResource\nfrom redash.handlers.destinations import DestinationTypeListResource, DestinationResource, DestinationListResource\nfrom redash.handlers.query_snippets import QuerySnippetListResource, QuerySnippetResource\n\n\nclass ApiExt(Api):\n def add_org_resource(self, resource, *urls, **kwargs):\n urls = [org_scoped_rule(url) for url in urls]\n return self.add_resource(resource, *urls, **kwargs)\n\napi = ApiExt()\n\n\[email protected]('application/json')\ndef json_representation(data, code, headers=None):\n # Flask-Restful checks only for flask.Response but flask-login uses werkzeug.wrappers.Response\n if isinstance(data, Response):\n return data\n resp = make_response(json_dumps(data), code)\n resp.headers.extend(headers or {})\n return resp\n\n\napi.add_org_resource(AlertResource, '/api/alerts/<alert_id>', endpoint='alert')\napi.add_org_resource(AlertSubscriptionListResource, '/api/alerts/<alert_id>/subscriptions', endpoint='alert_subscriptions')\napi.add_org_resource(AlertSubscriptionResource, '/api/alerts/<alert_id>/subscriptions/<subscriber_id>', endpoint='alert_subscription')\napi.add_org_resource(AlertListResource, '/api/alerts', 
endpoint='alerts')\n\napi.add_org_resource(DashboardListResource, '/api/dashboards', endpoint='dashboards')\napi.add_org_resource(RecentDashboardsResource, '/api/dashboards/recent', endpoint='recent_dashboards')\napi.add_org_resource(DashboardResource, '/api/dashboards/<dashboard_slug>', endpoint='dashboard')\napi.add_org_resource(PublicDashboardResource, '/api/dashboards/public/<token>', endpoint='public_dashboard')\napi.add_org_resource(DashboardShareResource, '/api/dashboards/<dashboard_id>/share', endpoint='dashboard_share')\n\napi.add_org_resource(DataSourceTypeListResource, '/api/data_sources/types', endpoint='data_source_types')\napi.add_org_resource(DataSourceListResource, '/api/data_sources', endpoint='data_sources')\napi.add_org_resource(DataSourceSchemaResource, '/api/data_sources/<data_source_id>/schema')\napi.add_org_resource(DataSourcePauseResource, '/api/data_sources/<data_source_id>/pause')\napi.add_org_resource(DataSourceTestResource, '/api/data_sources/<data_source_id>/test')\napi.add_org_resource(DataSourceResource, '/api/data_sources/<data_source_id>', endpoint='data_source')\n\napi.add_org_resource(GroupListResource, '/api/groups', endpoint='groups')\napi.add_org_resource(GroupResource, '/api/groups/<group_id>', endpoint='group')\napi.add_org_resource(GroupMemberListResource, '/api/groups/<group_id>/members', endpoint='group_members')\napi.add_org_resource(GroupMemberResource, '/api/groups/<group_id>/members/<user_id>', endpoint='group_member')\napi.add_org_resource(GroupDataSourceListResource, '/api/groups/<group_id>/data_sources', endpoint='group_data_sources')\napi.add_org_resource(GroupDataSourceResource, '/api/groups/<group_id>/data_sources/<data_source_id>', endpoint='group_data_source')\n\napi.add_org_resource(EventResource, '/api/events', endpoint='events')\n\napi.add_org_resource(QuerySearchResource, '/api/queries/search', endpoint='queries_search')\napi.add_org_resource(QueryRecentResource, '/api/queries/recent', endpoint='recent_queries')\napi.add_org_resource(QueryListResource, '/api/queries', endpoint='queries')\napi.add_org_resource(MyQueriesResource, '/api/queries/my', endpoint='my_queries')\napi.add_org_resource(QueryRefreshResource, '/api/queries/<query_id>/refresh', endpoint='query_refresh')\napi.add_org_resource(QueryResource, '/api/queries/<query_id>', endpoint='query')\napi.add_org_resource(QueryForkResource, '/api/queries/<query_id>/fork', endpoint='query_fork')\n\napi.add_org_resource(ObjectPermissionsListResource, '/api/<object_type>/<object_id>/acl', endpoint='object_permissions')\napi.add_org_resource(CheckPermissionResource, '/api/<object_type>/<object_id>/acl/<access_type>', endpoint='check_permissions')\n\napi.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')\napi.add_org_resource(QueryResultResource,\n '/api/query_results/<query_result_id>',\n '/api/queries/<query_id>/results.<filetype>',\n '/api/queries/<query_id>/results/<query_result_id>.<filetype>',\n endpoint='query_result')\napi.add_org_resource(JobResource, '/api/jobs/<job_id>', endpoint='job')\n\napi.add_org_resource(UserListResource, '/api/users', endpoint='users')\napi.add_org_resource(UserResource, '/api/users/<user_id>', endpoint='user')\napi.add_org_resource(UserInviteResource, '/api/users/<user_id>/invite', endpoint='user_invite')\napi.add_org_resource(UserResetPasswordResource, '/api/users/<user_id>/reset_password', endpoint='user_reset_password')\n\napi.add_org_resource(VisualizationListResource, '/api/visualizations', 
endpoint='visualizations')\napi.add_org_resource(VisualizationResource, '/api/visualizations/<visualization_id>', endpoint='visualization')\n\napi.add_org_resource(WidgetListResource, '/api/widgets', endpoint='widgets')\napi.add_org_resource(WidgetResource, '/api/widgets/<int:widget_id>', endpoint='widget')\n\napi.add_org_resource(DestinationTypeListResource, '/api/destinations/types', endpoint='destination_types')\napi.add_org_resource(DestinationResource, '/api/destinations/<destination_id>', endpoint='destination')\napi.add_org_resource(DestinationListResource, '/api/destinations', endpoint='destinations')\n\napi.add_org_resource(QuerySnippetResource, '/api/query_snippets/<snippet_id>', endpoint='query_snippet')\napi.add_org_resource(QuerySnippetListResource, '/api/query_snippets', endpoint='query_snippets')\n", "path": "redash/handlers/api.py"}]} | 2,293 | 138 |
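For readers unfamiliar with why the one-line fix above is enough: Flask-RESTful allows a single `Resource` to be registered under several URL rules, so adding the extra `/api/query_results/<query_result_id>.<filetype>` rule exposes the download route without needing a saved query id. A minimal, self-contained sketch of that mechanism — not Redash code, and it only assumes `flask` and `flask_restful` are installed:

```python
from flask import Flask
from flask_restful import Api, Resource


class QueryResult(Resource):
    # URL variables arrive as keyword arguments; missing ones fall back to defaults.
    def get(self, query_result_id=None, query_id=None, filetype="json"):
        return {
            "query_result_id": query_result_id,
            "query_id": query_id,
            "filetype": filetype,
        }


app = Flask(__name__)
api = Api(app)
api.add_resource(
    QueryResult,
    "/api/query_results/<query_result_id>.<filetype>",  # the rule added by the patch
    "/api/query_results/<query_result_id>",
    "/api/queries/<query_id>/results.<filetype>",
    endpoint="query_result",
)

if __name__ == "__main__":
    app.run()  # GET /api/query_results/5.csv now resolves without a query id
```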
gh_patches_debug_30965 | rasdani/github-patches | git_diff | napari__napari-5085 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: values of zero in 3D MIPs render as transparent, rather than black, even in opaque blending mode
## 🐛 Bug
noticed by @maweigert:
if a volume with a bunch of zeros is in front of another volume, we would have expected that it would essentially occlude any layers behind it, assuming, of course, that the blending mode is opaque. (in additive blending, naturally, any nonzero values in other layers would result in a non-black MIP pixel).
However, MIP values of zero appear to render as transparent, even in opaque blending mode, showing layers behind it:
```python
import numpy as np
import napari
x = np.zeros((20,20,20), np.float32)
x[10,10,10] = 100
y = np.zeros((20,20,20), np.float32)
y[1:-1,1:-1,1:-1] = 100
v = napari.Viewer()
v.add_image(y)
v.add_image(x)
v.dims.ndisplay=3
```
<img width="1311" alt="Screenshot 2022-09-01 at 09 19 26" src="https://user-images.githubusercontent.com/1609449/187933993-1454960c-f9fc-4ab4-b23e-a6603e093281.png">
changing `viewer.layers['x'].rendering = 'additive'` _does_ then occlude all of `y`
not 100% sure it's a bug, but it did strike us as unexpected. @brisvag?
</issue>
<code>
[start of napari/_vispy/layers/image.py]
1 import warnings
2
3 import numpy as np
4 from vispy.color import Colormap as VispyColormap
5 from vispy.scene.node import Node
6
7 from ...utils.translations import trans
8 from ..utils.gl import fix_data_dtype, get_gl_extensions
9 from ..visuals.image import Image as ImageNode
10 from ..visuals.volume import Volume as VolumeNode
11 from .base import VispyBaseLayer
12
13
14 class ImageLayerNode:
15 def __init__(self, custom_node: Node = None, texture_format=None):
16 if (
17 texture_format == 'auto'
18 and 'texture_float' not in get_gl_extensions()
19 ):
20 # if the GPU doesn't support float textures, texture_format auto
21 # WILL fail on float dtypes
22 # https://github.com/napari/napari/issues/3988
23 texture_format = None
24
25 self._custom_node = custom_node
26 self._image_node = ImageNode(
27 None,
28 method='auto',
29 texture_format=texture_format,
30 )
31 self._volume_node = VolumeNode(
32 np.zeros((1, 1, 1), dtype=np.float32),
33 clim=[0, 1],
34 texture_format=texture_format,
35 )
36
37 def get_node(self, ndisplay: int) -> Node:
38
39 # Return custom node if we have one.
40 if self._custom_node is not None:
41 return self._custom_node
42
43 # Return Image or Volume node based on 2D or 3D.
44 if ndisplay == 2:
45 return self._image_node
46 return self._volume_node
47
48
49 class VispyImageLayer(VispyBaseLayer):
50 def __init__(self, layer, node=None, texture_format='auto'):
51
52 # Use custom node from caller, or our standard image/volume nodes.
53 self._layer_node = ImageLayerNode(node, texture_format=texture_format)
54
55 # Default to 2D (image) node.
56 super().__init__(layer, self._layer_node.get_node(2))
57
58 self._array_like = True
59
60 self.layer.events.rendering.connect(self._on_rendering_change)
61 self.layer.events.depiction.connect(self._on_depiction_change)
62 self.layer.events.interpolation2d.connect(
63 self._on_interpolation_change
64 )
65 self.layer.events.interpolation3d.connect(
66 self._on_interpolation_change
67 )
68 self.layer.events.colormap.connect(self._on_colormap_change)
69 self.layer.events.contrast_limits.connect(
70 self._on_contrast_limits_change
71 )
72 self.layer.events.gamma.connect(self._on_gamma_change)
73 self.layer.events.iso_threshold.connect(self._on_iso_threshold_change)
74 self.layer.events.attenuation.connect(self._on_attenuation_change)
75 self.layer.plane.events.position.connect(
76 self._on_plane_position_change
77 )
78 self.layer.plane.events.thickness.connect(
79 self._on_plane_thickness_change
80 )
81 self.layer.plane.events.normal.connect(self._on_plane_normal_change)
82
83 # display_change is special (like data_change) because it requires a self.reset()
84 # this means that we have to call it manually. Also, it must be called before reset
85 # in order to set the appropriate node first
86 self._on_display_change()
87 self.reset()
88 self._on_data_change()
89
90 def _on_display_change(self, data=None):
91 parent = self.node.parent
92 self.node.parent = None
93
94 self.node = self._layer_node.get_node(self.layer._ndisplay)
95
96 if data is None:
97 data = np.zeros((1,) * self.layer._ndisplay, dtype=np.float32)
98
99 if self.layer._empty:
100 self.node.visible = False
101 else:
102 self.node.visible = self.layer.visible
103
104 if self.layer.loaded:
105 self.node.set_data(data)
106
107 self.node.parent = parent
108 self.node.order = self.order
109 self.reset()
110
111 def _on_data_change(self):
112 if not self.layer.loaded:
113 # Do nothing if we are not yet loaded. Calling astype below could
114 # be very expensive. Lets not do it until our data has been loaded.
115 return
116
117 self._set_node_data(self.node, self.layer._data_view)
118
119 def _set_node_data(self, node, data):
120 """Our self.layer._data_view has been updated, update our node."""
121
122 data = fix_data_dtype(data)
123
124 if self.layer._ndisplay == 3 and self.layer.ndim == 2:
125 data = np.expand_dims(data, axis=0)
126
127 # Check if data exceeds MAX_TEXTURE_SIZE and downsample
128 if self.MAX_TEXTURE_SIZE_2D is not None and self.layer._ndisplay == 2:
129 data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_2D)
130 elif (
131 self.MAX_TEXTURE_SIZE_3D is not None and self.layer._ndisplay == 3
132 ):
133 data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_3D)
134
135 # Check if ndisplay has changed current node type needs updating
136 if (
137 self.layer._ndisplay == 3 and not isinstance(node, VolumeNode)
138 ) or (self.layer._ndisplay == 2 and not isinstance(node, ImageNode)):
139 self._on_display_change(data)
140 else:
141 node.set_data(data)
142
143 if self.layer._empty:
144 node.visible = False
145 else:
146 node.visible = self.layer.visible
147
148 # Call to update order of translation values with new dims:
149 self._on_matrix_change()
150 node.update()
151
152 def _on_interpolation_change(self):
153 self.node.interpolation = (
154 self.layer.interpolation2d
155 if self.layer._ndisplay == 2
156 else self.layer.interpolation3d
157 )
158
159 def _on_rendering_change(self):
160 if isinstance(self.node, VolumeNode):
161 self.node.method = self.layer.rendering
162 self._on_attenuation_change()
163 self._on_iso_threshold_change()
164
165 def _on_depiction_change(self):
166 if isinstance(self.node, VolumeNode):
167 self.node.raycasting_mode = str(self.layer.depiction)
168
169 def _on_colormap_change(self):
170 self.node.cmap = VispyColormap(*self.layer.colormap)
171
172 def _on_contrast_limits_change(self):
173 self.node.clim = self.layer.contrast_limits
174 if isinstance(self.node, VolumeNode):
175 self.node.mip_cutoff = self.node._texture.clim_normalized[0]
176 self.node.minip_cutoff = self.node._texture.clim_normalized[1]
177
178 def _on_gamma_change(self):
179 if len(self.node.shared_program.frag._set_items) > 0:
180 self.node.gamma = self.layer.gamma
181
182 def _on_iso_threshold_change(self):
183 if isinstance(self.node, VolumeNode):
184 self.node.threshold = self.layer.iso_threshold
185
186 def _on_attenuation_change(self):
187 if isinstance(self.node, VolumeNode):
188 self.node.attenuation = self.layer.attenuation
189
190 def _on_plane_thickness_change(self):
191 if isinstance(self.node, VolumeNode):
192 self.node.plane_thickness = self.layer.plane.thickness
193
194 def _on_plane_position_change(self):
195 if isinstance(self.node, VolumeNode):
196 self.node.plane_position = self.layer.plane.position
197
198 def _on_plane_normal_change(self):
199 if isinstance(self.node, VolumeNode):
200 self.node.plane_normal = self.layer.plane.normal
201
202 def reset(self, event=None):
203 super().reset()
204 self._on_interpolation_change()
205 self._on_colormap_change()
206 self._on_contrast_limits_change()
207 self._on_gamma_change()
208 self._on_rendering_change()
209 self._on_depiction_change()
210 self._on_plane_position_change()
211 self._on_plane_normal_change()
212 self._on_plane_thickness_change()
213
214 def downsample_texture(self, data, MAX_TEXTURE_SIZE):
215 """Downsample data based on maximum allowed texture size.
216
217 Parameters
218 ----------
219 data : array
220 Data to be downsampled if needed.
221 MAX_TEXTURE_SIZE : int
222 Maximum allowed texture size.
223
224 Returns
225 -------
226 data : array
227 Data that now fits inside texture.
228 """
229 if np.any(np.greater(data.shape, MAX_TEXTURE_SIZE)):
230 if self.layer.multiscale:
231 raise ValueError(
232 trans._(
233 "Shape of in dividual tiles in multiscale {shape} cannot exceed GL_MAX_TEXTURE_SIZE {texture_size}. Rendering is currently in {ndisplay}D mode.",
234 deferred=True,
235 shape=data.shape,
236 texture_size=MAX_TEXTURE_SIZE,
237 ndisplay=self.layer._ndisplay,
238 )
239 )
240 warnings.warn(
241 trans._(
242 "data shape {shape} exceeds GL_MAX_TEXTURE_SIZE {texture_size} in at least one axis and will be downsampled. Rendering is currently in {ndisplay}D mode.",
243 deferred=True,
244 shape=data.shape,
245 texture_size=MAX_TEXTURE_SIZE,
246 ndisplay=self.layer._ndisplay,
247 )
248 )
249 downsample = np.ceil(
250 np.divide(data.shape, MAX_TEXTURE_SIZE)
251 ).astype(int)
252 scale = np.ones(self.layer.ndim)
253 for i, d in enumerate(self.layer._dims_displayed):
254 scale[d] = downsample[i]
255 self.layer._transforms['tile2data'].scale = scale
256 self._on_matrix_change()
257 slices = tuple(slice(None, None, ds) for ds in downsample)
258 data = data[slices]
259 return data
260
[end of napari/_vispy/layers/image.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/napari/_vispy/layers/image.py b/napari/_vispy/layers/image.py
--- a/napari/_vispy/layers/image.py
+++ b/napari/_vispy/layers/image.py
@@ -4,6 +4,7 @@
from vispy.color import Colormap as VispyColormap
from vispy.scene.node import Node
+from ...layers.base._base_constants import Blending
from ...utils.translations import trans
from ..utils.gl import fix_data_dtype, get_gl_extensions
from ..visuals.image import Image as ImageNode
@@ -169,11 +170,29 @@
def _on_colormap_change(self):
self.node.cmap = VispyColormap(*self.layer.colormap)
+ def _update_mip_minip_cutoff(self):
+ # discard fragments beyond contrast limits, but only with translucent blending
+ if isinstance(self.node, VolumeNode):
+ if self.layer.blending in {
+ Blending.TRANSLUCENT,
+ Blending.TRANSLUCENT_NO_DEPTH,
+ }:
+ self.node.mip_cutoff = self.node._texture.clim_normalized[0]
+ self.node.minip_cutoff = self.node._texture.clim_normalized[1]
+ else:
+ self.node.mip_cutoff = None
+ self.node.minip_cutoff = None
+
def _on_contrast_limits_change(self):
self.node.clim = self.layer.contrast_limits
- if isinstance(self.node, VolumeNode):
- self.node.mip_cutoff = self.node._texture.clim_normalized[0]
- self.node.minip_cutoff = self.node._texture.clim_normalized[1]
+ # cutoffs must be updated after clims, so we can set them to the new values
+ self._update_mip_minip_cutoff()
+
+ def _on_blending_change(self):
+ super()._on_blending_change()
+ # cutoffs must be updated after blending, so we can know if
+ # the new blending is a translucent one
+ self._update_mip_minip_cutoff()
def _on_gamma_change(self):
if len(self.node.shared_program.frag._set_items) > 0:
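The decision rule introduced by `_update_mip_minip_cutoff` in the diff above can be read in isolation: cutoffs derived from the contrast limits are only applied for the translucent blending modes, so opaque (and additive/minimum) volumes keep rendering zeros as solid black. A standalone sketch of that rule, with the enum string values assumed from the diff rather than imported from napari:

```python
def mip_minip_cutoffs(blending: str, clim_normalized: tuple[float, float]):
    """Mirror of the patched logic: discard fragments beyond the contrast
    limits only when blending is translucent."""
    if blending in {"translucent", "translucent_no_depth"}:
        return clim_normalized  # (mip_cutoff, minip_cutoff)
    return None, None  # opaque & additive: no fragments discarded


# zeros in an opaque MIP stay opaque black, occluding layers behind
assert mip_minip_cutoffs("opaque", (0.0, 1.0)) == (None, None)
assert mip_minip_cutoffs("translucent", (0.0, 1.0)) == (0.0, 1.0)
```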
| {"golden_diff": "diff --git a/napari/_vispy/layers/image.py b/napari/_vispy/layers/image.py\n--- a/napari/_vispy/layers/image.py\n+++ b/napari/_vispy/layers/image.py\n@@ -4,6 +4,7 @@\n from vispy.color import Colormap as VispyColormap\n from vispy.scene.node import Node\n \n+from ...layers.base._base_constants import Blending\n from ...utils.translations import trans\n from ..utils.gl import fix_data_dtype, get_gl_extensions\n from ..visuals.image import Image as ImageNode\n@@ -169,11 +170,29 @@\n def _on_colormap_change(self):\n self.node.cmap = VispyColormap(*self.layer.colormap)\n \n+ def _update_mip_minip_cutoff(self):\n+ # discard fragments beyond contrast limits, but only with translucent blending\n+ if isinstance(self.node, VolumeNode):\n+ if self.layer.blending in {\n+ Blending.TRANSLUCENT,\n+ Blending.TRANSLUCENT_NO_DEPTH,\n+ }:\n+ self.node.mip_cutoff = self.node._texture.clim_normalized[0]\n+ self.node.minip_cutoff = self.node._texture.clim_normalized[1]\n+ else:\n+ self.node.mip_cutoff = None\n+ self.node.minip_cutoff = None\n+\n def _on_contrast_limits_change(self):\n self.node.clim = self.layer.contrast_limits\n- if isinstance(self.node, VolumeNode):\n- self.node.mip_cutoff = self.node._texture.clim_normalized[0]\n- self.node.minip_cutoff = self.node._texture.clim_normalized[1]\n+ # cutoffs must be updated after clims, so we can set them to the new values\n+ self._update_mip_minip_cutoff()\n+\n+ def _on_blending_change(self):\n+ super()._on_blending_change()\n+ # cutoffs must be updated after blending, so we can know if\n+ # the new blending is a translucent one\n+ self._update_mip_minip_cutoff()\n \n def _on_gamma_change(self):\n if len(self.node.shared_program.frag._set_items) > 0:\n", "issue": "Bug: values of zero in 3D MIPs render as transparent, rather than black, even in opaque blending mode\n## \ud83d\udc1b Bug\r\nnoticed by @maweigert:\r\n\r\nif a volume with a bunch of zeros is in front of another volume, we would have expected that it would essentially occluded any layers behind it, assuming, of course, that the blending mode is opaque. (in additive blending, naturally, any nonzero values in other layers would result in a non-black MIP pixel).\r\n\r\nHowever, MIP values of zero appear to render as transparent, even in opaque blending mode, showing layers behind it:\r\n\r\n```python\r\nimport numpy as np \r\nimport napari \r\n\r\nx = np.zeros((20,20,20), np.float32)\r\nx[10,10,10] = 100\r\n\r\ny = np.zeros((20,20,20), np.float32)\r\ny[1:-1,1:-1,1:-1] = 100\r\n\r\nv = napari.Viewer()\r\nv.add_image(y)\r\nv.add_image(x)\r\n\r\nv.dims.ndisplay=3\r\n```\r\n\r\n<img width=\"1311\" alt=\"Screenshot 2022-09-01 at 09 19 26\" src=\"https://user-images.githubusercontent.com/1609449/187933993-1454960c-f9fc-4ab4-b23e-a6603e093281.png\">\r\n\r\nchanging `viewer.layers['x'].rendering = 'additive'` _does_ then occlude all of `y`\r\nnot 100% sure it's a bug, but it did strike us as unexpected. 
@brisvag?\r\n\n", "before_files": [{"content": "import warnings\n\nimport numpy as np\nfrom vispy.color import Colormap as VispyColormap\nfrom vispy.scene.node import Node\n\nfrom ...utils.translations import trans\nfrom ..utils.gl import fix_data_dtype, get_gl_extensions\nfrom ..visuals.image import Image as ImageNode\nfrom ..visuals.volume import Volume as VolumeNode\nfrom .base import VispyBaseLayer\n\n\nclass ImageLayerNode:\n def __init__(self, custom_node: Node = None, texture_format=None):\n if (\n texture_format == 'auto'\n and 'texture_float' not in get_gl_extensions()\n ):\n # if the GPU doesn't support float textures, texture_format auto\n # WILL fail on float dtypes\n # https://github.com/napari/napari/issues/3988\n texture_format = None\n\n self._custom_node = custom_node\n self._image_node = ImageNode(\n None,\n method='auto',\n texture_format=texture_format,\n )\n self._volume_node = VolumeNode(\n np.zeros((1, 1, 1), dtype=np.float32),\n clim=[0, 1],\n texture_format=texture_format,\n )\n\n def get_node(self, ndisplay: int) -> Node:\n\n # Return custom node if we have one.\n if self._custom_node is not None:\n return self._custom_node\n\n # Return Image or Volume node based on 2D or 3D.\n if ndisplay == 2:\n return self._image_node\n return self._volume_node\n\n\nclass VispyImageLayer(VispyBaseLayer):\n def __init__(self, layer, node=None, texture_format='auto'):\n\n # Use custom node from caller, or our standard image/volume nodes.\n self._layer_node = ImageLayerNode(node, texture_format=texture_format)\n\n # Default to 2D (image) node.\n super().__init__(layer, self._layer_node.get_node(2))\n\n self._array_like = True\n\n self.layer.events.rendering.connect(self._on_rendering_change)\n self.layer.events.depiction.connect(self._on_depiction_change)\n self.layer.events.interpolation2d.connect(\n self._on_interpolation_change\n )\n self.layer.events.interpolation3d.connect(\n self._on_interpolation_change\n )\n self.layer.events.colormap.connect(self._on_colormap_change)\n self.layer.events.contrast_limits.connect(\n self._on_contrast_limits_change\n )\n self.layer.events.gamma.connect(self._on_gamma_change)\n self.layer.events.iso_threshold.connect(self._on_iso_threshold_change)\n self.layer.events.attenuation.connect(self._on_attenuation_change)\n self.layer.plane.events.position.connect(\n self._on_plane_position_change\n )\n self.layer.plane.events.thickness.connect(\n self._on_plane_thickness_change\n )\n self.layer.plane.events.normal.connect(self._on_plane_normal_change)\n\n # display_change is special (like data_change) because it requires a self.reset()\n # this means that we have to call it manually. Also, it must be called before reset\n # in order to set the appropriate node first\n self._on_display_change()\n self.reset()\n self._on_data_change()\n\n def _on_display_change(self, data=None):\n parent = self.node.parent\n self.node.parent = None\n\n self.node = self._layer_node.get_node(self.layer._ndisplay)\n\n if data is None:\n data = np.zeros((1,) * self.layer._ndisplay, dtype=np.float32)\n\n if self.layer._empty:\n self.node.visible = False\n else:\n self.node.visible = self.layer.visible\n\n if self.layer.loaded:\n self.node.set_data(data)\n\n self.node.parent = parent\n self.node.order = self.order\n self.reset()\n\n def _on_data_change(self):\n if not self.layer.loaded:\n # Do nothing if we are not yet loaded. Calling astype below could\n # be very expensive. 
Lets not do it until our data has been loaded.\n return\n\n self._set_node_data(self.node, self.layer._data_view)\n\n def _set_node_data(self, node, data):\n \"\"\"Our self.layer._data_view has been updated, update our node.\"\"\"\n\n data = fix_data_dtype(data)\n\n if self.layer._ndisplay == 3 and self.layer.ndim == 2:\n data = np.expand_dims(data, axis=0)\n\n # Check if data exceeds MAX_TEXTURE_SIZE and downsample\n if self.MAX_TEXTURE_SIZE_2D is not None and self.layer._ndisplay == 2:\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_2D)\n elif (\n self.MAX_TEXTURE_SIZE_3D is not None and self.layer._ndisplay == 3\n ):\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_3D)\n\n # Check if ndisplay has changed current node type needs updating\n if (\n self.layer._ndisplay == 3 and not isinstance(node, VolumeNode)\n ) or (self.layer._ndisplay == 2 and not isinstance(node, ImageNode)):\n self._on_display_change(data)\n else:\n node.set_data(data)\n\n if self.layer._empty:\n node.visible = False\n else:\n node.visible = self.layer.visible\n\n # Call to update order of translation values with new dims:\n self._on_matrix_change()\n node.update()\n\n def _on_interpolation_change(self):\n self.node.interpolation = (\n self.layer.interpolation2d\n if self.layer._ndisplay == 2\n else self.layer.interpolation3d\n )\n\n def _on_rendering_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.method = self.layer.rendering\n self._on_attenuation_change()\n self._on_iso_threshold_change()\n\n def _on_depiction_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.raycasting_mode = str(self.layer.depiction)\n\n def _on_colormap_change(self):\n self.node.cmap = VispyColormap(*self.layer.colormap)\n\n def _on_contrast_limits_change(self):\n self.node.clim = self.layer.contrast_limits\n if isinstance(self.node, VolumeNode):\n self.node.mip_cutoff = self.node._texture.clim_normalized[0]\n self.node.minip_cutoff = self.node._texture.clim_normalized[1]\n\n def _on_gamma_change(self):\n if len(self.node.shared_program.frag._set_items) > 0:\n self.node.gamma = self.layer.gamma\n\n def _on_iso_threshold_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.threshold = self.layer.iso_threshold\n\n def _on_attenuation_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.attenuation = self.layer.attenuation\n\n def _on_plane_thickness_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_thickness = self.layer.plane.thickness\n\n def _on_plane_position_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_position = self.layer.plane.position\n\n def _on_plane_normal_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_normal = self.layer.plane.normal\n\n def reset(self, event=None):\n super().reset()\n self._on_interpolation_change()\n self._on_colormap_change()\n self._on_contrast_limits_change()\n self._on_gamma_change()\n self._on_rendering_change()\n self._on_depiction_change()\n self._on_plane_position_change()\n self._on_plane_normal_change()\n self._on_plane_thickness_change()\n\n def downsample_texture(self, data, MAX_TEXTURE_SIZE):\n \"\"\"Downsample data based on maximum allowed texture size.\n\n Parameters\n ----------\n data : array\n Data to be downsampled if needed.\n MAX_TEXTURE_SIZE : int\n Maximum allowed texture size.\n\n Returns\n -------\n data : array\n Data that now fits inside texture.\n \"\"\"\n if np.any(np.greater(data.shape, MAX_TEXTURE_SIZE)):\n if 
self.layer.multiscale:\n raise ValueError(\n trans._(\n \"Shape of in dividual tiles in multiscale {shape} cannot exceed GL_MAX_TEXTURE_SIZE {texture_size}. Rendering is currently in {ndisplay}D mode.\",\n deferred=True,\n shape=data.shape,\n texture_size=MAX_TEXTURE_SIZE,\n ndisplay=self.layer._ndisplay,\n )\n )\n warnings.warn(\n trans._(\n \"data shape {shape} exceeds GL_MAX_TEXTURE_SIZE {texture_size} in at least one axis and will be downsampled. Rendering is currently in {ndisplay}D mode.\",\n deferred=True,\n shape=data.shape,\n texture_size=MAX_TEXTURE_SIZE,\n ndisplay=self.layer._ndisplay,\n )\n )\n downsample = np.ceil(\n np.divide(data.shape, MAX_TEXTURE_SIZE)\n ).astype(int)\n scale = np.ones(self.layer.ndim)\n for i, d in enumerate(self.layer._dims_displayed):\n scale[d] = downsample[i]\n self.layer._transforms['tile2data'].scale = scale\n self._on_matrix_change()\n slices = tuple(slice(None, None, ds) for ds in downsample)\n data = data[slices]\n return data\n", "path": "napari/_vispy/layers/image.py"}]} | 3,666 | 487 |
gh_patches_debug_3675 | rasdani/github-patches | git_diff | conan-io__conan-center-index-8132 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[request] perfetto/v21.0
### Package Details
* Package Name/Version: **perfetto/v21.0**
* Changelog: **https://github.com/google/perfetto/releases/tag/v21.0**
The above-mentioned version is newly released by the upstream project and not yet available as a recipe. A PR follows.
</issue>
<code>
[start of recipes/perfetto/all/conanfile.py]
1 from conans import ConanFile, CMake, tools
2 from conans.errors import ConanInvalidConfiguration
3
4 import os
5
6 required_conan_version = ">=1.33.0"
7
8
9 class PerfettoConan(ConanFile):
10 name = "perfetto"
11 license = "Apache-2.0"
12 homepage = "https://perfetto.dev"
13 url = "https://github.com/conan-io/conan-center-index"
14 description = "Performance instrumentation and tracing for Android, Linux and Chrome"
15 topics = ("linux", "profiling", "tracing")
16 settings = "os", "compiler", "build_type", "arch"
17 options = {
18 "shared": [True, False],
19 "fPIC": [True, False]
20 }
21 default_options = {
22 "shared": False,
23 "fPIC": True
24 }
25
26 exports_sources = ["CMakeLists.txt"]
27 generators = "cmake"
28
29 _cmake = None
30
31 @property
32 def _source_subfolder(self):
33 return "source_subfolder"
34
35 def config_options(self):
36 if self.settings.os == "Windows":
37 del self.options.fPIC
38
39 def configure(self):
40 if self.options.shared:
41 del self.options.fPIC
42
43 def validate(self):
44 if self.settings.compiler == "gcc" and tools.Version(self.settings.compiler.version) < 7:
45 raise ConanInvalidConfiguration ("perfetto requires gcc >= 7")
46 if self.settings.compiler.cppstd:
47 tools.check_min_cppstd(self, 11)
48
49 def source(self):
50 tools.get(**self.conan_data["sources"][self.version],
51 strip_root=True, destination=self._source_subfolder)
52
53 def _configure_cmake(self):
54 if self._cmake:
55 return self._cmake
56 self._cmake = CMake(self)
57 self._cmake.configure()
58 return self._cmake
59
60 def build(self):
61 cmake = self._configure_cmake()
62 cmake.build()
63
64 def package(self):
65 self.copy("LICENSE", src=self._source_subfolder, dst="licenses")
66 cmake = self._configure_cmake()
67 cmake.install()
68
69 def package_info(self):
70 self.cpp_info.libs = ["perfetto"]
71 self.cpp_info.names["pkgconfig"] = "perfetto"
72 if self.settings.os == "Linux":
73 self.cpp_info.system_libs.append("pthread")
74 if self.settings.os == "Windows":
75 self.cpp_info.system_libs.append("ws2_32")
76
77
[end of recipes/perfetto/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/perfetto/all/conanfile.py b/recipes/perfetto/all/conanfile.py
--- a/recipes/perfetto/all/conanfile.py
+++ b/recipes/perfetto/all/conanfile.py
@@ -68,7 +68,6 @@
def package_info(self):
self.cpp_info.libs = ["perfetto"]
- self.cpp_info.names["pkgconfig"] = "perfetto"
if self.settings.os == "Linux":
self.cpp_info.system_libs.append("pthread")
if self.settings.os == "Windows":
| {"golden_diff": "diff --git a/recipes/perfetto/all/conanfile.py b/recipes/perfetto/all/conanfile.py\n--- a/recipes/perfetto/all/conanfile.py\n+++ b/recipes/perfetto/all/conanfile.py\n@@ -68,7 +68,6 @@\n \n def package_info(self):\n self.cpp_info.libs = [\"perfetto\"]\n- self.cpp_info.names[\"pkgconfig\"] = \"perfetto\"\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if self.settings.os == \"Windows\":\n", "issue": "[request] perfetto/v21.0\n### Package Details\r\n * Package Name/Version: **perfetto/v21.0**\r\n * Changelog: **https://github.com/google/perfetto/releases/tag/v21.0**\r\n\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. PR follows\r\n\n", "before_files": [{"content": "from conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\n\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass PerfettoConan(ConanFile):\n name = \"perfetto\"\n license = \"Apache-2.0\"\n homepage = \"https://perfetto.dev\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"Performance instrumentation and tracing for Android, Linux and Chrome\"\n topics = (\"linux\", \"profiling\", \"tracing\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True\n }\n\n exports_sources = [\"CMakeLists.txt\"]\n generators = \"cmake\"\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def validate(self):\n if self.settings.compiler == \"gcc\" and tools.Version(self.settings.compiler.version) < 7:\n raise ConanInvalidConfiguration (\"perfetto requires gcc >= 7\")\n if self.settings.compiler.cppstd:\n tools.check_min_cppstd(self, 11)\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n strip_root=True, destination=self._source_subfolder)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.configure()\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"perfetto\"]\n self.cpp_info.names[\"pkgconfig\"] = \"perfetto\"\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.append(\"ws2_32\")\n\n", "path": "recipes/perfetto/all/conanfile.py"}]} | 1,309 | 125 |
gh_patches_debug_15591 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-2714 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: `_smtp` is missing at SMTPMailer's __init__ patching what is introduced in zope.sendmail from version 4.1.0
### What I did:
I am working on internal add-on development (adding support for Plone 5.2); sending mail notifications is one part of its functionality. FYI: somehow I forgot to activate MockMailhost.
When I ran all my existing unit tests, I got unexpected errors:
```
File "/home/nazrul/.cache/buildout/eggs/plone.testing-7.0.0-py2.7.egg/plone/testing/zope.py", line 859, in testTearDown
transaction.abort()
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py", line 255, in abort
return self.manager.abort()
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py", line 136, in abort
return self.get().abort()
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py", line 529, in abort
reraise(t, v, tb)
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py", line 515, in abort
rm.abort(self)
File "/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/delivery.py", line 57, in abort
self.onAbort()
File "/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py", line 78, in abort
if self.connection is None:
File "/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py", line 48, in <lambda>
return property(lambda self: getattr(self._smtp, name),
AttributeError: 'SMTPMailer' object has no attribute '_smtp'
```
All tests are passing for earlier versions of Plone.
### What is my prediction:
After a day-long investigation, I found that the [SMTPMailer `__init__` method is patched here](https://github.com/plone/Products.CMFPlone/blob/master/Products/CMFPlone/patches/sendmail.py#L39). Besides that, I also found that [zope.sendmail's SMTPMailer `__init__`, from version 4.1.0](https://github.com/zopefoundation/zope.sendmail/blob/4.1.0/src/zope/sendmail/mailer.py#L45)
introduces a new attribute `_smtp`, which is ignored during patching.
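A minimal, self-contained sketch of that prediction (assuming only that zope.sendmail >= 4.1 is installed): it imitates the CMFPlone patch by setting everything the patched `__init__` sets — and nothing else — then touches one of the properties that proxy to `_smtp`, reproducing the traceback above. The hard-coded values stand in for the registry lookups:

```python
from zope.sendmail.mailer import SMTPMailer

mailer = SMTPMailer.__new__(SMTPMailer)  # bypass __init__, as the patched new_init effectively does
mailer.hostname = "localhost"            # registry: plone.smtp_host
mailer.port = 25                         # registry: plone.smtp_port
mailer.username = None                   # registry: plone.smtp_userid
mailer.password = None                   # registry: plone.smtp_pass
mailer.force_tls = False
mailer.no_tls = False
# note: no mailer._smtp -- the attribute zope.sendmail >= 4.1.0 expects

try:
    mailer.connection  # proxied via getattr(self._smtp, "connection")
except AttributeError as exc:
    print(exc)  # 'SMTPMailer' object has no attribute '_smtp'
```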
### How to reproduce:
This is only for Plone 5.2.x
1. Disable Mock Mail
2. Try to send email from your test code
3. Or try to send mail from your production/testing server.
</issue>
<code>
[start of Products/CMFPlone/patches/sendmail.py]
1 # -*- coding: utf-8 -*-
2 from plone.registry.interfaces import IRegistry
3 from Products.CMFPlone.interfaces import IMailSchema
4 from transaction._transaction import Status
5 from zope.component import getUtility
6 from zope.sendmail.mailer import SMTPMailer
7
8 import logging
9 import transaction
10
11 log = logging.getLogger("MailDataManager")
12
13
14 # BBB remove when zope.sendmail 3.8.0 is released.
15 def catchAllExceptions(func):
16 def _catch(*args, **kwargs):
17 try:
18 return func(*args, **kwargs)
19 except Exception as e:
20 txn = transaction.get()
21 if txn.status == Status.ACTIVE:
22 # sent with immediate=True
23 raise
24 else:
25 # Avoid raising errors during tpc_finish as these could lead to
26 # inconsistent state
27 log.exception(e)
28
29 return _catch
30
31
32 def applyPatches():
33 from zope.sendmail.mailer import SMTPMailer
34 old_mailer = getattr(SMTPMailer, 'vote', None) is None
35 if old_mailer:
36 SMTPMailer.send = catchAllExceptions(SMTPMailer.send)
37
38
39 def new_init(
40 self,
41 hostname='localhost',
42 port=25,
43 username=None,
44 password=None,
45 no_tls=False,
46 force_tls=False):
47
48 registry = getUtility(IRegistry)
49 mail_settings = registry.forInterface(IMailSchema, prefix='plone')
50 self.hostname = mail_settings.smtp_host
51 self.port = mail_settings.smtp_port
52 self.username = mail_settings.smtp_userid
53 self.password = mail_settings.smtp_pass
54 self.force_tls = force_tls
55 self.no_tls = no_tls
56
57 SMTPMailer.__init__ = new_init
58
[end of Products/CMFPlone/patches/sendmail.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/patches/sendmail.py b/Products/CMFPlone/patches/sendmail.py
--- a/Products/CMFPlone/patches/sendmail.py
+++ b/Products/CMFPlone/patches/sendmail.py
@@ -3,11 +3,13 @@
from Products.CMFPlone.interfaces import IMailSchema
from transaction._transaction import Status
from zope.component import getUtility
+from zope.sendmail.mailer import _SMTPState
from zope.sendmail.mailer import SMTPMailer
import logging
import transaction
+
log = logging.getLogger("MailDataManager")
@@ -53,5 +55,7 @@
self.password = mail_settings.smtp_pass
self.force_tls = force_tls
self.no_tls = no_tls
+ self._smtp = _SMTPState()
+
SMTPMailer.__init__ = new_init
| {"golden_diff": "diff --git a/Products/CMFPlone/patches/sendmail.py b/Products/CMFPlone/patches/sendmail.py\n--- a/Products/CMFPlone/patches/sendmail.py\n+++ b/Products/CMFPlone/patches/sendmail.py\n@@ -3,11 +3,13 @@\n from Products.CMFPlone.interfaces import IMailSchema\n from transaction._transaction import Status\n from zope.component import getUtility\n+from zope.sendmail.mailer import _SMTPState\n from zope.sendmail.mailer import SMTPMailer\n \n import logging\n import transaction\n \n+\n log = logging.getLogger(\"MailDataManager\")\n \n \n@@ -53,5 +55,7 @@\n self.password = mail_settings.smtp_pass\n self.force_tls = force_tls\n self.no_tls = no_tls\n+ self._smtp = _SMTPState()\n+\n \n SMTPMailer.__init__ = new_init\n", "issue": "Bug: `_smtp` is missing at SMTPMailer's __init__ patching what is introduced in zope.sendmail from version 4.1.0\n### What I did:\r\nI am working on internal Addon development (adding support for Plone 5.2) , sending mail notification one of the part functionalities. FYI: some how I forget to active MockMailhost.\r\nWhen I run all my existing unittests and got unexpected errors : \r\n\r\n```\r\nFile \"/home/nazrul/.cache/buildout/eggs/plone.testing-7.0.0-py2.7.egg/plone/testing/zope.py\", line 859, in testTearDown\r\n transaction.abort()\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py\", line 255, in abort\r\n return self.manager.abort()\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py\", line 136, in abort\r\n return self.get().abort()\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py\", line 529, in abort\r\n reraise(t, v, tb)\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py\", line 515, in abort\r\n rm.abort(self)\r\n File \"/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/delivery.py\", line 57, in abort\r\n self.onAbort()\r\n File \"/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py\", line 78, in abort\r\n if self.connection is None:\r\n File \"/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py\", line 48, in <lambda>\r\n return property(lambda self: getattr(self._smtp, name),\r\nAttributeError: 'SMTPMailer' object has no attribute '_smtp'\r\n```\r\nAll tests are passing for earlier version of Plone.\r\n\r\n### What is my prediction: \r\nAfter day long investigation, I found [SMTPMailer __init__ method is patched here](https://github.com/plone/Products.CMFPlone/blob/master/Products/CMFPlone/patches/sendmail.py#L39) , beside Also found that [zope.sendmail from version 4.1.0 the SMTPMailer's __init__](https://github.com/zopefoundation/zope.sendmail/blob/4.1.0/src/zope/sendmail/mailer.py#L45)\r\nhas been introduced a new attribute `_smtp` what is ignored during patching.\r\n\r\n### How to reproduce:\r\n\r\nThis is only for Plone 5.2.x\r\n\r\n1. disable Mock Mail\r\n2. Try to send email from your tests code \r\n3. 
Or try send mail from your production/testing server.\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom plone.registry.interfaces import IRegistry\nfrom Products.CMFPlone.interfaces import IMailSchema\nfrom transaction._transaction import Status\nfrom zope.component import getUtility\nfrom zope.sendmail.mailer import SMTPMailer\n\nimport logging\nimport transaction\n\nlog = logging.getLogger(\"MailDataManager\")\n\n\n# BBB remove when zope.sendmail 3.8.0 is released.\ndef catchAllExceptions(func):\n def _catch(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except Exception as e:\n txn = transaction.get()\n if txn.status == Status.ACTIVE:\n # sent with immediate=True\n raise\n else:\n # Avoid raising errors during tpc_finish as these could lead to\n # inconsistent state\n log.exception(e)\n\n return _catch\n\n\ndef applyPatches():\n from zope.sendmail.mailer import SMTPMailer\n old_mailer = getattr(SMTPMailer, 'vote', None) is None\n if old_mailer:\n SMTPMailer.send = catchAllExceptions(SMTPMailer.send)\n\n\ndef new_init(\n self,\n hostname='localhost',\n port=25,\n username=None,\n password=None,\n no_tls=False,\n force_tls=False):\n\n registry = getUtility(IRegistry)\n mail_settings = registry.forInterface(IMailSchema, prefix='plone')\n self.hostname = mail_settings.smtp_host\n self.port = mail_settings.smtp_port\n self.username = mail_settings.smtp_userid\n self.password = mail_settings.smtp_pass\n self.force_tls = force_tls\n self.no_tls = no_tls\n\nSMTPMailer.__init__ = new_init\n", "path": "Products/CMFPlone/patches/sendmail.py"}]} | 1,747 | 197 |
gh_patches_debug_28476 | rasdani/github-patches | git_diff | pantsbuild__pants-13669 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker environment not passed to docker publish command
**Describe the bug**
The configured `[docker].env_vars` are not passed to `docker publish`.
**Pants version**
2.9.0.dev1
**OS**
Any
**Additional info**
As reported by chenkai036 on [Slack](https://pantsbuild.slack.com/archives/C046T6T9U/p1637248172462800?thread_ts=1637136003.393600&cid=C046T6T9U)
</issue>
<code>
[start of src/python/pants/backend/docker/util_rules/docker_binary.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from dataclasses import dataclass
7 from typing import Mapping
8
9 from pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs
10 from pants.engine.fs import Digest
11 from pants.engine.process import (
12 BinaryNotFoundError,
13 BinaryPath,
14 BinaryPathRequest,
15 BinaryPaths,
16 BinaryPathTest,
17 Process,
18 ProcessCacheScope,
19 SearchPath,
20 )
21 from pants.engine.rules import Get, collect_rules, rule
22 from pants.util.logging import LogLevel
23 from pants.util.strutil import pluralize
24
25
26 class DockerBinary(BinaryPath):
27 """The `docker` binary."""
28
29 DEFAULT_SEARCH_PATH = SearchPath(("/usr/bin", "/bin", "/usr/local/bin"))
30
31 def build_image(
32 self,
33 tags: tuple[str, ...],
34 digest: Digest,
35 dockerfile: str | None = None,
36 build_args: DockerBuildArgs | None = None,
37 env: Mapping[str, str] | None = None,
38 ) -> Process:
39 args = [self.path, "build"]
40
41 for tag in tags:
42 args.extend(["-t", tag])
43
44 if build_args:
45 for build_arg in build_args:
46 args.extend(["--build-arg", build_arg])
47
48 if dockerfile:
49 args.extend(["-f", dockerfile])
50
51 # Add build context root.
52 args.append(".")
53
54 return Process(
55 argv=tuple(args),
56 description=(
57 f"Building docker image {tags[0]}"
58 + (f" +{pluralize(len(tags)-1, 'additional tag')}." if len(tags) > 1 else ".")
59 ),
60 env=env,
61 input_digest=digest,
62 cache_scope=ProcessCacheScope.PER_SESSION,
63 )
64
65 def push_image(self, tags: tuple[str, ...]) -> Process | None:
66 if not tags:
67 return None
68
69 return Process(
70 argv=(self.path, "push", *tags),
71 cache_scope=ProcessCacheScope.PER_SESSION,
72 description=f"Pushing docker image {tags[0]}",
73 )
74
75
76 @dataclass(frozen=True)
77 class DockerBinaryRequest:
78 search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH
79
80
81 @rule(desc="Finding the `docker` binary", level=LogLevel.DEBUG)
82 async def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:
83 request = BinaryPathRequest(
84 binary_name="docker",
85 search_path=docker_request.search_path,
86 test=BinaryPathTest(args=["-v"]),
87 )
88 paths = await Get(BinaryPaths, BinaryPathRequest, request)
89 first_path = paths.first_path
90 if not first_path:
91 raise BinaryNotFoundError.from_request(request, rationale="interact with the docker daemon")
92 return DockerBinary(first_path.path, first_path.fingerprint)
93
94
95 @rule
96 async def get_docker() -> DockerBinary:
97 return await Get(DockerBinary, DockerBinaryRequest())
98
99
100 def rules():
101 return collect_rules()
102
[end of src/python/pants/backend/docker/util_rules/docker_binary.py]
[start of src/python/pants/backend/docker/goals/publish.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import logging
7 from dataclasses import dataclass
8 from itertools import chain
9 from typing import cast
10
11 from pants.backend.docker.goals.package_image import BuiltDockerImage
12 from pants.backend.docker.subsystems.docker_options import DockerOptions
13 from pants.backend.docker.target_types import DockerRegistriesField, DockerSkipPushField
14 from pants.backend.docker.util_rules.docker_binary import DockerBinary
15 from pants.core.goals.publish import (
16 PublishFieldSet,
17 PublishOutputData,
18 PublishPackages,
19 PublishProcesses,
20 PublishRequest,
21 )
22 from pants.engine.process import InteractiveProcess
23 from pants.engine.rules import collect_rules, rule
24
25 logger = logging.getLogger(__name__)
26
27
28 class PublishDockerImageRequest(PublishRequest):
29 pass
30
31
32 @dataclass(frozen=True)
33 class PublishDockerImageFieldSet(PublishFieldSet):
34 publish_request_type = PublishDockerImageRequest
35 required_fields = (DockerRegistriesField,)
36
37 registries: DockerRegistriesField
38 skip_push: DockerSkipPushField
39
40 def get_output_data(self) -> PublishOutputData:
41 return PublishOutputData(
42 {
43 "publisher": "docker",
44 "registries": self.registries.value or (),
45 **super().get_output_data(),
46 }
47 )
48
49
50 @rule
51 async def push_docker_images(
52 request: PublishDockerImageRequest, docker: DockerBinary, options: DockerOptions
53 ) -> PublishProcesses:
54 tags = tuple(
55 chain.from_iterable(
56 cast(BuiltDockerImage, image).tags
57 for pkg in request.packages
58 for image in pkg.artifacts
59 )
60 )
61
62 if request.field_set.skip_push.value:
63 return PublishProcesses(
64 [
65 PublishPackages(
66 names=tags,
67 description=f"(by `{request.field_set.skip_push.alias}` on {request.field_set.address})",
68 ),
69 ]
70 )
71
72 process = docker.push_image(tags)
73 return PublishProcesses(
74 [
75 PublishPackages(
76 names=tags,
77 process=InteractiveProcess.from_process(process) if process else None,
78 ),
79 ]
80 )
81
82
83 def rules():
84 return (
85 *collect_rules(),
86 *PublishDockerImageFieldSet.rules(),
87 )
88
[end of src/python/pants/backend/docker/goals/publish.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/python/pants/backend/docker/goals/publish.py b/src/python/pants/backend/docker/goals/publish.py
--- a/src/python/pants/backend/docker/goals/publish.py
+++ b/src/python/pants/backend/docker/goals/publish.py
@@ -19,8 +19,9 @@
PublishProcesses,
PublishRequest,
)
+from pants.engine.environment import Environment, EnvironmentRequest
from pants.engine.process import InteractiveProcess
-from pants.engine.rules import collect_rules, rule
+from pants.engine.rules import Get, collect_rules, rule
logger = logging.getLogger(__name__)
@@ -69,7 +70,8 @@
]
)
- process = docker.push_image(tags)
+ env = await Get(Environment, EnvironmentRequest(options.env_vars))
+ process = docker.push_image(tags, env)
return PublishProcesses(
[
PublishPackages(
diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py
--- a/src/python/pants/backend/docker/util_rules/docker_binary.py
+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py
@@ -62,7 +62,9 @@
cache_scope=ProcessCacheScope.PER_SESSION,
)
- def push_image(self, tags: tuple[str, ...]) -> Process | None:
+ def push_image(
+ self, tags: tuple[str, ...], env: Mapping[str, str] | None = None
+ ) -> Process | None:
if not tags:
return None
@@ -70,6 +72,7 @@
argv=(self.path, "push", *tags),
cache_scope=ProcessCacheScope.PER_SESSION,
description=f"Pushing docker image {tags[0]}",
+ env=env,
)
| {"golden_diff": "diff --git a/src/python/pants/backend/docker/goals/publish.py b/src/python/pants/backend/docker/goals/publish.py\n--- a/src/python/pants/backend/docker/goals/publish.py\n+++ b/src/python/pants/backend/docker/goals/publish.py\n@@ -19,8 +19,9 @@\n PublishProcesses,\n PublishRequest,\n )\n+from pants.engine.environment import Environment, EnvironmentRequest\n from pants.engine.process import InteractiveProcess\n-from pants.engine.rules import collect_rules, rule\n+from pants.engine.rules import Get, collect_rules, rule\n \n logger = logging.getLogger(__name__)\n \n@@ -69,7 +70,8 @@\n ]\n )\n \n- process = docker.push_image(tags)\n+ env = await Get(Environment, EnvironmentRequest(options.env_vars))\n+ process = docker.push_image(tags, env)\n return PublishProcesses(\n [\n PublishPackages(\ndiff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py\n--- a/src/python/pants/backend/docker/util_rules/docker_binary.py\n+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py\n@@ -62,7 +62,9 @@\n cache_scope=ProcessCacheScope.PER_SESSION,\n )\n \n- def push_image(self, tags: tuple[str, ...]) -> Process | None:\n+ def push_image(\n+ self, tags: tuple[str, ...], env: Mapping[str, str] | None = None\n+ ) -> Process | None:\n if not tags:\n return None\n \n@@ -70,6 +72,7 @@\n argv=(self.path, \"push\", *tags),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Pushing docker image {tags[0]}\",\n+ env=env,\n )\n", "issue": "Docker environment not passed to docker publish command\n**Describe the bug**\r\nThe configured `[docker].env_vars` are not passed to `docker publish`.\r\n\r\n**Pants version**\r\n2.9.0.dev1\r\n\r\n**OS**\r\nAny\r\n\r\n**Additional info**\r\nAs reported by chenkai036 on [Slack](https://pantsbuild.slack.com/archives/C046T6T9U/p1637248172462800?thread_ts=1637136003.393600&cid=C046T6T9U)\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Mapping\n\nfrom pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs\nfrom pants.engine.fs import Digest\nfrom pants.engine.process import (\n BinaryNotFoundError,\n BinaryPath,\n BinaryPathRequest,\n BinaryPaths,\n BinaryPathTest,\n Process,\n ProcessCacheScope,\n SearchPath,\n)\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import pluralize\n\n\nclass DockerBinary(BinaryPath):\n \"\"\"The `docker` binary.\"\"\"\n\n DEFAULT_SEARCH_PATH = SearchPath((\"/usr/bin\", \"/bin\", \"/usr/local/bin\"))\n\n def build_image(\n self,\n tags: tuple[str, ...],\n digest: Digest,\n dockerfile: str | None = None,\n build_args: DockerBuildArgs | None = None,\n env: Mapping[str, str] | None = None,\n ) -> Process:\n args = [self.path, \"build\"]\n\n for tag in tags:\n args.extend([\"-t\", tag])\n\n if build_args:\n for build_arg in build_args:\n args.extend([\"--build-arg\", build_arg])\n\n if dockerfile:\n args.extend([\"-f\", dockerfile])\n\n # Add build context root.\n args.append(\".\")\n\n return Process(\n argv=tuple(args),\n description=(\n f\"Building docker image {tags[0]}\"\n + (f\" +{pluralize(len(tags)-1, 'additional tag')}.\" if len(tags) > 1 else \".\")\n ),\n env=env,\n input_digest=digest,\n cache_scope=ProcessCacheScope.PER_SESSION,\n 
)\n\n def push_image(self, tags: tuple[str, ...]) -> Process | None:\n if not tags:\n return None\n\n return Process(\n argv=(self.path, \"push\", *tags),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Pushing docker image {tags[0]}\",\n )\n\n\n@dataclass(frozen=True)\nclass DockerBinaryRequest:\n search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH\n\n\n@rule(desc=\"Finding the `docker` binary\", level=LogLevel.DEBUG)\nasync def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:\n request = BinaryPathRequest(\n binary_name=\"docker\",\n search_path=docker_request.search_path,\n test=BinaryPathTest(args=[\"-v\"]),\n )\n paths = await Get(BinaryPaths, BinaryPathRequest, request)\n first_path = paths.first_path\n if not first_path:\n raise BinaryNotFoundError.from_request(request, rationale=\"interact with the docker daemon\")\n return DockerBinary(first_path.path, first_path.fingerprint)\n\n\n@rule\nasync def get_docker() -> DockerBinary:\n return await Get(DockerBinary, DockerBinaryRequest())\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/backend/docker/util_rules/docker_binary.py"}, {"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport logging\nfrom dataclasses import dataclass\nfrom itertools import chain\nfrom typing import cast\n\nfrom pants.backend.docker.goals.package_image import BuiltDockerImage\nfrom pants.backend.docker.subsystems.docker_options import DockerOptions\nfrom pants.backend.docker.target_types import DockerRegistriesField, DockerSkipPushField\nfrom pants.backend.docker.util_rules.docker_binary import DockerBinary\nfrom pants.core.goals.publish import (\n PublishFieldSet,\n PublishOutputData,\n PublishPackages,\n PublishProcesses,\n PublishRequest,\n)\nfrom pants.engine.process import InteractiveProcess\nfrom pants.engine.rules import collect_rules, rule\n\nlogger = logging.getLogger(__name__)\n\n\nclass PublishDockerImageRequest(PublishRequest):\n pass\n\n\n@dataclass(frozen=True)\nclass PublishDockerImageFieldSet(PublishFieldSet):\n publish_request_type = PublishDockerImageRequest\n required_fields = (DockerRegistriesField,)\n\n registries: DockerRegistriesField\n skip_push: DockerSkipPushField\n\n def get_output_data(self) -> PublishOutputData:\n return PublishOutputData(\n {\n \"publisher\": \"docker\",\n \"registries\": self.registries.value or (),\n **super().get_output_data(),\n }\n )\n\n\n@rule\nasync def push_docker_images(\n request: PublishDockerImageRequest, docker: DockerBinary, options: DockerOptions\n) -> PublishProcesses:\n tags = tuple(\n chain.from_iterable(\n cast(BuiltDockerImage, image).tags\n for pkg in request.packages\n for image in pkg.artifacts\n )\n )\n\n if request.field_set.skip_push.value:\n return PublishProcesses(\n [\n PublishPackages(\n names=tags,\n description=f\"(by `{request.field_set.skip_push.alias}` on {request.field_set.address})\",\n ),\n ]\n )\n\n process = docker.push_image(tags)\n return PublishProcesses(\n [\n PublishPackages(\n names=tags,\n process=InteractiveProcess.from_process(process) if process else None,\n ),\n ]\n )\n\n\ndef rules():\n return (\n *collect_rules(),\n *PublishDockerImageFieldSet.rules(),\n )\n", "path": "src/python/pants/backend/docker/goals/publish.py"}]} | 2,251 | 391 |
gh_patches_debug_39906 | rasdani/github-patches | git_diff | PaddlePaddle__PaddleSpeech-51 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Correct the error rate's computation for multiple sentences
</issue>
<code>
[start of tools/tune.py]
1 """Beam search parameters tuning for DeepSpeech2 model."""
2 from __future__ import absolute_import
3 from __future__ import division
4 from __future__ import print_function
5
6 import sys
7 import os
8 import numpy as np
9 import argparse
10 import functools
11 import gzip
12 import logging
13 import paddle.v2 as paddle
14 import _init_paths
15 from data_utils.data import DataGenerator
16 from decoders.swig_wrapper import Scorer
17 from decoders.swig_wrapper import ctc_beam_search_decoder_batch
18 from model_utils.model import deep_speech_v2_network
19 from utils.error_rate import wer, cer
20 from utils.utility import add_arguments, print_arguments
21
22 parser = argparse.ArgumentParser(description=__doc__)
23 add_arg = functools.partial(add_arguments, argparser=parser)
24 # yapf: disable
25 add_arg('num_batches', int, -1, "# of batches tuning on. "
26 "Default -1, on whole dev set.")
27 add_arg('batch_size', int, 256, "# of samples per batch.")
28 add_arg('trainer_count', int, 8, "# of Trainers (CPUs or GPUs).")
29 add_arg('beam_size', int, 500, "Beam search width.")
30 add_arg('num_proc_bsearch', int, 8, "# of CPUs for beam search.")
31 add_arg('num_proc_data', int, 8, "# of CPUs for data preprocessing.")
32 add_arg('num_conv_layers', int, 2, "# of convolution layers.")
33 add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
34 add_arg('rnn_layer_size', int, 2048, "# of recurrent cells per layer.")
35 add_arg('num_alphas', int, 45, "# of alpha candidates for tuning.")
36 add_arg('num_betas', int, 8, "# of beta candidates for tuning.")
37 add_arg('alpha_from', float, 1.0, "Where alpha starts tuning from.")
38 add_arg('alpha_to', float, 3.2, "Where alpha ends tuning with.")
39 add_arg('beta_from', float, 0.1, "Where beta starts tuning from.")
40 add_arg('beta_to', float, 0.45, "Where beta ends tuning with.")
41 add_arg('cutoff_prob', float, 1.0, "Cutoff probability for pruning.")
42 add_arg('cutoff_top_n', int, 40, "Cutoff number for pruning.")
43 add_arg('use_gru', bool, False, "Use GRUs instead of simple RNNs.")
44 add_arg('use_gpu', bool, True, "Use GPU or not.")
45 add_arg('share_rnn_weights',bool, True, "Share input-hidden weights across "
46 "bi-directional RNNs. Not for GRU.")
47 add_arg('tune_manifest', str,
48 'data/librispeech/manifest.dev-clean',
49 "Filepath of manifest to tune.")
50 add_arg('mean_std_path', str,
51 'data/librispeech/mean_std.npz',
52 "Filepath of normalizer's mean & std.")
53 add_arg('vocab_path', str,
54 'data/librispeech/vocab.txt',
55 "Filepath of vocabulary.")
56 add_arg('lang_model_path', str,
57 'models/lm/common_crawl_00.prune01111.trie.klm',
58 "Filepath for language model.")
59 add_arg('model_path', str,
60 './checkpoints/libri/params.latest.tar.gz',
61 "If None, the training starts from scratch, "
62 "otherwise, it resumes from the pre-trained model.")
63 add_arg('error_rate_type', str,
64 'wer',
65 "Error rate type for evaluation.",
66 choices=['wer', 'cer'])
67 add_arg('specgram_type', str,
68 'linear',
69 "Audio feature type. Options: linear, mfcc.",
70 choices=['linear', 'mfcc'])
71 # yapf: disable
72 args = parser.parse_args()
73
74
75 logging.basicConfig(
76 format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)d] %(message)s')
77
78 def tune():
79 """Tune parameters alpha and beta incrementally."""
80 if not args.num_alphas >= 0:
81 raise ValueError("num_alphas must be non-negative!")
82 if not args.num_betas >= 0:
83 raise ValueError("num_betas must be non-negative!")
84
85 data_generator = DataGenerator(
86 vocab_filepath=args.vocab_path,
87 mean_std_filepath=args.mean_std_path,
88 augmentation_config='{}',
89 specgram_type=args.specgram_type,
90 num_threads=args.num_proc_data,
91 keep_transcription_text=True,
92 num_conv_layers=args.num_conv_layers)
93
94 audio_data = paddle.layer.data(
95 name="audio_spectrogram",
96 type=paddle.data_type.dense_array(161 * 161))
97 text_data = paddle.layer.data(
98 name="transcript_text",
99 type=paddle.data_type.integer_value_sequence(data_generator.vocab_size))
100 seq_offset_data = paddle.layer.data(
101 name='sequence_offset',
102 type=paddle.data_type.integer_value_sequence(1))
103 seq_len_data = paddle.layer.data(
104 name='sequence_length',
105 type=paddle.data_type.integer_value_sequence(1))
106 index_range_datas = []
107 for i in xrange(args.num_rnn_layers):
108 index_range_datas.append(
109 paddle.layer.data(
110 name='conv%d_index_range' % i,
111 type=paddle.data_type.dense_vector(6)))
112
113 output_probs, _ = deep_speech_v2_network(
114 audio_data=audio_data,
115 text_data=text_data,
116 seq_offset_data=seq_offset_data,
117 seq_len_data=seq_len_data,
118 index_range_datas=index_range_datas,
119 dict_size=data_generator.vocab_size,
120 num_conv_layers=args.num_conv_layers,
121 num_rnn_layers=args.num_rnn_layers,
122 rnn_size=args.rnn_layer_size,
123 use_gru=args.use_gru,
124 share_rnn_weights=args.share_rnn_weights)
125
126 batch_reader = data_generator.batch_reader_creator(
127 manifest_path=args.tune_manifest,
128 batch_size=args.batch_size,
129 sortagrad=False,
130 shuffle_method=None)
131
132 # load parameters
133 if not os.path.isfile(args.model_path):
134 raise IOError("Invaid model path: %s" % args.model_path)
135 parameters = paddle.parameters.Parameters.from_tar(
136 gzip.open(args.model_path))
137
138 inferer = paddle.inference.Inference(
139 output_layer=output_probs, parameters=parameters)
140 # decoders only accept string encoded in utf-8
141 vocab_list = [chars.encode("utf-8") for chars in data_generator.vocab_list]
142
143 # init logger
144 logger = logging.getLogger("")
145 logger.setLevel(level=logging.INFO)
146 # init external scorer
147 logger.info("begin to initialize the external scorer for tuning")
148 if not os.path.isfile(args.lang_model_path):
149 raise IOError("Invaid language model path: %s" % args.lang_model_path)
150 ext_scorer = Scorer(
151 alpha=args.alpha_from,
152 beta=args.beta_from,
153 model_path=args.lang_model_path,
154 vocabulary=vocab_list)
155 logger.info("language model: "
156 "is_character_based = %d," % ext_scorer.is_character_based() +
157 " max_order = %d," % ext_scorer.get_max_order() +
158 " dict_size = %d" % ext_scorer.get_dict_size())
159 logger.info("end initializing scorer. Start tuning ...")
160
161 error_rate_func = cer if args.error_rate_type == 'cer' else wer
162 # create grid for search
163 cand_alphas = np.linspace(args.alpha_from, args.alpha_to, args.num_alphas)
164 cand_betas = np.linspace(args.beta_from, args.beta_to, args.num_betas)
165 params_grid = [(alpha, beta) for alpha in cand_alphas
166 for beta in cand_betas]
167
168 err_sum = [0.0 for i in xrange(len(params_grid))]
169 err_ave = [0.0 for i in xrange(len(params_grid))]
170 num_ins, cur_batch = 0, 0
171 ## incremental tuning parameters over multiple batches
172 for infer_data in batch_reader():
173 if (args.num_batches >= 0) and (cur_batch >= args.num_batches):
174 break
175 infer_results = inferer.infer(input=infer_data,
176 feeding=data_generator.feeding)
177 start_pos = [0] * (len(infer_data) + 1)
178 for i in xrange(len(infer_data)):
179 start_pos[i + 1] = start_pos[i] + infer_data[i][3][0]
180 probs_split = [
181 infer_results[start_pos[i]:start_pos[i + 1]]
182 for i in xrange(0, len(infer_data))
183 ]
184
185 target_transcripts = [ data[1] for data in infer_data ]
186
187 num_ins += len(target_transcripts)
188 # grid search
189 for index, (alpha, beta) in enumerate(params_grid):
190 # reset alpha & beta
191 ext_scorer.reset_params(alpha, beta)
192 beam_search_results = ctc_beam_search_decoder_batch(
193 probs_split=probs_split,
194 vocabulary=vocab_list,
195 beam_size=args.beam_size,
196 num_processes=args.num_proc_bsearch,
197 cutoff_prob=args.cutoff_prob,
198 cutoff_top_n=args.cutoff_top_n,
199 ext_scoring_func=ext_scorer, )
200
201 result_transcripts = [res[0][1] for res in beam_search_results]
202 for target, result in zip(target_transcripts, result_transcripts):
203 err_sum[index] += error_rate_func(target, result)
204 err_ave[index] = err_sum[index] / num_ins
205 if index % 2 == 0:
206 sys.stdout.write('.')
207 sys.stdout.flush()
208
209 # output on-line tuning result at the end of current batch
210 err_ave_min = min(err_ave)
211 min_index = err_ave.index(err_ave_min)
212 print("\nBatch %d [%d/?], current opt (alpha, beta) = (%s, %s), "
213 " min [%s] = %f" %(cur_batch, num_ins,
214 "%.3f" % params_grid[min_index][0],
215 "%.3f" % params_grid[min_index][1],
216 args.error_rate_type, err_ave_min))
217 cur_batch += 1
218
219 # output WER/CER at every (alpha, beta)
220 print("\nFinal %s:\n" % args.error_rate_type)
221 for index in xrange(len(params_grid)):
222 print("(alpha, beta) = (%s, %s), [%s] = %f"
223 % ("%.3f" % params_grid[index][0], "%.3f" % params_grid[index][1],
224 args.error_rate_type, err_ave[index]))
225
226 err_ave_min = min(err_ave)
227 min_index = err_ave.index(err_ave_min)
228 print("\nFinish tuning on %d batches, final opt (alpha, beta) = (%s, %s)"
229 % (args.num_batches, "%.3f" % params_grid[min_index][0],
230 "%.3f" % params_grid[min_index][1]))
231
232 logger.info("finish tuning")
233
234
235 def main():
236 print_arguments(args)
237 paddle.init(use_gpu=args.use_gpu,
238 rnn_use_batch=True,
239 trainer_count=args.trainer_count)
240 tune()
241
242
243 if __name__ == '__main__':
244 main()
245
[end of tools/tune.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/tune.py b/tools/tune.py
--- a/tools/tune.py
+++ b/tools/tune.py
@@ -16,7 +16,7 @@
from decoders.swig_wrapper import Scorer
from decoders.swig_wrapper import ctc_beam_search_decoder_batch
from model_utils.model import deep_speech_v2_network
-from utils.error_rate import wer, cer
+from utils.error_rate import char_errors, word_errors
from utils.utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__)
@@ -158,7 +158,7 @@
" dict_size = %d" % ext_scorer.get_dict_size())
logger.info("end initializing scorer. Start tuning ...")
- error_rate_func = cer if args.error_rate_type == 'cer' else wer
+ errors_func = char_errors if args.error_rate_type == 'cer' else word_errors
# create grid for search
cand_alphas = np.linspace(args.alpha_from, args.alpha_to, args.num_alphas)
cand_betas = np.linspace(args.beta_from, args.beta_to, args.num_betas)
@@ -167,7 +167,7 @@
err_sum = [0.0 for i in xrange(len(params_grid))]
err_ave = [0.0 for i in xrange(len(params_grid))]
- num_ins, cur_batch = 0, 0
+ num_ins, len_refs, cur_batch = 0, 0, 0
## incremental tuning parameters over multiple batches
for infer_data in batch_reader():
if (args.num_batches >= 0) and (cur_batch >= args.num_batches):
@@ -200,8 +200,14 @@
result_transcripts = [res[0][1] for res in beam_search_results]
for target, result in zip(target_transcripts, result_transcripts):
- err_sum[index] += error_rate_func(target, result)
- err_ave[index] = err_sum[index] / num_ins
+ errors, len_ref = errors_func(target, result)
+ err_sum[index] += errors
+ # accumulate the length of references of every batch
+ # in the first iteration
+ if args.alpha_from == alpha and args.beta_from == beta:
+ len_refs += len_ref
+
+ err_ave[index] = err_sum[index] / len_refs
if index % 2 == 0:
sys.stdout.write('.')
sys.stdout.flush()
@@ -226,7 +232,7 @@
err_ave_min = min(err_ave)
min_index = err_ave.index(err_ave_min)
print("\nFinish tuning on %d batches, final opt (alpha, beta) = (%s, %s)"
- % (args.num_batches, "%.3f" % params_grid[min_index][0],
+ % (cur_batch, "%.3f" % params_grid[min_index][0],
"%.3f" % params_grid[min_index][1]))
logger.info("finish tuning")
| {"golden_diff": "diff --git a/tools/tune.py b/tools/tune.py\n--- a/tools/tune.py\n+++ b/tools/tune.py\n@@ -16,7 +16,7 @@\n from decoders.swig_wrapper import Scorer\n from decoders.swig_wrapper import ctc_beam_search_decoder_batch\n from model_utils.model import deep_speech_v2_network\n-from utils.error_rate import wer, cer\n+from utils.error_rate import char_errors, word_errors\n from utils.utility import add_arguments, print_arguments\n \n parser = argparse.ArgumentParser(description=__doc__)\n@@ -158,7 +158,7 @@\n \" dict_size = %d\" % ext_scorer.get_dict_size())\n logger.info(\"end initializing scorer. Start tuning ...\")\n \n- error_rate_func = cer if args.error_rate_type == 'cer' else wer\n+ errors_func = char_errors if args.error_rate_type == 'cer' else word_errors\n # create grid for search\n cand_alphas = np.linspace(args.alpha_from, args.alpha_to, args.num_alphas)\n cand_betas = np.linspace(args.beta_from, args.beta_to, args.num_betas)\n@@ -167,7 +167,7 @@\n \n err_sum = [0.0 for i in xrange(len(params_grid))]\n err_ave = [0.0 for i in xrange(len(params_grid))]\n- num_ins, cur_batch = 0, 0\n+ num_ins, len_refs, cur_batch = 0, 0, 0\n ## incremental tuning parameters over multiple batches\n for infer_data in batch_reader():\n if (args.num_batches >= 0) and (cur_batch >= args.num_batches):\n@@ -200,8 +200,14 @@\n \n result_transcripts = [res[0][1] for res in beam_search_results]\n for target, result in zip(target_transcripts, result_transcripts):\n- err_sum[index] += error_rate_func(target, result)\n- err_ave[index] = err_sum[index] / num_ins\n+ errors, len_ref = errors_func(target, result)\n+ err_sum[index] += errors\n+ # accumulate the length of references of every batch\n+ # in the first iteration\n+ if args.alpha_from == alpha and args.beta_from == beta:\n+ len_refs += len_ref\n+\n+ err_ave[index] = err_sum[index] / len_refs\n if index % 2 == 0:\n sys.stdout.write('.')\n sys.stdout.flush()\n@@ -226,7 +232,7 @@\n err_ave_min = min(err_ave)\n min_index = err_ave.index(err_ave_min)\n print(\"\\nFinish tuning on %d batches, final opt (alpha, beta) = (%s, %s)\"\n- % (args.num_batches, \"%.3f\" % params_grid[min_index][0],\n+ % (cur_batch, \"%.3f\" % params_grid[min_index][0],\n \"%.3f\" % params_grid[min_index][1]))\n \n logger.info(\"finish tuning\")\n", "issue": "Correct the error rate's computation for multiple sentences\n\n", "before_files": [{"content": "\"\"\"Beam search parameters tuning for DeepSpeech2 model.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport sys\nimport os\nimport numpy as np\nimport argparse\nimport functools\nimport gzip\nimport logging\nimport paddle.v2 as paddle\nimport _init_paths\nfrom data_utils.data import DataGenerator\nfrom decoders.swig_wrapper import Scorer\nfrom decoders.swig_wrapper import ctc_beam_search_decoder_batch\nfrom model_utils.model import deep_speech_v2_network\nfrom utils.error_rate import wer, cer\nfrom utils.utility import add_arguments, print_arguments\n\nparser = argparse.ArgumentParser(description=__doc__)\nadd_arg = functools.partial(add_arguments, argparser=parser)\n# yapf: disable\nadd_arg('num_batches', int, -1, \"# of batches tuning on. 
\"\n \"Default -1, on whole dev set.\")\nadd_arg('batch_size', int, 256, \"# of samples per batch.\")\nadd_arg('trainer_count', int, 8, \"# of Trainers (CPUs or GPUs).\")\nadd_arg('beam_size', int, 500, \"Beam search width.\")\nadd_arg('num_proc_bsearch', int, 8, \"# of CPUs for beam search.\")\nadd_arg('num_proc_data', int, 8, \"# of CPUs for data preprocessing.\")\nadd_arg('num_conv_layers', int, 2, \"# of convolution layers.\")\nadd_arg('num_rnn_layers', int, 3, \"# of recurrent layers.\")\nadd_arg('rnn_layer_size', int, 2048, \"# of recurrent cells per layer.\")\nadd_arg('num_alphas', int, 45, \"# of alpha candidates for tuning.\")\nadd_arg('num_betas', int, 8, \"# of beta candidates for tuning.\")\nadd_arg('alpha_from', float, 1.0, \"Where alpha starts tuning from.\")\nadd_arg('alpha_to', float, 3.2, \"Where alpha ends tuning with.\")\nadd_arg('beta_from', float, 0.1, \"Where beta starts tuning from.\")\nadd_arg('beta_to', float, 0.45, \"Where beta ends tuning with.\")\nadd_arg('cutoff_prob', float, 1.0, \"Cutoff probability for pruning.\")\nadd_arg('cutoff_top_n', int, 40, \"Cutoff number for pruning.\")\nadd_arg('use_gru', bool, False, \"Use GRUs instead of simple RNNs.\")\nadd_arg('use_gpu', bool, True, \"Use GPU or not.\")\nadd_arg('share_rnn_weights',bool, True, \"Share input-hidden weights across \"\n \"bi-directional RNNs. Not for GRU.\")\nadd_arg('tune_manifest', str,\n 'data/librispeech/manifest.dev-clean',\n \"Filepath of manifest to tune.\")\nadd_arg('mean_std_path', str,\n 'data/librispeech/mean_std.npz',\n \"Filepath of normalizer's mean & std.\")\nadd_arg('vocab_path', str,\n 'data/librispeech/vocab.txt',\n \"Filepath of vocabulary.\")\nadd_arg('lang_model_path', str,\n 'models/lm/common_crawl_00.prune01111.trie.klm',\n \"Filepath for language model.\")\nadd_arg('model_path', str,\n './checkpoints/libri/params.latest.tar.gz',\n \"If None, the training starts from scratch, \"\n \"otherwise, it resumes from the pre-trained model.\")\nadd_arg('error_rate_type', str,\n 'wer',\n \"Error rate type for evaluation.\",\n choices=['wer', 'cer'])\nadd_arg('specgram_type', str,\n 'linear',\n \"Audio feature type. 
Options: linear, mfcc.\",\n choices=['linear', 'mfcc'])\n# yapf: disable\nargs = parser.parse_args()\n\n\nlogging.basicConfig(\n format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)d] %(message)s')\n\ndef tune():\n \"\"\"Tune parameters alpha and beta incrementally.\"\"\"\n if not args.num_alphas >= 0:\n raise ValueError(\"num_alphas must be non-negative!\")\n if not args.num_betas >= 0:\n raise ValueError(\"num_betas must be non-negative!\")\n\n data_generator = DataGenerator(\n vocab_filepath=args.vocab_path,\n mean_std_filepath=args.mean_std_path,\n augmentation_config='{}',\n specgram_type=args.specgram_type,\n num_threads=args.num_proc_data,\n keep_transcription_text=True,\n num_conv_layers=args.num_conv_layers)\n\n audio_data = paddle.layer.data(\n name=\"audio_spectrogram\",\n type=paddle.data_type.dense_array(161 * 161))\n text_data = paddle.layer.data(\n name=\"transcript_text\",\n type=paddle.data_type.integer_value_sequence(data_generator.vocab_size))\n seq_offset_data = paddle.layer.data(\n name='sequence_offset',\n type=paddle.data_type.integer_value_sequence(1))\n seq_len_data = paddle.layer.data(\n name='sequence_length',\n type=paddle.data_type.integer_value_sequence(1))\n index_range_datas = []\n for i in xrange(args.num_rnn_layers):\n index_range_datas.append(\n paddle.layer.data(\n name='conv%d_index_range' % i,\n type=paddle.data_type.dense_vector(6)))\n\n output_probs, _ = deep_speech_v2_network(\n audio_data=audio_data,\n text_data=text_data,\n seq_offset_data=seq_offset_data,\n seq_len_data=seq_len_data,\n index_range_datas=index_range_datas,\n dict_size=data_generator.vocab_size,\n num_conv_layers=args.num_conv_layers,\n num_rnn_layers=args.num_rnn_layers,\n rnn_size=args.rnn_layer_size,\n use_gru=args.use_gru,\n share_rnn_weights=args.share_rnn_weights)\n\n batch_reader = data_generator.batch_reader_creator(\n manifest_path=args.tune_manifest,\n batch_size=args.batch_size,\n sortagrad=False,\n shuffle_method=None)\n\n # load parameters\n if not os.path.isfile(args.model_path):\n raise IOError(\"Invaid model path: %s\" % args.model_path)\n parameters = paddle.parameters.Parameters.from_tar(\n gzip.open(args.model_path))\n\n inferer = paddle.inference.Inference(\n output_layer=output_probs, parameters=parameters)\n # decoders only accept string encoded in utf-8\n vocab_list = [chars.encode(\"utf-8\") for chars in data_generator.vocab_list]\n\n # init logger\n logger = logging.getLogger(\"\")\n logger.setLevel(level=logging.INFO)\n # init external scorer\n logger.info(\"begin to initialize the external scorer for tuning\")\n if not os.path.isfile(args.lang_model_path):\n raise IOError(\"Invaid language model path: %s\" % args.lang_model_path)\n ext_scorer = Scorer(\n alpha=args.alpha_from,\n beta=args.beta_from,\n model_path=args.lang_model_path,\n vocabulary=vocab_list)\n logger.info(\"language model: \"\n \"is_character_based = %d,\" % ext_scorer.is_character_based() +\n \" max_order = %d,\" % ext_scorer.get_max_order() +\n \" dict_size = %d\" % ext_scorer.get_dict_size())\n logger.info(\"end initializing scorer. 
Start tuning ...\")\n\n error_rate_func = cer if args.error_rate_type == 'cer' else wer\n # create grid for search\n cand_alphas = np.linspace(args.alpha_from, args.alpha_to, args.num_alphas)\n cand_betas = np.linspace(args.beta_from, args.beta_to, args.num_betas)\n params_grid = [(alpha, beta) for alpha in cand_alphas\n for beta in cand_betas]\n\n err_sum = [0.0 for i in xrange(len(params_grid))]\n err_ave = [0.0 for i in xrange(len(params_grid))]\n num_ins, cur_batch = 0, 0\n ## incremental tuning parameters over multiple batches\n for infer_data in batch_reader():\n if (args.num_batches >= 0) and (cur_batch >= args.num_batches):\n break\n infer_results = inferer.infer(input=infer_data,\n feeding=data_generator.feeding)\n start_pos = [0] * (len(infer_data) + 1)\n for i in xrange(len(infer_data)):\n start_pos[i + 1] = start_pos[i] + infer_data[i][3][0]\n probs_split = [\n infer_results[start_pos[i]:start_pos[i + 1]]\n for i in xrange(0, len(infer_data))\n ]\n\n target_transcripts = [ data[1] for data in infer_data ]\n\n num_ins += len(target_transcripts)\n # grid search\n for index, (alpha, beta) in enumerate(params_grid):\n # reset alpha & beta\n ext_scorer.reset_params(alpha, beta)\n beam_search_results = ctc_beam_search_decoder_batch(\n probs_split=probs_split,\n vocabulary=vocab_list,\n beam_size=args.beam_size,\n num_processes=args.num_proc_bsearch,\n cutoff_prob=args.cutoff_prob,\n cutoff_top_n=args.cutoff_top_n,\n ext_scoring_func=ext_scorer, )\n\n result_transcripts = [res[0][1] for res in beam_search_results]\n for target, result in zip(target_transcripts, result_transcripts):\n err_sum[index] += error_rate_func(target, result)\n err_ave[index] = err_sum[index] / num_ins\n if index % 2 == 0:\n sys.stdout.write('.')\n sys.stdout.flush()\n\n # output on-line tuning result at the end of current batch\n err_ave_min = min(err_ave)\n min_index = err_ave.index(err_ave_min)\n print(\"\\nBatch %d [%d/?], current opt (alpha, beta) = (%s, %s), \"\n \" min [%s] = %f\" %(cur_batch, num_ins,\n \"%.3f\" % params_grid[min_index][0],\n \"%.3f\" % params_grid[min_index][1],\n args.error_rate_type, err_ave_min))\n cur_batch += 1\n\n # output WER/CER at every (alpha, beta)\n print(\"\\nFinal %s:\\n\" % args.error_rate_type)\n for index in xrange(len(params_grid)):\n print(\"(alpha, beta) = (%s, %s), [%s] = %f\"\n % (\"%.3f\" % params_grid[index][0], \"%.3f\" % params_grid[index][1],\n args.error_rate_type, err_ave[index]))\n\n err_ave_min = min(err_ave)\n min_index = err_ave.index(err_ave_min)\n print(\"\\nFinish tuning on %d batches, final opt (alpha, beta) = (%s, %s)\"\n % (args.num_batches, \"%.3f\" % params_grid[min_index][0],\n \"%.3f\" % params_grid[min_index][1]))\n\n logger.info(\"finish tuning\")\n\n\ndef main():\n print_arguments(args)\n paddle.init(use_gpu=args.use_gpu,\n rnn_use_batch=True,\n trainer_count=args.trainer_count)\n tune()\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/tune.py"}]} | 3,661 | 668 |
gh_patches_debug_10806 | rasdani/github-patches | git_diff | Kinto__kinto-850 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Return 400 if a group contains system.Everyone or a group URL
Unless I'm mistaken:
- We don't support groups for anonymous requests
- We don't support recursivity in groups definitions
So we should reject with `400` if such groups definitons are created
</issue>
<code>
[start of kinto/views/groups.py]
1 import colander
2
3 from kinto.core import resource, utils
4 from kinto.core.events import ResourceChanged, ACTIONS
5 from pyramid.events import subscriber
6
7
8 class GroupSchema(resource.ResourceSchema):
9 members = colander.SchemaNode(colander.Sequence(),
10 colander.SchemaNode(colander.String()))
11
12
13 @resource.register(name='group',
14 collection_path='/buckets/{{bucket_id}}/groups',
15 record_path='/buckets/{{bucket_id}}/groups/{{id}}')
16 class Group(resource.ShareableResource):
17 mapping = GroupSchema()
18
19 def get_parent_id(self, request):
20 bucket_id = request.matchdict['bucket_id']
21 parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)
22 return parent_id
23
24
25 @subscriber(ResourceChanged,
26 for_resources=('group',),
27 for_actions=(ACTIONS.DELETE,))
28 def on_groups_deleted(event):
29 """Some groups were deleted, remove them from users principals.
30 """
31 permission_backend = event.request.registry.permission
32
33 for change in event.impacted_records:
34 group = change['old']
35 bucket_id = event.payload['bucket_id']
36 group_uri = utils.instance_uri(event.request, 'group',
37 bucket_id=bucket_id,
38 id=group['id'])
39
40 permission_backend.remove_principal(group_uri)
41
42
43 @subscriber(ResourceChanged,
44 for_resources=('group',),
45 for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))
46 def on_groups_changed(event):
47 """Some groups were changed, update users principals.
48 """
49 permission_backend = event.request.registry.permission
50
51 for change in event.impacted_records:
52 if 'old' in change:
53 existing_record_members = set(change['old'].get('members', []))
54 else:
55 existing_record_members = set()
56
57 group = change['new']
58 group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],
59 **event.payload)
60 new_record_members = set(group.get('members', []))
61 new_members = new_record_members - existing_record_members
62 removed_members = existing_record_members - new_record_members
63
64 for member in new_members:
65 # Add the group to the member principal.
66 permission_backend.add_user_principal(member, group_uri)
67
68 for member in removed_members:
69 # Remove the group from the member principal.
70 permission_backend.remove_user_principal(member, group_uri)
71
[end of kinto/views/groups.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/groups.py b/kinto/views/groups.py
--- a/kinto/views/groups.py
+++ b/kinto/views/groups.py
@@ -5,9 +5,15 @@
from pyramid.events import subscriber
+def validate_member(node, member):
+ if member.startswith('/buckets/') or member == 'system.Everyone':
+ raise colander.Invalid(node, "%r is not a valid user ID." % member)
+
+
class GroupSchema(resource.ResourceSchema):
members = colander.SchemaNode(colander.Sequence(),
- colander.SchemaNode(colander.String()))
+ colander.SchemaNode(colander.String(),
+ validator=validate_member))
@resource.register(name='group',
| {"golden_diff": "diff --git a/kinto/views/groups.py b/kinto/views/groups.py\n--- a/kinto/views/groups.py\n+++ b/kinto/views/groups.py\n@@ -5,9 +5,15 @@\n from pyramid.events import subscriber\n \n \n+def validate_member(node, member):\n+ if member.startswith('/buckets/') or member == 'system.Everyone':\n+ raise colander.Invalid(node, \"%r is not a valid user ID.\" % member)\n+\n+\n class GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n- colander.SchemaNode(colander.String()))\n+ colander.SchemaNode(colander.String(),\n+ validator=validate_member))\n \n \n @resource.register(name='group',\n", "issue": "Return 400 if a group contains system.Everyone or a group URL\nUnless I'm mistaken:\n- We don't support groups for anonymous requests\n- We don't support recursivity in groups definitions\n\nSo we should reject with `400` if such groups definitons are created\n\n", "before_files": [{"content": "import colander\n\nfrom kinto.core import resource, utils\nfrom kinto.core.events import ResourceChanged, ACTIONS\nfrom pyramid.events import subscriber\n\n\nclass GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String()))\n\n\[email protected](name='group',\n collection_path='/buckets/{{bucket_id}}/groups',\n record_path='/buckets/{{bucket_id}}/groups/{{id}}')\nclass Group(resource.ShareableResource):\n mapping = GroupSchema()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)\n return parent_id\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.DELETE,))\ndef on_groups_deleted(event):\n \"\"\"Some groups were deleted, remove them from users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n group = change['old']\n bucket_id = event.payload['bucket_id']\n group_uri = utils.instance_uri(event.request, 'group',\n bucket_id=bucket_id,\n id=group['id'])\n\n permission_backend.remove_principal(group_uri)\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))\ndef on_groups_changed(event):\n \"\"\"Some groups were changed, update users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n if 'old' in change:\n existing_record_members = set(change['old'].get('members', []))\n else:\n existing_record_members = set()\n\n group = change['new']\n group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],\n **event.payload)\n new_record_members = set(group.get('members', []))\n new_members = new_record_members - existing_record_members\n removed_members = existing_record_members - new_record_members\n\n for member in new_members:\n # Add the group to the member principal.\n permission_backend.add_user_principal(member, group_uri)\n\n for member in removed_members:\n # Remove the group from the member principal.\n permission_backend.remove_user_principal(member, group_uri)\n", "path": "kinto/views/groups.py"}]} | 1,224 | 150 |
gh_patches_debug_16119 | rasdani/github-patches | git_diff | conan-io__conan-center-index-549 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] make/4.2.1: not building for Linux Clang 9
The recipe for `make/4.2.1` does not build under Linux Clang 9.
After generating all the index packages for Clang 9, the ones for this library failed to compile. In this case it doesn't matter that much as this is packaging a tool to be used as a build requirement.
Related to #211
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **make/4.2.1**
* Operating System+version: **Linux Ubuntu 18.04**
* Compiler+version: **Clang 9**
* Conan version: **conan 1.21.0**
* Python version: **Python 3.7.4**
</issue>
<code>
[start of recipes/make/all/conanfile.py]
1 from conans import ConanFile, tools, AutoToolsBuildEnvironment
2 import os
3
4
5 class MakeConan(ConanFile):
6 name = "make"
7 description = "GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files"
8 topics = ("conan", "make", "build", "makefile")
9 url = "https://github.com/conan-io/conan-center-index"
10 homepage = "https://www.gnu.org/software/make/"
11 license = "GPL-3.0-or-later"
12 settings = "os_build", "arch_build", "compiler"
13 _source_subfolder = "source_subfolder"
14
15 def source(self):
16 tools.get(**self.conan_data["sources"][self.version])
17 extracted_dir = "make-" + self.version
18 os.rename(extracted_dir, self._source_subfolder)
19
20 def configure(self):
21 del self.settings.compiler.libcxx
22 del self.settings.compiler.cppstd
23
24 def build(self):
25 with tools.chdir(self._source_subfolder):
26 # README.W32
27 if self.settings.os_build == "Windows":
28 if self.settings.compiler == "Visual Studio":
29 command = "build_w32.bat --without-guile"
30 else:
31 command = "build_w32.bat --without-guile gcc"
32 else:
33 env_build = AutoToolsBuildEnvironment(self)
34 env_build.configure()
35 command = "./build.sh"
36 with tools.vcvars(self.settings) if self.settings.compiler == "Visual Studio" else tools.no_op():
37 self.run(command)
38
39 def package(self):
40 self.copy(pattern="COPYING", dst="licenses", src=self._source_subfolder)
41 self.copy(pattern="make", dst="bin", src=self._source_subfolder, keep_path=False)
42 self.copy(pattern="*gnumake.exe", dst="bin", src=self._source_subfolder, keep_path=False)
43
44 def package_info(self):
45 make = "gnumake.exe" if self.settings.os_build == "Windows" else "make"
46 make = os.path.join(self.package_folder, "bin", make)
47 self.output.info('Creating CONAN_MAKE_PROGRAM environment variable: %s' % make)
48 self.env_info.CONAN_MAKE_PROGRAM = make
49
50 def package_id(self):
51 del self.info.settings.compiler
52
[end of recipes/make/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/make/all/conanfile.py b/recipes/make/all/conanfile.py
--- a/recipes/make/all/conanfile.py
+++ b/recipes/make/all/conanfile.py
@@ -10,6 +10,7 @@
homepage = "https://www.gnu.org/software/make/"
license = "GPL-3.0-or-later"
settings = "os_build", "arch_build", "compiler"
+ exports_sources = ["patches/*"]
_source_subfolder = "source_subfolder"
def source(self):
@@ -22,6 +23,9 @@
del self.settings.compiler.cppstd
def build(self):
+ for patch in self.conan_data["patches"][self.version]:
+ tools.patch(**patch)
+
with tools.chdir(self._source_subfolder):
# README.W32
if self.settings.os_build == "Windows":
| {"golden_diff": "diff --git a/recipes/make/all/conanfile.py b/recipes/make/all/conanfile.py\n--- a/recipes/make/all/conanfile.py\n+++ b/recipes/make/all/conanfile.py\n@@ -10,6 +10,7 @@\n homepage = \"https://www.gnu.org/software/make/\"\n license = \"GPL-3.0-or-later\"\n settings = \"os_build\", \"arch_build\", \"compiler\"\n+ exports_sources = [\"patches/*\"]\n _source_subfolder = \"source_subfolder\"\n \n def source(self):\n@@ -22,6 +23,9 @@\n del self.settings.compiler.cppstd\n \n def build(self):\n+ for patch in self.conan_data[\"patches\"][self.version]:\n+ tools.patch(**patch)\n+\n with tools.chdir(self._source_subfolder):\n # README.W32\n if self.settings.os_build == \"Windows\":\n", "issue": "[package] make/4.2.1: not building for Linux Clang 9\nThe recipe for `make/4.2.1` does not build under Linux Clang 9.\r\n\r\nAfter generating all the index packages for Clang 9, the ones for this library failed to compile. In this case it doesn't matter that much as this is packaging a tool to be used as a build requirement.\r\n\r\nRelated to #211 \r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **make/4.2.1**\r\n * Operating System+version: **Linux Ubuntu 18.04**\r\n * Compiler+version: **Clang 9**\r\n * Conan version: **conan 1.21.0**\r\n * Python version: **Python 3.7.4**\n", "before_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment\nimport os\n\n\nclass MakeConan(ConanFile):\n name = \"make\"\n description = \"GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files\"\n topics = (\"conan\", \"make\", \"build\", \"makefile\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.gnu.org/software/make/\"\n license = \"GPL-3.0-or-later\"\n settings = \"os_build\", \"arch_build\", \"compiler\"\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"make-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def build(self):\n with tools.chdir(self._source_subfolder):\n # README.W32\n if self.settings.os_build == \"Windows\":\n if self.settings.compiler == \"Visual Studio\":\n command = \"build_w32.bat --without-guile\"\n else:\n command = \"build_w32.bat --without-guile gcc\"\n else:\n env_build = AutoToolsBuildEnvironment(self)\n env_build.configure()\n command = \"./build.sh\"\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n self.run(command)\n\n def package(self):\n self.copy(pattern=\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(pattern=\"make\", dst=\"bin\", src=self._source_subfolder, keep_path=False)\n self.copy(pattern=\"*gnumake.exe\", dst=\"bin\", src=self._source_subfolder, keep_path=False)\n\n def package_info(self):\n make = \"gnumake.exe\" if self.settings.os_build == \"Windows\" else \"make\"\n make = os.path.join(self.package_folder, \"bin\", make)\n self.output.info('Creating CONAN_MAKE_PROGRAM environment variable: %s' % make)\n self.env_info.CONAN_MAKE_PROGRAM = make\n\n def package_id(self):\n del self.info.settings.compiler\n", "path": "recipes/make/all/conanfile.py"}]} | 1,321 | 201 |
gh_patches_debug_25779 | rasdani/github-patches | git_diff | weecology__retriever-1004 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update FIA links
It's that time of year again when FIA puts out a new release and moves things around. See https://github.com/weecology/retriever/issues/865#issuecomment-325588307
We need to track down the most recent links and update the script.
Thanks to @nestorperez for reporting this.
</issue>
<code>
[start of scripts/forest_inventory_analysis.py]
1 #retriever
2 """Retriever script for Forest Inventory and Analysis
3
4 """
5 from __future__ import print_function
6 from future import standard_library
7 standard_library.install_aliases()
8
9 import os
10
11 from retriever.lib.templates import Script
12 from retriever.lib.models import Table
13 from retriever import open_fr, open_fw, VERSION
14 from pkg_resources import parse_version
15
16
17 class main(Script):
18 def __init__(self, **kwargs):
19 Script.__init__(self, **kwargs)
20 self.title = "Forest Inventory and Analysis"
21 self.name = "forest-inventory-analysis"
22 self.retriever_minimum_version = '2.0.dev'
23 self.version = '1.4.0'
24 self.ref = "http://fia.fs.fed.us/"
25 self.urls = {"main": "https://apps.fs.usda.gov/fiadb-downloads/CSV/",
26 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}
27 self.keywords = ["plants", "continental-scale", "observational"]
28 self.citation = "DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. [Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]"
29 self.description = """WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing."""
30 self.addendum = """This dataset requires downloading many large files - please be patient."""
31
32 if parse_version(VERSION) <= parse_version("2.0.0"):
33 self.shortname = self.name
34 self.name = self.title
35 self.tags = self.keywords
36
37 def download(self, engine=None, debug=False):
38 Script.download(self, engine, debug)
39 engine = self.engine
40
41 # download and create species table
42 table = Table('species')
43 self.engine.auto_create_table(table, url=self.urls['species'])
44 self.engine.insert_data_from_url(self.urls['species'])
45
46 # State abbreviations with the year annual inventory began for that state
47 stateslist = [('AL', 2001), ('AK', 2004), ('AZ', 2001), ('AR', 2000),
48 ('CA', 2001), ('CO', 2002), ('CT', 2003), ('DE', 2004),
49 ('FL', 2003), ('GA', 1998), ('ID', 2004), ('IL', 2001),
50 ('IN', 1999), ('IA', 1999), ('KS', 2001), ('KY', 1999),
51 ('LA', 2001), ('ME', 1999), ('MD', 2004), ('MA', 2003),
52 ('MI', 2000), ('MN', 1999), ('MO', 1999), ('MS', 2006),
53 ('MT', 2003), ('NE', 2001), ('NV', 2004), ('NH', 2002),
54 ('NJ', 2004), ('NM', 1999), ('NY', 2002), ('NC', 2003),
55 ('ND', 2001), ('OH', 2001), ('OK', 2008), ('OR', 2001),
56 ('PA', 2000), ('RI', 2003), ('SC', 1999), ('SD', 2001),
57 ('TN', 2000), ('TX', 2001), ('UT', 2000), ('VT', 2003),
58 ('VA', 1998), ('WA', 2002), ('WV', 2004), ('WI', 2000),
59 ('WY', 2000), ('PR', 2001)]
60
61 tablelist = ["SURVEY", "PLOT", "COND", "SUBPLOT", "SUBP_COND", "TREE", "SEEDLING"]
62
63 for table in tablelist:
64 for state, year in stateslist:
65 engine.download_files_from_archive(self.urls["main"] + state + "_" + table + ".ZIP",
66 [state + "_" + table + ".csv"])
67
68 for table in tablelist:
69 print("Scanning data for table %s..." % table)
70 prep_file_name = "%s.csv" % table
71 prep_file = open_fw(engine.format_filename(prep_file_name))
72 this_file = open_fr(engine.format_filename(stateslist[0][0] + "_" + table + ".csv"))
73 col_names = this_file.readline()
74 prep_file.write(col_names)
75 column_names = [col.strip('"') for col in col_names.split(',')]
76 year_column = column_names.index("INVYR")
77 this_file.close()
78
79 for state, year in stateslist:
80 this_file = open_fr(engine.format_filename(state + "_" + table + ".csv"))
81 this_file.readline()
82 for line in this_file:
83 values = line.split(',')
84 this_year = values[year_column]
85 if int(this_year) >= year:
86 prep_file.write(line)
87 prep_file.close()
88 engine.auto_create_table(Table(table), filename=prep_file_name)
89
90 engine.insert_data_from_file(engine.format_filename(prep_file_name))
91
92 try:
93 os.remove(engine.format_filename(prep_file_name))
94 except:
95 pass
96
97 return engine
98
99
100 SCRIPT = main()
101
[end of scripts/forest_inventory_analysis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/forest_inventory_analysis.py b/scripts/forest_inventory_analysis.py
--- a/scripts/forest_inventory_analysis.py
+++ b/scripts/forest_inventory_analysis.py
@@ -20,10 +20,10 @@
self.title = "Forest Inventory and Analysis"
self.name = "forest-inventory-analysis"
self.retriever_minimum_version = '2.0.dev'
- self.version = '1.4.0'
+ self.version = '1.4.1'
self.ref = "http://fia.fs.fed.us/"
- self.urls = {"main": "https://apps.fs.usda.gov/fiadb-downloads/CSV/",
- 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}
+ self.urls = {"main": "https://apps.fs.usda.gov/fia/datamart/CSV/",
+ 'species': 'https://apps.fs.usda.gov/fia/datamart/CSV/REF_SPECIES.csv'}
self.keywords = ["plants", "continental-scale", "observational"]
self.citation = "DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. [Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]"
self.description = """WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing."""
| {"golden_diff": "diff --git a/scripts/forest_inventory_analysis.py b/scripts/forest_inventory_analysis.py\n--- a/scripts/forest_inventory_analysis.py\n+++ b/scripts/forest_inventory_analysis.py\n@@ -20,10 +20,10 @@\n self.title = \"Forest Inventory and Analysis\"\n self.name = \"forest-inventory-analysis\"\n self.retriever_minimum_version = '2.0.dev'\n- self.version = '1.4.0'\n+ self.version = '1.4.1'\n self.ref = \"http://fia.fs.fed.us/\"\n- self.urls = {\"main\": \"https://apps.fs.usda.gov/fiadb-downloads/CSV/\",\n- 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}\n+ self.urls = {\"main\": \"https://apps.fs.usda.gov/fia/datamart/CSV/\",\n+ 'species': 'https://apps.fs.usda.gov/fia/datamart/CSV/REF_SPECIES.csv'}\n self.keywords = [\"plants\", \"continental-scale\", \"observational\"]\n self.citation = \"DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. [Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]\"\n self.description = \"\"\"WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing.\"\"\"\n", "issue": "Update FIA links\nIt's that time of year again where FIA puts out a new release and moves things around. See https://github.com/weecology/retriever/issues/865#issuecomment-325588307\r\n\r\nWe need to track down the most recent links and update the script.\r\n\r\nThanks to @nestorperez for reporting this.\n", "before_files": [{"content": "#retriever\n\"\"\"Retriever script for Forest Inventory and Analysis\n\n\"\"\"\nfrom __future__ import print_function\nfrom future import standard_library\nstandard_library.install_aliases()\n\nimport os\n\nfrom retriever.lib.templates import Script\nfrom retriever.lib.models import Table\nfrom retriever import open_fr, open_fw, VERSION\nfrom pkg_resources import parse_version\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.title = \"Forest Inventory and Analysis\"\n self.name = \"forest-inventory-analysis\"\n self.retriever_minimum_version = '2.0.dev'\n self.version = '1.4.0'\n self.ref = \"http://fia.fs.fed.us/\"\n self.urls = {\"main\": \"https://apps.fs.usda.gov/fiadb-downloads/CSV/\",\n 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}\n self.keywords = [\"plants\", \"continental-scale\", \"observational\"]\n self.citation = \"DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. 
[Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]\"\n self.description = \"\"\"WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing.\"\"\"\n self.addendum = \"\"\"This dataset requires downloading many large files - please be patient.\"\"\"\n \n if parse_version(VERSION) <= parse_version(\"2.0.0\"):\n self.shortname = self.name\n self.name = self.title\n self.tags = self.keywords\n\n def download(self, engine=None, debug=False):\n Script.download(self, engine, debug)\n engine = self.engine\n\n # download and create species table\n table = Table('species')\n self.engine.auto_create_table(table, url=self.urls['species'])\n self.engine.insert_data_from_url(self.urls['species'])\n\n # State abbreviations with the year annual inventory began for that state\n stateslist = [('AL', 2001), ('AK', 2004), ('AZ', 2001), ('AR', 2000),\n ('CA', 2001), ('CO', 2002), ('CT', 2003), ('DE', 2004),\n ('FL', 2003), ('GA', 1998), ('ID', 2004), ('IL', 2001),\n ('IN', 1999), ('IA', 1999), ('KS', 2001), ('KY', 1999),\n ('LA', 2001), ('ME', 1999), ('MD', 2004), ('MA', 2003),\n ('MI', 2000), ('MN', 1999), ('MO', 1999), ('MS', 2006),\n ('MT', 2003), ('NE', 2001), ('NV', 2004), ('NH', 2002),\n ('NJ', 2004), ('NM', 1999), ('NY', 2002), ('NC', 2003),\n ('ND', 2001), ('OH', 2001), ('OK', 2008), ('OR', 2001),\n ('PA', 2000), ('RI', 2003), ('SC', 1999), ('SD', 2001),\n ('TN', 2000), ('TX', 2001), ('UT', 2000), ('VT', 2003),\n ('VA', 1998), ('WA', 2002), ('WV', 2004), ('WI', 2000),\n ('WY', 2000), ('PR', 2001)]\n\n tablelist = [\"SURVEY\", \"PLOT\", \"COND\", \"SUBPLOT\", \"SUBP_COND\", \"TREE\", \"SEEDLING\"]\n\n for table in tablelist:\n for state, year in stateslist:\n engine.download_files_from_archive(self.urls[\"main\"] + state + \"_\" + table + \".ZIP\",\n [state + \"_\" + table + \".csv\"])\n\n for table in tablelist:\n print(\"Scanning data for table %s...\" % table)\n prep_file_name = \"%s.csv\" % table\n prep_file = open_fw(engine.format_filename(prep_file_name))\n this_file = open_fr(engine.format_filename(stateslist[0][0] + \"_\" + table + \".csv\"))\n col_names = this_file.readline()\n prep_file.write(col_names)\n column_names = [col.strip('\"') for col in col_names.split(',')]\n year_column = column_names.index(\"INVYR\")\n this_file.close()\n\n for state, year in stateslist:\n this_file = open_fr(engine.format_filename(state + \"_\" + table + \".csv\"))\n this_file.readline()\n for line in this_file:\n values = line.split(',')\n this_year = values[year_column]\n if int(this_year) >= year:\n prep_file.write(line)\n prep_file.close()\n engine.auto_create_table(Table(table), filename=prep_file_name)\n\n engine.insert_data_from_file(engine.format_filename(prep_file_name))\n\n try:\n os.remove(engine.format_filename(prep_file_name))\n except:\n pass\n\n return engine\n\n\nSCRIPT = main()\n", "path": "scripts/forest_inventory_analysis.py"}]} | 2,072 | 320 |
gh_patches_debug_1351 | rasdani/github-patches | git_diff | ibis-project__ibis-2249 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: Multiple aliases on the same column not behaving as expected
``` python
column = table.some_column
table.projection(
[
column.name("alias1"),
column.name("alias2"),
column.name("alias3"),
]
)
```
I think the expected behavior would be a table expression with:
```
Selection[table]
table:
Table: ref_0
selections:
alias1 = Column[float64*] 'some_column' from table
ref_0
alias2 = Column[float64*] 'some_column' from table
ref_0
alias3 = Column[float64*] 'some_column' from table
ref_0
```
However, the result I'm getting is:
```
Selection[table]
table:
Table: ref_0
selections:
alias1 = Column[float64*] 'some_column' from table
ref_0
alias1 = Column[float64*] 'some_column' from table
ref_0
alias1 = Column[float64*] 'some_column' from table
ref_0
```
</issue>
<code>
[start of ibis/expr/format.py]
1 import ibis.expr.operations as ops
2 import ibis.expr.types as ir
3 import ibis.util as util
4
5
6 class FormatMemo:
7 # A little sanity hack to simplify the below
8
9 def __init__(self):
10 from collections import defaultdict
11
12 self.formatted = {}
13 self.aliases = {}
14 self.ops = {}
15 self.counts = defaultdict(int)
16 self._repr_memo = {}
17 self.subexprs = {}
18 self.visit_memo = set()
19
20 def __contains__(self, obj):
21 return self._key(obj) in self.formatted
22
23 def _key(self, expr):
24 memo = self._repr_memo
25 try:
26 result = memo[expr]
27 except KeyError:
28 result = memo[expr] = self._format(expr)
29 return result
30
31 def _format(self, expr):
32 return expr.op()._repr(memo=self)
33
34 def observe(self, expr, formatter=None):
35 if formatter is None:
36 formatter = self._format
37 key = self._key(expr)
38 if key not in self.formatted:
39 self.aliases[key] = 'ref_{:d}'.format(len(self.formatted))
40 self.formatted[key] = formatter(expr)
41 self.ops[key] = expr.op()
42
43 self.counts[key] += 1
44
45 def count(self, expr):
46 return self.counts[self._key(expr)]
47
48 def get_alias(self, expr):
49 return self.aliases[self._key(expr)]
50
51 def get_formatted(self, expr):
52 return self.formatted[self._key(expr)]
53
54
55 class ExprFormatter:
56 """For creating a nice tree-like representation of an expression graph.
57
58 Notes
59 -----
60 TODO: detect reused DAG nodes and do not display redundant information
61
62 """
63
64 def __init__(
65 self, expr, indent_size=2, base_level=0, memo=None, memoize=True
66 ):
67 self.expr = expr
68 self.indent_size = indent_size
69 self.base_level = base_level
70
71 self.memoize = memoize
72
73 # For tracking "extracted" objects, like tables, that we don't want to
74 # print out more than once, and simply alias in the expression tree
75 if memo is None:
76 memo = FormatMemo()
77
78 self.memo = memo
79
80 def get_result(self):
81 what = self.expr.op()
82
83 if self.memoize:
84 self._memoize_tables()
85
86 if isinstance(what, ops.TableNode) and what.has_schema():
87 # This should also catch aggregations
88 if not self.memoize and self.expr in self.memo:
89 text = 'Table: %s' % self.memo.get_alias(self.expr)
90 elif isinstance(what, ops.PhysicalTable):
91 text = self._format_table(self.expr)
92 else:
93 # Any other node type
94 text = self._format_node(self.expr)
95 elif isinstance(what, ops.TableColumn):
96 text = self._format_column(self.expr)
97 elif isinstance(what, ops.Literal):
98 text = 'Literal[{}]\n {}'.format(
99 self._get_type_display(), str(what.value)
100 )
101 elif isinstance(what, ops.ScalarParameter):
102 text = 'ScalarParameter[{}]'.format(self._get_type_display())
103 elif isinstance(what, ops.Node):
104 text = self._format_node(self.expr)
105
106 if isinstance(self.expr, ir.ValueExpr) and self.expr._name is not None:
107 text = '{} = {}'.format(self.expr.get_name(), text)
108
109 if self.memoize:
110 alias_to_text = [
111 (
112 self.memo.aliases[x],
113 self.memo.formatted[x],
114 self.memo.ops[x],
115 )
116 for x in self.memo.formatted
117 ]
118 alias_to_text.sort()
119
120 # A hack to suppress printing out of a ref that is the result of
121 # the top level expression
122 refs = [
123 x + '\n' + y
124 for x, y, op in alias_to_text
125 if not op.equals(what)
126 ]
127
128 text = '\n\n'.join(refs + [text])
129
130 return self._indent(text, self.base_level)
131
132 def _memoize_tables(self):
133 table_memo_ops = (ops.Aggregation, ops.Selection, ops.SelfReference)
134 expr = self.expr
135 if expr.op() in self.memo.visit_memo:
136 return
137
138 stack = [expr]
139 seen = set()
140 memo = self.memo
141
142 while stack:
143 e = stack.pop()
144 op = e.op()
145
146 if op not in seen:
147 seen.add(op)
148
149 if isinstance(op, ops.PhysicalTable):
150 memo.observe(e, self._format_table)
151 elif isinstance(op, ops.Node):
152 stack.extend(
153 arg
154 for arg in reversed(op.args)
155 if isinstance(arg, ir.Expr)
156 )
157 if isinstance(op, table_memo_ops):
158 memo.observe(e, self._format_node)
159 elif isinstance(op, ops.TableNode) and op.has_schema():
160 memo.observe(e, self._format_table)
161 memo.visit_memo.add(op)
162
163 def _indent(self, text, indents=1):
164 return util.indent(text, self.indent_size * indents)
165
166 def _format_table(self, expr):
167 table = expr.op()
168 # format the schema
169 rows = ['name: {}\nschema:'.format(table.name)]
170 rows.extend(
171 map(' {} : {}'.format, table.schema.names, table.schema.types)
172 )
173 opname = type(table).__name__
174 type_display = self._get_type_display(expr)
175 opline = '{}[{}]'.format(opname, type_display)
176 return '{}\n{}'.format(opline, self._indent('\n'.join(rows)))
177
178 def _format_column(self, expr):
179 # HACK: if column is pulled from a Filter of another table, this parent
180 # will not be found in the memo
181 col = expr.op()
182 parent = col.parent()
183
184 if parent not in self.memo:
185 self.memo.observe(parent, formatter=self._format_node)
186
187 table_formatted = self.memo.get_alias(parent)
188 table_formatted = self._indent(table_formatted)
189
190 type_display = self._get_type_display(self.expr)
191 return "Column[{0}] '{1}' from table\n{2}".format(
192 type_display, col.name, table_formatted
193 )
194
195 def _format_node(self, expr):
196 op = expr.op()
197 formatted_args = []
198
199 def visit(what, extra_indents=0):
200 if isinstance(what, ir.Expr):
201 result = self._format_subexpr(what)
202 else:
203 result = self._indent(str(what))
204
205 if extra_indents > 0:
206 result = util.indent(result, self.indent_size)
207
208 formatted_args.append(result)
209
210 arg_names = getattr(op, 'display_argnames', op.argnames)
211
212 if not arg_names:
213 for arg in op.flat_args():
214 visit(arg)
215 else:
216 signature = op.signature
217 arg_name_pairs = (
218 (arg, name)
219 for arg, name in zip(op.args, arg_names)
220 if signature[name].show
221 )
222 for arg, name in arg_name_pairs:
223 if name == 'arg' and isinstance(op, ops.ValueOp):
224 # don't display first argument's name in repr
225 name = None
226 if name is not None:
227 name = self._indent('{}:'.format(name))
228 if util.is_iterable(arg):
229 if name is not None and len(arg) > 0:
230 formatted_args.append(name)
231 indents = 1
232 else:
233 indents = 0
234 for x in arg:
235 visit(x, extra_indents=indents)
236 else:
237 if name is not None:
238 formatted_args.append(name)
239 indents = 1
240 else:
241 indents = 0
242 visit(arg, extra_indents=indents)
243
244 opname = type(op).__name__
245 type_display = self._get_type_display(expr)
246 opline = '{}[{}]'.format(opname, type_display)
247 return '\n'.join([opline] + formatted_args)
248
249 def _format_subexpr(self, expr):
250 subexprs = self.memo.subexprs
251 key = expr.op()
252 try:
253 result = subexprs[key]
254 except KeyError:
255 formatter = ExprFormatter(expr, memo=self.memo, memoize=False)
256 result = subexprs[key] = self._indent(formatter.get_result(), 1)
257 return result
258
259 def _get_type_display(self, expr=None):
260 if expr is None:
261 expr = self.expr
262 return expr._type_display()
263
[end of ibis/expr/format.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ibis/expr/format.py b/ibis/expr/format.py
--- a/ibis/expr/format.py
+++ b/ibis/expr/format.py
@@ -248,7 +248,7 @@
def _format_subexpr(self, expr):
subexprs = self.memo.subexprs
- key = expr.op()
+ key = expr._key
try:
result = subexprs[key]
except KeyError:
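For context on this one-line fix: `_format_subexpr` caches the formatted text per subexpression, and keying that cache on `expr.op()` alone makes three differently named aliases of the same column share one cache entry, which is exactly the repeated `alias1` shown in the issue. The toy below reproduces the effect with plain dictionaries; the `Column` class and the `(op, name)` key are illustrative stand-ins rather than real ibis internals (the actual patch switches the key to `expr._key`).

```python
# Toy reproduction of the aliasing bug: a formatting memo keyed only on the
# underlying op collapses distinct aliases; a key that also carries the name
# keeps them apart.
class Column:
    def __init__(self, op, name):
        self.op = op        # shared underlying operation, e.g. "some_column"
        self.name = name    # per-alias display name

    @property
    def key(self):          # stand-in for a name-aware cache key
        return (self.op, self.name)


def format_all(columns, key_fn):
    memo = {}
    out = []
    for col in columns:
        k = key_fn(col)
        if k not in memo:   # first alias claims the cache slot
            memo[k] = "{} = Column '{}'".format(col.name, col.op)
        out.append(memo[k])
    return out


cols = [Column("some_column", n) for n in ("alias1", "alias2", "alias3")]
print(format_all(cols, key_fn=lambda c: c.op))   # 'alias1 = ...' three times (the bug)
print(format_all(cols, key_fn=lambda c: c.key))  # alias1, alias2, alias3 (the fix)
```

Any cache key that still distinguishes the alias name restores the expected `alias1`, `alias2`, `alias3` output.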
| {"golden_diff": "diff --git a/ibis/expr/format.py b/ibis/expr/format.py\n--- a/ibis/expr/format.py\n+++ b/ibis/expr/format.py\n@@ -248,7 +248,7 @@\n \n def _format_subexpr(self, expr):\n subexprs = self.memo.subexprs\n- key = expr.op()\n+ key = expr._key\n try:\n result = subexprs[key]\n except KeyError:\n", "issue": "BUG: Multiple aliases on the same column not behaving as expected\n\r\n``` python\r\ncolumn = table.some_column\r\ntable.projection(\r\n [\r\n column.name(\"alias1\"),\r\n column.name(\"alias2\"),\r\n column.name(\"alias3\"),\r\n ]\r\n )\r\n```\r\n\r\nI think the expected behavior would be a table expression with:\r\n```\r\nSelection[table]\r\n table:\r\n Table: ref_0\r\n selections:\r\n alias1 = Column[float64*] 'some_column' from table\r\n ref_0\r\n alias2 = Column[float64*] 'some_column' from table\r\n ref_0\r\n alias3 = Column[float64*] 'some_column' from table\r\n ref_0\r\n```\r\n\r\nHowever, the result I'm getting is:\r\n```\r\nSelection[table]\r\n table:\r\n Table: ref_0\r\n selections:\r\n alias1 = Column[float64*] 'some_column' from table\r\n ref_0\r\n alias1 = Column[float64*] 'some_column' from table\r\n ref_0\r\n alias1 = Column[float64*] 'some_column' from table\r\n ref_0\r\n```\n", "before_files": [{"content": "import ibis.expr.operations as ops\nimport ibis.expr.types as ir\nimport ibis.util as util\n\n\nclass FormatMemo:\n # A little sanity hack to simplify the below\n\n def __init__(self):\n from collections import defaultdict\n\n self.formatted = {}\n self.aliases = {}\n self.ops = {}\n self.counts = defaultdict(int)\n self._repr_memo = {}\n self.subexprs = {}\n self.visit_memo = set()\n\n def __contains__(self, obj):\n return self._key(obj) in self.formatted\n\n def _key(self, expr):\n memo = self._repr_memo\n try:\n result = memo[expr]\n except KeyError:\n result = memo[expr] = self._format(expr)\n return result\n\n def _format(self, expr):\n return expr.op()._repr(memo=self)\n\n def observe(self, expr, formatter=None):\n if formatter is None:\n formatter = self._format\n key = self._key(expr)\n if key not in self.formatted:\n self.aliases[key] = 'ref_{:d}'.format(len(self.formatted))\n self.formatted[key] = formatter(expr)\n self.ops[key] = expr.op()\n\n self.counts[key] += 1\n\n def count(self, expr):\n return self.counts[self._key(expr)]\n\n def get_alias(self, expr):\n return self.aliases[self._key(expr)]\n\n def get_formatted(self, expr):\n return self.formatted[self._key(expr)]\n\n\nclass ExprFormatter:\n \"\"\"For creating a nice tree-like representation of an expression graph.\n\n Notes\n -----\n TODO: detect reused DAG nodes and do not display redundant information\n\n \"\"\"\n\n def __init__(\n self, expr, indent_size=2, base_level=0, memo=None, memoize=True\n ):\n self.expr = expr\n self.indent_size = indent_size\n self.base_level = base_level\n\n self.memoize = memoize\n\n # For tracking \"extracted\" objects, like tables, that we don't want to\n # print out more than once, and simply alias in the expression tree\n if memo is None:\n memo = FormatMemo()\n\n self.memo = memo\n\n def get_result(self):\n what = self.expr.op()\n\n if self.memoize:\n self._memoize_tables()\n\n if isinstance(what, ops.TableNode) and what.has_schema():\n # This should also catch aggregations\n if not self.memoize and self.expr in self.memo:\n text = 'Table: %s' % self.memo.get_alias(self.expr)\n elif isinstance(what, ops.PhysicalTable):\n text = self._format_table(self.expr)\n else:\n # Any other node type\n text = self._format_node(self.expr)\n elif 
isinstance(what, ops.TableColumn):\n text = self._format_column(self.expr)\n elif isinstance(what, ops.Literal):\n text = 'Literal[{}]\\n {}'.format(\n self._get_type_display(), str(what.value)\n )\n elif isinstance(what, ops.ScalarParameter):\n text = 'ScalarParameter[{}]'.format(self._get_type_display())\n elif isinstance(what, ops.Node):\n text = self._format_node(self.expr)\n\n if isinstance(self.expr, ir.ValueExpr) and self.expr._name is not None:\n text = '{} = {}'.format(self.expr.get_name(), text)\n\n if self.memoize:\n alias_to_text = [\n (\n self.memo.aliases[x],\n self.memo.formatted[x],\n self.memo.ops[x],\n )\n for x in self.memo.formatted\n ]\n alias_to_text.sort()\n\n # A hack to suppress printing out of a ref that is the result of\n # the top level expression\n refs = [\n x + '\\n' + y\n for x, y, op in alias_to_text\n if not op.equals(what)\n ]\n\n text = '\\n\\n'.join(refs + [text])\n\n return self._indent(text, self.base_level)\n\n def _memoize_tables(self):\n table_memo_ops = (ops.Aggregation, ops.Selection, ops.SelfReference)\n expr = self.expr\n if expr.op() in self.memo.visit_memo:\n return\n\n stack = [expr]\n seen = set()\n memo = self.memo\n\n while stack:\n e = stack.pop()\n op = e.op()\n\n if op not in seen:\n seen.add(op)\n\n if isinstance(op, ops.PhysicalTable):\n memo.observe(e, self._format_table)\n elif isinstance(op, ops.Node):\n stack.extend(\n arg\n for arg in reversed(op.args)\n if isinstance(arg, ir.Expr)\n )\n if isinstance(op, table_memo_ops):\n memo.observe(e, self._format_node)\n elif isinstance(op, ops.TableNode) and op.has_schema():\n memo.observe(e, self._format_table)\n memo.visit_memo.add(op)\n\n def _indent(self, text, indents=1):\n return util.indent(text, self.indent_size * indents)\n\n def _format_table(self, expr):\n table = expr.op()\n # format the schema\n rows = ['name: {}\\nschema:'.format(table.name)]\n rows.extend(\n map(' {} : {}'.format, table.schema.names, table.schema.types)\n )\n opname = type(table).__name__\n type_display = self._get_type_display(expr)\n opline = '{}[{}]'.format(opname, type_display)\n return '{}\\n{}'.format(opline, self._indent('\\n'.join(rows)))\n\n def _format_column(self, expr):\n # HACK: if column is pulled from a Filter of another table, this parent\n # will not be found in the memo\n col = expr.op()\n parent = col.parent()\n\n if parent not in self.memo:\n self.memo.observe(parent, formatter=self._format_node)\n\n table_formatted = self.memo.get_alias(parent)\n table_formatted = self._indent(table_formatted)\n\n type_display = self._get_type_display(self.expr)\n return \"Column[{0}] '{1}' from table\\n{2}\".format(\n type_display, col.name, table_formatted\n )\n\n def _format_node(self, expr):\n op = expr.op()\n formatted_args = []\n\n def visit(what, extra_indents=0):\n if isinstance(what, ir.Expr):\n result = self._format_subexpr(what)\n else:\n result = self._indent(str(what))\n\n if extra_indents > 0:\n result = util.indent(result, self.indent_size)\n\n formatted_args.append(result)\n\n arg_names = getattr(op, 'display_argnames', op.argnames)\n\n if not arg_names:\n for arg in op.flat_args():\n visit(arg)\n else:\n signature = op.signature\n arg_name_pairs = (\n (arg, name)\n for arg, name in zip(op.args, arg_names)\n if signature[name].show\n )\n for arg, name in arg_name_pairs:\n if name == 'arg' and isinstance(op, ops.ValueOp):\n # don't display first argument's name in repr\n name = None\n if name is not None:\n name = self._indent('{}:'.format(name))\n if util.is_iterable(arg):\n if name is 
not None and len(arg) > 0:\n formatted_args.append(name)\n indents = 1\n else:\n indents = 0\n for x in arg:\n visit(x, extra_indents=indents)\n else:\n if name is not None:\n formatted_args.append(name)\n indents = 1\n else:\n indents = 0\n visit(arg, extra_indents=indents)\n\n opname = type(op).__name__\n type_display = self._get_type_display(expr)\n opline = '{}[{}]'.format(opname, type_display)\n return '\\n'.join([opline] + formatted_args)\n\n def _format_subexpr(self, expr):\n subexprs = self.memo.subexprs\n key = expr.op()\n try:\n result = subexprs[key]\n except KeyError:\n formatter = ExprFormatter(expr, memo=self.memo, memoize=False)\n result = subexprs[key] = self._indent(formatter.get_result(), 1)\n return result\n\n def _get_type_display(self, expr=None):\n if expr is None:\n expr = self.expr\n return expr._type_display()\n", "path": "ibis/expr/format.py"}]} | 3,350 | 106 |
gh_patches_debug_23792 | rasdani/github-patches | git_diff | svthalia__concrexit-2717 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Summary download not working
### Describe the bug
Some summaries (for example Data Mining 2019-2020 Practice Exam Midterms) do not download when clicked. They also cannot be viewed through the Site Administration; pressing these buttons only opens a new empty tab.
Some summaries still work, however.
### How to reproduce
Steps to reproduce the behaviour:
1. Go to 'education'
2. Scroll down to 'data mining'
3. Click on 'data mining'
4. Click on '2019-2020'
5. Click on 'Practice Exam Midterm'
### Expected behaviour
Clicking this button should download the associated file.
</issue>
<code>
[start of website/education/views.py]
1 """Views provided by the education package."""
2 import os
3 from datetime import date, datetime
4
5 from django.contrib.auth.decorators import login_required
6 from django.contrib.messages.views import SuccessMessageMixin
7 from django.core.exceptions import PermissionDenied
8 from django.db.models import Count
9 from django.http import HttpResponse
10 from django.shortcuts import redirect
11 from django.urls import reverse_lazy
12 from django.utils import timezone
13 from django.utils.decorators import method_decorator
14 from django.utils.translation import gettext_lazy as _
15 from django.views.generic import CreateView, DetailView, ListView, TemplateView
16
17 from members.decorators import membership_required
18 from utils.media.services import get_media_url
19
20 from . import emails
21 from .forms import AddExamForm, AddSummaryForm
22 from .models import Category, Course, Exam, Summary
23
24
25 class CourseIndexView(ListView):
26 """Render an overview of the courses."""
27
28 queryset = (
29 Course.objects.filter(until=None)
30 .prefetch_related("categories", "old_courses")
31 .annotate(summary_count=Count("summary"))
32 .annotate(exam_count=Count("exam"))
33 )
34 template_name = "education/courses.html"
35
36 def get_ordering(self) -> str:
37 return "name"
38
39 def get_context_data(self, **kwargs) -> dict:
40 context = super().get_context_data(**kwargs)
41 context.update(
42 {
43 "courses": (
44 {
45 "course_code": x.course_code,
46 "name": x.name,
47 "categories": x.categories.all(),
48 "document_count": sum(
49 [
50 x.summary_count,
51 x.exam_count,
52 ]
53 + [
54 c.summary_set.filter(accepted=True).count()
55 + c.exam_set.filter(accepted=True).count()
56 for c in x.old_courses.all()
57 ]
58 ),
59 "url": x.get_absolute_url(),
60 }
61 for x in context["object_list"]
62 ),
63 "categories": Category.objects.all(),
64 }
65 )
66 return context
67
68
69 class CourseDetailView(DetailView):
70 """Render the detail page of one specific course."""
71
72 model = Course
73 context_object_name = "course"
74 template_name = "education/course.html"
75
76 def get_context_data(self, **kwargs) -> dict:
77 context = super().get_context_data(**kwargs)
78 obj = context["course"]
79 courses = list(obj.old_courses.all())
80 courses.append(obj)
81 items = {}
82 for course in courses:
83 for summary in course.summary_set.filter(accepted=True):
84 if summary.year not in items:
85 items[summary.year] = {
86 "summaries": [],
87 "exams": [],
88 "legacy": course if course.pk != obj.pk else None,
89 }
90 items[summary.year]["summaries"].append(
91 {
92 "year": summary.year,
93 "name": summary.name,
94 "language": summary.language,
95 "id": summary.id,
96 }
97 )
98 for exam in course.exam_set.filter(accepted=True):
99 if exam.year not in items:
100 items[exam.year] = {
101 "summaries": [],
102 "exams": [],
103 "legacy": course if course.pk != obj.pk else None,
104 }
105 items[exam.year]["exams"].append(
106 {
107 "type": "exam",
108 "year": exam.year,
109 "name": f"{exam.get_type_display()} {exam.name}",
110 "language": exam.language,
111 "id": exam.id,
112 }
113 )
114 context.update({"items": sorted(items.items(), key=lambda x: x[0])})
115 return context
116
117
118 @method_decorator(login_required, "dispatch")
119 @method_decorator(membership_required, "dispatch")
120 class ExamDetailView(DetailView):
121 """Fetch and output the specified exam."""
122
123 model = Exam
124
125 def get(self, request, *args, **kwargs) -> HttpResponse:
126 response = super().get(request, *args, **kwargs)
127 exam = response.context_data["object"]
128 exam.download_count += 1
129 exam.save()
130
131 ext = os.path.splitext(exam.file.name)[1]
132 filename = f"{exam.course.name}-exam{exam.year}{ext}"
133 return redirect(get_media_url(exam.file, filename))
134
135
136 @method_decorator(login_required, "dispatch")
137 @method_decorator(membership_required, "dispatch")
138 class SummaryDetailView(DetailView):
139 """Fetch and output the specified summary."""
140
141 model = Summary
142
143 def get(self, request, *args, **kwargs) -> HttpResponse:
144 response = super().get(request, *args, **kwargs)
145 obj = response.context_data["object"]
146 obj.download_count += 1
147 obj.save()
148
149 ext = os.path.splitext(obj.file.name)[1]
150 filename = f"{obj.course.name}-summary{obj.year}{ext}"
151 return redirect(get_media_url(obj.file, filename))
152
153
154 @method_decorator(login_required, "dispatch")
155 @method_decorator(membership_required, "dispatch")
156 class ExamCreateView(SuccessMessageMixin, CreateView):
157 """Render the form to submit a new exam."""
158
159 model = Exam
160 form_class = AddExamForm
161 template_name = "education/add_exam.html"
162 success_url = reverse_lazy("education:submit-exam")
163 success_message = _("Exam submitted successfully.")
164
165 def get_initial(self) -> dict:
166 initial = super().get_initial()
167 initial["exam_date"] = date.today()
168 initial["course"] = self.kwargs.get("pk", None)
169 return initial
170
171 def form_valid(self, form) -> HttpResponse:
172 self.object = form.save(commit=False)
173 self.object.uploader = self.request.member
174 self.object.uploader_date = datetime.now()
175 self.object.save()
176 emails.send_document_notification(self.object)
177 return super().form_valid(form)
178
179
180 @method_decorator(login_required, "dispatch")
181 @method_decorator(membership_required, "dispatch")
182 class SummaryCreateView(SuccessMessageMixin, CreateView):
183 """Render the form to submit a new summary."""
184
185 model = Summary
186 form_class = AddSummaryForm
187 template_name = "education/add_summary.html"
188 success_url = reverse_lazy("education:submit-summary")
189 success_message = _("Summary submitted successfully.")
190
191 def get_initial(self):
192 initial = super().get_initial()
193 initial["author"] = self.request.member.get_full_name()
194 initial["course"] = self.kwargs.get("pk", None)
195 return initial
196
197 def form_valid(self, form) -> HttpResponse:
198 self.object = form.save(commit=False)
199 self.object.uploader = self.request.member
200 self.object.uploader_date = datetime.now()
201 self.object.save()
202 emails.send_document_notification(self.object)
203 return super().form_valid(form)
204
205
206 @method_decorator(login_required, "dispatch")
207 class BookInfoView(TemplateView):
208 """Render a page with information about book sale.
209
210 Only available to members and to-be members
211 """
212
213 template_name = "education/books.html"
214
215 def dispatch(self, request, *args, **kwargs) -> HttpResponse:
216 if request.member.has_active_membership() or (
217 request.member.earliest_membership
218 and request.member.earliest_membership.since > timezone.now().date()
219 ):
220 return super().dispatch(request, *args, **kwargs)
221 raise PermissionDenied
222
[end of website/education/views.py]
[start of website/utils/media/services.py]
1 import io
2 import os
3
4 from django.conf import settings
5 from django.core import signing
6 from django.core.files.base import ContentFile
7 from django.core.files.storage import DefaultStorage, get_storage_class
8 from django.core.files.uploadedfile import InMemoryUploadedFile
9 from django.db.models.fields.files import FieldFile, ImageFieldFile
10 from django.urls import reverse
11
12
13 def save_image(storage, image, path, format):
14 buffer = io.BytesIO()
15 image.convert("RGB" if format == "JPEG" else "RGBA").save(fp=buffer, format=format)
16 buff_val = buffer.getvalue()
17 content = ContentFile(buff_val)
18 file = InMemoryUploadedFile(
19 content,
20 None,
21 f"foo.{format.lower()}",
22 f"image/{format.lower()}",
23 content.tell,
24 None,
25 )
26 return storage.save(path, file)
27
28
29 def get_media_url(file, attachment=False):
30 """Get the url of the provided media file to serve in a browser.
31
32 If the file is private a signature will be added.
33 Do NOT use this with user input
34 :param file: the file field
35 :param attachment: True if the file is a forced download
36 :return: the url of the media
37 """
38 storage = DefaultStorage()
39 file_name = file
40 if isinstance(file, (ImageFieldFile, FieldFile)):
41 storage = file.storage
42 file_name = file.name
43
44 return f"{storage.url(file_name, attachment)}"
45
46
47 def get_thumbnail_url(file, size, fit=True):
48 """Get the thumbnail url of a media file, NEVER use this with user input.
49
50 If the thumbnail exists this function will return the url of the
51 media file, with signature if necessary. Does it not yet exist a route
52 that executes the :func:`utils.media.views.generate_thumbnail`
53 will be the output.
54 :param file: the file field
55 :param size: size of the image
56 :param fit: False to keep the aspect ratio, True to crop
57 :return: get-thumbnail path
58 """
59 storage = DefaultStorage()
60 name = file
61
62 if isinstance(file, (ImageFieldFile, FieldFile)):
63 storage = file.storage
64 name = file.name
65
66 is_public = isinstance(storage, get_storage_class(settings.PUBLIC_FILE_STORAGE))
67 size_fit = f"{size}_{int(fit)}"
68
69 if name.endswith(".svg") and is_public:
70 return storage.url(name)
71
72 sig_info = {
73 "size": size,
74 "fit": int(fit),
75 "name": name,
76 "thumb_path": f"thumbnails/{size_fit}/{name}",
77 "serve_path": f"thumbnails/{size_fit}/{name}",
78 "storage": f"{storage.__class__.__module__}.{storage.__class__.__name__}",
79 }
80
81 # We provide a URL instead of calling it as a function, so that using
82 # it means kicking off a new GET request. If we would need to check all files for the
83 # thumbnails inline, loading an album overview would have high latency.
84 return (
85 reverse("get-thumbnail", args=[os.path.join(size_fit, sig_info["name"])])
86 + f"?sig={signing.dumps(sig_info)}"
87 )
88
[end of website/utils/media/services.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/education/views.py b/website/education/views.py
--- a/website/education/views.py
+++ b/website/education/views.py
@@ -124,13 +124,13 @@
def get(self, request, *args, **kwargs) -> HttpResponse:
response = super().get(request, *args, **kwargs)
- exam = response.context_data["object"]
- exam.download_count += 1
- exam.save()
+ obj = response.context_data["object"]
+ obj.download_count += 1
+ obj.save()
- ext = os.path.splitext(exam.file.name)[1]
- filename = f"{exam.course.name}-exam{exam.year}{ext}"
- return redirect(get_media_url(exam.file, filename))
+ ext = os.path.splitext(obj.file.name)[1]
+ filename = f"{obj.course.name}-summary{obj.year}{ext}"
+ return redirect(get_media_url(obj.file, filename))
@method_decorator(login_required, "dispatch")
diff --git a/website/utils/media/services.py b/website/utils/media/services.py
--- a/website/utils/media/services.py
+++ b/website/utils/media/services.py
@@ -32,7 +32,7 @@
If the file is private a signature will be added.
Do NOT use this with user input
:param file: the file field
- :param attachment: True if the file is a forced download
+ :param attachment: filename to use for the attachment or False to not download as attachment
:return: the url of the media
"""
storage = DefaultStorage()
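Besides renaming `exam` to `obj`, the patch rewrites the `get_media_url` docstring: its second argument is not a plain boolean but either `False` (serve the file inline) or a filename string used for the forced download, which is how the exam and summary views call it. A minimal sketch of that calling convention, using a stand-in object rather than a real Django model instance:

```python
# Sketch of the filename the document views pass as get_media_url()'s second
# argument: False means inline, a string forces a download under that name
# (per the docstring wording added in the patch).
import os
from types import SimpleNamespace


def build_download_name(obj, kind):
    """Mirror the views above: '<course>-<kind><year><ext>' with kind in {'exam', 'summary'}."""
    ext = os.path.splitext(obj.file.name)[1]
    return "{}-{}{}{}".format(obj.course.name, kind, obj.year, ext)


demo = SimpleNamespace(
    file=SimpleNamespace(name="summaries/2019/midterm.pdf"),
    course=SimpleNamespace(name="Data Mining"),
    year=2019,
)
print(build_download_name(demo, "summary"))  # Data Mining-summary2019.pdf
# Inside the view this becomes:
#     return redirect(get_media_url(obj.file, build_download_name(obj, "summary")))
```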
| {"golden_diff": "diff --git a/website/education/views.py b/website/education/views.py\n--- a/website/education/views.py\n+++ b/website/education/views.py\n@@ -124,13 +124,13 @@\n \n def get(self, request, *args, **kwargs) -> HttpResponse:\n response = super().get(request, *args, **kwargs)\n- exam = response.context_data[\"object\"]\n- exam.download_count += 1\n- exam.save()\n+ obj = response.context_data[\"object\"]\n+ obj.download_count += 1\n+ obj.save()\n \n- ext = os.path.splitext(exam.file.name)[1]\n- filename = f\"{exam.course.name}-exam{exam.year}{ext}\"\n- return redirect(get_media_url(exam.file, filename))\n+ ext = os.path.splitext(obj.file.name)[1]\n+ filename = f\"{obj.course.name}-summary{obj.year}{ext}\"\n+ return redirect(get_media_url(obj.file, filename))\n \n \n @method_decorator(login_required, \"dispatch\")\ndiff --git a/website/utils/media/services.py b/website/utils/media/services.py\n--- a/website/utils/media/services.py\n+++ b/website/utils/media/services.py\n@@ -32,7 +32,7 @@\n If the file is private a signature will be added.\n Do NOT use this with user input\n :param file: the file field\n- :param attachment: True if the file is a forced download\n+ :param attachment: filename to use for the attachment or False to not download as attachment\n :return: the url of the media\n \"\"\"\n storage = DefaultStorage()\n", "issue": "Summary download not working\n### Describe the bug\r\nSome summaries (for example Data Mining 2019-2020 Practice Exam Midterms) are not downloaded when clicked. They also do not work when trying to view them through the Site Administration. Pressing these buttons only leads to a new empty tab. \r\n\r\nSome summaries still work however.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to 'education'\r\n2. Scroll down to 'data mining'\r\n3. Click on 'data mining'\r\n4. Click on '2019-2020'\r\n5. Click on 'Practice Exam Midterm'\r\n\r\n### Expected behaviour\r\nClicking this button should download the associated file.\r\n\n", "before_files": [{"content": "\"\"\"Views provided by the education package.\"\"\"\nimport os\nfrom datetime import date, datetime\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.db.models import Count\nfrom django.http import HttpResponse\nfrom django.shortcuts import redirect\nfrom django.urls import reverse_lazy\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.generic import CreateView, DetailView, ListView, TemplateView\n\nfrom members.decorators import membership_required\nfrom utils.media.services import get_media_url\n\nfrom . 
import emails\nfrom .forms import AddExamForm, AddSummaryForm\nfrom .models import Category, Course, Exam, Summary\n\n\nclass CourseIndexView(ListView):\n \"\"\"Render an overview of the courses.\"\"\"\n\n queryset = (\n Course.objects.filter(until=None)\n .prefetch_related(\"categories\", \"old_courses\")\n .annotate(summary_count=Count(\"summary\"))\n .annotate(exam_count=Count(\"exam\"))\n )\n template_name = \"education/courses.html\"\n\n def get_ordering(self) -> str:\n return \"name\"\n\n def get_context_data(self, **kwargs) -> dict:\n context = super().get_context_data(**kwargs)\n context.update(\n {\n \"courses\": (\n {\n \"course_code\": x.course_code,\n \"name\": x.name,\n \"categories\": x.categories.all(),\n \"document_count\": sum(\n [\n x.summary_count,\n x.exam_count,\n ]\n + [\n c.summary_set.filter(accepted=True).count()\n + c.exam_set.filter(accepted=True).count()\n for c in x.old_courses.all()\n ]\n ),\n \"url\": x.get_absolute_url(),\n }\n for x in context[\"object_list\"]\n ),\n \"categories\": Category.objects.all(),\n }\n )\n return context\n\n\nclass CourseDetailView(DetailView):\n \"\"\"Render the detail page of one specific course.\"\"\"\n\n model = Course\n context_object_name = \"course\"\n template_name = \"education/course.html\"\n\n def get_context_data(self, **kwargs) -> dict:\n context = super().get_context_data(**kwargs)\n obj = context[\"course\"]\n courses = list(obj.old_courses.all())\n courses.append(obj)\n items = {}\n for course in courses:\n for summary in course.summary_set.filter(accepted=True):\n if summary.year not in items:\n items[summary.year] = {\n \"summaries\": [],\n \"exams\": [],\n \"legacy\": course if course.pk != obj.pk else None,\n }\n items[summary.year][\"summaries\"].append(\n {\n \"year\": summary.year,\n \"name\": summary.name,\n \"language\": summary.language,\n \"id\": summary.id,\n }\n )\n for exam in course.exam_set.filter(accepted=True):\n if exam.year not in items:\n items[exam.year] = {\n \"summaries\": [],\n \"exams\": [],\n \"legacy\": course if course.pk != obj.pk else None,\n }\n items[exam.year][\"exams\"].append(\n {\n \"type\": \"exam\",\n \"year\": exam.year,\n \"name\": f\"{exam.get_type_display()} {exam.name}\",\n \"language\": exam.language,\n \"id\": exam.id,\n }\n )\n context.update({\"items\": sorted(items.items(), key=lambda x: x[0])})\n return context\n\n\n@method_decorator(login_required, \"dispatch\")\n@method_decorator(membership_required, \"dispatch\")\nclass ExamDetailView(DetailView):\n \"\"\"Fetch and output the specified exam.\"\"\"\n\n model = Exam\n\n def get(self, request, *args, **kwargs) -> HttpResponse:\n response = super().get(request, *args, **kwargs)\n exam = response.context_data[\"object\"]\n exam.download_count += 1\n exam.save()\n\n ext = os.path.splitext(exam.file.name)[1]\n filename = f\"{exam.course.name}-exam{exam.year}{ext}\"\n return redirect(get_media_url(exam.file, filename))\n\n\n@method_decorator(login_required, \"dispatch\")\n@method_decorator(membership_required, \"dispatch\")\nclass SummaryDetailView(DetailView):\n \"\"\"Fetch and output the specified summary.\"\"\"\n\n model = Summary\n\n def get(self, request, *args, **kwargs) -> HttpResponse:\n response = super().get(request, *args, **kwargs)\n obj = response.context_data[\"object\"]\n obj.download_count += 1\n obj.save()\n\n ext = os.path.splitext(obj.file.name)[1]\n filename = f\"{obj.course.name}-summary{obj.year}{ext}\"\n return redirect(get_media_url(obj.file, filename))\n\n\n@method_decorator(login_required, 
\"dispatch\")\n@method_decorator(membership_required, \"dispatch\")\nclass ExamCreateView(SuccessMessageMixin, CreateView):\n \"\"\"Render the form to submit a new exam.\"\"\"\n\n model = Exam\n form_class = AddExamForm\n template_name = \"education/add_exam.html\"\n success_url = reverse_lazy(\"education:submit-exam\")\n success_message = _(\"Exam submitted successfully.\")\n\n def get_initial(self) -> dict:\n initial = super().get_initial()\n initial[\"exam_date\"] = date.today()\n initial[\"course\"] = self.kwargs.get(\"pk\", None)\n return initial\n\n def form_valid(self, form) -> HttpResponse:\n self.object = form.save(commit=False)\n self.object.uploader = self.request.member\n self.object.uploader_date = datetime.now()\n self.object.save()\n emails.send_document_notification(self.object)\n return super().form_valid(form)\n\n\n@method_decorator(login_required, \"dispatch\")\n@method_decorator(membership_required, \"dispatch\")\nclass SummaryCreateView(SuccessMessageMixin, CreateView):\n \"\"\"Render the form to submit a new summary.\"\"\"\n\n model = Summary\n form_class = AddSummaryForm\n template_name = \"education/add_summary.html\"\n success_url = reverse_lazy(\"education:submit-summary\")\n success_message = _(\"Summary submitted successfully.\")\n\n def get_initial(self):\n initial = super().get_initial()\n initial[\"author\"] = self.request.member.get_full_name()\n initial[\"course\"] = self.kwargs.get(\"pk\", None)\n return initial\n\n def form_valid(self, form) -> HttpResponse:\n self.object = form.save(commit=False)\n self.object.uploader = self.request.member\n self.object.uploader_date = datetime.now()\n self.object.save()\n emails.send_document_notification(self.object)\n return super().form_valid(form)\n\n\n@method_decorator(login_required, \"dispatch\")\nclass BookInfoView(TemplateView):\n \"\"\"Render a page with information about book sale.\n\n Only available to members and to-be members\n \"\"\"\n\n template_name = \"education/books.html\"\n\n def dispatch(self, request, *args, **kwargs) -> HttpResponse:\n if request.member.has_active_membership() or (\n request.member.earliest_membership\n and request.member.earliest_membership.since > timezone.now().date()\n ):\n return super().dispatch(request, *args, **kwargs)\n raise PermissionDenied\n", "path": "website/education/views.py"}, {"content": "import io\nimport os\n\nfrom django.conf import settings\nfrom django.core import signing\nfrom django.core.files.base import ContentFile\nfrom django.core.files.storage import DefaultStorage, get_storage_class\nfrom django.core.files.uploadedfile import InMemoryUploadedFile\nfrom django.db.models.fields.files import FieldFile, ImageFieldFile\nfrom django.urls import reverse\n\n\ndef save_image(storage, image, path, format):\n buffer = io.BytesIO()\n image.convert(\"RGB\" if format == \"JPEG\" else \"RGBA\").save(fp=buffer, format=format)\n buff_val = buffer.getvalue()\n content = ContentFile(buff_val)\n file = InMemoryUploadedFile(\n content,\n None,\n f\"foo.{format.lower()}\",\n f\"image/{format.lower()}\",\n content.tell,\n None,\n )\n return storage.save(path, file)\n\n\ndef get_media_url(file, attachment=False):\n \"\"\"Get the url of the provided media file to serve in a browser.\n\n If the file is private a signature will be added.\n Do NOT use this with user input\n :param file: the file field\n :param attachment: True if the file is a forced download\n :return: the url of the media\n \"\"\"\n storage = DefaultStorage()\n file_name = file\n if isinstance(file, 
(ImageFieldFile, FieldFile)):\n storage = file.storage\n file_name = file.name\n\n return f\"{storage.url(file_name, attachment)}\"\n\n\ndef get_thumbnail_url(file, size, fit=True):\n \"\"\"Get the thumbnail url of a media file, NEVER use this with user input.\n\n If the thumbnail exists this function will return the url of the\n media file, with signature if necessary. Does it not yet exist a route\n that executes the :func:`utils.media.views.generate_thumbnail`\n will be the output.\n :param file: the file field\n :param size: size of the image\n :param fit: False to keep the aspect ratio, True to crop\n :return: get-thumbnail path\n \"\"\"\n storage = DefaultStorage()\n name = file\n\n if isinstance(file, (ImageFieldFile, FieldFile)):\n storage = file.storage\n name = file.name\n\n is_public = isinstance(storage, get_storage_class(settings.PUBLIC_FILE_STORAGE))\n size_fit = f\"{size}_{int(fit)}\"\n\n if name.endswith(\".svg\") and is_public:\n return storage.url(name)\n\n sig_info = {\n \"size\": size,\n \"fit\": int(fit),\n \"name\": name,\n \"thumb_path\": f\"thumbnails/{size_fit}/{name}\",\n \"serve_path\": f\"thumbnails/{size_fit}/{name}\",\n \"storage\": f\"{storage.__class__.__module__}.{storage.__class__.__name__}\",\n }\n\n # We provide a URL instead of calling it as a function, so that using\n # it means kicking off a new GET request. If we would need to check all files for the\n # thumbnails inline, loading an album overview would have high latency.\n return (\n reverse(\"get-thumbnail\", args=[os.path.join(size_fit, sig_info[\"name\"])])\n + f\"?sig={signing.dumps(sig_info)}\"\n )\n", "path": "website/utils/media/services.py"}]} | 3,664 | 359 |
gh_patches_debug_3844 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEATURE] User.send_poll shortcut
We have `Chat.send_poll` as a shortcut, but `User` was neglected in #1418.
</issue>
<code>
[start of telegram/user.py]
1 #!/usr/bin/env python
2 # pylint: disable=C0103,W0622
3 #
4 # A library that provides a Python interface to the Telegram Bot API
5 # Copyright (C) 2015-2020
6 # Leandro Toledo de Souza <[email protected]>
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU Lesser Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU Lesser Public License for more details.
17 #
18 # You should have received a copy of the GNU Lesser Public License
19 # along with this program. If not, see [http://www.gnu.org/licenses/].
20 """This module contains an object that represents a Telegram User."""
21
22 from telegram import TelegramObject
23 from telegram.utils.helpers import mention_html as util_mention_html
24 from telegram.utils.helpers import mention_markdown as util_mention_markdown
25
26
27 class User(TelegramObject):
28 """This object represents a Telegram user or bot.
29
30 Attributes:
31 id (:obj:`int`): Unique identifier for this user or bot.
32 is_bot (:obj:`bool`): True, if this user is a bot
33 first_name (:obj:`str`): User's or bot's first name.
34 last_name (:obj:`str`): Optional. User's or bot's last name.
35 username (:obj:`str`): Optional. User's or bot's username.
36 language_code (:obj:`str`): Optional. IETF language tag of the user's language.
37 can_join_groups (:obj:`str`): Optional. True, if the bot can be invited to groups.
38 Returned only in :attr:`telegram.Bot.get_me` requests.
39 can_read_all_group_messages (:obj:`str`): Optional. True, if privacy mode is disabled
40 for the bot. Returned only in :attr:`telegram.Bot.get_me` requests.
41 supports_inline_queries (:obj:`str`): Optional. True, if the bot supports inline queries.
42 Returned only in :attr:`telegram.Bot.get_me` requests.
43 bot (:class:`telegram.Bot`): Optional. The Bot to use for instance methods.
44
45 Args:
46 id (:obj:`int`): Unique identifier for this user or bot.
47 is_bot (:obj:`bool`): True, if this user is a bot
48 first_name (:obj:`str`): User's or bot's first name.
49 last_name (:obj:`str`, optional): User's or bot's last name.
50 username (:obj:`str`, optional): User's or bot's username.
51 language_code (:obj:`str`, optional): IETF language tag of the user's language.
52 can_join_groups (:obj:`str`, optional): True, if the bot can be invited to groups.
53 Returned only in :attr:`telegram.Bot.get_me` requests.
54 can_read_all_group_messages (:obj:`str`, optional): True, if privacy mode is disabled
55 for the bot. Returned only in :attr:`telegram.Bot.get_me` requests.
56 supports_inline_queries (:obj:`str`, optional): True, if the bot supports inline queries.
57 Returned only in :attr:`telegram.Bot.get_me` requests.
58 bot (:class:`telegram.Bot`, optional): The Bot to use for instance methods.
59
60 """
61
62 def __init__(self,
63 id,
64 first_name,
65 is_bot,
66 last_name=None,
67 username=None,
68 language_code=None,
69 can_join_groups=None,
70 can_read_all_group_messages=None,
71 supports_inline_queries=None,
72 bot=None,
73 **kwargs):
74 # Required
75 self.id = int(id)
76 self.first_name = first_name
77 self.is_bot = is_bot
78 # Optionals
79 self.last_name = last_name
80 self.username = username
81 self.language_code = language_code
82 self.can_join_groups = can_join_groups
83 self.can_read_all_group_messages = can_read_all_group_messages
84 self.supports_inline_queries = supports_inline_queries
85 self.bot = bot
86
87 self._id_attrs = (self.id,)
88
89 @property
90 def name(self):
91 """:obj:`str`: Convenience property. If available, returns the user's :attr:`username`
92 prefixed with "@". If :attr:`username` is not available, returns :attr:`full_name`."""
93 if self.username:
94 return '@{}'.format(self.username)
95 return self.full_name
96
97 @property
98 def full_name(self):
99 """:obj:`str`: Convenience property. The user's :attr:`first_name`, followed by (if
100 available) :attr:`last_name`."""
101
102 if self.last_name:
103 return u'{} {}'.format(self.first_name, self.last_name)
104 return self.first_name
105
106 @property
107 def link(self):
108 """:obj:`str`: Convenience property. If :attr:`username` is available, returns a t.me link
109 of the user."""
110
111 if self.username:
112 return "https://t.me/{}".format(self.username)
113 return None
114
115 @classmethod
116 def de_json(cls, data, bot):
117 if not data:
118 return None
119 data = super(User, cls).de_json(data, bot)
120
121 return cls(bot=bot, **data)
122
123 def get_profile_photos(self, *args, **kwargs):
124 """
125 Shortcut for::
126
127 bot.get_user_profile_photos(update.message.from_user.id, *args, **kwargs)
128
129 """
130
131 return self.bot.get_user_profile_photos(self.id, *args, **kwargs)
132
133 @classmethod
134 def de_list(cls, data, bot):
135 if not data:
136 return []
137
138 users = list()
139 for user in data:
140 users.append(cls.de_json(user, bot))
141
142 return users
143
144 def mention_markdown(self, name=None):
145 """
146 Args:
147 name (:obj:`str`): The name used as a link for the user. Defaults to :attr:`full_name`.
148
149 Returns:
150 :obj:`str`: The inline mention for the user as markdown (version 1).
151
152 """
153 if name:
154 return util_mention_markdown(self.id, name)
155 return util_mention_markdown(self.id, self.full_name)
156
157 def mention_markdown_v2(self, name=None):
158 """
159 Args:
160 name (:obj:`str`): The name used as a link for the user. Defaults to :attr:`full_name`.
161
162 Returns:
163 :obj:`str`: The inline mention for the user as markdown (version 2).
164
165 """
166 if name:
167 return util_mention_markdown(self.id, name, version=2)
168 return util_mention_markdown(self.id, self.full_name, version=2)
169
170 def mention_html(self, name=None):
171 """
172 Args:
173 name (:obj:`str`): The name used as a link for the user. Defaults to :attr:`full_name`.
174
175 Returns:
176 :obj:`str`: The inline mention for the user as HTML.
177
178 """
179 if name:
180 return util_mention_html(self.id, name)
181 return util_mention_html(self.id, self.full_name)
182
183 def send_message(self, *args, **kwargs):
184 """Shortcut for::
185
186 bot.send_message(User.id, *args, **kwargs)
187
188 Where User is the current instance.
189
190 Returns:
191 :class:`telegram.Message`: On success, instance representing the message posted.
192
193 """
194 return self.bot.send_message(self.id, *args, **kwargs)
195
196 def send_photo(self, *args, **kwargs):
197 """Shortcut for::
198
199 bot.send_photo(User.id, *args, **kwargs)
200
201 Where User is the current instance.
202
203 Returns:
204 :class:`telegram.Message`: On success, instance representing the message posted.
205
206 """
207 return self.bot.send_photo(self.id, *args, **kwargs)
208
209 def send_audio(self, *args, **kwargs):
210 """Shortcut for::
211
212 bot.send_audio(User.id, *args, **kwargs)
213
214 Where User is the current instance.
215
216 Returns:
217 :class:`telegram.Message`: On success, instance representing the message posted.
218
219 """
220 return self.bot.send_audio(self.id, *args, **kwargs)
221
222 def send_document(self, *args, **kwargs):
223 """Shortcut for::
224
225 bot.send_document(User.id, *args, **kwargs)
226
227 Where User is the current instance.
228
229 Returns:
230 :class:`telegram.Message`: On success, instance representing the message posted.
231
232 """
233 return self.bot.send_document(self.id, *args, **kwargs)
234
235 def send_animation(self, *args, **kwargs):
236 """Shortcut for::
237
238 bot.send_animation(User.id, *args, **kwargs)
239
240 Where User is the current instance.
241
242 Returns:
243 :class:`telegram.Message`: On success, instance representing the message posted.
244
245 """
246 return self.bot.send_animation(self.id, *args, **kwargs)
247
248 def send_sticker(self, *args, **kwargs):
249 """Shortcut for::
250
251 bot.send_sticker(User.id, *args, **kwargs)
252
253 Where User is the current instance.
254
255 Returns:
256 :class:`telegram.Message`: On success, instance representing the message posted.
257
258 """
259 return self.bot.send_sticker(self.id, *args, **kwargs)
260
261 def send_video(self, *args, **kwargs):
262 """Shortcut for::
263
264 bot.send_video(User.id, *args, **kwargs)
265
266 Where User is the current instance.
267
268 Returns:
269 :class:`telegram.Message`: On success, instance representing the message posted.
270
271 """
272 return self.bot.send_video(self.id, *args, **kwargs)
273
274 def send_video_note(self, *args, **kwargs):
275 """Shortcut for::
276
277 bot.send_video_note(User.id, *args, **kwargs)
278
279 Where User is the current instance.
280
281 Returns:
282 :class:`telegram.Message`: On success, instance representing the message posted.
283
284 """
285 return self.bot.send_video_note(self.id, *args, **kwargs)
286
287 def send_voice(self, *args, **kwargs):
288 """Shortcut for::
289
290 bot.send_voice(User.id, *args, **kwargs)
291
292 Where User is the current instance.
293
294 Returns:
295 :class:`telegram.Message`: On success, instance representing the message posted.
296
297 """
298 return self.bot.send_voice(self.id, *args, **kwargs)
299
[end of telegram/user.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/user.py b/telegram/user.py
--- a/telegram/user.py
+++ b/telegram/user.py
@@ -296,3 +296,16 @@
"""
return self.bot.send_voice(self.id, *args, **kwargs)
+
+ def send_poll(self, *args, **kwargs):
+ """Shortcut for::
+
+ bot.send_poll(User.id, *args, **kwargs)
+
+ Where User is the current instance.
+
+ Returns:
+ :class:`telegram.Message`: On success, instance representing the message posted.
+
+ """
+ return self.bot.send_poll(self.id, *args, **kwargs)
| {"golden_diff": "diff --git a/telegram/user.py b/telegram/user.py\n--- a/telegram/user.py\n+++ b/telegram/user.py\n@@ -296,3 +296,16 @@\n \n \"\"\"\n return self.bot.send_voice(self.id, *args, **kwargs)\n+\n+ def send_poll(self, *args, **kwargs):\n+ \"\"\"Shortcut for::\n+\n+ bot.send_poll(User.id, *args, **kwargs)\n+\n+ Where User is the current instance.\n+\n+ Returns:\n+ :class:`telegram.Message`: On success, instance representing the message posted.\n+\n+ \"\"\"\n+ return self.bot.send_poll(self.id, *args, **kwargs)\n", "issue": "[FEATURE] User.send_poll shortcut\nWe have `Chat.send_poll` as shortcut, but `User` was neglected in #1418\n", "before_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=C0103,W0622\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2020\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram User.\"\"\"\n\nfrom telegram import TelegramObject\nfrom telegram.utils.helpers import mention_html as util_mention_html\nfrom telegram.utils.helpers import mention_markdown as util_mention_markdown\n\n\nclass User(TelegramObject):\n \"\"\"This object represents a Telegram user or bot.\n\n Attributes:\n id (:obj:`int`): Unique identifier for this user or bot.\n is_bot (:obj:`bool`): True, if this user is a bot\n first_name (:obj:`str`): User's or bot's first name.\n last_name (:obj:`str`): Optional. User's or bot's last name.\n username (:obj:`str`): Optional. User's or bot's username.\n language_code (:obj:`str`): Optional. IETF language tag of the user's language.\n can_join_groups (:obj:`str`): Optional. True, if the bot can be invited to groups.\n Returned only in :attr:`telegram.Bot.get_me` requests.\n can_read_all_group_messages (:obj:`str`): Optional. True, if privacy mode is disabled\n for the bot. Returned only in :attr:`telegram.Bot.get_me` requests.\n supports_inline_queries (:obj:`str`): Optional. True, if the bot supports inline queries.\n Returned only in :attr:`telegram.Bot.get_me` requests.\n bot (:class:`telegram.Bot`): Optional. The Bot to use for instance methods.\n\n Args:\n id (:obj:`int`): Unique identifier for this user or bot.\n is_bot (:obj:`bool`): True, if this user is a bot\n first_name (:obj:`str`): User's or bot's first name.\n last_name (:obj:`str`, optional): User's or bot's last name.\n username (:obj:`str`, optional): User's or bot's username.\n language_code (:obj:`str`, optional): IETF language tag of the user's language.\n can_join_groups (:obj:`str`, optional): True, if the bot can be invited to groups.\n Returned only in :attr:`telegram.Bot.get_me` requests.\n can_read_all_group_messages (:obj:`str`, optional): True, if privacy mode is disabled\n for the bot. 
Returned only in :attr:`telegram.Bot.get_me` requests.\n supports_inline_queries (:obj:`str`, optional): True, if the bot supports inline queries.\n Returned only in :attr:`telegram.Bot.get_me` requests.\n bot (:class:`telegram.Bot`, optional): The Bot to use for instance methods.\n\n \"\"\"\n\n def __init__(self,\n id,\n first_name,\n is_bot,\n last_name=None,\n username=None,\n language_code=None,\n can_join_groups=None,\n can_read_all_group_messages=None,\n supports_inline_queries=None,\n bot=None,\n **kwargs):\n # Required\n self.id = int(id)\n self.first_name = first_name\n self.is_bot = is_bot\n # Optionals\n self.last_name = last_name\n self.username = username\n self.language_code = language_code\n self.can_join_groups = can_join_groups\n self.can_read_all_group_messages = can_read_all_group_messages\n self.supports_inline_queries = supports_inline_queries\n self.bot = bot\n\n self._id_attrs = (self.id,)\n\n @property\n def name(self):\n \"\"\":obj:`str`: Convenience property. If available, returns the user's :attr:`username`\n prefixed with \"@\". If :attr:`username` is not available, returns :attr:`full_name`.\"\"\"\n if self.username:\n return '@{}'.format(self.username)\n return self.full_name\n\n @property\n def full_name(self):\n \"\"\":obj:`str`: Convenience property. The user's :attr:`first_name`, followed by (if\n available) :attr:`last_name`.\"\"\"\n\n if self.last_name:\n return u'{} {}'.format(self.first_name, self.last_name)\n return self.first_name\n\n @property\n def link(self):\n \"\"\":obj:`str`: Convenience property. If :attr:`username` is available, returns a t.me link\n of the user.\"\"\"\n\n if self.username:\n return \"https://t.me/{}\".format(self.username)\n return None\n\n @classmethod\n def de_json(cls, data, bot):\n if not data:\n return None\n data = super(User, cls).de_json(data, bot)\n\n return cls(bot=bot, **data)\n\n def get_profile_photos(self, *args, **kwargs):\n \"\"\"\n Shortcut for::\n\n bot.get_user_profile_photos(update.message.from_user.id, *args, **kwargs)\n\n \"\"\"\n\n return self.bot.get_user_profile_photos(self.id, *args, **kwargs)\n\n @classmethod\n def de_list(cls, data, bot):\n if not data:\n return []\n\n users = list()\n for user in data:\n users.append(cls.de_json(user, bot))\n\n return users\n\n def mention_markdown(self, name=None):\n \"\"\"\n Args:\n name (:obj:`str`): The name used as a link for the user. Defaults to :attr:`full_name`.\n\n Returns:\n :obj:`str`: The inline mention for the user as markdown (version 1).\n\n \"\"\"\n if name:\n return util_mention_markdown(self.id, name)\n return util_mention_markdown(self.id, self.full_name)\n\n def mention_markdown_v2(self, name=None):\n \"\"\"\n Args:\n name (:obj:`str`): The name used as a link for the user. Defaults to :attr:`full_name`.\n\n Returns:\n :obj:`str`: The inline mention for the user as markdown (version 2).\n\n \"\"\"\n if name:\n return util_mention_markdown(self.id, name, version=2)\n return util_mention_markdown(self.id, self.full_name, version=2)\n\n def mention_html(self, name=None):\n \"\"\"\n Args:\n name (:obj:`str`): The name used as a link for the user. 
Defaults to :attr:`full_name`.\n\n Returns:\n :obj:`str`: The inline mention for the user as HTML.\n\n \"\"\"\n if name:\n return util_mention_html(self.id, name)\n return util_mention_html(self.id, self.full_name)\n\n def send_message(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_message(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_message(self.id, *args, **kwargs)\n\n def send_photo(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_photo(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_photo(self.id, *args, **kwargs)\n\n def send_audio(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_audio(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_audio(self.id, *args, **kwargs)\n\n def send_document(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_document(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_document(self.id, *args, **kwargs)\n\n def send_animation(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_animation(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_animation(self.id, *args, **kwargs)\n\n def send_sticker(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_sticker(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_sticker(self.id, *args, **kwargs)\n\n def send_video(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_video(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_video(self.id, *args, **kwargs)\n\n def send_video_note(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_video_note(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_video_note(self.id, *args, **kwargs)\n\n def send_voice(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_voice(User.id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_voice(self.id, *args, **kwargs)\n", "path": "telegram/user.py"}]} | 3,662 | 148 |
gh_patches_debug_10795 | rasdani/github-patches | git_diff | pymedusa__Medusa-3141 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rtorrent Not connecting
### Before submitting your issue:
Enable debug logging in Medusa settings, reproduce the error (be sure to disable after the bug is fixed)
**Branch/Commit:** Main/latest
**OS:** Ubuntu 16.x
**What you did:** Attempted to connect my seedbox's rTorrent instance; double-checked with the seedbox company to confirm my connection details are correct.
**What happened:** "Could not connect to rtorrent"
**What you expected:** A successful connection to rTorrent.
**Logs:**
```
2017-09-06 14:43:36 DEBUG SNATCHQUEUE-MANUALSNATCH-72218 :: [3f23fd2] Traceback (most recent call last):
File "/opt/medusa/medusa/search/queue.py", line 487, in run
self.success = snatch_episode(result)
File "/opt/medusa/medusa/search/core.py", line 153, in snatch_episode
result_downloaded = client.send_torrent(result)
File "/opt/medusa/medusa/clients/torrent/generic.py", line 242, in send_torrent
if not self._get_auth():
File "/opt/medusa/medusa/clients/torrent/rtorrent_client.py", line 50, in _get_auth
self.auth = RTorrent(self.host, self.username, self.password, True, tp_kwargs=tp_kwargs)
File "/opt/medusa/lib/rtorrent/__init__.py", line 83, in __init__
self._verify_conn()
File "/opt/medusa/lib/rtorrent/__init__.py", line 122, in _verify_conn
assert "system.client_version" in self._get_rpc_methods(
File "/opt/medusa/lib/rtorrent/__init__.py", line 161, in _get_rpc_methods
return(self._rpc_methods or self._update_rpc_methods())
File "/opt/medusa/lib/rtorrent/__init__.py", line 150, in _update_rpc_methods
self._rpc_methods = self._get_conn().system.listMethods()
File "/usr/lib/python2.7/xmlrpclib.py", line 1243, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1602, in __request
verbose=self.__verbose
File "/opt/medusa/lib/rtorrent/lib/xmlrpc/requests_transport.py", line 150, in request
response.headers)
ProtocolError: <ProtocolError for http://nl4727.dediseedbox.com//rutorrent/plugins/httprpc/action.php: 401 Client Error: Unauthorized for url: http://nl4727.dediseedbox.com//rutorrent/plugins/httprpc/action.php Traceback (most recent call last):
File "/opt/medusa/lib/rtorrent/lib/xmlrpc/requests_transport.py", line 145, in request
response.raise_for_status()
File "/opt/medusa/lib/requests/models.py", line 844, in raise_for_status
raise HTTPError(http_error_msg, response=self)
HTTPError: 401 Client Error: Unauthorized for url: http://nl4727.dediseedbox.com//rutorrent/plugins/httprpc/action.php
```
Side note: this configuration does work with CouchPotato.
</issue>
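The malformed URL in the traceback above (note the `//` before `rutorrent`) points at how the transport below builds its request URL: the XML-RPC handler path already carries a leading slash, so joining it as `{host}/{handler}` doubles it. A minimal illustration, using the host and handler from the traceback and assuming the handler string is passed with its leading slash:

```python
# Host and handler taken from the traceback; the leading slash on the
# handler is an assumption based on the doubled slash in the failing URL.
host = "nl4727.dediseedbox.com"
handler = "/rutorrent/plugins/httprpc/action.php"

print("http://{host}/{handler}".format(host=host, handler=handler))
# -> http://nl4727.dediseedbox.com//rutorrent/plugins/httprpc/action.php (doubled slash, as in the 401 above)

print("http://{host}{handler}".format(host=host, handler=handler))
# -> http://nl4727.dediseedbox.com/rutorrent/plugins/httprpc/action.php
```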
<code>
[start of lib/rtorrent/lib/xmlrpc/requests_transport.py]
1 # Copyright (c) 2013-2015 Alexandre Beloin, <[email protected]>
2 #
3 # This program is free software: you can redistribute it and/or modify
4 # it under the terms of the GNU General Public License as published by
5 # the Free Software Foundation, either version 3 of the License, or
6 # (at your option) any later version.
7 #
8 # This program is distributed in the hope that it will be useful,
9 # but WITHOUT ANY WARRANTY; without even the implied warranty of
10 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11 # GNU General Public License for more details.
12
13 # You should have received a copy of the GNU General Public License
14 # along with this program. If not, see <http://www.gnu.org/licenses/>.
15
16 """A transport for Python2/3 xmlrpc library using requests
17
18 Support:
19 -SSL with Basic and Digest authentication
20 -Proxies
21 """
22
23 try:
24 import xmlrpc.client as xmlrpc_client
25 except ImportError:
26 import xmlrpclib as xmlrpc_client
27
28 import traceback
29
30 import requests
31 from requests.exceptions import RequestException
32 from requests.auth import HTTPBasicAuth
33 from requests.auth import HTTPDigestAuth
34 from requests.packages.urllib3 import disable_warnings # @UnresolvedImport
35
36
37 class RequestsTransport(xmlrpc_client.Transport):
38
39 """Transport class for xmlrpc using requests"""
40
41 def __init__(self, use_https=True, authtype=None, username=None,
42 password=None, check_ssl_cert=True, proxies=None):
43 """Inits RequestsTransport.
44
45 Args:
46 use_https: If true, https else http
47 authtype: None, basic or digest
48 username: Username
49 password: Password
50 check_ssl_cert: Check SSL certificate
51 proxies: A dict of proxies(
52 Ex: {"http": "http://10.10.1.10:3128",
53 "https": "http://10.10.1.10:1080",})
54
55 Raises:
56 ValueError: Invalid info
57 """
58 # Python 2 can't use super on old style class.
59 if issubclass(xmlrpc_client.Transport, object):
60 super(RequestsTransport, self).__init__()
61 else:
62 xmlrpc_client.Transport.__init__(self)
63
64 self.user_agent = "Python Requests/" + requests.__version__
65
66 self._use_https = use_https
67 self._check_ssl_cert = check_ssl_cert
68
69 if authtype == "basic" or authtype == "digest":
70 self._authtype = authtype
71 else:
72 raise ValueError(
73 "Supported authentication are: basic and digest")
74 if authtype and (not username or not password):
75 raise ValueError(
76 "Username and password required when using authentication")
77
78 self._username = username
79 self._password = password
80 if proxies is None:
81 self._proxies = {}
82 else:
83 self._proxies = proxies
84
85 def request(self, host, handler, request_body, verbose=0):
86 """Replace the xmlrpc request function.
87
88 Process xmlrpc request via requests library.
89
90 Args:
91 host: Target host
92 handler: Target PRC handler.
93 request_body: XML-RPC request body.
94 verbose: Debugging flag.
95
96 Returns:
97 Parsed response.
98
99 Raises:
100 RequestException: Error in requests
101 """
102 if verbose:
103 self._debug()
104
105 if not self._check_ssl_cert:
106 disable_warnings()
107
108 headers = {'User-Agent': self.user_agent, 'Content-Type': 'text/xml', }
109
110 # Need to be done because the schema(http or https) is lost in
111 # xmlrpc.Transport's init.
112 if self._use_https:
113 url = "https://{host}/{handler}".format(host=host, handler=handler)
114 else:
115 url = "http://{host}/{handler}".format(host=host, handler=handler)
116
117 # TODO Construct kwargs query instead
118 try:
119 if self._authtype == "basic":
120 response = requests.post(
121 url,
122 data=request_body,
123 headers=headers,
124 verify=self._check_ssl_cert,
125 auth=HTTPBasicAuth(
126 self._username, self._password),
127 proxies=self._proxies)
128 elif self._authtype == "digest":
129 response = requests.post(
130 url,
131 data=request_body,
132 headers=headers,
133 verify=self._check_ssl_cert,
134 auth=HTTPDigestAuth(
135 self._username, self._password),
136 proxies=self._proxies)
137 else:
138 response = requests.post(
139 url,
140 data=request_body,
141 headers=headers,
142 verify=self._check_ssl_cert,
143 proxies=self._proxies)
144
145 response.raise_for_status()
146 except RequestException as error:
147 raise xmlrpc_client.ProtocolError(url,
148 error.message,
149 traceback.format_exc(),
150 response.headers)
151
152 return self.parse_response(response)
153
154 def parse_response(self, response):
155 """Replace the xmlrpc parse_response function.
156
157 Parse response.
158
159 Args:
160 response: Requests return data
161
162 Returns:
163 Response tuple and target method.
164 """
165 p, u = self.getparser()
166 p.feed(response.text.encode('utf-8'))
167 p.close()
168 return u.close()
169
170 def _debug(self):
171 """Debug requests module.
172
173 Enable verbose logging from requests
174 """
175 # TODO Ugly
176 import logging
177 try:
178 import http.client as http_client
179 except ImportError:
180 import httplib as http_client
181
182 http_client.HTTPConnection.debuglevel = 1
183
184 logging.basicConfig()
185 logging.getLogger().setLevel(logging.DEBUG)
186 requests_log = logging.getLogger("requests.packages.urllib3")
187 requests_log.setLevel(logging.DEBUG)
188 requests_log.propagate = True
189
[end of lib/rtorrent/lib/xmlrpc/requests_transport.py]
</code>
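For orientation, a sketch of how a transport like the one above is normally plugged into the standard XML-RPC client. The URL and credentials are placeholders, and the import path is assumed from the repository layout; Medusa's own rTorrent wrapper may wire this up differently:

```python
try:
    import xmlrpc.client as xmlrpc_client
except ImportError:
    import xmlrpclib as xmlrpc_client

# Import path assumed from the file location shown above (lib/ on sys.path).
from rtorrent.lib.xmlrpc.requests_transport import RequestsTransport

transport = RequestsTransport(use_https=False, authtype="digest",
                              username="user", password="secret",
                              check_ssl_cert=False)
server = xmlrpc_client.ServerProxy(
    "http://example.com/rutorrent/plugins/httprpc/action.php",
    transport=transport)
print(server.system.listMethods())  # the same call that fails in the traceback
```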
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/rtorrent/lib/xmlrpc/requests_transport.py b/lib/rtorrent/lib/xmlrpc/requests_transport.py
--- a/lib/rtorrent/lib/xmlrpc/requests_transport.py
+++ b/lib/rtorrent/lib/xmlrpc/requests_transport.py
@@ -110,9 +110,9 @@
# Need to be done because the schema(http or https) is lost in
# xmlrpc.Transport's init.
if self._use_https:
- url = "https://{host}/{handler}".format(host=host, handler=handler)
+ url = "https://{host}{handler}".format(host=host, handler=handler)
else:
- url = "http://{host}/{handler}".format(host=host, handler=handler)
+ url = "http://{host}{handler}".format(host=host, handler=handler)
# TODO Construct kwargs query instead
try:
| {"golden_diff": "diff --git a/lib/rtorrent/lib/xmlrpc/requests_transport.py b/lib/rtorrent/lib/xmlrpc/requests_transport.py\n--- a/lib/rtorrent/lib/xmlrpc/requests_transport.py\n+++ b/lib/rtorrent/lib/xmlrpc/requests_transport.py\n@@ -110,9 +110,9 @@\n # Need to be done because the schema(http or https) is lost in\n # xmlrpc.Transport's init.\n if self._use_https:\n- url = \"https://{host}/{handler}\".format(host=host, handler=handler)\n+ url = \"https://{host}{handler}\".format(host=host, handler=handler)\n else:\n- url = \"http://{host}/{handler}\".format(host=host, handler=handler)\n+ url = \"http://{host}{handler}\".format(host=host, handler=handler)\n \n # TODO Construct kwargs query instead\n try:\n", "issue": "Rtorrent Not connecting\n### Before submitting your issue:\r\n\r\nEnable debug logging in Medusa settings, reproduce the error (be sure to disable after the bug is fixed)\r\n\r\n**Branch/Commit:Main/latest\r\n**OS:Ubuntu 16.x\r\n**What you did: Attempted to connect my seedboxes rTorrent instance, doublechecked with seedbox company to confirm my information is correct\r\n**What happened: \"Could Not connect to rtorrent\"\r\n**What you expected: .... Connection to r torrent\r\n**Logs:**\r\n```\r\n2017-09-06 14:43:36 DEBUG SNATCHQUEUE-MANUALSNATCH-72218 :: [3f23fd2] Traceback (most recent call last):\r\n File \"/opt/medusa/medusa/search/queue.py\", line 487, in run\r\n self.success = snatch_episode(result)\r\n File \"/opt/medusa/medusa/search/core.py\", line 153, in snatch_episode\r\n result_downloaded = client.send_torrent(result)\r\n File \"/opt/medusa/medusa/clients/torrent/generic.py\", line 242, in send_torrent\r\n if not self._get_auth():\r\n File \"/opt/medusa/medusa/clients/torrent/rtorrent_client.py\", line 50, in _get_auth\r\n self.auth = RTorrent(self.host, self.username, self.password, True, tp_kwargs=tp_kwargs)\r\n File \"/opt/medusa/lib/rtorrent/__init__.py\", line 83, in __init__\r\n self._verify_conn()\r\n File \"/opt/medusa/lib/rtorrent/__init__.py\", line 122, in _verify_conn\r\n assert \"system.client_version\" in self._get_rpc_methods(\r\n File \"/opt/medusa/lib/rtorrent/__init__.py\", line 161, in _get_rpc_methods\r\n return(self._rpc_methods or self._update_rpc_methods())\r\n File \"/opt/medusa/lib/rtorrent/__init__.py\", line 150, in _update_rpc_methods\r\n self._rpc_methods = self._get_conn().system.listMethods()\r\n File \"/usr/lib/python2.7/xmlrpclib.py\", line 1243, in __call__\r\n return self.__send(self.__name, args)\r\n File \"/usr/lib/python2.7/xmlrpclib.py\", line 1602, in __request\r\n verbose=self.__verbose\r\n File \"/opt/medusa/lib/rtorrent/lib/xmlrpc/requests_transport.py\", line 150, in request\r\n response.headers)\r\nProtocolError: <ProtocolError for http://nl4727.dediseedbox.com//rutorrent/plugins/httprpc/action.php: 401 Client Error: Unauthorized for url: http://nl4727.dediseedbox.com//rutorrent/plugins/httprpc/action.php Traceback (most recent call last):\r\n File \"/opt/medusa/lib/rtorrent/lib/xmlrpc/requests_transport.py\", line 145, in request\r\n response.raise_for_status()\r\n File \"/opt/medusa/lib/requests/models.py\", line 844, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nHTTPError: 401 Client Error: Unauthorized for url: http://nl4727.dediseedbox.com//rutorrent/plugins/httprpc/action.php\r\n```\r\nSide Note, this does work with couchpotato\n", "before_files": [{"content": "# Copyright (c) 2013-2015 Alexandre Beloin, <[email protected]>\n#\n# This program is free software: you can redistribute it 
and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"A transport for Python2/3 xmlrpc library using requests\n\nSupport:\n-SSL with Basic and Digest authentication\n-Proxies\n\"\"\"\n\ntry:\n import xmlrpc.client as xmlrpc_client\nexcept ImportError:\n import xmlrpclib as xmlrpc_client\n\nimport traceback\n\nimport requests\nfrom requests.exceptions import RequestException\nfrom requests.auth import HTTPBasicAuth\nfrom requests.auth import HTTPDigestAuth\nfrom requests.packages.urllib3 import disable_warnings # @UnresolvedImport\n\n\nclass RequestsTransport(xmlrpc_client.Transport):\n\n \"\"\"Transport class for xmlrpc using requests\"\"\"\n\n def __init__(self, use_https=True, authtype=None, username=None,\n password=None, check_ssl_cert=True, proxies=None):\n \"\"\"Inits RequestsTransport.\n\n Args:\n use_https: If true, https else http\n authtype: None, basic or digest\n username: Username\n password: Password\n check_ssl_cert: Check SSL certificate\n proxies: A dict of proxies(\n Ex: {\"http\": \"http://10.10.1.10:3128\",\n \"https\": \"http://10.10.1.10:1080\",})\n\n Raises:\n ValueError: Invalid info\n \"\"\"\n # Python 2 can't use super on old style class.\n if issubclass(xmlrpc_client.Transport, object):\n super(RequestsTransport, self).__init__()\n else:\n xmlrpc_client.Transport.__init__(self)\n\n self.user_agent = \"Python Requests/\" + requests.__version__\n\n self._use_https = use_https\n self._check_ssl_cert = check_ssl_cert\n\n if authtype == \"basic\" or authtype == \"digest\":\n self._authtype = authtype\n else:\n raise ValueError(\n \"Supported authentication are: basic and digest\")\n if authtype and (not username or not password):\n raise ValueError(\n \"Username and password required when using authentication\")\n\n self._username = username\n self._password = password\n if proxies is None:\n self._proxies = {}\n else:\n self._proxies = proxies\n\n def request(self, host, handler, request_body, verbose=0):\n \"\"\"Replace the xmlrpc request function.\n\n Process xmlrpc request via requests library.\n\n Args:\n host: Target host\n handler: Target PRC handler.\n request_body: XML-RPC request body.\n verbose: Debugging flag.\n\n Returns:\n Parsed response.\n\n Raises:\n RequestException: Error in requests\n \"\"\"\n if verbose:\n self._debug()\n\n if not self._check_ssl_cert:\n disable_warnings()\n\n headers = {'User-Agent': self.user_agent, 'Content-Type': 'text/xml', }\n\n # Need to be done because the schema(http or https) is lost in\n # xmlrpc.Transport's init.\n if self._use_https:\n url = \"https://{host}/{handler}\".format(host=host, handler=handler)\n else:\n url = \"http://{host}/{handler}\".format(host=host, handler=handler)\n\n # TODO Construct kwargs query instead\n try:\n if self._authtype == \"basic\":\n response = requests.post(\n url,\n data=request_body,\n headers=headers,\n verify=self._check_ssl_cert,\n auth=HTTPBasicAuth(\n self._username, self._password),\n proxies=self._proxies)\n elif self._authtype == \"digest\":\n 
response = requests.post(\n url,\n data=request_body,\n headers=headers,\n verify=self._check_ssl_cert,\n auth=HTTPDigestAuth(\n self._username, self._password),\n proxies=self._proxies)\n else:\n response = requests.post(\n url,\n data=request_body,\n headers=headers,\n verify=self._check_ssl_cert,\n proxies=self._proxies)\n\n response.raise_for_status()\n except RequestException as error:\n raise xmlrpc_client.ProtocolError(url,\n error.message,\n traceback.format_exc(),\n response.headers)\n\n return self.parse_response(response)\n\n def parse_response(self, response):\n \"\"\"Replace the xmlrpc parse_response function.\n\n Parse response.\n\n Args:\n response: Requests return data\n\n Returns:\n Response tuple and target method.\n \"\"\"\n p, u = self.getparser()\n p.feed(response.text.encode('utf-8'))\n p.close()\n return u.close()\n\n def _debug(self):\n \"\"\"Debug requests module.\n\n Enable verbose logging from requests\n \"\"\"\n # TODO Ugly\n import logging\n try:\n import http.client as http_client\n except ImportError:\n import httplib as http_client\n\n http_client.HTTPConnection.debuglevel = 1\n\n logging.basicConfig()\n logging.getLogger().setLevel(logging.DEBUG)\n requests_log = logging.getLogger(\"requests.packages.urllib3\")\n requests_log.setLevel(logging.DEBUG)\n requests_log.propagate = True\n", "path": "lib/rtorrent/lib/xmlrpc/requests_transport.py"}]} | 3,032 | 197 |
gh_patches_debug_8852 | rasdani/github-patches | git_diff | pwndbg__pwndbg-363 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken `entry` command
### Description
The `entry` command passes arguments differently than the `run` command does.
### Steps to reproduce
```
[dc@dc:pwndbg|dev *$%]$ gdb python
Loaded 113 commands. Type pwndbg [filter] for a list.
Reading symbols from python...(no debugging symbols found)...done.
pwndbg> set exception-verbose on
Set whether to print a full stacktracefor exceptions raised in Pwndbg commands to True
pwndbg> run -c "print(1); print(2)"
Starting program: /usr/bin/python -c "print(1); print(2)"
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
1
2
[Inferior 1 (process 20590) exited normally]
pwndbg> entry -c "print(1); print(2)"
('-c', 'print(1); print(2)')
Running '%s' run -c print(1); print(2)
/bin/bash: -c: line 0: syntax error near unexpected token `('
/bin/bash: -c: line 0: `exec /usr/bin/python -c print(1); print(2)'
Traceback (most recent call last):
File "/home/dc/installed/pwndbg/pwndbg/commands/__init__.py", line 100, in __call__
return self.function(*args, **kwargs)
File "/home/dc/installed/pwndbg/pwndbg/commands/__init__.py", line 181, in _OnlyWithFile
return function(*a, **kw)
File "/home/dc/installed/pwndbg/pwndbg/commands/start.py", line 72, in entry
gdb.execute(run, from_tty=False)
gdb.error: During startup program exited with code 1.
If that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues
(Please don't forget to search if it hasn't been reported before)
PS: Pull requests are welcome
```
### My version
```
pwndbg> version
Gdb: GNU gdb (GDB) 8.0.1
Python: 3.6.3 (default, Oct 24 2017, 14:48:20) [GCC 7.2.0]
Pwndbg: 1.0.0 build: 5811010
```
</issue>
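The traceback shows the argument tuple `('-c', 'print(1); print(2)')` being flattened with a plain `' '.join`, which discards the quoting that the native `run` command preserves. A minimal sketch of the difference, using the tuple printed above:

```python
import shlex

args = ('-c', 'print(1); print(2)')  # as printed in the traceback

print('run ' + ' '.join(args))
# run -c print(1); print(2)              <- quoting lost; bash later chokes on the '('

print('run ' + ' '.join(shlex.quote(arg) for arg in args))
# run -c 'print(1); print(2)'            <- the argument survives as one token
```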
<code>
[start of pwndbg/commands/start.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Launches the target process after setting a breakpoint at a convenient
5 entry point.
6 """
7 from __future__ import absolute_import
8 from __future__ import division
9 from __future__ import print_function
10 from __future__ import unicode_literals
11
12 import gdb
13
14 import pwndbg.commands
15 import pwndbg.elf
16 import pwndbg.events
17 import pwndbg.symbol
18
19 break_on_first_instruction = False
20
21
22 @pwndbg.events.start
23 def on_start():
24 global break_on_first_instruction
25 if break_on_first_instruction:
26 spec = "*%#x" % (int(pwndbg.elf.entry()))
27 gdb.Breakpoint(spec, temporary=True)
28 break_on_first_instruction = False
29
30
31 @pwndbg.commands.Command
32 def start(*a):
33 """
34 Set a breakpoint at a convenient location in the binary,
35 generally 'main', 'init', or the entry point.
36 """
37 run = 'run ' + ' '.join(a)
38
39 symbols = ["main",
40 "_main",
41 "start",
42 "_start",
43 "init",
44 "_init"]
45
46 for symbol in symbols:
47 address = pwndbg.symbol.address(symbol)
48
49 if not address:
50 continue
51
52 b = gdb.Breakpoint(symbol, temporary=True)
53 gdb.execute(run, from_tty=False, to_string=True)
54 return
55
56 # Try a breakpoint at the binary entry
57 entry(*a)
58
59
60 @pwndbg.commands.Command
61 @pwndbg.commands.OnlyWithFile
62 def entry(*a):
63 """
64 Set a breakpoint at the first instruction executed in
65 the target binary.
66 """
67 global break_on_first_instruction
68 break_on_first_instruction = True
69 run = 'run ' + ' '.join(a)
70 gdb.execute(run, from_tty=False)
71
[end of pwndbg/commands/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/commands/start.py b/pwndbg/commands/start.py
--- a/pwndbg/commands/start.py
+++ b/pwndbg/commands/start.py
@@ -9,6 +9,8 @@
from __future__ import print_function
from __future__ import unicode_literals
+import shlex
+
import gdb
import pwndbg.commands
@@ -66,5 +68,5 @@
"""
global break_on_first_instruction
break_on_first_instruction = True
- run = 'run ' + ' '.join(a)
+ run = 'run ' + ' '.join(map(shlex.quote, a))
gdb.execute(run, from_tty=False)
| {"golden_diff": "diff --git a/pwndbg/commands/start.py b/pwndbg/commands/start.py\n--- a/pwndbg/commands/start.py\n+++ b/pwndbg/commands/start.py\n@@ -9,6 +9,8 @@\n from __future__ import print_function\n from __future__ import unicode_literals\n \n+import shlex\n+\n import gdb\n \n import pwndbg.commands\n@@ -66,5 +68,5 @@\n \"\"\"\n global break_on_first_instruction\n break_on_first_instruction = True\n- run = 'run ' + ' '.join(a)\n+ run = 'run ' + ' '.join(map(shlex.quote, a))\n gdb.execute(run, from_tty=False)\n", "issue": "Broken `entry` command\n### Description\r\n\r\nThe `entry` command pass arguments differently then the `run` command.\r\n\r\n### Steps to reproduce\r\n\r\n```\r\n[dc@dc:pwndbg|dev *$%]$ gdb python\r\nLoaded 113 commands. Type pwndbg [filter] for a list.\r\nReading symbols from python...(no debugging symbols found)...done.\r\npwndbg> set exception-verbose on\r\nSet whether to print a full stacktracefor exceptions raised in Pwndbg commands to True\r\npwndbg> run -c \"print(1); print(2)\"\r\nStarting program: /usr/bin/python -c \"print(1); print(2)\"\r\n[Thread debugging using libthread_db enabled]\r\nUsing host libthread_db library \"/usr/lib/libthread_db.so.1\".\r\n1\r\n2\r\n[Inferior 1 (process 20590) exited normally]\r\npwndbg> entry -c \"print(1); print(2)\"\r\n('-c', 'print(1); print(2)')\r\nRunning '%s' run -c print(1); print(2)\r\n/bin/bash: -c: line 0: syntax error near unexpected token `('\r\n/bin/bash: -c: line 0: `exec /usr/bin/python -c print(1); print(2)'\r\nTraceback (most recent call last):\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/__init__.py\", line 100, in __call__\r\n return self.function(*args, **kwargs)\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/__init__.py\", line 181, in _OnlyWithFile\r\n return function(*a, **kw)\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/start.py\", line 72, in entry\r\n gdb.execute(run, from_tty=False)\r\ngdb.error: During startup program exited with code 1.\r\n\r\nIf that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues\r\n(Please don't forget to search if it hasn't been reported before)\r\nPS: Pull requests are welcome\r\n```\r\n\r\n### My version\r\n\r\n```\r\npwndbg> version\r\nGdb: GNU gdb (GDB) 8.0.1\r\nPython: 3.6.3 (default, Oct 24 2017, 14:48:20) [GCC 7.2.0]\r\nPwndbg: 1.0.0 build: 5811010\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nLaunches the target process after setting a breakpoint at a convenient\nentry point.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.elf\nimport pwndbg.events\nimport pwndbg.symbol\n\nbreak_on_first_instruction = False\n\n\[email protected]\ndef on_start():\n global break_on_first_instruction\n if break_on_first_instruction:\n spec = \"*%#x\" % (int(pwndbg.elf.entry()))\n gdb.Breakpoint(spec, temporary=True)\n break_on_first_instruction = False\n\n\[email protected]\ndef start(*a):\n \"\"\"\n Set a breakpoint at a convenient location in the binary,\n generally 'main', 'init', or the entry point.\n \"\"\"\n run = 'run ' + ' '.join(a)\n\n symbols = [\"main\",\n \"_main\",\n \"start\",\n \"_start\",\n \"init\",\n \"_init\"]\n\n for symbol in symbols:\n address = pwndbg.symbol.address(symbol)\n\n if not address:\n continue\n\n b = gdb.Breakpoint(symbol, temporary=True)\n gdb.execute(run, from_tty=False, 
to_string=True)\n return\n\n # Try a breakpoint at the binary entry\n entry(*a)\n\n\[email protected]\[email protected]\ndef entry(*a):\n \"\"\"\n Set a breakpoint at the first instruction executed in\n the target binary.\n \"\"\"\n global break_on_first_instruction\n break_on_first_instruction = True\n run = 'run ' + ' '.join(a)\n gdb.execute(run, from_tty=False)\n", "path": "pwndbg/commands/start.py"}]} | 1,629 | 151 |
gh_patches_debug_7911 | rasdani/github-patches | git_diff | edgedb__edgedb-1946 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ISE when LIMIT/OFFSET correlated with query
```
sully> SELECT Object LIMIT len(<str>.id);
ERROR: InternalServerError: argument of LIMIT must not contain variables
```
```
sully> SELECT Object OFFSET len(<str>.id);
ERROR: InternalServerError: argument of OFFSET must not contain variables
```
Rejecting these is correct but we want a real error.
</issue>
<code>
[start of edb/edgeql/compiler/clauses.py]
1 #
2 # This source file is part of the EdgeDB open source project.
3 #
4 # Copyright 2008-present MagicStack Inc. and the EdgeDB authors.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19
20 """EdgeQL compiler functions to process shared clauses."""
21
22
23 from __future__ import annotations
24
25 from typing import *
26
27 from edb.edgeql import ast as qlast
28 from edb.ir import ast as irast
29
30 from edb import errors
31
32 from . import context
33 from . import dispatch
34 from . import inference
35 from . import polyres
36 from . import schemactx
37 from . import setgen
38
39
40 def compile_where_clause(
41 ir_stmt: irast.FilteredStmt,
42 where: Optional[qlast.Base], *,
43 ctx: context.ContextLevel) -> None:
44
45 if where is None:
46 return
47
48 with ctx.newscope(fenced=True) as subctx:
49 subctx.path_scope.unnest_fence = True
50 ir_expr = dispatch.compile(where, ctx=subctx)
51 bool_t = ctx.env.get_track_schema_type('std::bool')
52 ir_set = setgen.scoped_set(ir_expr, typehint=bool_t, ctx=subctx)
53
54 ir_stmt.where = ir_set
55
56
57 def compile_orderby_clause(
58 sortexprs: Optional[Iterable[qlast.SortExpr]], *,
59 ctx: context.ContextLevel) -> List[irast.SortExpr]:
60
61 result: List[irast.SortExpr] = []
62 if not sortexprs:
63 return result
64
65 with ctx.new() as subctx:
66 for sortexpr in sortexprs:
67 with subctx.newscope(fenced=True) as exprctx:
68 exprctx.path_scope.unnest_fence = True
69 ir_sortexpr = dispatch.compile(sortexpr.path, ctx=exprctx)
70 ir_sortexpr = setgen.scoped_set(
71 ir_sortexpr, force_reassign=True, ctx=exprctx)
72 ir_sortexpr.context = sortexpr.context
73
74 # Check that the sortexpr type is actually orderable
75 # with either '>' or '<' based on the DESC or ASC sort
76 # order.
77 env = exprctx.env
78 sort_type = inference.infer_type(ir_sortexpr, env)
79 # Postgres by default treats ASC as using '<' and DESC
80 # as using '>'. We should do the same.
81 if sortexpr.direction == qlast.SortDesc:
82 op_name = '>'
83 else:
84 op_name = '<'
85 opers = env.schema.get_operators(
86 op_name, module_aliases=exprctx.modaliases)
87
88 # Verify that a comparison operator is defined for 2
89 # sort_type expressions.
90 matched = polyres.find_callable(
91 opers,
92 args=[(sort_type, ir_sortexpr), (sort_type, ir_sortexpr)],
93 kwargs={},
94 ctx=exprctx)
95 if len(matched) != 1:
96 sort_type_name = schemactx.get_material_type(
97 sort_type, ctx=ctx).get_displayname(env.schema)
98 if len(matched) == 0:
99 raise errors.QueryError(
100 f'type {sort_type_name!r} cannot be used in '
101 f'ORDER BY clause because ordering is not '
102 f'defined for it',
103 context=sortexpr.context)
104
105 elif len(matched) > 1:
106 raise errors.QueryError(
107 f'type {sort_type_name!r} cannot be used in '
108 f'ORDER BY clause because ordering is '
109 f'ambiguous for it',
110 context=sortexpr.context)
111
112 result.append(
113 irast.SortExpr(
114 expr=ir_sortexpr,
115 direction=sortexpr.direction,
116 nones_order=sortexpr.nones_order))
117
118 return result
119
120
121 def compile_limit_offset_clause(
122 expr: Optional[qlast.Base], *,
123 ctx: context.ContextLevel) -> Optional[irast.Set]:
124 if expr is None:
125 ir_set = None
126 else:
127 with ctx.newscope(fenced=True) as subctx:
128 ir_expr = dispatch.compile(expr, ctx=subctx)
129 int_t = ctx.env.get_track_schema_type('std::int64')
130 ir_set = setgen.scoped_set(
131 ir_expr, force_reassign=True, typehint=int_t, ctx=subctx)
132 ir_set.context = expr.context
133
134 return ir_set
135
[end of edb/edgeql/compiler/clauses.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/edb/edgeql/compiler/clauses.py b/edb/edgeql/compiler/clauses.py
--- a/edb/edgeql/compiler/clauses.py
+++ b/edb/edgeql/compiler/clauses.py
@@ -125,6 +125,10 @@
ir_set = None
else:
with ctx.newscope(fenced=True) as subctx:
+ # Clear out the partial_path_prefix, since we aren't in
+ # the scope of the select subject
+ subctx.partial_path_prefix = None
+
ir_expr = dispatch.compile(expr, ctx=subctx)
int_t = ctx.env.get_track_schema_type('std::int64')
ir_set = setgen.scoped_set(
| {"golden_diff": "diff --git a/edb/edgeql/compiler/clauses.py b/edb/edgeql/compiler/clauses.py\n--- a/edb/edgeql/compiler/clauses.py\n+++ b/edb/edgeql/compiler/clauses.py\n@@ -125,6 +125,10 @@\n ir_set = None\n else:\n with ctx.newscope(fenced=True) as subctx:\n+ # Clear out the partial_path_prefix, since we aren't in\n+ # the scope of the select subject\n+ subctx.partial_path_prefix = None\n+\n ir_expr = dispatch.compile(expr, ctx=subctx)\n int_t = ctx.env.get_track_schema_type('std::int64')\n ir_set = setgen.scoped_set(\n", "issue": "ISE when LIMIT/OFFSET correlated with query\n```\r\nsully> SELECT Object LIMIT len(<str>.id);\r\nERROR: InternalServerError: argument of LIMIT must not contain variables\r\n```\r\n\r\n```\r\nsully> SELECT Object OFFSET len(<str>.id);\r\nERROR: InternalServerError: argument of OFFSET must not contain variables\r\n```\r\n\r\nRejecting these is correct but we want a real error.\r\n\n", "before_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\n\"\"\"EdgeQL compiler functions to process shared clauses.\"\"\"\n\n\nfrom __future__ import annotations\n\nfrom typing import *\n\nfrom edb.edgeql import ast as qlast\nfrom edb.ir import ast as irast\n\nfrom edb import errors\n\nfrom . import context\nfrom . import dispatch\nfrom . import inference\nfrom . import polyres\nfrom . import schemactx\nfrom . import setgen\n\n\ndef compile_where_clause(\n ir_stmt: irast.FilteredStmt,\n where: Optional[qlast.Base], *,\n ctx: context.ContextLevel) -> None:\n\n if where is None:\n return\n\n with ctx.newscope(fenced=True) as subctx:\n subctx.path_scope.unnest_fence = True\n ir_expr = dispatch.compile(where, ctx=subctx)\n bool_t = ctx.env.get_track_schema_type('std::bool')\n ir_set = setgen.scoped_set(ir_expr, typehint=bool_t, ctx=subctx)\n\n ir_stmt.where = ir_set\n\n\ndef compile_orderby_clause(\n sortexprs: Optional[Iterable[qlast.SortExpr]], *,\n ctx: context.ContextLevel) -> List[irast.SortExpr]:\n\n result: List[irast.SortExpr] = []\n if not sortexprs:\n return result\n\n with ctx.new() as subctx:\n for sortexpr in sortexprs:\n with subctx.newscope(fenced=True) as exprctx:\n exprctx.path_scope.unnest_fence = True\n ir_sortexpr = dispatch.compile(sortexpr.path, ctx=exprctx)\n ir_sortexpr = setgen.scoped_set(\n ir_sortexpr, force_reassign=True, ctx=exprctx)\n ir_sortexpr.context = sortexpr.context\n\n # Check that the sortexpr type is actually orderable\n # with either '>' or '<' based on the DESC or ASC sort\n # order.\n env = exprctx.env\n sort_type = inference.infer_type(ir_sortexpr, env)\n # Postgres by default treats ASC as using '<' and DESC\n # as using '>'. 
We should do the same.\n if sortexpr.direction == qlast.SortDesc:\n op_name = '>'\n else:\n op_name = '<'\n opers = env.schema.get_operators(\n op_name, module_aliases=exprctx.modaliases)\n\n # Verify that a comparison operator is defined for 2\n # sort_type expressions.\n matched = polyres.find_callable(\n opers,\n args=[(sort_type, ir_sortexpr), (sort_type, ir_sortexpr)],\n kwargs={},\n ctx=exprctx)\n if len(matched) != 1:\n sort_type_name = schemactx.get_material_type(\n sort_type, ctx=ctx).get_displayname(env.schema)\n if len(matched) == 0:\n raise errors.QueryError(\n f'type {sort_type_name!r} cannot be used in '\n f'ORDER BY clause because ordering is not '\n f'defined for it',\n context=sortexpr.context)\n\n elif len(matched) > 1:\n raise errors.QueryError(\n f'type {sort_type_name!r} cannot be used in '\n f'ORDER BY clause because ordering is '\n f'ambiguous for it',\n context=sortexpr.context)\n\n result.append(\n irast.SortExpr(\n expr=ir_sortexpr,\n direction=sortexpr.direction,\n nones_order=sortexpr.nones_order))\n\n return result\n\n\ndef compile_limit_offset_clause(\n expr: Optional[qlast.Base], *,\n ctx: context.ContextLevel) -> Optional[irast.Set]:\n if expr is None:\n ir_set = None\n else:\n with ctx.newscope(fenced=True) as subctx:\n ir_expr = dispatch.compile(expr, ctx=subctx)\n int_t = ctx.env.get_track_schema_type('std::int64')\n ir_set = setgen.scoped_set(\n ir_expr, force_reassign=True, typehint=int_t, ctx=subctx)\n ir_set.context = expr.context\n\n return ir_set\n", "path": "edb/edgeql/compiler/clauses.py"}]} | 1,967 | 161 |
gh_patches_debug_34984 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-3177 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Psycopg patching doesn't properly handle execute_values
The `execute_values` extension in psycopg2 composes and executes the query as a byte string, even if you passed the query in as `str`. Below is the full function from psycopg2.extras:
```python
def execute_values(cur, sql, argslist, template=None, page_size=100, fetch=False):
from psycopg2.sql import Composable
if isinstance(sql, Composable):
sql = sql.as_string(cur)
# we can't just use sql % vals because vals is bytes: if sql is bytes
# there will be some decoding error because of stupid codec used, and Py3
# doesn't implement % on bytes.
if not isinstance(sql, bytes):
sql = sql.encode(_ext.encodings[cur.connection.encoding])
pre, post = _split_sql(sql)
result = [] if fetch else None
for page in _paginate(argslist, page_size=page_size):
if template is None:
template = b'(' + b','.join([b'%s'] * len(page[0])) + b')'
parts = pre[:]
for args in page:
parts.append(cur.mogrify(template, args))
parts.append(b',')
parts[-1:] = post
cur.execute(b''.join(parts))
if fetch:
result.extend(cur.fetchall())
return result
```
The problem is that ddtrace assumes that the "resource" added to a span is a string. As a result, when `span.finish()` is called in the Datadog Lambda handler and it tries to serialize the span to JSON, it blows up with "TypeError: Object of type bytes is not JSON serializable". Upon investigation, I discovered that the JSONEncoder in ddtrace.internal.encoding simply calls json.dumps() on all the spans, and the `resource` attribute on a span produced via `execute_values` is bytes, not a string.
I think the solution here is simply to update the Psycopg2TracedCursor class to decode the resource from bytes if it is bytes, like this:
```python
class Psycopg2TracedCursor(dbapi.TracedCursor):
"""TracedCursor for psycopg2"""
def _trace_method(self, method, name, resource, extra_tags, *args, **kwargs):
# treat psycopg2.sql.Composable resource objects as strings
if isinstance(resource, Composable):
resource = resource.as_string(self.__wrapped__)
# THIS IS THE NEW PART BELOW (next 2 lines)
if isinstance(resource, bytes):
resource = resource.decode('utf-8')
return super(Psycopg2TracedCursor, self)._trace_method(method, name, resource, extra_tags, *args, **kwargs)
```
### Which version of dd-trace-py are you using?
Lambda layer, v50.
### Which version of pip are you using?
n/a
### How can we reproduce your problem?
Use `execute_values` while inside a tracing context. It should have a 100% failure rate.
### What is the result that you get?
A type error when span.finish() is called and the metrics are furnished to DD.
### What is the result that you expected?
It should work as normal, with the resource decoded as a string.
</issue>
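The serialization failure itself is easy to reproduce without any tracing machinery; the dict below is a stand-in for the span payload, not the real `Span` object:

```python
import json

span = {"name": "postgres.query", "resource": b"INSERT INTO t VALUES %s"}  # bytes, as left by execute_values

try:
    json.dumps(span)
except TypeError as exc:
    print(exc)  # Object of type bytes is not JSON serializable

# Decoding first (whether in the cursor, as proposed above, or in the encoder) avoids the crash.
span["resource"] = span["resource"].decode("utf-8", errors="replace")
print(json.dumps(span))
```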
<code>
[start of ddtrace/internal/encoding.py]
1 import json
2 from typing import Any
3 from typing import Dict
4 from typing import List
5 from typing import Optional
6 from typing import TYPE_CHECKING
7
8 from ._encoding import ListStringTable
9 from ._encoding import MsgpackEncoderV03
10 from ._encoding import MsgpackEncoderV05
11 from .logger import get_logger
12
13
14 __all__ = ["MsgpackEncoderV03", "MsgpackEncoderV05", "ListStringTable", "MSGPACK_ENCODERS"]
15
16
17 if TYPE_CHECKING:
18 from ..span import Span
19
20
21 log = get_logger(__name__)
22
23
24 class _EncoderBase(object):
25 """
26 Encoder interface that provides the logic to encode traces and service.
27 """
28
29 def encode_traces(self, traces):
30 # type: (List[List[Span]]) -> str
31 """
32 Encodes a list of traces, expecting a list of items where each items
33 is a list of spans. Before dumping the string in a serialized format all
34 traces are normalized according to the encoding format. The trace
35 nesting is not changed.
36
37 :param traces: A list of traces that should be serialized
38 """
39 raise NotImplementedError()
40
41 def encode(self, obj):
42 # type: (List[List[Any]]) -> str
43 """
44 Defines the underlying format used during traces or services encoding.
45 This method must be implemented and should only be used by the internal
46 functions.
47 """
48 raise NotImplementedError()
49
50
51 class JSONEncoder(_EncoderBase):
52 content_type = "application/json"
53
54 def encode_traces(self, traces):
55 normalized_traces = [[span.to_dict() for span in trace] for trace in traces]
56 return self.encode(normalized_traces)
57
58 @staticmethod
59 def encode(obj):
60 # type: (Any) -> str
61 return json.dumps(obj)
62
63
64 class JSONEncoderV2(JSONEncoder):
65 """
66 JSONEncoderV2 encodes traces to the new intake API format.
67 """
68
69 content_type = "application/json"
70
71 def encode_traces(self, traces):
72 # type: (List[List[Span]]) -> str
73 normalized_traces = [[JSONEncoderV2._convert_span(span) for span in trace] for trace in traces]
74 return self.encode({"traces": normalized_traces})
75
76 @staticmethod
77 def _convert_span(span):
78 # type: (Span) -> Dict[str, Any]
79 sp = span.to_dict()
80 sp["trace_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("trace_id"))
81 sp["parent_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("parent_id"))
82 sp["span_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("span_id"))
83 return sp
84
85 @staticmethod
86 def _encode_id_to_hex(dd_id):
87 # type: (Optional[int]) -> str
88 if not dd_id:
89 return "0000000000000000"
90 return "%0.16X" % int(dd_id)
91
92 @staticmethod
93 def _decode_id_to_hex(hex_id):
94 # type: (Optional[str]) -> int
95 if not hex_id:
96 return 0
97 return int(hex_id, 16)
98
99
100 MSGPACK_ENCODERS = {
101 "v0.3": MsgpackEncoderV03,
102 "v0.4": MsgpackEncoderV03,
103 "v0.5": MsgpackEncoderV05,
104 }
105
[end of ddtrace/internal/encoding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/internal/encoding.py b/ddtrace/internal/encoding.py
--- a/ddtrace/internal/encoding.py
+++ b/ddtrace/internal/encoding.py
@@ -8,6 +8,9 @@
from ._encoding import ListStringTable
from ._encoding import MsgpackEncoderV03
from ._encoding import MsgpackEncoderV05
+from .compat import PY3
+from .compat import binary_type
+from .compat import ensure_text
from .logger import get_logger
@@ -48,17 +51,33 @@
raise NotImplementedError()
-class JSONEncoder(_EncoderBase):
+class JSONEncoder(json.JSONEncoder, _EncoderBase):
content_type = "application/json"
def encode_traces(self, traces):
- normalized_traces = [[span.to_dict() for span in trace] for trace in traces]
+ normalized_traces = [[JSONEncoder._normalize_span(span.to_dict()) for span in trace] for trace in traces]
return self.encode(normalized_traces)
@staticmethod
- def encode(obj):
- # type: (Any) -> str
- return json.dumps(obj)
+ def _normalize_span(span):
+ # Ensure all string attributes are actually strings and not bytes
+ # DEV: We are deferring meta/metrics to reduce any performance issues.
+ # Meta/metrics may still contain `bytes` and have encoding issues.
+ span["resource"] = JSONEncoder._normalize_str(span["resource"])
+ span["name"] = JSONEncoder._normalize_str(span["name"])
+ span["service"] = JSONEncoder._normalize_str(span["service"])
+ return span
+
+ @staticmethod
+ def _normalize_str(obj):
+ if obj is None:
+ return obj
+
+ if PY3:
+ return ensure_text(obj, errors="backslashreplace")
+ elif isinstance(obj, binary_type):
+ return obj.decode("utf-8", errors="replace")
+ return obj
class JSONEncoderV2(JSONEncoder):
@@ -77,6 +96,7 @@
def _convert_span(span):
# type: (Span) -> Dict[str, Any]
sp = span.to_dict()
+ sp = JSONEncoderV2._normalize_span(sp)
sp["trace_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("trace_id"))
sp["parent_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("parent_id"))
sp["span_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("span_id"))
| {"golden_diff": "diff --git a/ddtrace/internal/encoding.py b/ddtrace/internal/encoding.py\n--- a/ddtrace/internal/encoding.py\n+++ b/ddtrace/internal/encoding.py\n@@ -8,6 +8,9 @@\n from ._encoding import ListStringTable\n from ._encoding import MsgpackEncoderV03\n from ._encoding import MsgpackEncoderV05\n+from .compat import PY3\n+from .compat import binary_type\n+from .compat import ensure_text\n from .logger import get_logger\n \n \n@@ -48,17 +51,33 @@\n raise NotImplementedError()\n \n \n-class JSONEncoder(_EncoderBase):\n+class JSONEncoder(json.JSONEncoder, _EncoderBase):\n content_type = \"application/json\"\n \n def encode_traces(self, traces):\n- normalized_traces = [[span.to_dict() for span in trace] for trace in traces]\n+ normalized_traces = [[JSONEncoder._normalize_span(span.to_dict()) for span in trace] for trace in traces]\n return self.encode(normalized_traces)\n \n @staticmethod\n- def encode(obj):\n- # type: (Any) -> str\n- return json.dumps(obj)\n+ def _normalize_span(span):\n+ # Ensure all string attributes are actually strings and not bytes\n+ # DEV: We are deferring meta/metrics to reduce any performance issues.\n+ # Meta/metrics may still contain `bytes` and have encoding issues.\n+ span[\"resource\"] = JSONEncoder._normalize_str(span[\"resource\"])\n+ span[\"name\"] = JSONEncoder._normalize_str(span[\"name\"])\n+ span[\"service\"] = JSONEncoder._normalize_str(span[\"service\"])\n+ return span\n+\n+ @staticmethod\n+ def _normalize_str(obj):\n+ if obj is None:\n+ return obj\n+\n+ if PY3:\n+ return ensure_text(obj, errors=\"backslashreplace\")\n+ elif isinstance(obj, binary_type):\n+ return obj.decode(\"utf-8\", errors=\"replace\")\n+ return obj\n \n \n class JSONEncoderV2(JSONEncoder):\n@@ -77,6 +96,7 @@\n def _convert_span(span):\n # type: (Span) -> Dict[str, Any]\n sp = span.to_dict()\n+ sp = JSONEncoderV2._normalize_span(sp)\n sp[\"trace_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"trace_id\"))\n sp[\"parent_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"parent_id\"))\n sp[\"span_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"span_id\"))\n", "issue": "Psycopg patching doesn't properly handle execute_values\nThe `execute_values` extension in psycopg2 composes and executes the query with b-string, even if you passed the query as a string. Below is the full function from psycopg2.extras\r\n\r\n```python\r\ndef execute_values(cur, sql, argslist, template=None, page_size=100, fetch=False):\r\n from psycopg2.sql import Composable\r\n if isinstance(sql, Composable):\r\n sql = sql.as_string(cur)\r\n\r\n # we can't just use sql % vals because vals is bytes: if sql is bytes\r\n # there will be some decoding error because of stupid codec used, and Py3\r\n # doesn't implement % on bytes.\r\n if not isinstance(sql, bytes):\r\n sql = sql.encode(_ext.encodings[cur.connection.encoding])\r\n pre, post = _split_sql(sql)\r\n\r\n result = [] if fetch else None\r\n for page in _paginate(argslist, page_size=page_size):\r\n if template is None:\r\n template = b'(' + b','.join([b'%s'] * len(page[0])) + b')'\r\n parts = pre[:]\r\n for args in page:\r\n parts.append(cur.mogrify(template, args))\r\n parts.append(b',')\r\n parts[-1:] = post\r\n cur.execute(b''.join(parts))\r\n if fetch:\r\n result.extend(cur.fetchall())\r\n\r\n return result\r\n```\r\n\r\nThe problem is that ddtrace assumes that the \"resource\" added to a span is a string. 
The result is that when `span.finish()` is called in the datadog lambda handler and it tries to serialize the span to json, it blows up with \"TypeError: Object of type bytes is not JSON serializable\". Upon investigation, I discovered that the ddtrace.internal.encoder.py's JSONEncoder just does a simple json.dumps() on all the spans and the `resource` attribute on the span from the using `execute_values` is bytes, not a string.\r\n\r\nI think the solution here is simply to update the Psycopg2TracedCursor class to decode the resource from bytes if it is bytes, like this:\r\n\r\n```python\r\nclass Psycopg2TracedCursor(dbapi.TracedCursor):\r\n \"\"\"TracedCursor for psycopg2\"\"\"\r\n\r\n def _trace_method(self, method, name, resource, extra_tags, *args, **kwargs):\r\n # treat psycopg2.sql.Composable resource objects as strings\r\n if isinstance(resource, Composable):\r\n resource = resource.as_string(self.__wrapped__)\r\n # THIS IS THE NEW PART BELOW (next 2 lines)\r\n if isinstance(resource, bytes):\r\n resource = resource.decode('utf-8')\r\n return super(Psycopg2TracedCursor, self)._trace_method(method, name, resource, extra_tags, *args, **kwargs)\r\n```\r\n\r\n### Which version of dd-trace-py are you using?\r\nLambda layer, v50.\r\n### Which version of pip are you using?\r\nn/a\r\n\r\n### How can we reproduce your problem?\r\nUse `execute_values` while inside a tracing context. It should have a 100% failure rate.\r\n\r\n### What is the result that you get?\r\nA type error when span.finish() is called and the metrics are furnished to DD.\r\n\r\n### What is the result that you expected?\r\nIt should work as normal, with the resource decoded as a string.\r\n\n", "before_files": [{"content": "import json\nfrom typing import Any\nfrom typing import Dict\nfrom typing import List\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\n\nfrom ._encoding import ListStringTable\nfrom ._encoding import MsgpackEncoderV03\nfrom ._encoding import MsgpackEncoderV05\nfrom .logger import get_logger\n\n\n__all__ = [\"MsgpackEncoderV03\", \"MsgpackEncoderV05\", \"ListStringTable\", \"MSGPACK_ENCODERS\"]\n\n\nif TYPE_CHECKING:\n from ..span import Span\n\n\nlog = get_logger(__name__)\n\n\nclass _EncoderBase(object):\n \"\"\"\n Encoder interface that provides the logic to encode traces and service.\n \"\"\"\n\n def encode_traces(self, traces):\n # type: (List[List[Span]]) -> str\n \"\"\"\n Encodes a list of traces, expecting a list of items where each items\n is a list of spans. Before dumping the string in a serialized format all\n traces are normalized according to the encoding format. 
The trace\n nesting is not changed.\n\n :param traces: A list of traces that should be serialized\n \"\"\"\n raise NotImplementedError()\n\n def encode(self, obj):\n # type: (List[List[Any]]) -> str\n \"\"\"\n Defines the underlying format used during traces or services encoding.\n This method must be implemented and should only be used by the internal\n functions.\n \"\"\"\n raise NotImplementedError()\n\n\nclass JSONEncoder(_EncoderBase):\n content_type = \"application/json\"\n\n def encode_traces(self, traces):\n normalized_traces = [[span.to_dict() for span in trace] for trace in traces]\n return self.encode(normalized_traces)\n\n @staticmethod\n def encode(obj):\n # type: (Any) -> str\n return json.dumps(obj)\n\n\nclass JSONEncoderV2(JSONEncoder):\n \"\"\"\n JSONEncoderV2 encodes traces to the new intake API format.\n \"\"\"\n\n content_type = \"application/json\"\n\n def encode_traces(self, traces):\n # type: (List[List[Span]]) -> str\n normalized_traces = [[JSONEncoderV2._convert_span(span) for span in trace] for trace in traces]\n return self.encode({\"traces\": normalized_traces})\n\n @staticmethod\n def _convert_span(span):\n # type: (Span) -> Dict[str, Any]\n sp = span.to_dict()\n sp[\"trace_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"trace_id\"))\n sp[\"parent_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"parent_id\"))\n sp[\"span_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"span_id\"))\n return sp\n\n @staticmethod\n def _encode_id_to_hex(dd_id):\n # type: (Optional[int]) -> str\n if not dd_id:\n return \"0000000000000000\"\n return \"%0.16X\" % int(dd_id)\n\n @staticmethod\n def _decode_id_to_hex(hex_id):\n # type: (Optional[str]) -> int\n if not hex_id:\n return 0\n return int(hex_id, 16)\n\n\nMSGPACK_ENCODERS = {\n \"v0.3\": MsgpackEncoderV03,\n \"v0.4\": MsgpackEncoderV03,\n \"v0.5\": MsgpackEncoderV05,\n}\n", "path": "ddtrace/internal/encoding.py"}]} | 2,194 | 561 |
gh_patches_debug_30436 | rasdani/github-patches | git_diff | uccser__cs-unplugged-463 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add sorting networks lessons
- [ ] Lesson 2 (8-10) - needs generated resources
- [ ] Lesson 1 (11-14)
- [ ] Lesson 2 (11-14)
</issue>
<code>
[start of csunplugged/resources/views/sorting_network_cards.py]
1 """Module for generating Sorting Network Cards resource."""
2
3 from random import sample
4 from PIL import Image, ImageDraw, ImageFont
5 from utils.retrieve_query_parameter import retrieve_query_parameter
6
7
8 def resource_image(request, resource):
9 """Create a image for Sorting Network Cards resource.
10
11 Args:
12 request: HTTP request object.
13 resource: Object of resource data.
14
15 Returns:
16 A list of Pillow image objects.
17 """
18 IMAGE_SIZE_X = 2000
19 IMAGE_SIZE_Y = 3000
20 LINE_COLOUR = "#000000"
21 LINE_WIDTH = 3
22 font_path = "static/fonts/PatrickHand-Regular.ttf"
23
24 # Retrieve parameters
25 parameter_options = valid_options()
26 card_type = retrieve_query_parameter(request, "type", parameter_options["type"])
27
28 # Create card outlines
29 card_outlines = Image.new("RGB", (IMAGE_SIZE_X, IMAGE_SIZE_Y), "#fff")
30 draw = ImageDraw.Draw(card_outlines)
31 for x_coord in range(0, IMAGE_SIZE_X, IMAGE_SIZE_X - LINE_WIDTH):
32 draw.line([(x_coord, 0), (x_coord, IMAGE_SIZE_Y)], fill=LINE_COLOUR, width=LINE_WIDTH)
33 for y_coord in range(0, IMAGE_SIZE_Y, int(IMAGE_SIZE_Y / 2 - LINE_WIDTH)):
34 draw.line([(0, y_coord), (IMAGE_SIZE_X, y_coord)], fill=LINE_COLOUR, width=LINE_WIDTH)
35
36 # Prepare text data
37 if card_type == "small_numbers":
38 font_size = 800
39 text = ["1", "2", "3", "4", "5", "6"]
40 elif card_type == "large_numbers":
41 font_size = 500
42 text = []
43 numbers = sample(range(1700000, 2100000), 6)
44 for number in numbers:
45 text.append("{:,}".format(number))
46 elif card_type == "fractions":
47 font_size = 900
48 font_path = "static/fonts/NotoSans-Regular.ttf"
49 text = [u"\u00bd", u"\u2153", u"\u2154", u"\u215c", u"\u00be", u"\u215d"]
50 else:
51 font_size = 300
52 text = [
53 "tahi",
54 "rua",
55 "toru",
56 "whā",
57 "rima",
58 "ono",
59 "whitu",
60 "waru",
61 "iwa",
62 "tekau",
63 "tekau mā tahi",
64 "tekau mā waru",
65 "tekau mā toru",
66 "tekau mā whā",
67 "rua tekau",
68 "rua tekau mā ono",
69 ]
70
71 font = ImageFont.truetype(font_path, font_size)
72 card_centers = [
73 (IMAGE_SIZE_X / 2, IMAGE_SIZE_Y / 4),
74 (IMAGE_SIZE_X / 2, (IMAGE_SIZE_Y / 4) * 3),
75 ]
76
77 # Add text to cards
78 images = []
79 for (text_number, text_string) in enumerate(text):
80 if text_number % 2 == 0:
81 page = card_outlines.copy()
82 draw = ImageDraw.Draw(page)
83 (x, y) = card_centers[0]
84 else:
85 (x, y) = card_centers[1]
86
87 text_width, text_height = draw.textsize(text_string, font=font)
88 coord_x = x - (text_width / 2)
89 coord_y = y - (text_height / 1.5)
90 draw.text(
91 (coord_x, coord_y),
92 text_string,
93 font=font,
94 fill="#000"
95 )
96 # If text on second card but not last page
97 if text_number % 2 == 1 and text_number != len(text) - 1:
98 images.append(page)
99 images.append(page)
100
101 return images
102
103
104 def subtitle(request, resource):
105 """Return the subtitle string of the resource.
106
107 Used after the resource name in the filename, and
108 also on the resource image.
109
110 Args:
111 request: HTTP request object
112 resource: Object of resource data.
113
114 Returns:
115 text for subtitle (string)
116 """
117 return "{} - {}".format(
118 retrieve_query_parameter(request, "type").replace("_", " "),
119 retrieve_query_parameter(request, "paper_size")
120 )
121
122
123 def valid_options():
124 """Provide dictionary of all valid parameters.
125
126 This excludes the header text parameter.
127
128 Returns:
129 All valid options (dict).
130 """
131 return {
132 "type": ["small_numbers", "large_numbers", "fractions", "maori_numbers"],
133 "paper_size": ["a4", "letter"],
134 }
135
[end of csunplugged/resources/views/sorting_network_cards.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/csunplugged/resources/views/sorting_network_cards.py b/csunplugged/resources/views/sorting_network_cards.py
--- a/csunplugged/resources/views/sorting_network_cards.py
+++ b/csunplugged/resources/views/sorting_network_cards.py
@@ -47,25 +47,24 @@
font_size = 900
font_path = "static/fonts/NotoSans-Regular.ttf"
text = [u"\u00bd", u"\u2153", u"\u2154", u"\u215c", u"\u00be", u"\u215d"]
- else:
+ elif card_type == "maori_numbers":
font_size = 300
text = [
- "tahi",
- "rua",
- "toru",
- "whā",
- "rima",
- "ono",
- "whitu",
- "waru",
- "iwa",
- "tekau",
- "tekau mā tahi",
- "tekau mā waru",
- "tekau mā toru",
- "tekau mā whā",
- "rua tekau",
- "rua tekau mā ono",
+ "tahi", "rua", "toru", "whā", "rima", "ono", "whitu", "waru",
+ "iwa", "tekau", "tekau mā tahi", "tekau mā waru", "tekau mā toru",
+ "tekau mā whā", "rua tekau", "rua tekau mā ono"
+ ]
+ elif card_type == "words":
+ font_size = 500
+ text = ["crocodile", "crochet", "kiwi", "weka", "kiwi", "kiwano"]
+ elif card_type == "letters":
+ font_size = 800
+ text = ["L", "O", "N", "K", "E", "D", "S", "P", "G", "B", "I", "Y"]
+ else:
+ font_size = 500
+ text = [
+ "whero", "kākāriki", "kiwikiwi", "karaka",
+ "kōwhai", "pango", "māwhero", "mā"
]
font = ImageFont.truetype(font_path, font_size)
@@ -129,6 +128,9 @@
All valid options (dict).
"""
return {
- "type": ["small_numbers", "large_numbers", "fractions", "maori_numbers"],
+ "type": [
+ "small_numbers", "large_numbers", "fractions", "maori_numbers",
+ "words", "letters", "maori_colours"
+ ],
"paper_size": ["a4", "letter"],
}
| {"golden_diff": "diff --git a/csunplugged/resources/views/sorting_network_cards.py b/csunplugged/resources/views/sorting_network_cards.py\n--- a/csunplugged/resources/views/sorting_network_cards.py\n+++ b/csunplugged/resources/views/sorting_network_cards.py\n@@ -47,25 +47,24 @@\n font_size = 900\n font_path = \"static/fonts/NotoSans-Regular.ttf\"\n text = [u\"\\u00bd\", u\"\\u2153\", u\"\\u2154\", u\"\\u215c\", u\"\\u00be\", u\"\\u215d\"]\n- else:\n+ elif card_type == \"maori_numbers\":\n font_size = 300\n text = [\n- \"tahi\",\n- \"rua\",\n- \"toru\",\n- \"wh\u0101\",\n- \"rima\",\n- \"ono\",\n- \"whitu\",\n- \"waru\",\n- \"iwa\",\n- \"tekau\",\n- \"tekau m\u0101 tahi\",\n- \"tekau m\u0101 waru\",\n- \"tekau m\u0101 toru\",\n- \"tekau m\u0101 wh\u0101\",\n- \"rua tekau\",\n- \"rua tekau m\u0101 ono\",\n+ \"tahi\", \"rua\", \"toru\", \"wh\u0101\", \"rima\", \"ono\", \"whitu\", \"waru\",\n+ \"iwa\", \"tekau\", \"tekau m\u0101 tahi\", \"tekau m\u0101 waru\", \"tekau m\u0101 toru\",\n+ \"tekau m\u0101 wh\u0101\", \"rua tekau\", \"rua tekau m\u0101 ono\"\n+ ]\n+ elif card_type == \"words\":\n+ font_size = 500\n+ text = [\"crocodile\", \"crochet\", \"kiwi\", \"weka\", \"kiwi\", \"kiwano\"]\n+ elif card_type == \"letters\":\n+ font_size = 800\n+ text = [\"L\", \"O\", \"N\", \"K\", \"E\", \"D\", \"S\", \"P\", \"G\", \"B\", \"I\", \"Y\"]\n+ else:\n+ font_size = 500\n+ text = [\n+ \"whero\", \"k\u0101k\u0101riki\", \"kiwikiwi\", \"karaka\",\n+ \"k\u014dwhai\", \"pango\", \"m\u0101whero\", \"m\u0101\"\n ]\n \n font = ImageFont.truetype(font_path, font_size)\n@@ -129,6 +128,9 @@\n All valid options (dict).\n \"\"\"\n return {\n- \"type\": [\"small_numbers\", \"large_numbers\", \"fractions\", \"maori_numbers\"],\n+ \"type\": [\n+ \"small_numbers\", \"large_numbers\", \"fractions\", \"maori_numbers\",\n+ \"words\", \"letters\", \"maori_colours\"\n+ ],\n \"paper_size\": [\"a4\", \"letter\"],\n }\n", "issue": "Add sorting networks lessons\n- [ ] Lesson 2 (8-10) - needs generated resources\r\n- [ ] Lesson 1 (11-14)\r\n- [ ] Lesson 2 (11-14)\n", "before_files": [{"content": "\"\"\"Module for generating Sorting Network Cards resource.\"\"\"\n\nfrom random import sample\nfrom PIL import Image, ImageDraw, ImageFont\nfrom utils.retrieve_query_parameter import retrieve_query_parameter\n\n\ndef resource_image(request, resource):\n \"\"\"Create a image for Sorting Network Cards resource.\n\n Args:\n request: HTTP request object.\n resource: Object of resource data.\n\n Returns:\n A list of Pillow image objects.\n \"\"\"\n IMAGE_SIZE_X = 2000\n IMAGE_SIZE_Y = 3000\n LINE_COLOUR = \"#000000\"\n LINE_WIDTH = 3\n font_path = \"static/fonts/PatrickHand-Regular.ttf\"\n\n # Retrieve parameters\n parameter_options = valid_options()\n card_type = retrieve_query_parameter(request, \"type\", parameter_options[\"type\"])\n\n # Create card outlines\n card_outlines = Image.new(\"RGB\", (IMAGE_SIZE_X, IMAGE_SIZE_Y), \"#fff\")\n draw = ImageDraw.Draw(card_outlines)\n for x_coord in range(0, IMAGE_SIZE_X, IMAGE_SIZE_X - LINE_WIDTH):\n draw.line([(x_coord, 0), (x_coord, IMAGE_SIZE_Y)], fill=LINE_COLOUR, width=LINE_WIDTH)\n for y_coord in range(0, IMAGE_SIZE_Y, int(IMAGE_SIZE_Y / 2 - LINE_WIDTH)):\n draw.line([(0, y_coord), (IMAGE_SIZE_X, y_coord)], fill=LINE_COLOUR, width=LINE_WIDTH)\n\n # Prepare text data\n if card_type == \"small_numbers\":\n font_size = 800\n text = [\"1\", \"2\", \"3\", \"4\", \"5\", \"6\"]\n elif card_type == \"large_numbers\":\n font_size = 500\n text = []\n numbers = sample(range(1700000, 2100000), 6)\n 
for number in numbers:\n text.append(\"{:,}\".format(number))\n elif card_type == \"fractions\":\n font_size = 900\n font_path = \"static/fonts/NotoSans-Regular.ttf\"\n text = [u\"\\u00bd\", u\"\\u2153\", u\"\\u2154\", u\"\\u215c\", u\"\\u00be\", u\"\\u215d\"]\n else:\n font_size = 300\n text = [\n \"tahi\",\n \"rua\",\n \"toru\",\n \"wh\u0101\",\n \"rima\",\n \"ono\",\n \"whitu\",\n \"waru\",\n \"iwa\",\n \"tekau\",\n \"tekau m\u0101 tahi\",\n \"tekau m\u0101 waru\",\n \"tekau m\u0101 toru\",\n \"tekau m\u0101 wh\u0101\",\n \"rua tekau\",\n \"rua tekau m\u0101 ono\",\n ]\n\n font = ImageFont.truetype(font_path, font_size)\n card_centers = [\n (IMAGE_SIZE_X / 2, IMAGE_SIZE_Y / 4),\n (IMAGE_SIZE_X / 2, (IMAGE_SIZE_Y / 4) * 3),\n ]\n\n # Add text to cards\n images = []\n for (text_number, text_string) in enumerate(text):\n if text_number % 2 == 0:\n page = card_outlines.copy()\n draw = ImageDraw.Draw(page)\n (x, y) = card_centers[0]\n else:\n (x, y) = card_centers[1]\n\n text_width, text_height = draw.textsize(text_string, font=font)\n coord_x = x - (text_width / 2)\n coord_y = y - (text_height / 1.5)\n draw.text(\n (coord_x, coord_y),\n text_string,\n font=font,\n fill=\"#000\"\n )\n # If text on second card but not last page\n if text_number % 2 == 1 and text_number != len(text) - 1:\n images.append(page)\n images.append(page)\n\n return images\n\n\ndef subtitle(request, resource):\n \"\"\"Return the subtitle string of the resource.\n\n Used after the resource name in the filename, and\n also on the resource image.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n\n Returns:\n text for subtitle (string)\n \"\"\"\n return \"{} - {}\".format(\n retrieve_query_parameter(request, \"type\").replace(\"_\", \" \"),\n retrieve_query_parameter(request, \"paper_size\")\n )\n\n\ndef valid_options():\n \"\"\"Provide dictionary of all valid parameters.\n\n This excludes the header text parameter.\n\n Returns:\n All valid options (dict).\n \"\"\"\n return {\n \"type\": [\"small_numbers\", \"large_numbers\", \"fractions\", \"maori_numbers\"],\n \"paper_size\": [\"a4\", \"letter\"],\n }\n", "path": "csunplugged/resources/views/sorting_network_cards.py"}]} | 1,958 | 676 |