problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_17784 | rasdani/github-patches | git_diff | lmfit__lmfit-py-150 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot deploy to pypi repo due to tuples in the `setup.py` attributes
Due to a Python bug (http://bugs.python.org/issue19610) I cannot install and deploy lmfit with `python setup install`
I discovered this issue while trying to fix #149
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # from distutils.core import setup
3 from setuptools import setup
4
5 import lmfit as lmfit
6 import numpy, scipy
7
8 long_desc = """A library for least-squares minimization and data fitting in
9 Python. Built on top of scipy.optimize, lmfit provides a Parameter object
10 which can be set as fixed or free, can have upper and/or lower bounds, or
11 can be written in terms of algebraic constraints of other Parameters. The
12 user writes a function to be minimized as a function of these Parameters,
13 and the scipy.optimize methods are used to find the optimal values for the
14 Parameters. The Levenberg-Marquardt (leastsq) is the default minimization
15 algorithm, and provides estimated standard errors and correlations between
16 varied Parameters. Other minimization methods, including Nelder-Mead's
17 downhill simplex, Powell's method, BFGS, Sequential Least Squares, and
18 others are also supported. Bounds and contraints can be placed on
19 Parameters for all of these methods.
20
21 In addition, methods for explicitly calculating confidence intervals are
22 provided for exploring minmization problems where the approximation of
23 estimating Parameter uncertainties from the covariance matrix is
24 questionable. """
25
26
27 setup(name = 'lmfit',
28 version = lmfit.__version__,
29 author = 'LMFit Development Team',
30 author_email = '[email protected]',
31 url = 'http://lmfit.github.io/lmfit-py/',
32 download_url = 'http://lmfit.github.io//lmfit-py/',
33 requires = ('numpy', 'scipy'),
34 license = 'BSD',
35 description = "Least-Squares Minimization with Bounds and Constraints",
36 long_description = long_desc,
37 platforms = ('Windows', 'Linux', 'Mac OS X'),
38 classifiers=['Intended Audience :: Science/Research',
39 'Operating System :: OS Independent',
40 'Programming Language :: Python',
41 'Topic :: Scientific/Engineering',
42 ],
43 # test_suite='nose.collector',
44 # test_requires=['Nose'],
45 package_dir = {'lmfit': 'lmfit'},
46 packages = ['lmfit', 'lmfit.ui', 'lmfit.uncertainties'],
47 )
48
49
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,11 +30,11 @@
author_email = '[email protected]',
url = 'http://lmfit.github.io/lmfit-py/',
download_url = 'http://lmfit.github.io//lmfit-py/',
- requires = ('numpy', 'scipy'),
+ requires = ['numpy', 'scipy'],
license = 'BSD',
description = "Least-Squares Minimization with Bounds and Constraints",
long_description = long_desc,
- platforms = ('Windows', 'Linux', 'Mac OS X'),
+ platforms = ['Windows', 'Linux', 'Mac OS X'],
classifiers=['Intended Audience :: Science/Research',
'Operating System :: OS Independent',
'Programming Language :: Python',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,11 +30,11 @@\n author_email = '[email protected]',\n url = 'http://lmfit.github.io/lmfit-py/',\n download_url = 'http://lmfit.github.io//lmfit-py/',\n- requires = ('numpy', 'scipy'),\n+ requires = ['numpy', 'scipy'],\n license = 'BSD',\n description = \"Least-Squares Minimization with Bounds and Constraints\",\n long_description = long_desc,\n- platforms = ('Windows', 'Linux', 'Mac OS X'),\n+ platforms = ['Windows', 'Linux', 'Mac OS X'],\n classifiers=['Intended Audience :: Science/Research',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n", "issue": "Cannot deploy to pypi repo dues to tuples in the `setup.py` attributes\nDue to a python-bug (http://bugs.python.org/issue19610) i cannot install and deploy lmfit with `python setup install`\n\nI discovered this issue while trying to fix #149 \n\n", "before_files": [{"content": "#!/usr/bin/env python\n# from distutils.core import setup\nfrom setuptools import setup\n\nimport lmfit as lmfit\nimport numpy, scipy\n\nlong_desc = \"\"\"A library for least-squares minimization and data fitting in\nPython. Built on top of scipy.optimize, lmfit provides a Parameter object\nwhich can be set as fixed or free, can have upper and/or lower bounds, or\ncan be written in terms of algebraic constraints of other Parameters. The\nuser writes a function to be minimized as a function of these Parameters,\nand the scipy.optimize methods are used to find the optimal values for the\nParameters. The Levenberg-Marquardt (leastsq) is the default minimization\nalgorithm, and provides estimated standard errors and correlations between\nvaried Parameters. Other minimization methods, including Nelder-Mead's\ndownhill simplex, Powell's method, BFGS, Sequential Least Squares, and\nothers are also supported. Bounds and contraints can be placed on\nParameters for all of these methods.\n\nIn addition, methods for explicitly calculating confidence intervals are\nprovided for exploring minmization problems where the approximation of\nestimating Parameter uncertainties from the covariance matrix is\nquestionable. \"\"\"\n\n\nsetup(name = 'lmfit',\n version = lmfit.__version__,\n author = 'LMFit Development Team',\n author_email = '[email protected]',\n url = 'http://lmfit.github.io/lmfit-py/',\n download_url = 'http://lmfit.github.io//lmfit-py/',\n requires = ('numpy', 'scipy'),\n license = 'BSD',\n description = \"Least-Squares Minimization with Bounds and Constraints\",\n long_description = long_desc,\n platforms = ('Windows', 'Linux', 'Mac OS X'),\n classifiers=['Intended Audience :: Science/Research',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Topic :: Scientific/Engineering',\n ],\n # test_suite='nose.collector',\n # test_requires=['Nose'],\n package_dir = {'lmfit': 'lmfit'},\n packages = ['lmfit', 'lmfit.ui', 'lmfit.uncertainties'],\n )\n\n", "path": "setup.py"}]}
| 1,151 | 185 |
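A note on the row above: the golden diff simply swaps the tuple-valued `requires` and `platforms` arguments for lists, sidestepping http://bugs.python.org/issue19610. The snippet below is a minimal, hypothetical `setup.py` showing the list form; the package name, version, and package list are placeholders, not lmfit's actual metadata.

```python
# Minimal sketch (placeholder metadata, not lmfit's real setup.py):
# list-valued arguments avoid the tuple-handling bug in distutils.
from setuptools import setup

setup(
    name='example-pkg',
    version='0.1.0',
    requires=['numpy', 'scipy'],            # lists instead of tuples
    platforms=['Windows', 'Linux', 'Mac OS X'],
    packages=['example_pkg'],
)
```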
gh_patches_debug_22010 | rasdani/github-patches | git_diff | ckan__ckan-561 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Postgresql 8.4 error when running paster db init
When running the paster db init command with the CKAN 2.0 beta, an error is encountered that appears to be related to use of the left() string function in ckan/migration/versions/067_turn_extras_to_strings.py. According to the documentation and my own simple test, this function is not supported in Postgresql 8.4. For a stack trace, see: https://gist.github.com/thriuin/5067819.
Is there a new minimum version of Postgresql required -- the documentation still says 8.4, which unfortunately is what comes with RedHat Enterprise.
</issue>
<code>
[start of ckan/migration/versions/067_turn_extras_to_strings.py]
1 import json
2
3 def upgrade(migrate_engine):
4
5 with migrate_engine.begin() as connection:
6 tables = 'package_extra group_extra'
7 revision_tables = 'package_extra_revision group_extra_revision'
8
9 for table in tables.split():
10 sql = """select id, value from {table} where left(value,1) = '"' """.format(table=table)
11 results = connection.execute(sql)
12 for result in results:
13 id, value = result
14 update_sql = 'update {table} set value = %s where id = %s'
15 connection.execute(update_sql.format(table=table),
16 json.loads(value), id)
17
18 for table in revision_tables.split():
19 sql = """select id, revision_id, value from {table} where left(value,1) = '"' """.format(table=table)
20
21 results = connection.execute(sql)
22 for result in results:
23 id, revision_id, value = result
24 update_sql = 'update {table} set value = %s where id = %s and revision_id = %s'
25 connection.execute(update_sql.format(table=table),
26 json.loads(value), id, revision_id)
27
28
29
[end of ckan/migration/versions/067_turn_extras_to_strings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ckan/migration/versions/067_turn_extras_to_strings.py b/ckan/migration/versions/067_turn_extras_to_strings.py
--- a/ckan/migration/versions/067_turn_extras_to_strings.py
+++ b/ckan/migration/versions/067_turn_extras_to_strings.py
@@ -7,7 +7,7 @@
revision_tables = 'package_extra_revision group_extra_revision'
for table in tables.split():
- sql = """select id, value from {table} where left(value,1) = '"' """.format(table=table)
+ sql = """select id, value from {table} where substr(value,0,1) = '"' """.format(table=table)
results = connection.execute(sql)
for result in results:
id, value = result
@@ -16,7 +16,7 @@
json.loads(value), id)
for table in revision_tables.split():
- sql = """select id, revision_id, value from {table} where left(value,1) = '"' """.format(table=table)
+ sql = """select id, revision_id, value from {table} where substr(value,0,1) = '"' """.format(table=table)
results = connection.execute(sql)
for result in results:
|
{"golden_diff": "diff --git a/ckan/migration/versions/067_turn_extras_to_strings.py b/ckan/migration/versions/067_turn_extras_to_strings.py\n--- a/ckan/migration/versions/067_turn_extras_to_strings.py\n+++ b/ckan/migration/versions/067_turn_extras_to_strings.py\n@@ -7,7 +7,7 @@\n revision_tables = 'package_extra_revision group_extra_revision'\n \n for table in tables.split():\n- sql = \"\"\"select id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n+ sql = \"\"\"select id, value from {table} where substr(value,0,1) = '\"' \"\"\".format(table=table)\n results = connection.execute(sql)\n for result in results:\n id, value = result\n@@ -16,7 +16,7 @@\n json.loads(value), id)\n \n for table in revision_tables.split():\n- sql = \"\"\"select id, revision_id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n+ sql = \"\"\"select id, revision_id, value from {table} where substr(value,0,1) = '\"' \"\"\".format(table=table)\n \n results = connection.execute(sql)\n for result in results:\n", "issue": "Postgresql 8.4 error when running paster db init\nWhen running the paster db init command with the CKAN 2.0 beta, there is an error encountered that appears to be related to use of the left() string function in ckan/migration/versions/067_turn_extras_to_strings.py. According to the documentation and my own simple test, this function is not support in Postgresql 8.4. For a stack trace, see: https://gist.github.com/thriuin/5067819.\n\nIs there a new minimum version of Postgresql required -- documentation still says 8.4 which unfortunately is what comes with RedHat Enterprise.\n\n", "before_files": [{"content": "import json\n\ndef upgrade(migrate_engine):\n\n with migrate_engine.begin() as connection:\n tables = 'package_extra group_extra'\n revision_tables = 'package_extra_revision group_extra_revision'\n\n for table in tables.split():\n sql = \"\"\"select id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n results = connection.execute(sql)\n for result in results:\n id, value = result\n update_sql = 'update {table} set value = %s where id = %s'\n connection.execute(update_sql.format(table=table),\n json.loads(value), id)\n\n for table in revision_tables.split():\n sql = \"\"\"select id, revision_id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n\n results = connection.execute(sql)\n for result in results:\n id, revision_id, value = result\n update_sql = 'update {table} set value = %s where id = %s and revision_id = %s'\n connection.execute(update_sql.format(table=table),\n json.loads(value), id, revision_id)\n\n\n", "path": "ckan/migration/versions/067_turn_extras_to_strings.py"}]}
| 991 | 290 |
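For context on the row above: `left()` was not added until PostgreSQL 9.1, so the patch rewrites the migration queries with `substr()`, which 8.4 already provides. A hedged sketch of how the rewritten query string is built (`package_extra` is one of the tables the migration loops over):

```python
# Sketch of the portable query from the patch; 'package_extra' is one of the
# table names the migration iterates over.
table = 'package_extra'
sql = """select id, value from {table} where substr(value,0,1) = '"' """.format(table=table)
```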
gh_patches_debug_41504 | rasdani/github-patches | git_diff | mars-project__mars-1080 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Tile of rechunk() outputs inconsistent dtypes and columns_value
**Describe the bug**
DataFrame.rechunk() produces chunks whose dtypes and columns_value differ in length, which may impair descendant chunks.
Code to reproduce this:
```python
In [2]: import pandas as pd
In [4]: from mars.dataframe.datasource.dataframe import from_pandas as from_pandas_df
In [6]: cols = [chr(ord('A') + i) for i in range(10)]
In [8]: df_raw = pd.DataFrame(dict((c, [i ** 2 for i in range(20)]) for c in cols))
In [9]: df = from_pandas_df(df_raw, chunk_size=5)
In [13]: rechunked = df.rechunk((20, 1)).tiles()
In [14]: rechunked.chunks[0].columns_value.to_pandas()
Out[14]: Index(['A'], dtype='object')
In [15]: rechunked.chunks[0].dtypes
Out[15]:
A int64
B int64
C int64
D int64
E int64
dtype: object
```
</issue>
<code>
[start of mars/dataframe/base/rechunk.py]
1 # Copyright 1999-2020 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import itertools
16
17 from ... import opcodes as OperandDef
18 from ...serialize import KeyField, AnyField, Int32Field, Int64Field
19 from ...tensor.rechunk.core import get_nsplits, plan_rechunks, compute_rechunk_slices
20 from ...tensor.utils import calc_sliced_size
21 from ...utils import check_chunks_unknown_shape
22 from ...tiles import TilesError
23 from ..operands import DataFrameOperand, DataFrameOperandMixin, DATAFRAME_TYPE, ObjectType
24 from ..utils import indexing_index_value, merge_index_value
25
26
27 class DataFrameRechunk(DataFrameOperand, DataFrameOperandMixin):
28 _op_type_ = OperandDef.RECHUNK
29
30 _input = KeyField('input')
31 _chunk_size = AnyField('chunk_size')
32 _threshold = Int32Field('threshold')
33 _chunk_size_limit = Int64Field('chunk_size_limit')
34
35 def __init__(self, chunk_size=None, threshold=None, chunk_size_limit=None, object_type=None, **kw):
36 super().__init__(_chunk_size=chunk_size, _threshold=threshold,
37 _chunk_size_limit=chunk_size_limit, _object_type=object_type, **kw)
38
39 @property
40 def chunk_size(self):
41 return self._chunk_size
42
43 @property
44 def threshold(self):
45 return self._threshold
46
47 @property
48 def chunk_size_limit(self):
49 return self._chunk_size_limit
50
51 def _set_inputs(self, inputs):
52 super()._set_inputs(inputs)
53 self._input = self._inputs[0]
54
55 def __call__(self, x):
56 if isinstance(x, DATAFRAME_TYPE):
57 self._object_type = ObjectType.dataframe
58 return self.new_dataframe([x], shape=x.shape, dtypes=x.dtypes,
59 columns_value=x.columns_value, index_value=x.index_value)
60 else:
61 self._object_type = ObjectType.series
62 return self.new_series([x], shape=x.shape, dtype=x.dtype, index_value=x.index_value, name=x.name)
63
64 @classmethod
65 def tile(cls, op):
66 check_chunks_unknown_shape(op.inputs, TilesError)
67 out = op.outputs[0]
68 new_chunk_size = op.chunk_size
69 if isinstance(out, DATAFRAME_TYPE):
70 itemsize = max(dt.itemsize for dt in out.dtypes)
71 else:
72 itemsize = out.dtype.itemsize
73 steps = plan_rechunks(op.inputs[0], new_chunk_size, itemsize,
74 threshold=op.threshold,
75 chunk_size_limit=op.chunk_size_limit)
76 for c in steps:
77 out = compute_rechunk(out.inputs[0], c)
78
79 return [out]
80
81
82 def rechunk(a, chunk_size, threshold=None, chunk_size_limit=None):
83 if isinstance(a, DATAFRAME_TYPE):
84 itemsize = max(dt.itemsize for dt in a.dtypes)
85 else:
86 itemsize = a.dtype.itemsize
87 chunk_size = get_nsplits(a, chunk_size, itemsize)
88 if chunk_size == a.nsplits:
89 return a
90
91 op = DataFrameRechunk(chunk_size, threshold, chunk_size_limit)
92 return op(a)
93
94
95 def _concat_dataframe_index_and_columns(to_concat_chunks):
96 if to_concat_chunks[0].index_value.to_pandas().empty:
97 index_value = to_concat_chunks[0].index_value
98 else:
99 idx_to_index_value = dict((c.index[0], c.index_value) for c in to_concat_chunks if c.index[1] == 0)
100 index_value = merge_index_value(idx_to_index_value)
101
102 idx_to_columns_value = dict((c.index[1], c.columns_value) for c in to_concat_chunks if c.index[0] == 0)
103 columns_value = merge_index_value(idx_to_columns_value, store_data=True)
104 return index_value, columns_value
105
106
107 def _concat_series_index(to_concat_chunks):
108 if to_concat_chunks[0].index_value.to_pandas().empty:
109 index_value = to_concat_chunks[0].index_value
110 else:
111 idx_to_index_value = dict((c.index[0], c.index_value) for c in to_concat_chunks)
112 index_value = merge_index_value(idx_to_index_value)
113 return index_value
114
115
116 def compute_rechunk(a, chunk_size):
117 from ..indexing.iloc import DataFrameIlocGetItem, SeriesIlocGetItem
118 from ..merge.concat import DataFrameConcat
119
120 result_slices = compute_rechunk_slices(a, chunk_size)
121 result_chunks = []
122 idxes = itertools.product(*[range(len(c)) for c in chunk_size])
123 chunk_slices = itertools.product(*result_slices)
124 chunk_shapes = itertools.product(*chunk_size)
125 is_dataframe = isinstance(a, DATAFRAME_TYPE)
126 for idx, chunk_slice, chunk_shape in zip(idxes, chunk_slices, chunk_shapes):
127 to_merge = []
128 merge_idxes = itertools.product(*[range(len(i)) for i in chunk_slice])
129 for merge_idx, index_slices in zip(merge_idxes, itertools.product(*chunk_slice)):
130 chunk_index, chunk_slice = zip(*index_slices)
131 old_chunk = a.cix[chunk_index]
132 merge_chunk_shape = tuple(calc_sliced_size(s, chunk_slice[0]) for s in old_chunk.shape)
133 new_index_value = indexing_index_value(old_chunk.index_value, chunk_slice[0])
134 if is_dataframe:
135 new_columns_value = indexing_index_value(old_chunk.columns_value, chunk_slice[1], store_data=True)
136 merge_chunk_op = DataFrameIlocGetItem(chunk_slice, sparse=old_chunk.op.sparse,
137 object_type=ObjectType.dataframe)
138 merge_chunk = merge_chunk_op.new_chunk([old_chunk], shape=merge_chunk_shape,
139 index=merge_idx, index_value=new_index_value,
140 columns_value=new_columns_value, dtypes=old_chunk.dtypes)
141 else:
142 merge_chunk_op = SeriesIlocGetItem(chunk_slice, sparse=old_chunk.op.sparse,
143 object_type=ObjectType.series)
144 merge_chunk = merge_chunk_op.new_chunk([old_chunk], shape=merge_chunk_shape,
145 index=merge_idx, index_value=new_index_value,
146 dtype=old_chunk.dtype)
147 to_merge.append(merge_chunk)
148 if len(to_merge) == 1:
149 chunk_op = to_merge[0].op.copy()
150 if is_dataframe:
151 out_chunk = chunk_op.new_chunk(to_merge[0].op.inputs, shape=chunk_shape,
152 index=idx, index_value=to_merge[0].index_value,
153 columns_value=to_merge[0].columns_value,
154 dtypes=to_merge[0].dtypes)
155 else:
156 out_chunk = chunk_op.new_chunk(to_merge[0].op.inputs, shape=chunk_shape,
157 index=idx, index_value=to_merge[0].index_value,
158 name=to_merge[0].name, dtype=to_merge[0].dtype)
159 result_chunks.append(out_chunk)
160 else:
161 if is_dataframe:
162 chunk_op = DataFrameConcat(object_type=ObjectType.dataframe)
163 index_value, columns_value = _concat_dataframe_index_and_columns(to_merge)
164 out_chunk = chunk_op.new_chunk(to_merge, shape=chunk_shape,
165 index=idx, index_value=index_value,
166 columns_value=columns_value,
167 dtypes=to_merge[0].dtypes)
168 else:
169 chunk_op = DataFrameConcat(object_type=ObjectType.series)
170 index_value = _concat_series_index(to_merge)
171 out_chunk = chunk_op.new_chunk(to_merge, shape=chunk_shape,
172 index=idx, index_value=index_value,
173 dtype=to_merge[0].dtype)
174 result_chunks.append(out_chunk)
175
176 if is_dataframe:
177 op = DataFrameRechunk(chunk_size, object_type=ObjectType.dataframe)
178 return op.new_dataframe([a], a.shape, dtypes=a.dtypes, columns_value=a.columns_value,
179 index_value=a.index_value, nsplits=chunk_size, chunks=result_chunks)
180 else:
181 op = DataFrameRechunk(chunk_size, object_type=ObjectType.series)
182 return op.new_series([a], a.shape, dtype=a.dtype, index_value=a.index_value,
183 nsplits=chunk_size, chunks=result_chunks)
184
[end of mars/dataframe/base/rechunk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mars/dataframe/base/rechunk.py b/mars/dataframe/base/rechunk.py
--- a/mars/dataframe/base/rechunk.py
+++ b/mars/dataframe/base/rechunk.py
@@ -14,6 +14,8 @@
import itertools
+import pandas as pd
+
from ... import opcodes as OperandDef
from ...serialize import KeyField, AnyField, Int32Field, Int64Field
from ...tensor.rechunk.core import get_nsplits, plan_rechunks, compute_rechunk_slices
@@ -92,7 +94,7 @@
return op(a)
-def _concat_dataframe_index_and_columns(to_concat_chunks):
+def _concat_dataframe_meta(to_concat_chunks):
if to_concat_chunks[0].index_value.to_pandas().empty:
index_value = to_concat_chunks[0].index_value
else:
@@ -101,7 +103,10 @@
idx_to_columns_value = dict((c.index[1], c.columns_value) for c in to_concat_chunks if c.index[0] == 0)
columns_value = merge_index_value(idx_to_columns_value, store_data=True)
- return index_value, columns_value
+
+ idx_to_dtypes = dict((c.index[1], c.dtypes) for c in to_concat_chunks if c.index[0] == 0)
+ dtypes = pd.concat([v[1] for v in list(sorted(idx_to_dtypes.items()))])
+ return index_value, columns_value, dtypes
def _concat_series_index(to_concat_chunks):
@@ -137,7 +142,8 @@
object_type=ObjectType.dataframe)
merge_chunk = merge_chunk_op.new_chunk([old_chunk], shape=merge_chunk_shape,
index=merge_idx, index_value=new_index_value,
- columns_value=new_columns_value, dtypes=old_chunk.dtypes)
+ columns_value=new_columns_value,
+ dtypes=old_chunk.dtypes.iloc[chunk_slice[1]])
else:
merge_chunk_op = SeriesIlocGetItem(chunk_slice, sparse=old_chunk.op.sparse,
object_type=ObjectType.series)
@@ -160,11 +166,11 @@
else:
if is_dataframe:
chunk_op = DataFrameConcat(object_type=ObjectType.dataframe)
- index_value, columns_value = _concat_dataframe_index_and_columns(to_merge)
+ index_value, columns_value, dtypes = _concat_dataframe_meta(to_merge)
out_chunk = chunk_op.new_chunk(to_merge, shape=chunk_shape,
index=idx, index_value=index_value,
columns_value=columns_value,
- dtypes=to_merge[0].dtypes)
+ dtypes=dtypes)
else:
chunk_op = DataFrameConcat(object_type=ObjectType.series)
index_value = _concat_series_index(to_merge)
|
{"golden_diff": "diff --git a/mars/dataframe/base/rechunk.py b/mars/dataframe/base/rechunk.py\n--- a/mars/dataframe/base/rechunk.py\n+++ b/mars/dataframe/base/rechunk.py\n@@ -14,6 +14,8 @@\n \n import itertools\n \n+import pandas as pd\n+\n from ... import opcodes as OperandDef\n from ...serialize import KeyField, AnyField, Int32Field, Int64Field\n from ...tensor.rechunk.core import get_nsplits, plan_rechunks, compute_rechunk_slices\n@@ -92,7 +94,7 @@\n return op(a)\n \n \n-def _concat_dataframe_index_and_columns(to_concat_chunks):\n+def _concat_dataframe_meta(to_concat_chunks):\n if to_concat_chunks[0].index_value.to_pandas().empty:\n index_value = to_concat_chunks[0].index_value\n else:\n@@ -101,7 +103,10 @@\n \n idx_to_columns_value = dict((c.index[1], c.columns_value) for c in to_concat_chunks if c.index[0] == 0)\n columns_value = merge_index_value(idx_to_columns_value, store_data=True)\n- return index_value, columns_value\n+\n+ idx_to_dtypes = dict((c.index[1], c.dtypes) for c in to_concat_chunks if c.index[0] == 0)\n+ dtypes = pd.concat([v[1] for v in list(sorted(idx_to_dtypes.items()))])\n+ return index_value, columns_value, dtypes\n \n \n def _concat_series_index(to_concat_chunks):\n@@ -137,7 +142,8 @@\n object_type=ObjectType.dataframe)\n merge_chunk = merge_chunk_op.new_chunk([old_chunk], shape=merge_chunk_shape,\n index=merge_idx, index_value=new_index_value,\n- columns_value=new_columns_value, dtypes=old_chunk.dtypes)\n+ columns_value=new_columns_value,\n+ dtypes=old_chunk.dtypes.iloc[chunk_slice[1]])\n else:\n merge_chunk_op = SeriesIlocGetItem(chunk_slice, sparse=old_chunk.op.sparse,\n object_type=ObjectType.series)\n@@ -160,11 +166,11 @@\n else:\n if is_dataframe:\n chunk_op = DataFrameConcat(object_type=ObjectType.dataframe)\n- index_value, columns_value = _concat_dataframe_index_and_columns(to_merge)\n+ index_value, columns_value, dtypes = _concat_dataframe_meta(to_merge)\n out_chunk = chunk_op.new_chunk(to_merge, shape=chunk_shape,\n index=idx, index_value=index_value,\n columns_value=columns_value,\n- dtypes=to_merge[0].dtypes)\n+ dtypes=dtypes)\n else:\n chunk_op = DataFrameConcat(object_type=ObjectType.series)\n index_value = _concat_series_index(to_merge)\n", "issue": "[BUG] Tile of rechunk() outputs inconsistent dtypes and columns_value\n**Describe the bug**\r\nDataFrame.rechunk() produces chunks with different number of dtypes and columns_value, which may impair descendant chunks.\r\n\r\nCode to reproduce this:\r\n\r\n```python \r\nIn [2]: import pandas as pd \r\n\r\nIn [4]: from mars.dataframe.datasource.dataframe import from_pandas as from_pandas_df \r\n\r\nIn [6]: cols = [chr(ord('A') + i) for i in range(10)] \r\n\r\nIn [8]: df_raw = pd.DataFrame(dict((c, [i ** 2 for i in range(20)]) for c in cols)) \r\n\r\nIn [9]: df = from_pandas_df(df_raw, chunk_size=5) \r\n\r\nIn [13]: rechunked = df.rechunk((20, 1)).tiles() \r\n\r\nIn [14]: rechunked.chunks[0].columns_value.to_pandas() \r\nOut[14]: Index(['A'], dtype='object')\r\n\r\nIn [15]: rechunked.chunks[0].dtypes \r\nOut[15]: \r\nA int64\r\nB int64\r\nC int64\r\nD int64\r\nE int64\r\ndtype: object\r\n```\n", "before_files": [{"content": "# Copyright 1999-2020 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License 
is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport itertools\n\nfrom ... import opcodes as OperandDef\nfrom ...serialize import KeyField, AnyField, Int32Field, Int64Field\nfrom ...tensor.rechunk.core import get_nsplits, plan_rechunks, compute_rechunk_slices\nfrom ...tensor.utils import calc_sliced_size\nfrom ...utils import check_chunks_unknown_shape\nfrom ...tiles import TilesError\nfrom ..operands import DataFrameOperand, DataFrameOperandMixin, DATAFRAME_TYPE, ObjectType\nfrom ..utils import indexing_index_value, merge_index_value\n\n\nclass DataFrameRechunk(DataFrameOperand, DataFrameOperandMixin):\n _op_type_ = OperandDef.RECHUNK\n\n _input = KeyField('input')\n _chunk_size = AnyField('chunk_size')\n _threshold = Int32Field('threshold')\n _chunk_size_limit = Int64Field('chunk_size_limit')\n\n def __init__(self, chunk_size=None, threshold=None, chunk_size_limit=None, object_type=None, **kw):\n super().__init__(_chunk_size=chunk_size, _threshold=threshold,\n _chunk_size_limit=chunk_size_limit, _object_type=object_type, **kw)\n\n @property\n def chunk_size(self):\n return self._chunk_size\n\n @property\n def threshold(self):\n return self._threshold\n\n @property\n def chunk_size_limit(self):\n return self._chunk_size_limit\n\n def _set_inputs(self, inputs):\n super()._set_inputs(inputs)\n self._input = self._inputs[0]\n\n def __call__(self, x):\n if isinstance(x, DATAFRAME_TYPE):\n self._object_type = ObjectType.dataframe\n return self.new_dataframe([x], shape=x.shape, dtypes=x.dtypes,\n columns_value=x.columns_value, index_value=x.index_value)\n else:\n self._object_type = ObjectType.series\n return self.new_series([x], shape=x.shape, dtype=x.dtype, index_value=x.index_value, name=x.name)\n\n @classmethod\n def tile(cls, op):\n check_chunks_unknown_shape(op.inputs, TilesError)\n out = op.outputs[0]\n new_chunk_size = op.chunk_size\n if isinstance(out, DATAFRAME_TYPE):\n itemsize = max(dt.itemsize for dt in out.dtypes)\n else:\n itemsize = out.dtype.itemsize\n steps = plan_rechunks(op.inputs[0], new_chunk_size, itemsize,\n threshold=op.threshold,\n chunk_size_limit=op.chunk_size_limit)\n for c in steps:\n out = compute_rechunk(out.inputs[0], c)\n\n return [out]\n\n\ndef rechunk(a, chunk_size, threshold=None, chunk_size_limit=None):\n if isinstance(a, DATAFRAME_TYPE):\n itemsize = max(dt.itemsize for dt in a.dtypes)\n else:\n itemsize = a.dtype.itemsize\n chunk_size = get_nsplits(a, chunk_size, itemsize)\n if chunk_size == a.nsplits:\n return a\n\n op = DataFrameRechunk(chunk_size, threshold, chunk_size_limit)\n return op(a)\n\n\ndef _concat_dataframe_index_and_columns(to_concat_chunks):\n if to_concat_chunks[0].index_value.to_pandas().empty:\n index_value = to_concat_chunks[0].index_value\n else:\n idx_to_index_value = dict((c.index[0], c.index_value) for c in to_concat_chunks if c.index[1] == 0)\n index_value = merge_index_value(idx_to_index_value)\n\n idx_to_columns_value = dict((c.index[1], c.columns_value) for c in to_concat_chunks if c.index[0] == 0)\n columns_value = merge_index_value(idx_to_columns_value, store_data=True)\n return index_value, columns_value\n\n\ndef _concat_series_index(to_concat_chunks):\n if to_concat_chunks[0].index_value.to_pandas().empty:\n index_value = to_concat_chunks[0].index_value\n else:\n idx_to_index_value = dict((c.index[0], c.index_value) for c in to_concat_chunks)\n 
index_value = merge_index_value(idx_to_index_value)\n return index_value\n\n\ndef compute_rechunk(a, chunk_size):\n from ..indexing.iloc import DataFrameIlocGetItem, SeriesIlocGetItem\n from ..merge.concat import DataFrameConcat\n\n result_slices = compute_rechunk_slices(a, chunk_size)\n result_chunks = []\n idxes = itertools.product(*[range(len(c)) for c in chunk_size])\n chunk_slices = itertools.product(*result_slices)\n chunk_shapes = itertools.product(*chunk_size)\n is_dataframe = isinstance(a, DATAFRAME_TYPE)\n for idx, chunk_slice, chunk_shape in zip(idxes, chunk_slices, chunk_shapes):\n to_merge = []\n merge_idxes = itertools.product(*[range(len(i)) for i in chunk_slice])\n for merge_idx, index_slices in zip(merge_idxes, itertools.product(*chunk_slice)):\n chunk_index, chunk_slice = zip(*index_slices)\n old_chunk = a.cix[chunk_index]\n merge_chunk_shape = tuple(calc_sliced_size(s, chunk_slice[0]) for s in old_chunk.shape)\n new_index_value = indexing_index_value(old_chunk.index_value, chunk_slice[0])\n if is_dataframe:\n new_columns_value = indexing_index_value(old_chunk.columns_value, chunk_slice[1], store_data=True)\n merge_chunk_op = DataFrameIlocGetItem(chunk_slice, sparse=old_chunk.op.sparse,\n object_type=ObjectType.dataframe)\n merge_chunk = merge_chunk_op.new_chunk([old_chunk], shape=merge_chunk_shape,\n index=merge_idx, index_value=new_index_value,\n columns_value=new_columns_value, dtypes=old_chunk.dtypes)\n else:\n merge_chunk_op = SeriesIlocGetItem(chunk_slice, sparse=old_chunk.op.sparse,\n object_type=ObjectType.series)\n merge_chunk = merge_chunk_op.new_chunk([old_chunk], shape=merge_chunk_shape,\n index=merge_idx, index_value=new_index_value,\n dtype=old_chunk.dtype)\n to_merge.append(merge_chunk)\n if len(to_merge) == 1:\n chunk_op = to_merge[0].op.copy()\n if is_dataframe:\n out_chunk = chunk_op.new_chunk(to_merge[0].op.inputs, shape=chunk_shape,\n index=idx, index_value=to_merge[0].index_value,\n columns_value=to_merge[0].columns_value,\n dtypes=to_merge[0].dtypes)\n else:\n out_chunk = chunk_op.new_chunk(to_merge[0].op.inputs, shape=chunk_shape,\n index=idx, index_value=to_merge[0].index_value,\n name=to_merge[0].name, dtype=to_merge[0].dtype)\n result_chunks.append(out_chunk)\n else:\n if is_dataframe:\n chunk_op = DataFrameConcat(object_type=ObjectType.dataframe)\n index_value, columns_value = _concat_dataframe_index_and_columns(to_merge)\n out_chunk = chunk_op.new_chunk(to_merge, shape=chunk_shape,\n index=idx, index_value=index_value,\n columns_value=columns_value,\n dtypes=to_merge[0].dtypes)\n else:\n chunk_op = DataFrameConcat(object_type=ObjectType.series)\n index_value = _concat_series_index(to_merge)\n out_chunk = chunk_op.new_chunk(to_merge, shape=chunk_shape,\n index=idx, index_value=index_value,\n dtype=to_merge[0].dtype)\n result_chunks.append(out_chunk)\n\n if is_dataframe:\n op = DataFrameRechunk(chunk_size, object_type=ObjectType.dataframe)\n return op.new_dataframe([a], a.shape, dtypes=a.dtypes, columns_value=a.columns_value,\n index_value=a.index_value, nsplits=chunk_size, chunks=result_chunks)\n else:\n op = DataFrameRechunk(chunk_size, object_type=ObjectType.series)\n return op.new_series([a], a.shape, dtype=a.dtype, index_value=a.index_value,\n nsplits=chunk_size, chunks=result_chunks)\n", "path": "mars/dataframe/base/rechunk.py"}]}
| 3,127 | 617 |
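The core of the fix above is keeping each chunk's `dtypes` aligned with its sliced columns. Below is a small, self-contained pandas sketch of that idea; it is plain pandas for illustration, not Mars code.

```python
# Illustration only (plain pandas, not Mars): slice dtypes with the same
# column slice used for the chunk so the metadata lengths stay consistent.
import pandas as pd

df = pd.DataFrame({c: range(3) for c in 'ABCDE'})
col_slice = slice(0, 1)                   # a chunk holding only column 'A'
chunk_dtypes = df.dtypes.iloc[col_slice]  # one entry, matching the chunk's columns
print(list(chunk_dtypes.index))           # ['A']
```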
gh_patches_debug_5351 | rasdani/github-patches | git_diff | coala__coala-2795 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make exception tracebacks default
Instead of asking the user to run coala with `-L DEBUG`
</issue>
<code>
[start of coalib/output/printers/LogPrinter.py]
1 import traceback
2
3 from pyprint.ColorPrinter import ColorPrinter
4
5 from coalib.output.printers.LOG_LEVEL import LOG_LEVEL, LOG_LEVEL_COLORS
6 from coalib.processes.communication.LogMessage import LogMessage
7
8
9 class LogPrinter:
10 """
11 The LogPrinter class allows to print log messages to an underlying Printer.
12
13 This class is an adapter, means you can create a LogPrinter from every
14 existing Printer instance.
15 """
16
17 def __init__(self,
18 printer,
19 log_level=LOG_LEVEL.INFO,
20 timestamp_format="%X"):
21 """
22 Creates a new log printer from an existing Printer.
23
24 :param printer: The underlying Printer where log messages
25 shall be written to. If you inherit from
26 LogPrinter, set it to self.
27 :param log_level: The minimum log level, everything below will
28 not be logged.
29 :param timestamp_format: The format string for the
30 datetime.today().strftime(format) method.
31 """
32 self._printer = printer
33 self.log_level = log_level
34 self.timestamp_format = timestamp_format
35
36 @property
37 def printer(self):
38 """
39 Returns the underlying printer where logs are printed to.
40 """
41 return self._printer
42
43 def _get_log_prefix(self, log_level, timestamp):
44 datetime_string = timestamp.strftime(self.timestamp_format)
45
46 if datetime_string != "":
47 datetime_string = "[" + datetime_string + "]"
48
49 return '[{}]{}'.format(LOG_LEVEL.reverse.get(log_level, "ERROR"),
50 datetime_string)
51
52 def debug(self, *messages, delimiter=" ", timestamp=None, **kwargs):
53 self.log_message(LogMessage(LOG_LEVEL.DEBUG,
54 *messages,
55 delimiter=delimiter,
56 timestamp=timestamp),
57 **kwargs)
58
59 def info(self, *messages, delimiter=" ", timestamp=None, **kwargs):
60 self.log_message(LogMessage(LOG_LEVEL.INFO,
61 *messages,
62 delimiter=delimiter,
63 timestamp=timestamp),
64 **kwargs)
65
66 def warn(self, *messages, delimiter=" ", timestamp=None, **kwargs):
67 self.log_message(LogMessage(LOG_LEVEL.WARNING,
68 *messages,
69 delimiter=delimiter,
70 timestamp=timestamp),
71 **kwargs)
72
73 def err(self, *messages, delimiter=" ", timestamp=None, **kwargs):
74 self.log_message(LogMessage(LOG_LEVEL.ERROR,
75 *messages,
76 delimiter=delimiter,
77 timestamp=timestamp),
78 **kwargs)
79
80 def log(self, log_level, message, timestamp=None, **kwargs):
81 self.log_message(LogMessage(log_level,
82 message,
83 timestamp=timestamp),
84 **kwargs)
85
86 def log_exception(self,
87 message,
88 exception,
89 log_level=LOG_LEVEL.ERROR,
90 timestamp=None,
91 **kwargs):
92 """
93 If the log_level of the printer is greater than DEBUG, it prints
94 only the message. If it is DEBUG or lower, it shows the message
95 along with the traceback of the exception.
96
97 :param message: The message to print.
98 :param exception: The exception to print.
99 :param log_level: The log_level of this message (not used when
100 logging the traceback. Tracebacks always have
101 a level of DEBUG).
102 :param timestamp: The time at which this log occurred. Defaults to
103 the current time.
104 :param kwargs: Keyword arguments to be passed when logging the
105 message (not used when logging the traceback).
106 """
107 if not isinstance(exception, BaseException):
108 raise TypeError("log_exception can only log derivatives of "
109 "BaseException.")
110
111 traceback_str = "\n".join(
112 traceback.format_exception(type(exception),
113 exception,
114 exception.__traceback__))
115
116 self.log(log_level, message, timestamp=timestamp, **kwargs)
117 self.log_message(
118 LogMessage(LOG_LEVEL.DEBUG,
119 "Exception was:" + "\n" + traceback_str,
120 timestamp=timestamp),
121 **kwargs)
122
123 def log_message(self, log_message, **kwargs):
124 if not isinstance(log_message, LogMessage):
125 raise TypeError("log_message should be of type LogMessage.")
126
127 if log_message.log_level < self.log_level:
128 return
129
130 self._print_log_message(
131 self._get_log_prefix(log_message.log_level, log_message.timestamp),
132 log_message,
133 **kwargs)
134
135 def _print_log_message(self, prefix, log_message, **kwargs):
136 """
137 Override this if you want to influence how the log message is printed.
138
139 If the underlying printer is a ColorPrinter, then colored logging is
140 used. You can turn it off in the underlying ColorPrinter if you want to
141 print uncolored.
142
143 :param prefix: The prefix to print (as string).
144 :param log_message: The LogMessage object to print.
145 :param kwargs: Any other keyword arguments.
146 """
147 if isinstance(self._printer, ColorPrinter):
148 self.printer.print(prefix,
149 end=" ",
150 color=LOG_LEVEL_COLORS[log_message.log_level],
151 **kwargs)
152 self.printer.print(log_message.message, **kwargs)
153 else:
154 self.printer.print(prefix, log_message.message, **kwargs)
155
[end of coalib/output/printers/LogPrinter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py
--- a/coalib/output/printers/LogPrinter.py
+++ b/coalib/output/printers/LogPrinter.py
@@ -115,7 +115,7 @@
self.log(log_level, message, timestamp=timestamp, **kwargs)
self.log_message(
- LogMessage(LOG_LEVEL.DEBUG,
+ LogMessage(LOG_LEVEL.INFO,
"Exception was:" + "\n" + traceback_str,
timestamp=timestamp),
**kwargs)
|
{"golden_diff": "diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py\n--- a/coalib/output/printers/LogPrinter.py\n+++ b/coalib/output/printers/LogPrinter.py\n@@ -115,7 +115,7 @@\n \n self.log(log_level, message, timestamp=timestamp, **kwargs)\n self.log_message(\n- LogMessage(LOG_LEVEL.DEBUG,\n+ LogMessage(LOG_LEVEL.INFO,\n \"Exception was:\" + \"\\n\" + traceback_str,\n timestamp=timestamp),\n **kwargs)\n", "issue": "Make exception tracebacks default\nInstead of asking the user to run coala with `-L DEBUG`\n\n", "before_files": [{"content": "import traceback\n\nfrom pyprint.ColorPrinter import ColorPrinter\n\nfrom coalib.output.printers.LOG_LEVEL import LOG_LEVEL, LOG_LEVEL_COLORS\nfrom coalib.processes.communication.LogMessage import LogMessage\n\n\nclass LogPrinter:\n \"\"\"\n The LogPrinter class allows to print log messages to an underlying Printer.\n\n This class is an adapter, means you can create a LogPrinter from every\n existing Printer instance.\n \"\"\"\n\n def __init__(self,\n printer,\n log_level=LOG_LEVEL.INFO,\n timestamp_format=\"%X\"):\n \"\"\"\n Creates a new log printer from an existing Printer.\n\n :param printer: The underlying Printer where log messages\n shall be written to. If you inherit from\n LogPrinter, set it to self.\n :param log_level: The minimum log level, everything below will\n not be logged.\n :param timestamp_format: The format string for the\n datetime.today().strftime(format) method.\n \"\"\"\n self._printer = printer\n self.log_level = log_level\n self.timestamp_format = timestamp_format\n\n @property\n def printer(self):\n \"\"\"\n Returns the underlying printer where logs are printed to.\n \"\"\"\n return self._printer\n\n def _get_log_prefix(self, log_level, timestamp):\n datetime_string = timestamp.strftime(self.timestamp_format)\n\n if datetime_string != \"\":\n datetime_string = \"[\" + datetime_string + \"]\"\n\n return '[{}]{}'.format(LOG_LEVEL.reverse.get(log_level, \"ERROR\"),\n datetime_string)\n\n def debug(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.DEBUG,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def info(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.INFO,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def warn(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.WARNING,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def err(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.ERROR,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def log(self, log_level, message, timestamp=None, **kwargs):\n self.log_message(LogMessage(log_level,\n message,\n timestamp=timestamp),\n **kwargs)\n\n def log_exception(self,\n message,\n exception,\n log_level=LOG_LEVEL.ERROR,\n timestamp=None,\n **kwargs):\n \"\"\"\n If the log_level of the printer is greater than DEBUG, it prints\n only the message. If it is DEBUG or lower, it shows the message\n along with the traceback of the exception.\n\n :param message: The message to print.\n :param exception: The exception to print.\n :param log_level: The log_level of this message (not used when\n logging the traceback. Tracebacks always have\n a level of DEBUG).\n :param timestamp: The time at which this log occurred. 
Defaults to\n the current time.\n :param kwargs: Keyword arguments to be passed when logging the\n message (not used when logging the traceback).\n \"\"\"\n if not isinstance(exception, BaseException):\n raise TypeError(\"log_exception can only log derivatives of \"\n \"BaseException.\")\n\n traceback_str = \"\\n\".join(\n traceback.format_exception(type(exception),\n exception,\n exception.__traceback__))\n\n self.log(log_level, message, timestamp=timestamp, **kwargs)\n self.log_message(\n LogMessage(LOG_LEVEL.DEBUG,\n \"Exception was:\" + \"\\n\" + traceback_str,\n timestamp=timestamp),\n **kwargs)\n\n def log_message(self, log_message, **kwargs):\n if not isinstance(log_message, LogMessage):\n raise TypeError(\"log_message should be of type LogMessage.\")\n\n if log_message.log_level < self.log_level:\n return\n\n self._print_log_message(\n self._get_log_prefix(log_message.log_level, log_message.timestamp),\n log_message,\n **kwargs)\n\n def _print_log_message(self, prefix, log_message, **kwargs):\n \"\"\"\n Override this if you want to influence how the log message is printed.\n\n If the underlying printer is a ColorPrinter, then colored logging is\n used. You can turn it off in the underlying ColorPrinter if you want to\n print uncolored.\n\n :param prefix: The prefix to print (as string).\n :param log_message: The LogMessage object to print.\n :param kwargs: Any other keyword arguments.\n \"\"\"\n if isinstance(self._printer, ColorPrinter):\n self.printer.print(prefix,\n end=\" \",\n color=LOG_LEVEL_COLORS[log_message.log_level],\n **kwargs)\n self.printer.print(log_message.message, **kwargs)\n else:\n self.printer.print(prefix, log_message.message, **kwargs)\n", "path": "coalib/output/printers/LogPrinter.py"}]}
| 2,007 | 125 |
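The patch above only raises the traceback's log level from DEBUG to INFO so it is visible without `-L DEBUG`. The snippet below is a standard-library analogue of that behaviour, not coala's own LogPrinter API:

```python
# Standard-library sketch (not coala's LogPrinter): log the formatted
# traceback at INFO so it shows up under the default log level.
import logging
import traceback

logging.basicConfig(level=logging.INFO)
try:
    1 / 0
except ZeroDivisionError as exc:
    tb = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    logging.info("Exception was:\n%s", tb)
```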
gh_patches_debug_21305 | rasdani/github-patches | git_diff | pre-commit__pre-commit-335 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Latest virtualenv breaks pre-commit
See also #299
Failure looks like:
```
17:00:19 hookid: sort-simple-yaml
17:00:19
17:00:19 bash: /nail/home/push/.pre-commit/reposkzFrD//tmp/tmp.cEk6TCoZOS/srv-configs/py_env-default/bin/activate: No such file or directory
```
```
$ pip install virtualenv --upgrade
Downloading/unpacking virtualenv
Downloading virtualenv-14.0.0-py2.py3-none-any.whl (1.8MB): 1.8MB downloaded
Installing collected packages: virtualenv
Successfully installed virtualenv
Cleaning up...
$ python
Python 2.6.7 (r267:88850, Dec 2 2011, 20:27:26)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import virtualenv
>>> virtualenv.path_locations('foo')
('/nail/home/asottile/foo', '/nail/home/asottile/foo/lib/python2.6', '/nail/home/asottile/foo/include/python2.6', '/nail/home/asottile/foo/bin')
>>>
$ pip install virtualenv==1.11.5
Downloading/unpacking virtualenv==1.11.5
Downloading virtualenv-1.11.5.tar.gz (1.8MB): 1.8MB downloaded
Running setup.py (path:/nail/home/asottile/venv/build/virtualenv/setup.py) egg_info for package virtualenv
warning: no previously-included files matching '*' found under directory 'docs/_templates'
warning: no previously-included files matching '*' found under directory 'docs/_build'
Installing collected packages: virtualenv
Found existing installation: virtualenv 14.0.0
Uninstalling virtualenv:
Successfully uninstalled virtualenv
Running setup.py install for virtualenv
warning: no previously-included files matching '*' found under directory 'docs/_templates'
warning: no previously-included files matching '*' found under directory 'docs/_build'
Installing virtualenv script to /nail/home/asottile/venv/bin
Installing virtualenv-2.6 script to /nail/home/asottile/venv/bin
Successfully installed virtualenv
Cleaning up...
$ python
Python 2.6.7 (r267:88850, Dec 2 2011, 20:27:26)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import virtualenv
>>> virtualenv.path_locations('foo')
('foo', 'foo/lib/python2.6', 'foo/include/python2.6', 'foo/bin')
>>>
```
</issue>
<code>
[start of pre_commit/languages/python.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6 import sys
7
8 import virtualenv
9
10 from pre_commit.languages import helpers
11 from pre_commit.util import clean_path_on_failure
12 from pre_commit.util import shell_escape
13
14
15 ENVIRONMENT_DIR = 'py_env'
16
17
18 class PythonEnv(helpers.Environment):
19 @property
20 def env_prefix(self):
21 return ". '{{prefix}}{0}activate' &&".format(
22 virtualenv.path_locations(
23 helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)
24 )[-1].rstrip(os.sep) + os.sep,
25 )
26
27
28 @contextlib.contextmanager
29 def in_env(repo_cmd_runner, language_version):
30 yield PythonEnv(repo_cmd_runner, language_version)
31
32
33 def norm_version(version):
34 if os.name == 'nt': # pragma: no cover (windows)
35 # Try looking up by name
36 if distutils.spawn.find_executable(version):
37 return version
38
39 # If it is in the form pythonx.x search in the default
40 # place on windows
41 if version.startswith('python'):
42 return r'C:\{0}\python.exe'.format(version.replace('.', ''))
43
44 # Otherwise assume it is a path
45 return os.path.expanduser(version)
46
47
48 def install_environment(
49 repo_cmd_runner,
50 version='default',
51 additional_dependencies=None,
52 ):
53 assert repo_cmd_runner.exists('setup.py')
54 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
55
56 # Install a virtualenv
57 with clean_path_on_failure(repo_cmd_runner.path(directory)):
58 venv_cmd = [
59 sys.executable, '-m', 'virtualenv',
60 '{{prefix}}{0}'.format(directory)
61 ]
62 if version != 'default':
63 venv_cmd.extend(['-p', norm_version(version)])
64 repo_cmd_runner.run(venv_cmd)
65 with in_env(repo_cmd_runner, version) as env:
66 env.run("cd '{prefix}' && pip install .", encoding=None)
67 if additional_dependencies:
68 env.run(
69 "cd '{prefix}' && pip install " +
70 ' '.join(
71 shell_escape(dep) for dep in additional_dependencies
72 ),
73 encoding=None,
74 )
75
76
77 def run_hook(repo_cmd_runner, hook, file_args):
78 with in_env(repo_cmd_runner, hook['language_version']) as env:
79 return helpers.run_hook(env, hook, file_args)
80
[end of pre_commit/languages/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -5,8 +5,6 @@
import os
import sys
-import virtualenv
-
from pre_commit.languages import helpers
from pre_commit.util import clean_path_on_failure
from pre_commit.util import shell_escape
@@ -15,13 +13,22 @@
ENVIRONMENT_DIR = 'py_env'
+def bin_dir(venv):
+ """On windows there's a different directory for the virtualenv"""
+ if os.name == 'nt': # pragma: no cover (windows)
+ return os.path.join(venv, 'Scripts')
+ else:
+ return os.path.join(venv, 'bin')
+
+
class PythonEnv(helpers.Environment):
@property
def env_prefix(self):
- return ". '{{prefix}}{0}activate' &&".format(
- virtualenv.path_locations(
+ return ". '{{prefix}}{0}{1}activate' &&".format(
+ bin_dir(
helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)
- )[-1].rstrip(os.sep) + os.sep,
+ ),
+ os.sep,
)
|
{"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -5,8 +5,6 @@\n import os\n import sys\n \n-import virtualenv\n-\n from pre_commit.languages import helpers\n from pre_commit.util import clean_path_on_failure\n from pre_commit.util import shell_escape\n@@ -15,13 +13,22 @@\n ENVIRONMENT_DIR = 'py_env'\n \n \n+def bin_dir(venv):\n+ \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n+ if os.name == 'nt': # pragma: no cover (windows)\n+ return os.path.join(venv, 'Scripts')\n+ else:\n+ return os.path.join(venv, 'bin')\n+\n+\n class PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n- return \". '{{prefix}}{0}activate' &&\".format(\n- virtualenv.path_locations(\n+ return \". '{{prefix}}{0}{1}activate' &&\".format(\n+ bin_dir(\n helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)\n- )[-1].rstrip(os.sep) + os.sep,\n+ ),\n+ os.sep,\n )\n", "issue": "Latest virtualenv breaks pre-commit\nSee also #299 \n\nFailure looks like:\n\n```\n17:00:19 hookid: sort-simple-yaml\n17:00:19 \n17:00:19 bash: /nail/home/push/.pre-commit/reposkzFrD//tmp/tmp.cEk6TCoZOS/srv-configs/py_env-default/bin/activate: No such file or directory\n```\n\n```\n$ pip install virtualenv --upgrade\nDownloading/unpacking virtualenv\n Downloading virtualenv-14.0.0-py2.py3-none-any.whl (1.8MB): 1.8MB downloaded\nInstalling collected packages: virtualenv\nSuccessfully installed virtualenv\nCleaning up...\n$ python\nPython 2.6.7 (r267:88850, Dec 2 2011, 20:27:26) \n[GCC 4.4.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import virtualenv\n>>> virtualenv.path_locations('foo')\n('/nail/home/asottile/foo', '/nail/home/asottile/foo/lib/python2.6', '/nail/home/asottile/foo/include/python2.6', '/nail/home/asottile/foo/bin')\n>>> \n$ pip install virtualenv==1.11.5\nDownloading/unpacking virtualenv==1.11.5\n Downloading virtualenv-1.11.5.tar.gz (1.8MB): 1.8MB downloaded\n Running setup.py (path:/nail/home/asottile/venv/build/virtualenv/setup.py) egg_info for package virtualenv\n warning: no previously-included files matching '*' found under directory 'docs/_templates'\n warning: no previously-included files matching '*' found under directory 'docs/_build'\nInstalling collected packages: virtualenv\n Found existing installation: virtualenv 14.0.0\n Uninstalling virtualenv:\n Successfully uninstalled virtualenv\n Running setup.py install for virtualenv\n warning: no previously-included files matching '*' found under directory 'docs/_templates'\n warning: no previously-included files matching '*' found under directory 'docs/_build'\n Installing virtualenv script to /nail/home/asottile/venv/bin\n Installing virtualenv-2.6 script to /nail/home/asottile/venv/bin\nSuccessfully installed virtualenv\nCleaning up...\n$ python\nPython 2.6.7 (r267:88850, Dec 2 2011, 20:27:26) \n[GCC 4.4.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import virtualenv\n>>> virtualenv.path_locations('foo')\n('foo', 'foo/lib/python2.6', 'foo/include/python2.6', 'foo/bin')\n>>>\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nimport virtualenv\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import shell_escape\n\n\nENVIRONMENT_DIR = 
'py_env'\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". '{{prefix}}{0}activate' &&\".format(\n virtualenv.path_locations(\n helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)\n )[-1].rstrip(os.sep) + os.sep,\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n yield PythonEnv(repo_cmd_runner, language_version)\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=None,\n):\n assert repo_cmd_runner.exists('setup.py')\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{0}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version) as env:\n env.run(\"cd '{prefix}' && pip install .\", encoding=None)\n if additional_dependencies:\n env.run(\n \"cd '{prefix}' && pip install \" +\n ' '.join(\n shell_escape(dep) for dep in additional_dependencies\n ),\n encoding=None,\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}]}
| 1,862 | 278 |
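A short restatement of the fix recorded in this row: the patch stops calling `virtualenv.path_locations()` (whose return value changed in virtualenv 14.0.0) and derives the scripts directory directly. The sketch below is illustrative only; the `venv` path is a placeholder rather than a value taken from the pre-commit repository.

```python
import os


def bin_dir(venv):
    """Return the scripts directory of a virtualenv.

    Windows virtualenvs keep their executables in `Scripts`; every other
    platform uses `bin`, so no call into the virtualenv package is needed.
    """
    if os.name == 'nt':  # pragma: no cover (windows)
        return os.path.join(venv, 'Scripts')
    return os.path.join(venv, 'bin')


# Illustrative use, mirroring how the patched PythonEnv.env_prefix builds
# the shell snippet that sources the activate script.
venv = 'py_env-default'  # placeholder path
print(". '{0}{1}activate' &&".format(bin_dir(venv), os.sep))
```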
gh_patches_debug_17631
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-391
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create a specific `eligibility/start.html` template, update the view to use it
Like for some other pages, we need a specific template file for the Eligibility Start page, which has some unique elements to render and behaviors to handle.
The scope of this task is to simply create the new template, `{% extends "core/page.html" %}`, and get the view to use it -- keeping everything else about the page the same.
</issue>
<code>
[start of benefits/eligibility/views.py]
1 """
2 The eligibility application: view definitions for the eligibility verification flow.
3 """
4 from django.contrib import messages
5 from django.shortcuts import redirect
6 from django.urls import reverse
7 from django.utils.decorators import decorator_from_middleware
8 from django.utils.translation import pgettext, gettext as _
9
10 from benefits.core import middleware, recaptcha, session, viewmodels
11 from benefits.core.models import EligibilityVerifier
12 from benefits.core.views import PageTemplateResponse
13 from . import analytics, api, forms
14
15
16 @decorator_from_middleware(middleware.AgencySessionRequired)
17 def index(request):
18 """View handler for the eligibility verifier selection form."""
19
20 session.update(request, eligibility_types=[], origin=reverse("eligibility:index"))
21 agency = session.agency(request)
22
23 eligibility_start = reverse("eligibility:start")
24
25 page = viewmodels.Page(
26 title=_("eligibility.pages.index.title"),
27 content_title=_("eligibility.pages.index.content_title"),
28 forms=forms.EligibilityVerifierSelectionForm(agency=agency),
29 )
30
31 if request.method == "POST":
32 form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)
33
34 if form.is_valid():
35 verifier_id = form.cleaned_data.get("verifier")
36 verifier = EligibilityVerifier.objects.get(id=verifier_id)
37 session.update(request, verifier=verifier)
38
39 response = redirect(eligibility_start)
40 else:
41 # form was not valid, allow for correction/resubmission
42 page.forms = [form]
43 response = PageTemplateResponse(request, page)
44 else:
45 if agency.eligibility_verifiers.count() == 1:
46 verifier = agency.eligibility_verifiers.first()
47 session.update(request, verifier=verifier)
48 response = redirect(eligibility_start)
49 else:
50 response = PageTemplateResponse(request, page)
51
52 return response
53
54
55 @decorator_from_middleware(middleware.AgencySessionRequired)
56 @decorator_from_middleware(middleware.VerifierSessionRequired)
57 def start(request):
58 """View handler for the eligibility verification getting started screen."""
59
60 session.update(request, eligibility_types=[])
61 verifier = session.verifier(request)
62
63 page = viewmodels.Page(
64 title=_("eligibility.pages.start.title"),
65 content_title=_(verifier.start_content_title),
66 media=[
67 viewmodels.MediaItem(
68 icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
69 heading=_(verifier.start_item_name),
70 details=_(verifier.start_item_description),
71 ),
72 viewmodels.MediaItem(
73 icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
74 heading=_("eligibility.pages.start.items[1].title"),
75 details=_("eligibility.pages.start.items[1].text"),
76 ),
77 ],
78 paragraphs=[_(verifier.start_blurb)],
79 button=viewmodels.Button.primary(text=_("eligibility.buttons.continue"), url=reverse("eligibility:confirm")),
80 )
81
82 return PageTemplateResponse(request, page)
83
84
85 @decorator_from_middleware(middleware.AgencySessionRequired)
86 @decorator_from_middleware(middleware.RateLimit)
87 @decorator_from_middleware(middleware.VerifierSessionRequired)
88 def confirm(request):
89 """View handler for the eligibility verification form."""
90
91 verifier = session.verifier(request)
92
93 page = viewmodels.Page(
94 title=_(verifier.form_title),
95 content_title=_(verifier.form_content_title),
96 paragraphs=[_(verifier.form_blurb)],
97 form=forms.EligibilityVerificationForm(auto_id=True, label_suffix="", verifier=verifier),
98 classes="text-lg-center",
99 )
100
101 if request.method == "POST":
102 analytics.started_eligibility(request)
103
104 form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)
105 response = _verify(request, form)
106
107 if response is None:
108 # form was not valid, allow for correction/resubmission
109 analytics.returned_error(request, form.errors)
110 page.forms = [form]
111 response = PageTemplateResponse(request, page)
112 elif session.eligible(request):
113 eligibility = session.eligibility(request)
114 response = verified(request, [eligibility.name])
115 else:
116 response = PageTemplateResponse(request, page)
117
118 return response
119
120
121 def _verify(request, form):
122 """Helper calls the eligibility verification API with user input."""
123
124 if not form.is_valid():
125 if recaptcha.has_error(form):
126 messages.error(request, "Recaptcha failed. Please try again.")
127 return None
128
129 sub, name = form.cleaned_data.get("sub"), form.cleaned_data.get("name")
130
131 agency = session.agency(request)
132 verifier = session.verifier(request)
133 client = api.Client(agency, verifier)
134
135 response = client.verify(sub, name)
136
137 if response.error and any(response.error):
138 form.add_api_errors(response.error)
139 return None
140 elif any(response.eligibility):
141 return verified(request, response.eligibility)
142 else:
143 return unverified(request)
144
145
146 @decorator_from_middleware(middleware.AgencySessionRequired)
147 def verified(request, verified_types):
148 """View handler for the verified eligibility page."""
149
150 analytics.returned_success(request)
151
152 enrollment_index = reverse("enrollment:index")
153 session.update(request, eligibility_types=verified_types, origin=enrollment_index)
154
155 return redirect(enrollment_index)
156
157
158 @decorator_from_middleware(middleware.AgencySessionRequired)
159 @decorator_from_middleware(middleware.VerifierSessionRequired)
160 def unverified(request):
161 """View handler for the unverified eligibility page."""
162
163 analytics.returned_fail(request)
164
165 # tel: link to agency phone number
166 agency = session.agency(request)
167 buttons = viewmodels.Button.agency_contact_links(agency)
168
169 verifier = session.verifier(request)
170
171 page = viewmodels.Page(
172 title=_(verifier.unverified_title),
173 content_title=_(verifier.unverified_content_title),
174 icon=viewmodels.Icon("idcardquestion", pgettext("image alt text", "core.icons.idcardquestion")),
175 paragraphs=[_(verifier.unverified_blurb), _("eligibility.pages.unverified.p[1]")],
176 buttons=buttons,
177 classes="text-lg-center",
178 )
179
180 return PageTemplateResponse(request, page)
181
[end of benefits/eligibility/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py
--- a/benefits/eligibility/views.py
+++ b/benefits/eligibility/views.py
@@ -3,6 +3,7 @@
"""
from django.contrib import messages
from django.shortcuts import redirect
+from django.template.response import TemplateResponse
from django.urls import reverse
from django.utils.decorators import decorator_from_middleware
from django.utils.translation import pgettext, gettext as _
@@ -79,7 +80,7 @@
button=viewmodels.Button.primary(text=_("eligibility.buttons.continue"), url=reverse("eligibility:confirm")),
)
- return PageTemplateResponse(request, page)
+ return TemplateResponse(request, "eligibility/start.html", page.context_dict())
@decorator_from_middleware(middleware.AgencySessionRequired)
|
{"golden_diff": "diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py\n--- a/benefits/eligibility/views.py\n+++ b/benefits/eligibility/views.py\n@@ -3,6 +3,7 @@\n \"\"\"\n from django.contrib import messages\n from django.shortcuts import redirect\n+from django.template.response import TemplateResponse\n from django.urls import reverse\n from django.utils.decorators import decorator_from_middleware\n from django.utils.translation import pgettext, gettext as _\n@@ -79,7 +80,7 @@\n button=viewmodels.Button.primary(text=_(\"eligibility.buttons.continue\"), url=reverse(\"eligibility:confirm\")),\n )\n \n- return PageTemplateResponse(request, page)\n+ return TemplateResponse(request, \"eligibility/start.html\", page.context_dict())\n \n \n @decorator_from_middleware(middleware.AgencySessionRequired)\n", "issue": "Create a specific `eligibility/start.html` template, update the view to use it\nLike for some other pages, we need a specific template file for the Eligibility Start page, which has some unique elements to render and behaviors to handle.\r\n\r\nThe scope of this task it to simply create the new template, `{% extends \"core/page.html\" %}`, and get the view to use it -- keeping everything else about the page the same.\n", "before_files": [{"content": "\"\"\"\nThe eligibility application: view definitions for the eligibility verification flow.\n\"\"\"\nfrom django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import middleware, recaptcha, session, viewmodels\nfrom benefits.core.models import EligibilityVerifier\nfrom benefits.core.views import PageTemplateResponse\nfrom . 
import analytics, api, forms\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef index(request):\n \"\"\"View handler for the eligibility verifier selection form.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:index\"))\n agency = session.agency(request)\n\n eligibility_start = reverse(\"eligibility:start\")\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.index.title\"),\n content_title=_(\"eligibility.pages.index.content_title\"),\n forms=forms.EligibilityVerifierSelectionForm(agency=agency),\n )\n\n if request.method == \"POST\":\n form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)\n\n if form.is_valid():\n verifier_id = form.cleaned_data.get(\"verifier\")\n verifier = EligibilityVerifier.objects.get(id=verifier_id)\n session.update(request, verifier=verifier)\n\n response = redirect(eligibility_start)\n else:\n # form was not valid, allow for correction/resubmission\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n else:\n if agency.eligibility_verifiers.count() == 1:\n verifier = agency.eligibility_verifiers.first()\n session.update(request, verifier=verifier)\n response = redirect(eligibility_start)\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.VerifierSessionRequired)\ndef start(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n\n session.update(request, eligibility_types=[])\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n content_title=_(verifier.start_content_title),\n media=[\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(verifier.start_item_name),\n details=_(verifier.start_item_description),\n ),\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.items[1].title\"),\n details=_(\"eligibility.pages.start.items[1].text\"),\n ),\n ],\n paragraphs=[_(verifier.start_blurb)],\n button=viewmodels.Button.primary(text=_(\"eligibility.buttons.continue\"), url=reverse(\"eligibility:confirm\")),\n )\n\n return PageTemplateResponse(request, page)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.RateLimit)\n@decorator_from_middleware(middleware.VerifierSessionRequired)\ndef confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.form_title),\n content_title=_(verifier.form_content_title),\n paragraphs=[_(verifier.form_blurb)],\n form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=\"\", verifier=verifier),\n classes=\"text-lg-center\",\n )\n\n if request.method == \"POST\":\n analytics.started_eligibility(request)\n\n form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)\n response = _verify(request, form)\n\n if response is None:\n # form was not valid, allow for correction/resubmission\n analytics.returned_error(request, form.errors)\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n elif session.eligible(request):\n eligibility = session.eligibility(request)\n response = verified(request, 
[eligibility.name])\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\ndef _verify(request, form):\n \"\"\"Helper calls the eligibility verification API with user input.\"\"\"\n\n if not form.is_valid():\n if recaptcha.has_error(form):\n messages.error(request, \"Recaptcha failed. Please try again.\")\n return None\n\n sub, name = form.cleaned_data.get(\"sub\"), form.cleaned_data.get(\"name\")\n\n agency = session.agency(request)\n verifier = session.verifier(request)\n client = api.Client(agency, verifier)\n\n response = client.verify(sub, name)\n\n if response.error and any(response.error):\n form.add_api_errors(response.error)\n return None\n elif any(response.eligibility):\n return verified(request, response.eligibility)\n else:\n return unverified(request)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef verified(request, verified_types):\n \"\"\"View handler for the verified eligibility page.\"\"\"\n\n analytics.returned_success(request)\n\n enrollment_index = reverse(\"enrollment:index\")\n session.update(request, eligibility_types=verified_types, origin=enrollment_index)\n\n return redirect(enrollment_index)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.VerifierSessionRequired)\ndef unverified(request):\n \"\"\"View handler for the unverified eligibility page.\"\"\"\n\n analytics.returned_fail(request)\n\n # tel: link to agency phone number\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.unverified_title),\n content_title=_(verifier.unverified_content_title),\n icon=viewmodels.Icon(\"idcardquestion\", pgettext(\"image alt text\", \"core.icons.idcardquestion\")),\n paragraphs=[_(verifier.unverified_blurb), _(\"eligibility.pages.unverified.p[1]\")],\n buttons=buttons,\n classes=\"text-lg-center\",\n )\n\n return PageTemplateResponse(request, page)\n", "path": "benefits/eligibility/views.py"}]}
| 2,393 | 182 |
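For readers skimming this row, the accepted change is small: the `start` view stops returning `PageTemplateResponse` and instead renders a dedicated `eligibility/start.html` template through Django's `TemplateResponse`. Below is a condensed sketch of the patched view; the `MediaItem` entries from the original are omitted for brevity, and the template body shown in the comment is only the skeleton the issue asks for.

```python
from django.template.response import TemplateResponse
from django.urls import reverse
from django.utils.decorators import decorator_from_middleware
from django.utils.translation import gettext as _

from benefits.core import middleware, session, viewmodels


@decorator_from_middleware(middleware.AgencySessionRequired)
@decorator_from_middleware(middleware.VerifierSessionRequired)
def start(request):
    """View handler for the eligibility verification getting started screen."""
    session.update(request, eligibility_types=[])
    verifier = session.verifier(request)

    page = viewmodels.Page(
        title=_("eligibility.pages.start.title"),
        content_title=_(verifier.start_content_title),
        paragraphs=[_(verifier.start_blurb)],
        button=viewmodels.Button.primary(
            text=_("eligibility.buttons.continue"), url=reverse("eligibility:confirm")
        ),
    )

    # benefits/eligibility/templates/eligibility/start.html starts out as just:
    #   {% extends "core/page.html" %}
    return TemplateResponse(request, "eligibility/start.html", page.context_dict())
```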
gh_patches_debug_4069
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-7454
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Patreon login doesn't work/setup is not intuitive
**Describe the bug**
While trying to set up the Patreon social integration, I realised that the required fields of Consumer Key and Consumer Secret don't seem to apply to the data that Patreon provides with its API - or at least the terminology is confusing. But outside of that, the default scopes that it seems to be presenting Patreon with upon setup are not part of Patreon's API, and will always error out with an "Invalid Scope" unless manually replaced with the correct ones. If this social integration is working and I'm mistaken, it is missing documentation that would definitely make it easier on new users.
To Reproduce
Steps to reproduce the behavior:
1. Go to the social integration settings.
2. Click on the Patreon integration.
3. Enter the Client ID and Secret into the Key and Secret fields (assuming that's what you're supposed to use)
4. Get an invalid_scope error when trying to sign in
Expected behavior
Should allow users to log in via Patreon.
Screenshots
N/A
Logs
N/A
Version and Deployment (please complete the following information):
authentik version: 2023.6.1
Deployment: TrueNAS
</issue>
<code>
[start of authentik/sources/oauth/types/patreon.py]
1 """Patreon OAuth Views"""
2 from typing import Any
3
4 from authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient
5 from authentik.sources.oauth.models import OAuthSource
6 from authentik.sources.oauth.types.registry import SourceType, registry
7 from authentik.sources.oauth.views.callback import OAuthCallback
8 from authentik.sources.oauth.views.redirect import OAuthRedirect
9
10
11 class PatreonOAuthRedirect(OAuthRedirect):
12 """Patreon OAuth2 Redirect"""
13
14 def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
15 return {
16 "scope": ["openid", "email", "profile"],
17 }
18
19
20 class PatreonOAuthCallback(OAuthCallback):
21 """Patreon OAuth2 Callback"""
22
23 client_class: UserprofileHeaderAuthClient
24
25 def get_user_id(self, info: dict[str, str]) -> str:
26 return info.get("data", {}).get("id")
27
28 def get_user_enroll_context(
29 self,
30 info: dict[str, Any],
31 ) -> dict[str, Any]:
32 return {
33 "username": info.get("data", {}).get("attributes", {}).get("vanity"),
34 "email": info.get("data", {}).get("attributes", {}).get("email"),
35 "name": info.get("data", {}).get("attributes", {}).get("full_name"),
36 }
37
38
39 @registry.register()
40 class PatreonType(SourceType):
41 """OpenIDConnect Type definition"""
42
43 callback_view = PatreonOAuthCallback
44 redirect_view = PatreonOAuthRedirect
45 name = "Patreon"
46 slug = "patreon"
47
48 authorization_url = "https://www.patreon.com/oauth2/authorize"
49 access_token_url = "https://www.patreon.com/api/oauth2/token" # nosec
50 profile_url = "https://www.patreon.com/api/oauth2/api/current_user"
51
[end of authentik/sources/oauth/types/patreon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/sources/oauth/types/patreon.py b/authentik/sources/oauth/types/patreon.py
--- a/authentik/sources/oauth/types/patreon.py
+++ b/authentik/sources/oauth/types/patreon.py
@@ -12,8 +12,9 @@
"""Patreon OAuth2 Redirect"""
def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
+ # https://docs.patreon.com/#scopes
return {
- "scope": ["openid", "email", "profile"],
+ "scope": ["identity", "identity[email]"],
}
|
{"golden_diff": "diff --git a/authentik/sources/oauth/types/patreon.py b/authentik/sources/oauth/types/patreon.py\n--- a/authentik/sources/oauth/types/patreon.py\n+++ b/authentik/sources/oauth/types/patreon.py\n@@ -12,8 +12,9 @@\n \"\"\"Patreon OAuth2 Redirect\"\"\"\r\n \r\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\r\n+ # https://docs.patreon.com/#scopes\r\n return {\r\n- \"scope\": [\"openid\", \"email\", \"profile\"],\r\n+ \"scope\": [\"identity\", \"identity[email]\"],\r\n }\n", "issue": "Patreon login doesn't work/setup is not intuitive\n**Describe the bug**\r\nWhile trying to set up the Patreon social integration, I realised that the required fields of Consumer Key and Consumer Secret don't seem to apply to the data that Patreon provides with its API - or at least the terminology is confusing. But outside of that, the default scopes that it seems to be presenting Patreon with upon setup are not part of Patreon's API, and will always error out with an \"Invalid Scope\" unless manually replaced with the correct ones. If this social integration is working and I'm mistaken, it is missing documentation that would definitely make it easier on new users.\r\n\r\nTo Reproduce\r\nSteps to reproduce the behavior:\r\n\r\n1. Go to the social integration settings.\r\n2. Click on the Patreon integration.\r\n3. Enter the Client ID and Secret into the Key and Secret fields (assuming that's what you're supposed to use)\r\n4. Get an invalid_scope error when trying to sign in\r\n\r\nExpected behavior\r\nShould allow users to log in via Patreon.\r\n\r\nScreenshots\r\nN/A\r\n\r\nLogs\r\nN/A\r\n\r\nVersion and Deployment (please complete the following information):\r\n\r\nauthentik version: 2023.6.1\r\nDeployment: TrueNAS\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Patreon OAuth Views\"\"\"\r\nfrom typing import Any\r\n\r\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\r\nfrom authentik.sources.oauth.models import OAuthSource\r\nfrom authentik.sources.oauth.types.registry import SourceType, registry\r\nfrom authentik.sources.oauth.views.callback import OAuthCallback\r\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\r\n\r\n\r\nclass PatreonOAuthRedirect(OAuthRedirect):\r\n \"\"\"Patreon OAuth2 Redirect\"\"\"\r\n\r\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\r\n return {\r\n \"scope\": [\"openid\", \"email\", \"profile\"],\r\n }\r\n\r\n\r\nclass PatreonOAuthCallback(OAuthCallback):\r\n \"\"\"Patreon OAuth2 Callback\"\"\"\r\n\r\n client_class: UserprofileHeaderAuthClient\r\n\r\n def get_user_id(self, info: dict[str, str]) -> str:\r\n return info.get(\"data\", {}).get(\"id\")\r\n\r\n def get_user_enroll_context(\r\n self,\r\n info: dict[str, Any],\r\n ) -> dict[str, Any]:\r\n return {\r\n \"username\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"vanity\"),\r\n \"email\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"email\"),\r\n \"name\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"full_name\"),\r\n }\r\n\r\n\r\[email protected]()\r\nclass PatreonType(SourceType):\r\n \"\"\"OpenIDConnect Type definition\"\"\"\r\n\r\n callback_view = PatreonOAuthCallback\r\n redirect_view = PatreonOAuthRedirect\r\n name = \"Patreon\"\r\n slug = \"patreon\"\r\n\r\n authorization_url = \"https://www.patreon.com/oauth2/authorize\"\r\n access_token_url = \"https://www.patreon.com/api/oauth2/token\" # nosec\r\n profile_url = 
\"https://www.patreon.com/api/oauth2/api/current_user\"\r\n", "path": "authentik/sources/oauth/types/patreon.py"}]}
| 1,290 | 138 |
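The root cause in this row is that Patreon's OAuth2 API does not implement the OpenID Connect scopes, so requesting `openid email profile` always fails with `invalid_scope`. The accepted patch switches to the scopes Patreon actually defines (see https://docs.patreon.com/#scopes); the redirect class after the change looks like this, and in practice the Consumer Key and Consumer Secret fields are where Patreon's Client ID and Client Secret go.

```python
from authentik.sources.oauth.models import OAuthSource
from authentik.sources.oauth.views.redirect import OAuthRedirect


class PatreonOAuthRedirect(OAuthRedirect):
    """Patreon OAuth2 Redirect"""

    def get_additional_parameters(self, source: OAuthSource):  # pragma: no cover
        # https://docs.patreon.com/#scopes
        # "identity" grants access to the user's profile; "identity[email]"
        # additionally exposes the e-mail address used for enrollment.
        return {
            "scope": ["identity", "identity[email]"],
        }
```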
gh_patches_debug_35647
|
rasdani/github-patches
|
git_diff
|
castorini__pyserini-1434
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issues with latest MIRACL 2CR
On my iMac Pro (Intel), I'm getting the following failures:
```
condition bm25-mdpr-tied-pft-msmarco-hybrid.bn:
- split: dev
nDCG@10: 0.6540 [OK]
R@100 : 0.9321 [FAIL] expected 0.9100
condition bm25-mdpr-tied-pft-msmarco-hybrid.zh:
- split: dev
nDCG@10: 0.5255 [FAIL] expected 0.5254
R@100 : 0.9587 [OK]
```
</issue>
<code>
[start of scripts/repro_matrix/run_all_miracl.py]
1 #
2 # Pyserini: Reproducible IR research with sparse and dense representations
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 import argparse
18 import math
19 import os
20 import subprocess
21 import time
22 from collections import defaultdict
23 from string import Template
24
25 import yaml
26
27 from scripts.repro_matrix.defs_miracl import models, languages, trec_eval_metric_definitions
28 from scripts.repro_matrix.utils import run_eval_and_return_metric, ok_str, okish_str, fail_str
29
30
31 def print_results(metric, split):
32 print(f'Metric = {metric}, Split = {split}')
33 print(' ' * 32, end='')
34 for lang in languages:
35 print(f'{lang[0]:3} ', end='')
36 print('')
37 for model in models:
38 print(f'{model:30}', end='')
39 for lang in languages:
40 key = f'{model}.{lang[0]}'
41 print(f'{table[key][split][metric]:7.3f}', end='')
42 print('')
43 print('')
44
45
46 def extract_topic_fn_from_cmd(cmd):
47 cmd = cmd.split()
48 topic_idx = cmd.index('--topics')
49 return cmd[topic_idx + 1]
50
51
52 if __name__ == '__main__':
53 parser = argparse.ArgumentParser(description='Generate regression matrix for MIRACL.')
54 parser.add_argument('--skip-eval', action='store_true', default=False, help='Skip running trec_eval.')
55 args = parser.parse_args()
56
57 start = time.time()
58
59 table = defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: 0.0)))
60
61 with open('pyserini/resources/miracl.yaml') as f:
62 yaml_data = yaml.safe_load(f)
63 for condition in yaml_data['conditions']:
64 name = condition['name']
65 eval_key = condition['eval_key']
66 cmd_template = condition['command']
67 cmd_lst = cmd_template.split()
68
69 print(f'condition {name}:')
70 lang = name.split('.')[-1]
71 is_hybrid_run = 'hybrid' in name
72
73 for splits in condition['splits']:
74 split = splits['split']
75 if is_hybrid_run:
76 hits = int(cmd_lst[cmd_lst.index('--k') + 1])
77 else:
78 hits = int(cmd_lst[cmd_lst.index('--hits') + 1])
79
80 print(f' - split: {split}')
81
82 runfile = f'runs/run.miracl.{name}.{split}.top{hits}.txt'
83 if is_hybrid_run:
84 bm25_output = f'runs/run.miracl.bm25.{lang}.{split}.top{hits}.txt'
85 mdpr_output = f'runs/run.miracl.mdpr-tied-pft-msmarco.{lang}.{split}.top{hits}.txt'
86 if not os.path.exists(bm25_output):
87 print(f'Missing BM25 file: {bm25_output}')
88 continue
89 if not os.path.exists(mdpr_output):
90 print(f'Missing mDPR file: {mdpr_output}')
91 continue
92 cmd = Template(cmd_template).substitute(split=split, output=runfile, bm25_output=bm25_output, mdpr_output=mdpr_output)
93 else:
94 cmd = Template(cmd_template).substitute(split=split, output=runfile)
95
96 # In the yaml file, the topics are written as something like '--topics miracl-v1.0-ar-${split}'
97 # This works for the dev split because the topics are directly included in Anserini/Pyserini.
98 # For this training split, we have to map the symbol into a file in tools/topics-and-qrels/
99 # Here, we assume that the developer has cloned the miracl repo and placed the topics there.
100 if split == 'train':
101 cmd = cmd.replace(f'--topics miracl-v1.0-{lang}-{split}',
102 f'--topics tools/topics-and-qrels/topics.miracl-v1.0-{lang}-{split}.tsv')
103
104 if not os.path.exists(runfile):
105 print(f' Running: {cmd}')
106 rtn = subprocess.run(cmd.split(), capture_output=True)
107 stderr = rtn.stderr.decode()
108 if '--topics' in cmd:
109 topic_fn = extract_topic_fn_from_cmd(cmd)
110 if f'ValueError: Topic {topic_fn} Not Found' in stderr:
111 print(f'Skipping {topic_fn}: file not found.')
112 continue
113
114 for expected in splits['scores']:
115 for metric in expected:
116 if not args.skip_eval:
117 # We have the translate the training qrels into a file located in tools/topics-and-qrels/
118 # because they are not included with Anserini/Pyserini by default.
119 # Here, we assume that the developer has cloned the miracl repo and placed the qrels there.
120 if split == 'train':
121 qrels = f'tools/topics-and-qrels/qrels.{eval_key}-train.tsv'
122 else:
123 qrels = f'{eval_key}-{split}'
124 score = float(run_eval_and_return_metric(metric, qrels,
125 trec_eval_metric_definitions[metric], runfile))
126 if math.isclose(score, float(expected[metric])):
127 result_str = ok_str
128 # Flaky test: small difference on Mac Studio (M1 chip)
129 elif name == 'mdpr-tied-pft-msmarco.hi' and split == 'train' \
130 and math.isclose(score, float(expected[metric]), abs_tol=2e-4):
131 result_str = okish_str
132 else:
133 result_str = fail_str + f' expected {expected[metric]:.4f}'
134 print(f' {metric:7}: {score:.4f} {result_str}')
135 table[name][split][metric] = score
136 else:
137 table[name][split][metric] = expected[metric]
138
139 print('')
140
141 for metric in ['nDCG@10', 'R@100']:
142 for split in ['dev', 'train']:
143 print_results(metric, split)
144
145 end = time.time()
146 print(f'Total elapsed time: {end - start:.0f}s')
147
[end of scripts/repro_matrix/run_all_miracl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/repro_matrix/run_all_miracl.py b/scripts/repro_matrix/run_all_miracl.py
--- a/scripts/repro_matrix/run_all_miracl.py
+++ b/scripts/repro_matrix/run_all_miracl.py
@@ -30,12 +30,12 @@
def print_results(metric, split):
print(f'Metric = {metric}, Split = {split}')
- print(' ' * 32, end='')
+ print(' ' * 35, end='')
for lang in languages:
print(f'{lang[0]:3} ', end='')
print('')
for model in models:
- print(f'{model:30}', end='')
+ print(f'{model:33}', end='')
for lang in languages:
key = f'{model}.{lang[0]}'
print(f'{table[key][split][metric]:7.3f}', end='')
@@ -125,9 +125,18 @@
trec_eval_metric_definitions[metric], runfile))
if math.isclose(score, float(expected[metric])):
result_str = ok_str
- # Flaky test: small difference on Mac Studio (M1 chip)
- elif name == 'mdpr-tied-pft-msmarco.hi' and split == 'train' \
- and math.isclose(score, float(expected[metric]), abs_tol=2e-4):
+ # Flaky tests
+ elif (name == 'mdpr-tied-pft-msmarco.hi' and split == 'train'
+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)) or \
+ (name == 'mdpr-tied-pft-msmarco-ft-all.ru'
+ and split == 'dev' and metric == 'nDCG@10'
+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)) or \
+ (name == 'bm25-mdpr-tied-pft-msmarco-hybrid.te'
+ and split == 'train' and metric == 'nDCG@10'
+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)) or \
+ (name == 'bm25-mdpr-tied-pft-msmarco-hybrid.zh'
+ and split == 'dev' and metric == 'nDCG@10'
+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)):
result_str = okish_str
else:
result_str = fail_str + f' expected {expected[metric]:.4f}'
|
{"golden_diff": "diff --git a/scripts/repro_matrix/run_all_miracl.py b/scripts/repro_matrix/run_all_miracl.py\n--- a/scripts/repro_matrix/run_all_miracl.py\n+++ b/scripts/repro_matrix/run_all_miracl.py\n@@ -30,12 +30,12 @@\n \n def print_results(metric, split):\n print(f'Metric = {metric}, Split = {split}')\n- print(' ' * 32, end='')\n+ print(' ' * 35, end='')\n for lang in languages:\n print(f'{lang[0]:3} ', end='')\n print('')\n for model in models:\n- print(f'{model:30}', end='')\n+ print(f'{model:33}', end='')\n for lang in languages:\n key = f'{model}.{lang[0]}'\n print(f'{table[key][split][metric]:7.3f}', end='')\n@@ -125,9 +125,18 @@\n trec_eval_metric_definitions[metric], runfile))\n if math.isclose(score, float(expected[metric])):\n result_str = ok_str\n- # Flaky test: small difference on Mac Studio (M1 chip)\n- elif name == 'mdpr-tied-pft-msmarco.hi' and split == 'train' \\\n- and math.isclose(score, float(expected[metric]), abs_tol=2e-4):\n+ # Flaky tests\n+ elif (name == 'mdpr-tied-pft-msmarco.hi' and split == 'train'\n+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)) or \\\n+ (name == 'mdpr-tied-pft-msmarco-ft-all.ru'\n+ and split == 'dev' and metric == 'nDCG@10'\n+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)) or \\\n+ (name == 'bm25-mdpr-tied-pft-msmarco-hybrid.te'\n+ and split == 'train' and metric == 'nDCG@10'\n+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)) or \\\n+ (name == 'bm25-mdpr-tied-pft-msmarco-hybrid.zh'\n+ and split == 'dev' and metric == 'nDCG@10'\n+ and math.isclose(score, float(expected[metric]), abs_tol=2e-4)):\n result_str = okish_str\n else:\n result_str = fail_str + f' expected {expected[metric]:.4f}'\n", "issue": "Issues with latest MIRACL 2CR\nOn my iMac Pro (Intel), I'm getting the following failures:\r\n\r\n```\r\ncondition bm25-mdpr-tied-pft-msmarco-hybrid.bn:\r\n - split: dev\r\n nDCG@10: 0.6540 [OK]\r\n R@100 : 0.9321 [FAIL] expected 0.9100\r\n\r\ncondition bm25-mdpr-tied-pft-msmarco-hybrid.zh:\r\n - split: dev\r\n nDCG@10: 0.5255 [FAIL] expected 0.5254\r\n R@100 : 0.9587 [OK]\r\n```\r\n\n", "before_files": [{"content": "#\n# Pyserini: Reproducible IR research with sparse and dense representations\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport argparse\nimport math\nimport os\nimport subprocess\nimport time\nfrom collections import defaultdict\nfrom string import Template\n\nimport yaml\n\nfrom scripts.repro_matrix.defs_miracl import models, languages, trec_eval_metric_definitions\nfrom scripts.repro_matrix.utils import run_eval_and_return_metric, ok_str, okish_str, fail_str\n\n\ndef print_results(metric, split):\n print(f'Metric = {metric}, Split = {split}')\n print(' ' * 32, end='')\n for lang in languages:\n print(f'{lang[0]:3} ', end='')\n print('')\n for model in models:\n print(f'{model:30}', end='')\n for lang in languages:\n key = f'{model}.{lang[0]}'\n print(f'{table[key][split][metric]:7.3f}', end='')\n print('')\n print('')\n\n\ndef extract_topic_fn_from_cmd(cmd):\n cmd = 
cmd.split()\n topic_idx = cmd.index('--topics')\n return cmd[topic_idx + 1]\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(description='Generate regression matrix for MIRACL.')\n parser.add_argument('--skip-eval', action='store_true', default=False, help='Skip running trec_eval.')\n args = parser.parse_args()\n\n start = time.time()\n\n table = defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: 0.0)))\n\n with open('pyserini/resources/miracl.yaml') as f:\n yaml_data = yaml.safe_load(f)\n for condition in yaml_data['conditions']:\n name = condition['name']\n eval_key = condition['eval_key']\n cmd_template = condition['command']\n cmd_lst = cmd_template.split()\n\n print(f'condition {name}:')\n lang = name.split('.')[-1]\n is_hybrid_run = 'hybrid' in name\n\n for splits in condition['splits']:\n split = splits['split']\n if is_hybrid_run:\n hits = int(cmd_lst[cmd_lst.index('--k') + 1])\n else:\n hits = int(cmd_lst[cmd_lst.index('--hits') + 1])\n\n print(f' - split: {split}')\n\n runfile = f'runs/run.miracl.{name}.{split}.top{hits}.txt'\n if is_hybrid_run:\n bm25_output = f'runs/run.miracl.bm25.{lang}.{split}.top{hits}.txt'\n mdpr_output = f'runs/run.miracl.mdpr-tied-pft-msmarco.{lang}.{split}.top{hits}.txt'\n if not os.path.exists(bm25_output):\n print(f'Missing BM25 file: {bm25_output}')\n continue\n if not os.path.exists(mdpr_output):\n print(f'Missing mDPR file: {mdpr_output}')\n continue\n cmd = Template(cmd_template).substitute(split=split, output=runfile, bm25_output=bm25_output, mdpr_output=mdpr_output)\n else:\n cmd = Template(cmd_template).substitute(split=split, output=runfile)\n\n # In the yaml file, the topics are written as something like '--topics miracl-v1.0-ar-${split}'\n # This works for the dev split because the topics are directly included in Anserini/Pyserini.\n # For this training split, we have to map the symbol into a file in tools/topics-and-qrels/\n # Here, we assume that the developer has cloned the miracl repo and placed the topics there.\n if split == 'train':\n cmd = cmd.replace(f'--topics miracl-v1.0-{lang}-{split}',\n f'--topics tools/topics-and-qrels/topics.miracl-v1.0-{lang}-{split}.tsv')\n\n if not os.path.exists(runfile):\n print(f' Running: {cmd}')\n rtn = subprocess.run(cmd.split(), capture_output=True)\n stderr = rtn.stderr.decode()\n if '--topics' in cmd:\n topic_fn = extract_topic_fn_from_cmd(cmd)\n if f'ValueError: Topic {topic_fn} Not Found' in stderr:\n print(f'Skipping {topic_fn}: file not found.')\n continue\n\n for expected in splits['scores']:\n for metric in expected:\n if not args.skip_eval:\n # We have the translate the training qrels into a file located in tools/topics-and-qrels/\n # because they are not included with Anserini/Pyserini by default.\n # Here, we assume that the developer has cloned the miracl repo and placed the qrels there.\n if split == 'train':\n qrels = f'tools/topics-and-qrels/qrels.{eval_key}-train.tsv'\n else:\n qrels = f'{eval_key}-{split}'\n score = float(run_eval_and_return_metric(metric, qrels,\n trec_eval_metric_definitions[metric], runfile))\n if math.isclose(score, float(expected[metric])):\n result_str = ok_str\n # Flaky test: small difference on Mac Studio (M1 chip)\n elif name == 'mdpr-tied-pft-msmarco.hi' and split == 'train' \\\n and math.isclose(score, float(expected[metric]), abs_tol=2e-4):\n result_str = okish_str\n else:\n result_str = fail_str + f' expected {expected[metric]:.4f}'\n print(f' {metric:7}: {score:.4f} {result_str}')\n table[name][split][metric] = score\n 
else:\n table[name][split][metric] = expected[metric]\n\n print('')\n\n for metric in ['nDCG@10', 'R@100']:\n for split in ['dev', 'train']:\n print_results(metric, split)\n\n end = time.time()\n print(f'Total elapsed time: {end - start:.0f}s')\n", "path": "scripts/repro_matrix/run_all_miracl.py"}]}
| 2,479 | 582 |
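The fix for this row widens the set of conditions whose scores are allowed to drift slightly between machines (Intel vs. Apple silicon) before a check is downgraded from a failure to an "okish" warning. The helper below is an editorial restatement of that logic, not the code that was merged; the `2e-4` tolerance and the flaky condition names are taken from the diff above.

```python
import math

# (condition, split, metric) triples known to drift across hardware;
# a metric of None matches any metric for that condition and split.
FLAKY = {
    ("mdpr-tied-pft-msmarco.hi", "train", None),
    ("mdpr-tied-pft-msmarco-ft-all.ru", "dev", "nDCG@10"),
    ("bm25-mdpr-tied-pft-msmarco-hybrid.te", "train", "nDCG@10"),
    ("bm25-mdpr-tied-pft-msmarco-hybrid.zh", "dev", "nDCG@10"),
}


def result_status(name, split, metric, score, expected, abs_tol=2e-4):
    """Classify a regression score as 'ok', 'okish' (flaky), or 'fail'."""
    if math.isclose(score, expected):
        return "ok"
    flaky = (name, split, metric) in FLAKY or (name, split, None) in FLAKY
    if flaky and math.isclose(score, expected, abs_tol=abs_tol):
        return "okish"
    return "fail"


# The zh/dev case from the issue (0.5255 vs. an expected 0.5254) is treated as flaky:
print(result_status("bm25-mdpr-tied-pft-msmarco-hybrid.zh", "dev", "nDCG@10", 0.5255, 0.5254))
```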
gh_patches_debug_1861
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-690
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No reverse match for rest_framework namespace
For a very strange reason, the error shows up when accessing these URLs:
https://github.com/swcarpentry/amy/blob/develop/api/urls.py#L57
I wasn't able to get rid of it; it's not being used at all, so maybe it should be removed…?
</issue>
<code>
[start of api/urls.py]
1 from django.conf.urls import url, include
2 from rest_framework_nested import routers
3 from rest_framework.urlpatterns import format_suffix_patterns
4
5 from . import views
6
7 # new in Django 1.9: this defines a namespace for URLs; there's no need for
8 # `namespace='api'` in the include()
9 app_name = 'api'
10
11 # routers generate URLs for methods like `.list` or `.retrieve`
12 router = routers.SimpleRouter()
13 router.register('reports', views.ReportsViewSet, base_name='reports')
14 router.register('persons', views.PersonViewSet)
15 awards_router = routers.NestedSimpleRouter(router, 'persons', lookup='person')
16 awards_router.register('awards', views.AwardViewSet, base_name='person-awards')
17 person_task_router = routers.NestedSimpleRouter(router, 'persons',
18 lookup='person')
19 person_task_router.register('tasks', views.PersonTaskViewSet,
20 base_name='person-tasks')
21 router.register('events', views.EventViewSet)
22 tasks_router = routers.NestedSimpleRouter(router, 'events', lookup='event')
23 tasks_router.register('tasks', views.TaskViewSet, base_name='event-tasks')
24 todos_router = routers.NestedSimpleRouter(router, 'events', lookup='event')
25 todos_router.register('todos', views.TodoViewSet, base_name='event-todos')
26 router.register('hosts', views.HostViewSet)
27 router.register('airports', views.AirportViewSet)
28
29 urlpatterns = [
30 url('^$', views.ApiRoot.as_view(), name='root'),
31 # TODO: turn these export views into ViewSets and add them to the router
32 url('^export/badges/$',
33 views.ExportBadgesView.as_view(),
34 name='export-badges'),
35 url('^export/instructors/$',
36 views.ExportInstructorLocationsView.as_view(),
37 name='export-instructors'),
38 url('^export/members/$',
39 views.ExportMembersView.as_view(),
40 name='export-members'),
41 url('^events/published/$',
42 views.PublishedEvents.as_view(),
43 name='events-published'),
44 url('^todos/user/$',
45 views.UserTodoItems.as_view(),
46 name='user-todos'),
47
48 url('^', include(router.urls)),
49 url('^', include(awards_router.urls)),
50 url('^', include(person_task_router.urls)),
51 url('^', include(tasks_router.urls)),
52 url('^', include(todos_router.urls)),
53 ]
54
55 # for login-logout functionality
56 urlpatterns += [
57 url(r'^api-auth/',
58 include('rest_framework.urls', namespace='rest_framework')),
59 ]
60
61 urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format
62
[end of api/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/api/urls.py b/api/urls.py
--- a/api/urls.py
+++ b/api/urls.py
@@ -52,10 +52,4 @@
url('^', include(todos_router.urls)),
]
-# for login-logout functionality
-urlpatterns += [
- url(r'^api-auth/',
- include('rest_framework.urls', namespace='rest_framework')),
-]
-
urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format
|
{"golden_diff": "diff --git a/api/urls.py b/api/urls.py\n--- a/api/urls.py\n+++ b/api/urls.py\n@@ -52,10 +52,4 @@\n url('^', include(todos_router.urls)),\n ]\n \n-# for login-logout functionality\n-urlpatterns += [\n- url(r'^api-auth/',\n- include('rest_framework.urls', namespace='rest_framework')),\n-]\n-\n urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format\n", "issue": "No reverse match for rest_framework namespace\nThe error for a very strange reason shows when accessing these URLs:\nhttps://github.com/swcarpentry/amy/blob/develop/api/urls.py#L57\n\nI wasn't able to get rid of it; it's not being used at all, so maybe it should be removed\u2026?\n\n", "before_files": [{"content": "from django.conf.urls import url, include\nfrom rest_framework_nested import routers\nfrom rest_framework.urlpatterns import format_suffix_patterns\n\nfrom . import views\n\n# new in Django 1.9: this defines a namespace for URLs; there's no need for\n# `namespace='api'` in the include()\napp_name = 'api'\n\n# routers generate URLs for methods like `.list` or `.retrieve`\nrouter = routers.SimpleRouter()\nrouter.register('reports', views.ReportsViewSet, base_name='reports')\nrouter.register('persons', views.PersonViewSet)\nawards_router = routers.NestedSimpleRouter(router, 'persons', lookup='person')\nawards_router.register('awards', views.AwardViewSet, base_name='person-awards')\nperson_task_router = routers.NestedSimpleRouter(router, 'persons',\n lookup='person')\nperson_task_router.register('tasks', views.PersonTaskViewSet,\n base_name='person-tasks')\nrouter.register('events', views.EventViewSet)\ntasks_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntasks_router.register('tasks', views.TaskViewSet, base_name='event-tasks')\ntodos_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntodos_router.register('todos', views.TodoViewSet, base_name='event-todos')\nrouter.register('hosts', views.HostViewSet)\nrouter.register('airports', views.AirportViewSet)\n\nurlpatterns = [\n url('^$', views.ApiRoot.as_view(), name='root'),\n # TODO: turn these export views into ViewSets and add them to the router\n url('^export/badges/$',\n views.ExportBadgesView.as_view(),\n name='export-badges'),\n url('^export/instructors/$',\n views.ExportInstructorLocationsView.as_view(),\n name='export-instructors'),\n url('^export/members/$',\n views.ExportMembersView.as_view(),\n name='export-members'),\n url('^events/published/$',\n views.PublishedEvents.as_view(),\n name='events-published'),\n url('^todos/user/$',\n views.UserTodoItems.as_view(),\n name='user-todos'),\n\n url('^', include(router.urls)),\n url('^', include(awards_router.urls)),\n url('^', include(person_task_router.urls)),\n url('^', include(tasks_router.urls)),\n url('^', include(todos_router.urls)),\n]\n\n# for login-logout functionality\nurlpatterns += [\n url(r'^api-auth/',\n include('rest_framework.urls', namespace='rest_framework')),\n]\n\nurlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format\n", "path": "api/urls.py"}]}
| 1,260 | 104 |
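The accepted change here is pure removal: the `api-auth/` include is dropped from the namespaced `api/urls.py`, most likely because nesting the include under `app_name = 'api'` turns its URL names into `api:rest_framework:...`, while DRF reverses the plain `rest_framework:login`. If the browsable-API login/logout views are still wanted, one option is to register them at the project level instead. The module path `amy/urls.py` below is an assumption for illustration, not a file quoted from the repository.

```python
# amy/urls.py (project-level URLconf; path assumed for illustration)
from django.conf.urls import include, url

urlpatterns = [
    url(r'^api/', include('api.urls')),
    # DRF's login/logout for the browsable API, kept outside the `api`
    # namespace so that reversing `rest_framework:login` resolves.
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
]
```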
gh_patches_debug_8186
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-5117
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow to search for products by SKU in admin dashboard
### What I'm trying to achieve
I'm looking to find a product by its SKU
### Describe a proposed solution
Tweak search engine settings to return products with full/partial SKU match.
</issue>
<code>
[start of saleor/search/backends/postgresql_storefront.py]
1 from django.contrib.postgres.search import TrigramSimilarity
2 from django.db.models import Q
3
4 from ...product.models import Product
5
6
7 def search(phrase):
8 """Return matching products for storefront views.
9
10 Fuzzy storefront search that is resistant to small typing errors made
11 by user. Name is matched using trigram similarity, description uses
12 standard postgres full text search.
13
14 Args:
15 phrase (str): searched phrase
16
17 """
18 name_sim = TrigramSimilarity("name", phrase)
19 published = Q(is_published=True)
20 ft_in_description = Q(description__search=phrase)
21 name_similar = Q(name_sim__gt=0.2)
22 return Product.objects.annotate(name_sim=name_sim).filter(
23 (ft_in_description | name_similar) & published
24 )
25
[end of saleor/search/backends/postgresql_storefront.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/saleor/search/backends/postgresql_storefront.py b/saleor/search/backends/postgresql_storefront.py
--- a/saleor/search/backends/postgresql_storefront.py
+++ b/saleor/search/backends/postgresql_storefront.py
@@ -16,9 +16,9 @@
"""
name_sim = TrigramSimilarity("name", phrase)
- published = Q(is_published=True)
ft_in_description = Q(description__search=phrase)
+ ft_by_sku = Q(variants__sku__search=phrase)
name_similar = Q(name_sim__gt=0.2)
return Product.objects.annotate(name_sim=name_sim).filter(
- (ft_in_description | name_similar) & published
+ (ft_in_description | name_similar | ft_by_sku)
)
|
{"golden_diff": "diff --git a/saleor/search/backends/postgresql_storefront.py b/saleor/search/backends/postgresql_storefront.py\n--- a/saleor/search/backends/postgresql_storefront.py\n+++ b/saleor/search/backends/postgresql_storefront.py\n@@ -16,9 +16,9 @@\n \n \"\"\"\n name_sim = TrigramSimilarity(\"name\", phrase)\n- published = Q(is_published=True)\n ft_in_description = Q(description__search=phrase)\n+ ft_by_sku = Q(variants__sku__search=phrase)\n name_similar = Q(name_sim__gt=0.2)\n return Product.objects.annotate(name_sim=name_sim).filter(\n- (ft_in_description | name_similar) & published\n+ (ft_in_description | name_similar | ft_by_sku)\n )\n", "issue": "Allow to search for products by SKU in admin dashboard\n### What I'm trying to achieve\r\nI'm looking to find a product by its SKU\r\n\r\n### Describe a proposed solution\r\nTweak search engine settings to return products with full/partial SKU match.\r\n\r\n\n", "before_files": [{"content": "from django.contrib.postgres.search import TrigramSimilarity\nfrom django.db.models import Q\n\nfrom ...product.models import Product\n\n\ndef search(phrase):\n \"\"\"Return matching products for storefront views.\n\n Fuzzy storefront search that is resistant to small typing errors made\n by user. Name is matched using trigram similarity, description uses\n standard postgres full text search.\n\n Args:\n phrase (str): searched phrase\n\n \"\"\"\n name_sim = TrigramSimilarity(\"name\", phrase)\n published = Q(is_published=True)\n ft_in_description = Q(description__search=phrase)\n name_similar = Q(name_sim__gt=0.2)\n return Product.objects.annotate(name_sim=name_sim).filter(\n (ft_in_description | name_similar) & published\n )\n", "path": "saleor/search/backends/postgresql_storefront.py"}]}
| 809 | 179 |
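The change recorded here is a one-liner in spirit: the storefront search gains a full-text match on variant SKUs. Worth noting when reading the diff: the patch also drops the `is_published=True` filter that the previous version applied. The sketch below restates the patched function; the absolute import path `saleor.product.models` is assumed for illustration, since the original module uses a relative import.

```python
from django.contrib.postgres.search import TrigramSimilarity
from django.db.models import Q

from saleor.product.models import Product  # relative `...product.models` in the repo


def search(phrase):
    """Return storefront products matching by name, description, or variant SKU."""
    name_sim = TrigramSimilarity("name", phrase)
    ft_in_description = Q(description__search=phrase)
    ft_by_sku = Q(variants__sku__search=phrase)  # new: match products by variant SKU
    name_similar = Q(name_sim__gt=0.2)
    return Product.objects.annotate(name_sim=name_sim).filter(
        ft_in_description | name_similar | ft_by_sku
    )


# e.g. search("SKU-123-XL") now also surfaces products whose variant SKU matches.
```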
gh_patches_debug_17752
|
rasdani/github-patches
|
git_diff
|
nf-core__tools-1590
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lint warning on Nextflow minimum version badge
### Description of the bug
`nf-core lint` complains that the minimum version badge for Nextflow could not be found, even though it was present in the `README.md`.
It occurred after the `template-merge-2.4`
It appears to be a bug.
### Command used and terminal output
```console
(nextflow2) rnavar$ nf-core lint
,--./,-.
___ __ __ __ ___ /,-._.--~\
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/tools version 2.4.1 - https://nf-co.re
INFO Testing pipeline: . __init__.py:244
╭─ [!] 1 Pipeline Test Warning ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ readme: README did not have a Nextflow minimum version badge. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
### System information
_No response_
</issue>
<code>
[start of nf_core/lint/readme.py]
1 #!/usr/bin/env python
2
3 import os
4 import re
5
6
7 def readme(self):
8 """Repository ``README.md`` tests
9
10 The ``README.md`` files for a project are very important and must meet some requirements:
11
12 * Nextflow badge
13
14 * If no Nextflow badge is found, a warning is given
15 * If a badge is found but the version doesn't match the minimum version in the config file, the test fails
16 * Example badge code:
17
18 .. code-block:: md
19
20 [](https://www.nextflow.io/)
21
22 * Bioconda badge
23
24 * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required
25 * Required badge code:
26
27 .. code-block:: md
28
29 [](https://bioconda.github.io/)
30
31 .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.
32 """
33 passed = []
34 warned = []
35 failed = []
36
37 with open(os.path.join(self.wf_path, "README.md"), "r") as fh:
38 content = fh.read()
39
40 # Check that there is a readme badge showing the minimum required version of Nextflow
41 # [](https://www.nextflow.io/)
42 # and that it has the correct version
43 nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\?labelColor=000000\)\]\(https://www\.nextflow\.io/\)"
44 match = re.search(nf_badge_re, content)
45 if match:
46 nf_badge_version = match.group(1).strip("'\"")
47 try:
48 assert nf_badge_version == self.minNextflowVersion
49 except (AssertionError, KeyError):
50 failed.append(
51 "README Nextflow minimum version badge does not match config. Badge: `{}`, Config: `{}`".format(
52 nf_badge_version, self.minNextflowVersion
53 )
54 )
55 else:
56 passed.append(
57 "README Nextflow minimum version badge matched config. Badge: `{}`, Config: `{}`".format(
58 nf_badge_version, self.minNextflowVersion
59 )
60 )
61 else:
62 warned.append("README did not have a Nextflow minimum version badge.")
63
64 # Check that the minimum version mentioned in the quick start section is consistent
65 # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)"
66 nf_version_re = r"1\.\s*Install\s*\[`Nextflow`\]\(https://www.nextflow.io/docs/latest/getstarted.html#installation\)\s*\(`>=(\d*\.\d*\.\d*)`\)"
67 match = re.search(nf_version_re, content)
68 if match:
69 nf_quickstart_version = match.group(1)
70 try:
71 assert nf_quickstart_version == self.minNextflowVersion
72 except (AssertionError, KeyError):
73 failed.append(
74 f"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`"
75 )
76 else:
77 passed.append(
78 f"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`"
79 )
80 else:
81 warned.append("README did not have a Nextflow minimum version mentioned in Quick Start section.")
82
83 return {"passed": passed, "warned": warned, "failed": failed}
84
[end of nf_core/lint/readme.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py
--- a/nf_core/lint/readme.py
+++ b/nf_core/lint/readme.py
@@ -38,9 +38,9 @@
content = fh.read()
# Check that there is a readme badge showing the minimum required version of Nextflow
- # [](https://www.nextflow.io/)
+ # [](https://www.nextflow.io/)
# and that it has the correct version
- nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\?labelColor=000000\)\]\(https://www\.nextflow\.io/\)"
+ nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
match = re.search(nf_badge_re, content)
if match:
nf_badge_version = match.group(1).strip("'\"")
|
{"golden_diff": "diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py\n--- a/nf_core/lint/readme.py\n+++ b/nf_core/lint/readme.py\n@@ -38,9 +38,9 @@\n content = fh.read()\n \n # Check that there is a readme badge showing the minimum required version of Nextflow\n- # [](https://www.nextflow.io/)\n+ # [](https://www.nextflow.io/)\n # and that it has the correct version\n- nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\?labelColor=000000\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n+ nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n", "issue": "Lint warning on Nextflow minimum version badge\n### Description of the bug\n\n`nf-core lint` complains that the minimum version badge for Nextflow could not found, however it was present in the `README.md`.\r\nIt occurred after the `template-merge-2.4`\r\nIt appears to be a bug.\r\n\r\n\n\n### Command used and terminal output\n\n```console\n(nextflow2) rnavar$ nf-core lint\r\n\r\n\r\n\r\n ,--./,-.\r\n\r\n ___ __ __ __ ___ /,-._.--~\\\r\n\r\n |\\ | |__ __ / ` / \\ |__) |__ } {\r\n\r\n | \\| | \\__, \\__/ | \\ |___ \\`-._,-`-,\r\n\r\n `._,._,'\r\n\r\n\r\n\r\n nf-core/tools version 2.4.1 - https://nf-co.re\r\n\r\n\r\n\r\n\r\n\r\nINFO Testing pipeline: . __init__.py:244\r\n\r\n\r\n\r\n\u256d\u2500 [!] 1 Pipeline Test Warning \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\r\n\u2502 \u2502\r\n\r\n\u2502 readme: README did not have a Nextflow minimum version badge. 
\u2502\r\n\r\n\u2502 \u2502\r\n\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n```\n\n\n### System information\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport re\n\n\ndef readme(self):\n \"\"\"Repository ``README.md`` tests\n\n The ``README.md`` files for a project are very important and must meet some requirements:\n\n * Nextflow badge\n\n * If no Nextflow badge is found, a warning is given\n * If a badge is found but the version doesn't match the minimum version in the config file, the test fails\n * Example badge code:\n\n .. code-block:: md\n\n [](https://www.nextflow.io/)\n\n * Bioconda badge\n\n * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required\n * Required badge code:\n\n .. code-block:: md\n\n [](https://bioconda.github.io/)\n\n .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.\n \"\"\"\n passed = []\n warned = []\n failed = []\n\n with open(os.path.join(self.wf_path, \"README.md\"), \"r\") as fh:\n content = fh.read()\n\n # Check that there is a readme badge showing the minimum required version of Nextflow\n # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\?labelColor=000000\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n try:\n assert nf_badge_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n \"README Nextflow minimum version badge does not match config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n passed.append(\n \"README Nextflow minimum version badge matched config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n\n # Check that the minimum version mentioned in the quick start section is consistent\n # Looking for: \"1. 
Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n nf_quickstart_version = match.group(1)\n try:\n assert nf_quickstart_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version mentioned in Quick Start section.\")\n\n return {\"passed\": passed, \"warned\": warned, \"failed\": failed}\n", "path": "nf_core/lint/readme.py"}]}
| 1,943 | 392 |
gh_patches_debug_2980
|
rasdani/github-patches
|
git_diff
|
getmoto__moto-2114
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lambda publish_version returns wrong status code
In boto3, when Lambda `publish_version` succeeds, boto3 returns HTTP status code 201.
But moto returns HTTP status code 200.
moto and boto version
```
boto3 1.9.71
botocore 1.12.71
moto 1.3.7
```
</issue>
<code>
[start of moto/awslambda/responses.py]
1 from __future__ import unicode_literals
2
3 import json
4
5 try:
6 from urllib import unquote
7 except ImportError:
8 from urllib.parse import unquote
9
10 from moto.core.utils import amz_crc32, amzn_request_id, path_url
11 from moto.core.responses import BaseResponse
12 from .models import lambda_backends
13
14
15 class LambdaResponse(BaseResponse):
16 @property
17 def json_body(self):
18 """
19 :return: JSON
20 :rtype: dict
21 """
22 return json.loads(self.body)
23
24 @property
25 def lambda_backend(self):
26 """
27 Get backend
28 :return: Lambda Backend
29 :rtype: moto.awslambda.models.LambdaBackend
30 """
31 return lambda_backends[self.region]
32
33 def root(self, request, full_url, headers):
34 self.setup_class(request, full_url, headers)
35 if request.method == 'GET':
36 return self._list_functions(request, full_url, headers)
37 elif request.method == 'POST':
38 return self._create_function(request, full_url, headers)
39 else:
40 raise ValueError("Cannot handle request")
41
42 def function(self, request, full_url, headers):
43 self.setup_class(request, full_url, headers)
44 if request.method == 'GET':
45 return self._get_function(request, full_url, headers)
46 elif request.method == 'DELETE':
47 return self._delete_function(request, full_url, headers)
48 else:
49 raise ValueError("Cannot handle request")
50
51 def versions(self, request, full_url, headers):
52 self.setup_class(request, full_url, headers)
53 if request.method == 'GET':
54 # This is ListVersionByFunction
55
56 path = request.path if hasattr(request, 'path') else path_url(request.url)
57 function_name = path.split('/')[-2]
58 return self._list_versions_by_function(function_name)
59
60 elif request.method == 'POST':
61 return self._publish_function(request, full_url, headers)
62 else:
63 raise ValueError("Cannot handle request")
64
65 @amz_crc32
66 @amzn_request_id
67 def invoke(self, request, full_url, headers):
68 self.setup_class(request, full_url, headers)
69 if request.method == 'POST':
70 return self._invoke(request, full_url)
71 else:
72 raise ValueError("Cannot handle request")
73
74 @amz_crc32
75 @amzn_request_id
76 def invoke_async(self, request, full_url, headers):
77 self.setup_class(request, full_url, headers)
78 if request.method == 'POST':
79 return self._invoke_async(request, full_url)
80 else:
81 raise ValueError("Cannot handle request")
82
83 def tag(self, request, full_url, headers):
84 self.setup_class(request, full_url, headers)
85 if request.method == 'GET':
86 return self._list_tags(request, full_url)
87 elif request.method == 'POST':
88 return self._tag_resource(request, full_url)
89 elif request.method == 'DELETE':
90 return self._untag_resource(request, full_url)
91 else:
92 raise ValueError("Cannot handle {0} request".format(request.method))
93
94 def policy(self, request, full_url, headers):
95 if request.method == 'GET':
96 return self._get_policy(request, full_url, headers)
97 if request.method == 'POST':
98 return self._add_policy(request, full_url, headers)
99
100 def _add_policy(self, request, full_url, headers):
101 path = request.path if hasattr(request, 'path') else path_url(request.url)
102 function_name = path.split('/')[-2]
103 if self.lambda_backend.get_function(function_name):
104 policy = request.body.decode('utf8')
105 self.lambda_backend.add_policy(function_name, policy)
106 return 200, {}, json.dumps(dict(Statement=policy))
107 else:
108 return 404, {}, "{}"
109
110 def _get_policy(self, request, full_url, headers):
111 path = request.path if hasattr(request, 'path') else path_url(request.url)
112 function_name = path.split('/')[-2]
113 if self.lambda_backend.get_function(function_name):
114 lambda_function = self.lambda_backend.get_function(function_name)
115 return 200, {}, json.dumps(dict(Policy="{\"Statement\":[" + lambda_function.policy + "]}"))
116 else:
117 return 404, {}, "{}"
118
119 def _invoke(self, request, full_url):
120 response_headers = {}
121
122 function_name = self.path.rsplit('/', 2)[-2]
123 qualifier = self._get_param('qualifier')
124
125 fn = self.lambda_backend.get_function(function_name, qualifier)
126 if fn:
127 payload = fn.invoke(self.body, self.headers, response_headers)
128 response_headers['Content-Length'] = str(len(payload))
129 return 202, response_headers, payload
130 else:
131 return 404, response_headers, "{}"
132
133 def _invoke_async(self, request, full_url):
134 response_headers = {}
135
136 function_name = self.path.rsplit('/', 3)[-3]
137
138 fn = self.lambda_backend.get_function(function_name, None)
139 if fn:
140 payload = fn.invoke(self.body, self.headers, response_headers)
141 response_headers['Content-Length'] = str(len(payload))
142 return 202, response_headers, payload
143 else:
144 return 404, response_headers, "{}"
145
146 def _list_functions(self, request, full_url, headers):
147 result = {
148 'Functions': []
149 }
150
151 for fn in self.lambda_backend.list_functions():
152 json_data = fn.get_configuration()
153
154 result['Functions'].append(json_data)
155
156 return 200, {}, json.dumps(result)
157
158 def _list_versions_by_function(self, function_name):
159 result = {
160 'Versions': []
161 }
162
163 functions = self.lambda_backend.list_versions_by_function(function_name)
164 if functions:
165 for fn in functions:
166 json_data = fn.get_configuration()
167 result['Versions'].append(json_data)
168
169 return 200, {}, json.dumps(result)
170
171 def _create_function(self, request, full_url, headers):
172 try:
173 fn = self.lambda_backend.create_function(self.json_body)
174 except ValueError as e:
175 return 400, {}, json.dumps({"Error": {"Code": e.args[0], "Message": e.args[1]}})
176 else:
177 config = fn.get_configuration()
178 return 201, {}, json.dumps(config)
179
180 def _publish_function(self, request, full_url, headers):
181 function_name = self.path.rsplit('/', 2)[-2]
182
183 fn = self.lambda_backend.publish_function(function_name)
184 if fn:
185 config = fn.get_configuration()
186 return 200, {}, json.dumps(config)
187 else:
188 return 404, {}, "{}"
189
190 def _delete_function(self, request, full_url, headers):
191 function_name = self.path.rsplit('/', 1)[-1]
192 qualifier = self._get_param('Qualifier', None)
193
194 if self.lambda_backend.delete_function(function_name, qualifier):
195 return 204, {}, ""
196 else:
197 return 404, {}, "{}"
198
199 def _get_function(self, request, full_url, headers):
200 function_name = self.path.rsplit('/', 1)[-1]
201 qualifier = self._get_param('Qualifier', None)
202
203 fn = self.lambda_backend.get_function(function_name, qualifier)
204
205 if fn:
206 code = fn.get_code()
207
208 return 200, {}, json.dumps(code)
209 else:
210 return 404, {}, "{}"
211
212 def _get_aws_region(self, full_url):
213 region = self.region_regex.search(full_url)
214 if region:
215 return region.group(1)
216 else:
217 return self.default_region
218
219 def _list_tags(self, request, full_url):
220 function_arn = unquote(self.path.rsplit('/', 1)[-1])
221
222 fn = self.lambda_backend.get_function_by_arn(function_arn)
223 if fn:
224 return 200, {}, json.dumps({'Tags': fn.tags})
225 else:
226 return 404, {}, "{}"
227
228 def _tag_resource(self, request, full_url):
229 function_arn = unquote(self.path.rsplit('/', 1)[-1])
230
231 if self.lambda_backend.tag_resource(function_arn, self.json_body['Tags']):
232 return 200, {}, "{}"
233 else:
234 return 404, {}, "{}"
235
236 def _untag_resource(self, request, full_url):
237 function_arn = unquote(self.path.rsplit('/', 1)[-1])
238 tag_keys = self.querystring['tagKeys']
239
240 if self.lambda_backend.untag_resource(function_arn, tag_keys):
241 return 204, {}, "{}"
242 else:
243 return 404, {}, "{}"
244
[end of moto/awslambda/responses.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/moto/awslambda/responses.py b/moto/awslambda/responses.py
--- a/moto/awslambda/responses.py
+++ b/moto/awslambda/responses.py
@@ -183,7 +183,7 @@
fn = self.lambda_backend.publish_function(function_name)
if fn:
config = fn.get_configuration()
- return 200, {}, json.dumps(config)
+ return 201, {}, json.dumps(config)
else:
return 404, {}, "{}"
|
{"golden_diff": "diff --git a/moto/awslambda/responses.py b/moto/awslambda/responses.py\n--- a/moto/awslambda/responses.py\n+++ b/moto/awslambda/responses.py\n@@ -183,7 +183,7 @@\n fn = self.lambda_backend.publish_function(function_name)\n if fn:\n config = fn.get_configuration()\n- return 200, {}, json.dumps(config)\n+ return 201, {}, json.dumps(config)\n else:\n return 404, {}, \"{}\"\n", "issue": "Lambda publish_version returns wrong status code\nIn boto3,when lambda publish_version is success,boto3 returns Http status code 201.\r\nBut, moto returns Http status code 200\r\n\r\n\r\nmoto and boto version\r\n```\r\nboto3 1.9.71\r\nbotocore 1.12.71\r\nmoto 1.3.7\r\n```\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport json\n\ntry:\n from urllib import unquote\nexcept ImportError:\n from urllib.parse import unquote\n\nfrom moto.core.utils import amz_crc32, amzn_request_id, path_url\nfrom moto.core.responses import BaseResponse\nfrom .models import lambda_backends\n\n\nclass LambdaResponse(BaseResponse):\n @property\n def json_body(self):\n \"\"\"\n :return: JSON\n :rtype: dict\n \"\"\"\n return json.loads(self.body)\n\n @property\n def lambda_backend(self):\n \"\"\"\n Get backend\n :return: Lambda Backend\n :rtype: moto.awslambda.models.LambdaBackend\n \"\"\"\n return lambda_backends[self.region]\n\n def root(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._list_functions(request, full_url, headers)\n elif request.method == 'POST':\n return self._create_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def function(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._get_function(request, full_url, headers)\n elif request.method == 'DELETE':\n return self._delete_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def versions(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n # This is ListVersionByFunction\n\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n return self._list_versions_by_function(function_name)\n\n elif request.method == 'POST':\n return self._publish_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n @amz_crc32\n @amzn_request_id\n def invoke(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'POST':\n return self._invoke(request, full_url)\n else:\n raise ValueError(\"Cannot handle request\")\n\n @amz_crc32\n @amzn_request_id\n def invoke_async(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'POST':\n return self._invoke_async(request, full_url)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def tag(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._list_tags(request, full_url)\n elif request.method == 'POST':\n return self._tag_resource(request, full_url)\n elif request.method == 'DELETE':\n return self._untag_resource(request, full_url)\n else:\n raise ValueError(\"Cannot handle {0} request\".format(request.method))\n\n def policy(self, request, full_url, headers):\n if request.method == 'GET':\n return self._get_policy(request, 
full_url, headers)\n if request.method == 'POST':\n return self._add_policy(request, full_url, headers)\n\n def _add_policy(self, request, full_url, headers):\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n if self.lambda_backend.get_function(function_name):\n policy = request.body.decode('utf8')\n self.lambda_backend.add_policy(function_name, policy)\n return 200, {}, json.dumps(dict(Statement=policy))\n else:\n return 404, {}, \"{}\"\n\n def _get_policy(self, request, full_url, headers):\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n if self.lambda_backend.get_function(function_name):\n lambda_function = self.lambda_backend.get_function(function_name)\n return 200, {}, json.dumps(dict(Policy=\"{\\\"Statement\\\":[\" + lambda_function.policy + \"]}\"))\n else:\n return 404, {}, \"{}\"\n\n def _invoke(self, request, full_url):\n response_headers = {}\n\n function_name = self.path.rsplit('/', 2)[-2]\n qualifier = self._get_param('qualifier')\n\n fn = self.lambda_backend.get_function(function_name, qualifier)\n if fn:\n payload = fn.invoke(self.body, self.headers, response_headers)\n response_headers['Content-Length'] = str(len(payload))\n return 202, response_headers, payload\n else:\n return 404, response_headers, \"{}\"\n\n def _invoke_async(self, request, full_url):\n response_headers = {}\n\n function_name = self.path.rsplit('/', 3)[-3]\n\n fn = self.lambda_backend.get_function(function_name, None)\n if fn:\n payload = fn.invoke(self.body, self.headers, response_headers)\n response_headers['Content-Length'] = str(len(payload))\n return 202, response_headers, payload\n else:\n return 404, response_headers, \"{}\"\n\n def _list_functions(self, request, full_url, headers):\n result = {\n 'Functions': []\n }\n\n for fn in self.lambda_backend.list_functions():\n json_data = fn.get_configuration()\n\n result['Functions'].append(json_data)\n\n return 200, {}, json.dumps(result)\n\n def _list_versions_by_function(self, function_name):\n result = {\n 'Versions': []\n }\n\n functions = self.lambda_backend.list_versions_by_function(function_name)\n if functions:\n for fn in functions:\n json_data = fn.get_configuration()\n result['Versions'].append(json_data)\n\n return 200, {}, json.dumps(result)\n\n def _create_function(self, request, full_url, headers):\n try:\n fn = self.lambda_backend.create_function(self.json_body)\n except ValueError as e:\n return 400, {}, json.dumps({\"Error\": {\"Code\": e.args[0], \"Message\": e.args[1]}})\n else:\n config = fn.get_configuration()\n return 201, {}, json.dumps(config)\n\n def _publish_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 2)[-2]\n\n fn = self.lambda_backend.publish_function(function_name)\n if fn:\n config = fn.get_configuration()\n return 200, {}, json.dumps(config)\n else:\n return 404, {}, \"{}\"\n\n def _delete_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 1)[-1]\n qualifier = self._get_param('Qualifier', None)\n\n if self.lambda_backend.delete_function(function_name, qualifier):\n return 204, {}, \"\"\n else:\n return 404, {}, \"{}\"\n\n def _get_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 1)[-1]\n qualifier = self._get_param('Qualifier', None)\n\n fn = self.lambda_backend.get_function(function_name, qualifier)\n\n if fn:\n code = fn.get_code()\n\n return 200, {}, json.dumps(code)\n else:\n 
return 404, {}, \"{}\"\n\n def _get_aws_region(self, full_url):\n region = self.region_regex.search(full_url)\n if region:\n return region.group(1)\n else:\n return self.default_region\n\n def _list_tags(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n\n fn = self.lambda_backend.get_function_by_arn(function_arn)\n if fn:\n return 200, {}, json.dumps({'Tags': fn.tags})\n else:\n return 404, {}, \"{}\"\n\n def _tag_resource(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n\n if self.lambda_backend.tag_resource(function_arn, self.json_body['Tags']):\n return 200, {}, \"{}\"\n else:\n return 404, {}, \"{}\"\n\n def _untag_resource(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n tag_keys = self.querystring['tagKeys']\n\n if self.lambda_backend.untag_resource(function_arn, tag_keys):\n return 204, {}, \"{}\"\n else:\n return 404, {}, \"{}\"\n", "path": "moto/awslambda/responses.py"}]}
| 3,149 | 123 |
gh_patches_debug_4184
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-2995
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failed build of docs in CI
```
#!/bin/bash -eo pipefail
cd docs
make html
npm install && touch node_modules
npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.1.2 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN [email protected] No repository field.
added 216 packages from 514 contributors and audited 325 packages in 4.188s
14 packages are looking for funding
run `npm fund` for details
found 2 low severity vulnerabilities
run `npm audit fix` to fix them, or `npm audit` for details
npm run rest-api
> [email protected] rest-api /home/circleci/project/docs
> bootprint openapi ./rest-api.yml source/_static/rest-api
[
'source/_static/rest-api/index.html',
'source/_static/rest-api/main.css',
'source/_static/rest-api/main.css.map'
]
sphinx-build -b html -d build/doctrees "-W" source build/html
Running Sphinx v2.4.4
Adding copy buttons to code blocks...
making output directory... done
/home/circleci/.local/lib/python3.6/site-packages/sphinx/util/compat.py:32: RemovedInSphinx30Warning: The config variable "source_parsers" is deprecated. Please update your extension for the parser and remove the setting.
RemovedInSphinx30Warning)
/home/circleci/.local/lib/python3.6/site-packages/sphinx/util/compat.py:36: RemovedInSphinx30Warning: app.add_source_parser() does not support suffix argument. Use app.add_source_suffix() instead.
app.add_source_parser(suffix, parser)
Theme error:
no theme named 'pandas_sphinx_theme' found (missing theme.conf?)
Makefile:64: recipe for target 'html' failed
make: *** [html] Error 2
```
FYI @choldgraf @betatim
</issue>
<code>
[start of docs/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 import os
4 import shlex
5 import sys
6
7 # Set paths
8 sys.path.insert(0, os.path.abspath('.'))
9
10 # -- General configuration ------------------------------------------------
11
12 # Minimal Sphinx version
13 needs_sphinx = '1.4'
14
15 # Sphinx extension modules
16 extensions = [
17 'sphinx.ext.autodoc',
18 'sphinx.ext.intersphinx',
19 'sphinx.ext.napoleon',
20 'autodoc_traits',
21 'sphinx_copybutton',
22 'sphinx-jsonschema',
23 ]
24
25 templates_path = ['_templates']
26
27 # The master toctree document.
28 master_doc = 'index'
29
30 # General information about the project.
31 project = u'JupyterHub'
32 copyright = u'2016, Project Jupyter team'
33 author = u'Project Jupyter team'
34
35 # Autopopulate version
36 from os.path import dirname
37
38 docs = dirname(dirname(__file__))
39 root = dirname(docs)
40 sys.path.insert(0, root)
41
42 import jupyterhub
43
44 # The short X.Y version.
45 version = '%i.%i' % jupyterhub.version_info[:2]
46 # The full version, including alpha/beta/rc tags.
47 release = jupyterhub.__version__
48
49 language = None
50 exclude_patterns = []
51 pygments_style = 'sphinx'
52 todo_include_todos = False
53
54 # Set the default role so we can use `foo` instead of ``foo``
55 default_role = 'literal'
56
57 # -- Source -------------------------------------------------------------
58
59 import recommonmark
60 from recommonmark.transform import AutoStructify
61
62
63 def setup(app):
64 app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)
65 app.add_stylesheet('custom.css')
66 app.add_transform(AutoStructify)
67
68
69 source_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}
70
71 source_suffix = ['.rst', '.md']
72 # source_encoding = 'utf-8-sig'
73
74 # -- Options for HTML output ----------------------------------------------
75
76 # The theme to use for HTML and HTML Help pages.
77 html_theme = 'pandas_sphinx_theme'
78
79 html_logo = '_static/images/logo/logo.png'
80 html_favicon = '_static/images/logo/favicon.ico'
81
82 # Paths that contain custom static files (such as style sheets)
83 html_static_path = ['_static']
84
85 htmlhelp_basename = 'JupyterHubdoc'
86
87 # -- Options for LaTeX output ---------------------------------------------
88
89 latex_elements = {
90 # 'papersize': 'letterpaper',
91 # 'pointsize': '10pt',
92 # 'preamble': '',
93 # 'figure_align': 'htbp',
94 }
95
96 # Grouping the document tree into LaTeX files. List of tuples
97 # (source start file, target name, title,
98 # author, documentclass [howto, manual, or own class]).
99 latex_documents = [
100 (
101 master_doc,
102 'JupyterHub.tex',
103 u'JupyterHub Documentation',
104 u'Project Jupyter team',
105 'manual',
106 )
107 ]
108
109 # latex_logo = None
110 # latex_use_parts = False
111 # latex_show_pagerefs = False
112 # latex_show_urls = False
113 # latex_appendices = []
114 # latex_domain_indices = True
115
116
117 # -- manual page output -------------------------------------------------
118
119 # One entry per manual page. List of tuples
120 # (source start file, name, description, authors, manual section).
121 man_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]
122
123 # man_show_urls = False
124
125
126 # -- Texinfo output -----------------------------------------------------
127
128 # Grouping the document tree into Texinfo files. List of tuples
129 # (source start file, target name, title, author,
130 # dir menu entry, description, category)
131 texinfo_documents = [
132 (
133 master_doc,
134 'JupyterHub',
135 u'JupyterHub Documentation',
136 author,
137 'JupyterHub',
138 'One line description of project.',
139 'Miscellaneous',
140 )
141 ]
142
143 # texinfo_appendices = []
144 # texinfo_domain_indices = True
145 # texinfo_show_urls = 'footnote'
146 # texinfo_no_detailmenu = False
147
148
149 # -- Epub output --------------------------------------------------------
150
151 # Bibliographic Dublin Core info.
152 epub_title = project
153 epub_author = author
154 epub_publisher = author
155 epub_copyright = copyright
156
157 # A list of files that should not be packed into the epub file.
158 epub_exclude_files = ['search.html']
159
160 # -- Intersphinx ----------------------------------------------------------
161
162 intersphinx_mapping = {'https://docs.python.org/3/': None}
163
164 # -- Read The Docs --------------------------------------------------------
165
166 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
167 if on_rtd:
168 # readthedocs.org uses their theme by default, so no need to specify it
169 # build rest-api, since RTD doesn't run make
170 from subprocess import check_call as sh
171
172 sh(['make', 'rest-api'], cwd=docs)
173
174 # -- Spell checking -------------------------------------------------------
175
176 try:
177 import sphinxcontrib.spelling
178 except ImportError:
179 pass
180 else:
181 extensions.append("sphinxcontrib.spelling")
182
183 spelling_word_list_filename = 'spelling_wordlist.txt'
184
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -74,7 +74,7 @@
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages.
-html_theme = 'pandas_sphinx_theme'
+html_theme = 'pydata_sphinx_theme'
html_logo = '_static/images/logo/logo.png'
html_favicon = '_static/images/logo/favicon.ico'
|
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -74,7 +74,7 @@\n # -- Options for HTML output ----------------------------------------------\n \n # The theme to use for HTML and HTML Help pages.\n-html_theme = 'pandas_sphinx_theme'\n+html_theme = 'pydata_sphinx_theme'\n \n html_logo = '_static/images/logo/logo.png'\n html_favicon = '_static/images/logo/favicon.ico'\n", "issue": "Failed build of docs in CI\n```\r\n#!/bin/bash -eo pipefail\r\ncd docs\r\nmake html\r\nnpm install && touch node_modules\r\nnpm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142\r\nnpm notice created a lockfile as package-lock.json. You should commit this file.\r\nnpm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.1.2 (node_modules/chokidar/node_modules/fsevents):\r\nnpm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {\"os\":\"darwin\",\"arch\":\"any\"} (current: {\"os\":\"linux\",\"arch\":\"x64\"})\r\nnpm WARN [email protected] No repository field.\r\n\r\nadded 216 packages from 514 contributors and audited 325 packages in 4.188s\r\n\r\n14 packages are looking for funding\r\n run `npm fund` for details\r\n\r\nfound 2 low severity vulnerabilities\r\n run `npm audit fix` to fix them, or `npm audit` for details\r\nnpm run rest-api\r\n\r\n> [email protected] rest-api /home/circleci/project/docs\r\n> bootprint openapi ./rest-api.yml source/_static/rest-api\r\n\r\n[\r\n 'source/_static/rest-api/index.html',\r\n 'source/_static/rest-api/main.css',\r\n 'source/_static/rest-api/main.css.map'\r\n]\r\nsphinx-build -b html -d build/doctrees \"-W\" source build/html\r\nRunning Sphinx v2.4.4\r\nAdding copy buttons to code blocks...\r\nmaking output directory... done\r\n/home/circleci/.local/lib/python3.6/site-packages/sphinx/util/compat.py:32: RemovedInSphinx30Warning: The config variable \"source_parsers\" is deprecated. Please update your extension for the parser and remove the setting.\r\n RemovedInSphinx30Warning)\r\n/home/circleci/.local/lib/python3.6/site-packages/sphinx/util/compat.py:36: RemovedInSphinx30Warning: app.add_source_parser() does not support suffix argument. 
Use app.add_source_suffix() instead.\r\n app.add_source_parser(suffix, parser)\r\n\r\nTheme error:\r\nno theme named 'pandas_sphinx_theme' found (missing theme.conf?)\r\nMakefile:64: recipe for target 'html' failed\r\nmake: *** [html] Error 2\r\n```\r\n\r\nFYI @choldgraf @betatim\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\nimport os\nimport shlex\nimport sys\n\n# Set paths\nsys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# Minimal Sphinx version\nneeds_sphinx = '1.4'\n\n# Sphinx extension modules\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.napoleon',\n 'autodoc_traits',\n 'sphinx_copybutton',\n 'sphinx-jsonschema',\n]\n\ntemplates_path = ['_templates']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'JupyterHub'\ncopyright = u'2016, Project Jupyter team'\nauthor = u'Project Jupyter team'\n\n# Autopopulate version\nfrom os.path import dirname\n\ndocs = dirname(dirname(__file__))\nroot = dirname(docs)\nsys.path.insert(0, root)\n\nimport jupyterhub\n\n# The short X.Y version.\nversion = '%i.%i' % jupyterhub.version_info[:2]\n# The full version, including alpha/beta/rc tags.\nrelease = jupyterhub.__version__\n\nlanguage = None\nexclude_patterns = []\npygments_style = 'sphinx'\ntodo_include_todos = False\n\n# Set the default role so we can use `foo` instead of ``foo``\ndefault_role = 'literal'\n\n# -- Source -------------------------------------------------------------\n\nimport recommonmark\nfrom recommonmark.transform import AutoStructify\n\n\ndef setup(app):\n app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)\n app.add_stylesheet('custom.css')\n app.add_transform(AutoStructify)\n\n\nsource_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}\n\nsource_suffix = ['.rst', '.md']\n# source_encoding = 'utf-8-sig'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = 'pandas_sphinx_theme'\n\nhtml_logo = '_static/images/logo/logo.png'\nhtml_favicon = '_static/images/logo/favicon.ico'\n\n# Paths that contain custom static files (such as style sheets)\nhtml_static_path = ['_static']\n\nhtmlhelp_basename = 'JupyterHubdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # 'papersize': 'letterpaper',\n # 'pointsize': '10pt',\n # 'preamble': '',\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n 'JupyterHub.tex',\n u'JupyterHub Documentation',\n u'Project Jupyter team',\n 'manual',\n )\n]\n\n# latex_logo = None\n# latex_use_parts = False\n# latex_show_pagerefs = False\n# latex_show_urls = False\n# latex_appendices = []\n# latex_domain_indices = True\n\n\n# -- manual page output -------------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]\n\n# man_show_urls = False\n\n\n# -- Texinfo output -----------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n 'JupyterHub',\n u'JupyterHub Documentation',\n author,\n 'JupyterHub',\n 'One line description of project.',\n 'Miscellaneous',\n )\n]\n\n# texinfo_appendices = []\n# texinfo_domain_indices = True\n# texinfo_show_urls = 'footnote'\n# texinfo_no_detailmenu = False\n\n\n# -- Epub output --------------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n# -- Intersphinx ----------------------------------------------------------\n\nintersphinx_mapping = {'https://docs.python.org/3/': None}\n\n# -- Read The Docs --------------------------------------------------------\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif on_rtd:\n # readthedocs.org uses their theme by default, so no need to specify it\n # build rest-api, since RTD doesn't run make\n from subprocess import check_call as sh\n\n sh(['make', 'rest-api'], cwd=docs)\n\n# -- Spell checking -------------------------------------------------------\n\ntry:\n import sphinxcontrib.spelling\nexcept ImportError:\n pass\nelse:\n extensions.append(\"sphinxcontrib.spelling\")\n\nspelling_word_list_filename = 'spelling_wordlist.txt'\n", "path": "docs/source/conf.py"}]}
| 2,633 | 104 |
gh_patches_debug_27228
|
rasdani/github-patches
|
git_diff
|
microsoft__AzureTRE-176
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Standardize TRE identifiers
## Description
As a TRE developer
I want naming of identifiers to be simple and standardized across the TRE
So it will as intuitive as possible
Currently we have Core ID, TRE ID, and resource_name_prefix, all of which are unique IDs for a TRE instance. ([Ref to code](https://github.com/microsoft/AzureTRE/blob/3cc8e14c6a16d5bb940f259dd5cb257e735e448b/templates/core/terraform/main.tf#L17))
They are used to ensure no clashes between names, but having a single identifier is sufficient.
### A simplified solution
When creating a TRE instance, a unique identifier is needed to make sure no clashes occur. That identifier should be named TRE_ID and can be up to 10 chars long (alphanumeric, underscore, and hyphen). If the Cloud Administrator wants to use a specific naming convention, e.g. one that includes the environment, the Cloud Administrator can do so.
Examples of a TRE_ID:
- mytre
- msfttre-dev
- tre123
Hence the TRE_ID is a unique identifier for the TRE instance, replacing the Core ID, which consisted of TRE ID + resource_name_prefix.
## Acceptance criteria
- [x] TRE provisioning script uses the TRE ID as the TRE instance name, hence creates the cross-cutting services in a resource group with the name of the TRE ID, e.g. mytre
- [x] TRE provisioning script does not require an environment parameter
- [x] Workspace bundle uses TRE_ID (not Core ID as now) as the identifier for the TRE instance
</issue>
<code>
[start of management_api_app/core/config.py]
1 from starlette.config import Config
2
3
4 config = Config(".env")
5
6 # API settings
7 API_PREFIX = "/api"
8 PROJECT_NAME: str = config("PROJECT_NAME", default="Azure TRE API")
9 DEBUG: bool = config("DEBUG", cast=bool, default=False)
10 VERSION = "0.0.0"
11
12 # Resource Info
13 RESOURCE_LOCATION: str = config("RESOURCE_LOCATION", default="")
14 CORE_ID: str = config("CORE_ID", default="")
15
16 # State store configuration
17 STATE_STORE_ENDPOINT: str = config("STATE_STORE_ENDPOINT", default="") # Cosmos DB endpoint
18 STATE_STORE_KEY: str = config("STATE_STORE_KEY", default="") # Cosmos DB access key
19 STATE_STORE_DATABASE = "AzureTRE"
20 STATE_STORE_RESOURCES_CONTAINER = "Resources"
21 STATE_STORE_BUNDLE_SPECS_CONTAINER = "ResourceSpecs"
22
23 # Service bus configuration
24 SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE: str = config("SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE", default="")
25 SERVICE_BUS_RESOURCE_REQUEST_QUEUE: str = config("SERVICE_BUS_RESOURCE_REQUEST_QUEUE", default="")
26
[end of management_api_app/core/config.py]
[start of management_api_app/db/repositories/workspaces.py]
1 import uuid
2 from typing import List
3
4 from azure.cosmos import ContainerProxy, CosmosClient
5 from pydantic import UUID4
6
7 from core import config
8 from db.errors import EntityDoesNotExist
9 from db.query_builder import QueryBuilder
10 from db.repositories.base import BaseRepository
11 from models.domain.resource import ResourceType
12 from models.domain.workspace import Workspace
13 from models.schemas.workspace import WorkspaceInCreate
14
15
16 class WorkspaceRepository(BaseRepository):
17 def __init__(self, client: CosmosClient):
18 super().__init__(client, config.STATE_STORE_RESOURCES_CONTAINER)
19
20 @property
21 def container(self) -> ContainerProxy:
22 return self._container
23
24 def get_all_active_workspaces(self) -> List[Workspace]:
25 query = QueryBuilder().select_active_resources(ResourceType.Workspace).build()
26 workspaces = list(self.container.query_items(query=query, enable_cross_partition_query=True))
27 return workspaces
28
29 def get_workspace_by_workspace_id(self, workspace_id: UUID4) -> Workspace:
30 query = QueryBuilder().select_active_resources(ResourceType.Workspace).with_id(workspace_id).build()
31 workspaces = list(self.container.query_items(query=query, enable_cross_partition_query=True))
32
33 if workspaces:
34 return workspaces[0]
35 else:
36 raise EntityDoesNotExist
37
38 def create_workspace(self, workspace_create: WorkspaceInCreate) -> Workspace:
39 resource_spec_parameters = {
40 "location": config.RESOURCE_LOCATION,
41 "workspace_id": "0001", # TODO: Calculate this value - Issue #166
42 "core_id": config.CORE_ID,
43 "address_space": "10.2.1.0/24" # TODO: Calculate this value - Issue #52
44 }
45
46 workspace = Workspace(
47 id=str(uuid.uuid4()),
48 displayName=workspace_create.displayName,
49 description=workspace_create.description,
50 resourceSpecName=workspace_create.workspaceType,
51 resourceSpecVersion="0.1.0", # TODO: Calculate latest - Issue #167
52 resourceSpecParameters=resource_spec_parameters
53 )
54
55 self.container.create_item(body=workspace.dict())
56 return workspace
57
[end of management_api_app/db/repositories/workspaces.py]
[start of management_api_app/models/schemas/workspace.py]
1 from typing import List
2 from pydantic import BaseModel, Field
3
4 from models.domain.workspace import Workspace
5
6
7 def get_sample_workspace(workspace_id: str, spec_workspace_id: str = "0001") -> dict:
8 return {
9 "id": workspace_id,
10 "displayName": "my workspace",
11 "description": "some description",
12 "resourceSpecName": "tre-workspace-vanilla",
13 "resourceSpecVersion": "0.1.0",
14 "resourceSpecParameters": {
15 "location": "westeurope",
16 "workspace_id": spec_workspace_id,
17 "core_id": "mytre-dev-1234",
18 "address_space": "10.2.1.0/24"
19 },
20 "status": "not_deployed",
21 "isDeleted": False,
22 "resourceType": "workspace",
23 "workspaceURL": ""
24 }
25
26
27 class WorkspaceInResponse(BaseModel):
28 workspace: Workspace
29
30 class Config:
31 schema_extra = {
32 "example": {
33 "workspace": get_sample_workspace("933ad738-7265-4b5f-9eae-a1a62928772e")
34 }
35 }
36
37
38 class WorkspacesInList(BaseModel):
39 workspaces: List[Workspace]
40
41 class Config:
42 schema_extra = {
43 "example": {
44 "workspaces": [
45 get_sample_workspace("933ad738-7265-4b5f-9eae-a1a62928772e", "0001"),
46 get_sample_workspace("2fdc9fba-726e-4db6-a1b8-9018a2165748", "0002"),
47 ]
48 }
49 }
50
51
52 class WorkspaceInCreate(BaseModel):
53 displayName: str = Field(title="Friendly name for workspace")
54 workspaceType: str = Field(title="Workspace type", description="Bundle name")
55 description: str = Field(title="Workspace description")
56 parameters: dict = Field({}, title="Workspace parameters", description="Values for the parameters required by the workspace resource specification")
57
58 class Config:
59 schema_extra = {
60 "example": {
61 "displayName": "My workspace",
62 "description": "workspace for team X",
63 "workspaceType": "tre-workspace-vanilla",
64 "parameters": {}
65 }
66 }
67
68
69 class WorkspaceIdInResponse(BaseModel):
70 workspaceId: str
71
72 class Config:
73 schema_extra = {
74 "example": {
75 "workspaceId": "49a7445c-aae6-41ec-a539-30dfa90ab1ae",
76 }
77 }
78
[end of management_api_app/models/schemas/workspace.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/management_api_app/core/config.py b/management_api_app/core/config.py
--- a/management_api_app/core/config.py
+++ b/management_api_app/core/config.py
@@ -11,7 +11,7 @@
# Resource Info
RESOURCE_LOCATION: str = config("RESOURCE_LOCATION", default="")
-CORE_ID: str = config("CORE_ID", default="")
+TRE_ID: str = config("TRE_ID", default="")
# State store configuration
STATE_STORE_ENDPOINT: str = config("STATE_STORE_ENDPOINT", default="") # Cosmos DB endpoint
diff --git a/management_api_app/db/repositories/workspaces.py b/management_api_app/db/repositories/workspaces.py
--- a/management_api_app/db/repositories/workspaces.py
+++ b/management_api_app/db/repositories/workspaces.py
@@ -39,7 +39,7 @@
resource_spec_parameters = {
"location": config.RESOURCE_LOCATION,
"workspace_id": "0001", # TODO: Calculate this value - Issue #166
- "core_id": config.CORE_ID,
+ "tre_id": config.TRE_ID,
"address_space": "10.2.1.0/24" # TODO: Calculate this value - Issue #52
}
diff --git a/management_api_app/models/schemas/workspace.py b/management_api_app/models/schemas/workspace.py
--- a/management_api_app/models/schemas/workspace.py
+++ b/management_api_app/models/schemas/workspace.py
@@ -14,7 +14,7 @@
"resourceSpecParameters": {
"location": "westeurope",
"workspace_id": spec_workspace_id,
- "core_id": "mytre-dev-1234",
+ "tre_id": "mytre-dev-1234",
"address_space": "10.2.1.0/24"
},
"status": "not_deployed",
|
{"golden_diff": "diff --git a/management_api_app/core/config.py b/management_api_app/core/config.py\n--- a/management_api_app/core/config.py\n+++ b/management_api_app/core/config.py\n@@ -11,7 +11,7 @@\n \n # Resource Info\n RESOURCE_LOCATION: str = config(\"RESOURCE_LOCATION\", default=\"\")\n-CORE_ID: str = config(\"CORE_ID\", default=\"\")\n+TRE_ID: str = config(\"TRE_ID\", default=\"\")\n \n # State store configuration\n STATE_STORE_ENDPOINT: str = config(\"STATE_STORE_ENDPOINT\", default=\"\") # Cosmos DB endpoint\ndiff --git a/management_api_app/db/repositories/workspaces.py b/management_api_app/db/repositories/workspaces.py\n--- a/management_api_app/db/repositories/workspaces.py\n+++ b/management_api_app/db/repositories/workspaces.py\n@@ -39,7 +39,7 @@\n resource_spec_parameters = {\n \"location\": config.RESOURCE_LOCATION,\n \"workspace_id\": \"0001\", # TODO: Calculate this value - Issue #166\n- \"core_id\": config.CORE_ID,\n+ \"tre_id\": config.TRE_ID,\n \"address_space\": \"10.2.1.0/24\" # TODO: Calculate this value - Issue #52\n }\n \ndiff --git a/management_api_app/models/schemas/workspace.py b/management_api_app/models/schemas/workspace.py\n--- a/management_api_app/models/schemas/workspace.py\n+++ b/management_api_app/models/schemas/workspace.py\n@@ -14,7 +14,7 @@\n \"resourceSpecParameters\": {\n \"location\": \"westeurope\",\n \"workspace_id\": spec_workspace_id,\n- \"core_id\": \"mytre-dev-1234\",\n+ \"tre_id\": \"mytre-dev-1234\",\n \"address_space\": \"10.2.1.0/24\"\n },\n \"status\": \"not_deployed\",\n", "issue": "Standardize TRE identifiers\n## Description\r\n\r\nAs a TRE developer\r\nI want naming of identifiers to be simple and standardized across the TRE\r\nSo it will as intuitive as possible\r\n\r\nCurrently we have Core ID, TRE ID and resource_name_prefix, which all are unique IDs for a TRE instance. ([Ref to code](https://github.com/microsoft/AzureTRE/blob/3cc8e14c6a16d5bb940f259dd5cb257e735e448b/templates/core/terraform/main.tf#L17))\r\nThey are used to ensure no clashes between names, but having a single identifier is sufficient.\r\n\r\n### A simplified solution\r\n\r\nWhen creating a TRE instance, a unique identifier is needed, to make sure no clashes occur. That identifier should be named TRE_ID and can be up to 10 chars long (Alphanumeric, underscore, and hyphen). If the Cloud Administrator wants to use a specific naming convention e.g. one that includes environment, the Cloud Administrator can do so.\r\n\r\nExamples of a TRE_ID:\r\n\r\n- mytre\r\n- msfttre-dev\r\n- tre123\r\n\r\nHench the TRE_ID is an unique identifier for the TRE instance replacing the Core ID, which consisted of TRE ID + resource_name_prefix.\r\n\r\n## Acceptance criteria\r\n\r\n- [x] TRE provisioning script uses the TRE ID as the TRE instance name, hence creates the cross-cutting services in a ressource group with the name of TRE ID e.g. 
mytre\r\n- [x] TRE provisioning script does not require environment parameter\r\n- [x] Workspace bundle uses TRE_ID (not Core ID as now) as the identifier for the TRE instance\r\n\n", "before_files": [{"content": "from starlette.config import Config\n\n\nconfig = Config(\".env\")\n\n# API settings\nAPI_PREFIX = \"/api\"\nPROJECT_NAME: str = config(\"PROJECT_NAME\", default=\"Azure TRE API\")\nDEBUG: bool = config(\"DEBUG\", cast=bool, default=False)\nVERSION = \"0.0.0\"\n\n# Resource Info\nRESOURCE_LOCATION: str = config(\"RESOURCE_LOCATION\", default=\"\")\nCORE_ID: str = config(\"CORE_ID\", default=\"\")\n\n# State store configuration\nSTATE_STORE_ENDPOINT: str = config(\"STATE_STORE_ENDPOINT\", default=\"\") # Cosmos DB endpoint\nSTATE_STORE_KEY: str = config(\"STATE_STORE_KEY\", default=\"\") # Cosmos DB access key\nSTATE_STORE_DATABASE = \"AzureTRE\"\nSTATE_STORE_RESOURCES_CONTAINER = \"Resources\"\nSTATE_STORE_BUNDLE_SPECS_CONTAINER = \"ResourceSpecs\"\n\n# Service bus configuration\nSERVICE_BUS_FULLY_QUALIFIED_NAMESPACE: str = config(\"SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE\", default=\"\")\nSERVICE_BUS_RESOURCE_REQUEST_QUEUE: str = config(\"SERVICE_BUS_RESOURCE_REQUEST_QUEUE\", default=\"\")\n", "path": "management_api_app/core/config.py"}, {"content": "import uuid\nfrom typing import List\n\nfrom azure.cosmos import ContainerProxy, CosmosClient\nfrom pydantic import UUID4\n\nfrom core import config\nfrom db.errors import EntityDoesNotExist\nfrom db.query_builder import QueryBuilder\nfrom db.repositories.base import BaseRepository\nfrom models.domain.resource import ResourceType\nfrom models.domain.workspace import Workspace\nfrom models.schemas.workspace import WorkspaceInCreate\n\n\nclass WorkspaceRepository(BaseRepository):\n def __init__(self, client: CosmosClient):\n super().__init__(client, config.STATE_STORE_RESOURCES_CONTAINER)\n\n @property\n def container(self) -> ContainerProxy:\n return self._container\n\n def get_all_active_workspaces(self) -> List[Workspace]:\n query = QueryBuilder().select_active_resources(ResourceType.Workspace).build()\n workspaces = list(self.container.query_items(query=query, enable_cross_partition_query=True))\n return workspaces\n\n def get_workspace_by_workspace_id(self, workspace_id: UUID4) -> Workspace:\n query = QueryBuilder().select_active_resources(ResourceType.Workspace).with_id(workspace_id).build()\n workspaces = list(self.container.query_items(query=query, enable_cross_partition_query=True))\n\n if workspaces:\n return workspaces[0]\n else:\n raise EntityDoesNotExist\n\n def create_workspace(self, workspace_create: WorkspaceInCreate) -> Workspace:\n resource_spec_parameters = {\n \"location\": config.RESOURCE_LOCATION,\n \"workspace_id\": \"0001\", # TODO: Calculate this value - Issue #166\n \"core_id\": config.CORE_ID,\n \"address_space\": \"10.2.1.0/24\" # TODO: Calculate this value - Issue #52\n }\n\n workspace = Workspace(\n id=str(uuid.uuid4()),\n displayName=workspace_create.displayName,\n description=workspace_create.description,\n resourceSpecName=workspace_create.workspaceType,\n resourceSpecVersion=\"0.1.0\", # TODO: Calculate latest - Issue #167\n resourceSpecParameters=resource_spec_parameters\n )\n\n self.container.create_item(body=workspace.dict())\n return workspace\n", "path": "management_api_app/db/repositories/workspaces.py"}, {"content": "from typing import List\nfrom pydantic import BaseModel, Field\n\nfrom models.domain.workspace import Workspace\n\n\ndef get_sample_workspace(workspace_id: str, spec_workspace_id: 
str = \"0001\") -> dict:\n return {\n \"id\": workspace_id,\n \"displayName\": \"my workspace\",\n \"description\": \"some description\",\n \"resourceSpecName\": \"tre-workspace-vanilla\",\n \"resourceSpecVersion\": \"0.1.0\",\n \"resourceSpecParameters\": {\n \"location\": \"westeurope\",\n \"workspace_id\": spec_workspace_id,\n \"core_id\": \"mytre-dev-1234\",\n \"address_space\": \"10.2.1.0/24\"\n },\n \"status\": \"not_deployed\",\n \"isDeleted\": False,\n \"resourceType\": \"workspace\",\n \"workspaceURL\": \"\"\n }\n\n\nclass WorkspaceInResponse(BaseModel):\n workspace: Workspace\n\n class Config:\n schema_extra = {\n \"example\": {\n \"workspace\": get_sample_workspace(\"933ad738-7265-4b5f-9eae-a1a62928772e\")\n }\n }\n\n\nclass WorkspacesInList(BaseModel):\n workspaces: List[Workspace]\n\n class Config:\n schema_extra = {\n \"example\": {\n \"workspaces\": [\n get_sample_workspace(\"933ad738-7265-4b5f-9eae-a1a62928772e\", \"0001\"),\n get_sample_workspace(\"2fdc9fba-726e-4db6-a1b8-9018a2165748\", \"0002\"),\n ]\n }\n }\n\n\nclass WorkspaceInCreate(BaseModel):\n displayName: str = Field(title=\"Friendly name for workspace\")\n workspaceType: str = Field(title=\"Workspace type\", description=\"Bundle name\")\n description: str = Field(title=\"Workspace description\")\n parameters: dict = Field({}, title=\"Workspace parameters\", description=\"Values for the parameters required by the workspace resource specification\")\n\n class Config:\n schema_extra = {\n \"example\": {\n \"displayName\": \"My workspace\",\n \"description\": \"workspace for team X\",\n \"workspaceType\": \"tre-workspace-vanilla\",\n \"parameters\": {}\n }\n }\n\n\nclass WorkspaceIdInResponse(BaseModel):\n workspaceId: str\n\n class Config:\n schema_extra = {\n \"example\": {\n \"workspaceId\": \"49a7445c-aae6-41ec-a539-30dfa90ab1ae\",\n }\n }\n", "path": "management_api_app/models/schemas/workspace.py"}]}
| 2,508 | 430 |
gh_patches_debug_27427
|
rasdani/github-patches
|
git_diff
|
beeware__toga-1109
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Memory leaks in Windows
**Describe the bug**
Unfortunately, it seems that there are memory leak issues in the Windows implementation of Toga.
**To Reproduce**
Steps to reproduce the behavior:
1. Install memory_profiler, briefcase and matplotlib using `pip install memory_profiler briefcase matplotlib`
2. Go to the Box example directory (or any other example for that matter)
3. Run `mprof run --include-children briefcase dev`
4. Click some buttons and wait for a few seconds
5. Exit the program
6. Run `mprof plot` and see the memory leak

**Environment:**
- Operating System: Windows 10
- Python version: python 3.8
- Software versions:
- Briefcase: latest
- Toga: latest
</issue>
<code>
[start of src/winforms/toga_winforms/libs/proactor.py]
1 import asyncio
2 import sys
3 import threading
4 from asyncio import events
5
6 from .winforms import Action, Task, WinForms, user32
7
8
9 class AsyncIOTickMessageFilter(WinForms.IMessageFilter):
10 """
11 A Winforms message filter that will catch the request to tick the Asyncio
12 event loop.
13 """
14 __namespace__ = 'System.Windows.Forms'
15
16 def __init__(self, loop, msg_id):
17 self.loop = loop
18 self.msg_id = msg_id
19
20 def PreFilterMessage(self, message):
21 print('ping', message)
22 if message.Msg == self.msg_id:
23 print("asyncio tick message!!")
24 self.loop.run_once_recurring()
25 return True
26 # print("Filter message", message)
27 return False
28
29
30 class WinformsProactorEventLoop(asyncio.ProactorEventLoop):
31 def run_forever(self, app_context):
32 """Set up the asyncio event loop, integrate it with the Winforms
33 event loop, and start the application.
34
35 This largely duplicates the setup behavior of the default Proactor
36 run_forever implementation.
37
38 :param app_context: The WinForms.ApplicationContext instance
39 controlling the lifecycle of the app.
40 """
41 # Python 3.8 added an implementation of run_forever() in
42 # ProactorEventLoop. The only part that actually matters is the
43 # refactoring that moved the initial call to stage _loop_self_reading;
44 # it now needs to be created as part of run_forever; otherwise the
45 # event loop locks up, because there won't be anything for the
46 # select call to process.
47 if sys.version_info >= (3, 8):
48 self.call_soon(self._loop_self_reading)
49
50 # Remember the application context.
51 self.app_context = app_context
52
53 # Register a custom user window message.
54 self.msg_id = user32.RegisterWindowMessageA("Python asyncio tick")
55 # Add a message filter to listen for the asyncio tick message
56 # FIXME: Actually install the message filter.
57 # msg_filter = AsyncIOTickMessageFilter(self, self.msg_id)
58 # WinForms.Application.AddMessageFilter(msg_filter)
59
60 # Setup the Proactor.
61 # The code between the following markers should be exactly the same as
62 # the official CPython implementation, up to the start of the
63 # `while True:` part of run_forever() (see BaseEventLoop.run_forever()
64 # in Lib/ascynio/base_events.py)
65 # === START BaseEventLoop.run_forever() setup ===
66 self._check_closed()
67 if self.is_running():
68 raise RuntimeError('This event loop is already running')
69 if events._get_running_loop() is not None:
70 raise RuntimeError(
71 'Cannot run the event loop while another loop is running')
72 self._set_coroutine_origin_tracking(self._debug)
73 self._thread_id = threading.get_ident()
74 try:
75 self._old_agen_hooks = sys.get_asyncgen_hooks()
76 sys.set_asyncgen_hooks(
77 firstiter=self._asyncgen_firstiter_hook,
78 finalizer=self._asyncgen_finalizer_hook
79 )
80 except AttributeError:
81 # Python < 3.6 didn't have sys.get_asyncgen_hooks();
82 # No action required for those versions.
83 pass
84
85 events._set_running_loop(self)
86 # === END BaseEventLoop.run_forever() setup ===
87
88 # Rather than going into a `while True:` loop, we're going to use the
89 # Winforms event loop to queue a tick() message that will cause a
90 # single iteration of the asyncio event loop to be executed. Each time
91 # we do this, we queue *another* tick() message in 5ms time. In this
92 # way, we'll get a continuous stream of tick() calls, without blocking
93 # the Winforms event loop.
94
95 # Queue the first asyncio tick.
96 self.enqueue_tick()
97
98 # Start the Winforms event loop.
99 WinForms.Application.Run(self.app_context)
100
101 def enqueue_tick(self):
102 # Queue a call to tick in 5ms.
103 Task.Delay(5).ContinueWith(Action[Task](self.tick))
104
105 def tick(self, *args, **kwargs):
106 """
107 Cause a single iteration of the event loop to run on the main GUI thread.
108 """
109 # Post a userspace message that will trigger running an iteration
110 # of the asyncio event loop. This can't be done directly, because the
111 # tick() will be executing in a threadpool, and we need the asyncio
112 # handling to occur in the main GUI thread. However, by positing a
113 # message, it will be caught by the MessageFilter we installed on the
114 # Application thread.
115
116 # The message is sent with:
117 # * HWND 0xfff (all windows),
118 # * MSG self.msg_id (a message ID in the WM_USER range)
119 # * LPARAM and WPARAM empty (no extra details needed; just tick!)
120 user32.PostMessageA(0xffff, self.msg_id, None, None)
121
122 # FIXME: Once we have a working message filter, this invoke call
123 # can be removed.
124 # If the app context has a main form, invoke run_once_recurring()
125 # on the thread associated with that form.
126 if self.app_context.MainForm:
127 self.app_context.MainForm.Invoke(Action(self.run_once_recurring))
128
129 def run_once_recurring(self):
130 """
131 Run one iteration of the event loop, and enqueue the next iteration
132 (if we're not stopping).
133
134 This largely duplicates the "finally" behavior of the default Proactor
135 run_forever implementation.
136 """
137 # Perform one tick of the event loop.
138 self._run_once()
139
140 if self._stopping:
141 # If we're stopping, we can do the "finally" handling from
142 # the BaseEventLoop run_forever().
143 # === START BaseEventLoop.run_forever() finally handling ===
144 self._stopping = False
145 self._thread_id = None
146 events._set_running_loop(None)
147 self._set_coroutine_origin_tracking(False)
148 try:
149 sys.set_asyncgen_hooks(*self._old_agen_hooks)
150 except AttributeError:
151 # Python < 3.6 didn't have set_asyncgen_hooks.
152 # No action required for those versions.
153 pass
154 # === END BaseEventLoop.run_forever() finally handling ===
155 else:
156 # Otherwise, live to tick another day. Enqueue the next tick,
157 # and make sure there will be *something* to be processed.
158 # If you don't ensure there is at least one message on the
159 # queue, the select() call will block, locking the app.
160 self.enqueue_tick()
161 self.call_soon(self._loop_self_reading)
162
163
164 # Python 3.7 changed the name of an internal wrapper function.
165 # Install an alias for the old name at the new name.
166 if sys.version_info < (3, 7):
167 WinformsProactorEventLoop._set_coroutine_origin_tracking = WinformsProactorEventLoop._set_coroutine_wrapper
168
[end of src/winforms/toga_winforms/libs/proactor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/winforms/toga_winforms/libs/proactor.py b/src/winforms/toga_winforms/libs/proactor.py
--- a/src/winforms/toga_winforms/libs/proactor.py
+++ b/src/winforms/toga_winforms/libs/proactor.py
@@ -100,7 +100,8 @@
def enqueue_tick(self):
# Queue a call to tick in 5ms.
- Task.Delay(5).ContinueWith(Action[Task](self.tick))
+ self.task = Action[Task](self.tick)
+ Task.Delay(5).ContinueWith(self.task)
def tick(self, *args, **kwargs):
"""
@@ -113,6 +114,9 @@
# message, it will be caught by the MessageFilter we installed on the
# Application thread.
+ if self.task:
+ self.task.Dispose()
+ del self.task
# The message is sent with:
# * HWND 0xfff (all windows),
# * MSG self.msg_id (a message ID in the WM_USER range)
@@ -124,7 +128,10 @@
# If the app context has a main form, invoke run_once_recurring()
# on the thread associated with that form.
if self.app_context.MainForm:
- self.app_context.MainForm.Invoke(Action(self.run_once_recurring))
+ action = Action(self.run_once_recurring)
+ self.app_context.MainForm.Invoke(action)
+ action.Dispose()
+ del action
def run_once_recurring(self):
"""
|
{"golden_diff": "diff --git a/src/winforms/toga_winforms/libs/proactor.py b/src/winforms/toga_winforms/libs/proactor.py\n--- a/src/winforms/toga_winforms/libs/proactor.py\n+++ b/src/winforms/toga_winforms/libs/proactor.py\n@@ -100,7 +100,8 @@\n \n def enqueue_tick(self):\n # Queue a call to tick in 5ms.\n- Task.Delay(5).ContinueWith(Action[Task](self.tick))\n+ self.task = Action[Task](self.tick)\n+ Task.Delay(5).ContinueWith(self.task)\n \n def tick(self, *args, **kwargs):\n \"\"\"\n@@ -113,6 +114,9 @@\n # message, it will be caught by the MessageFilter we installed on the\n # Application thread.\n \n+ if self.task:\n+ self.task.Dispose()\n+ del self.task\n # The message is sent with:\n # * HWND 0xfff (all windows),\n # * MSG self.msg_id (a message ID in the WM_USER range)\n@@ -124,7 +128,10 @@\n # If the app context has a main form, invoke run_once_recurring()\n # on the thread associated with that form.\n if self.app_context.MainForm:\n- self.app_context.MainForm.Invoke(Action(self.run_once_recurring))\n+ action = Action(self.run_once_recurring)\n+ self.app_context.MainForm.Invoke(action)\n+ action.Dispose()\n+ del action\n \n def run_once_recurring(self):\n \"\"\"\n", "issue": "Memory leaks in Windows\n**Describe the bug**\r\nUnfortunately, it seems like there are memory leaks issues in the Windows implementation of toga\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. install memory profiler, briefcase and matplotlib using `pip install memory_profiler briefcase matplotlib`\r\n2. Go to the Box example directory (or any other example for that matter)\r\n3. Run `mprof run --include-children briefcase dev`\r\n4. Click some buttons and wait for a few seconds\r\n5. Exit the program\r\n6. run `mprof plot` and see the memory leak\r\n\r\n\r\n\r\n**Environment:**\r\n - Operating System: Windows 10\r\n - Python version: python 3.8\r\n - Software versions:\r\n - Briefcase: latest\r\n - Toga: latest\r\n\n", "before_files": [{"content": "import asyncio\nimport sys\nimport threading\nfrom asyncio import events\n\nfrom .winforms import Action, Task, WinForms, user32\n\n\nclass AsyncIOTickMessageFilter(WinForms.IMessageFilter):\n \"\"\"\n A Winforms message filter that will catch the request to tick the Asyncio\n event loop.\n \"\"\"\n __namespace__ = 'System.Windows.Forms'\n\n def __init__(self, loop, msg_id):\n self.loop = loop\n self.msg_id = msg_id\n\n def PreFilterMessage(self, message):\n print('ping', message)\n if message.Msg == self.msg_id:\n print(\"asyncio tick message!!\")\n self.loop.run_once_recurring()\n return True\n # print(\"Filter message\", message)\n return False\n\n\nclass WinformsProactorEventLoop(asyncio.ProactorEventLoop):\n def run_forever(self, app_context):\n \"\"\"Set up the asyncio event loop, integrate it with the Winforms\n event loop, and start the application.\n\n This largely duplicates the setup behavior of the default Proactor\n run_forever implementation.\n\n :param app_context: The WinForms.ApplicationContext instance\n controlling the lifecycle of the app.\n \"\"\"\n # Python 3.8 added an implementation of run_forever() in\n # ProactorEventLoop. 
The only part that actually matters is the\n # refactoring that moved the initial call to stage _loop_self_reading;\n # it now needs to be created as part of run_forever; otherwise the\n # event loop locks up, because there won't be anything for the\n # select call to process.\n if sys.version_info >= (3, 8):\n self.call_soon(self._loop_self_reading)\n\n # Remember the application context.\n self.app_context = app_context\n\n # Register a custom user window message.\n self.msg_id = user32.RegisterWindowMessageA(\"Python asyncio tick\")\n # Add a message filter to listen for the asyncio tick message\n # FIXME: Actually install the message filter.\n # msg_filter = AsyncIOTickMessageFilter(self, self.msg_id)\n # WinForms.Application.AddMessageFilter(msg_filter)\n\n # Setup the Proactor.\n # The code between the following markers should be exactly the same as\n # the official CPython implementation, up to the start of the\n # `while True:` part of run_forever() (see BaseEventLoop.run_forever()\n # in Lib/ascynio/base_events.py)\n # === START BaseEventLoop.run_forever() setup ===\n self._check_closed()\n if self.is_running():\n raise RuntimeError('This event loop is already running')\n if events._get_running_loop() is not None:\n raise RuntimeError(\n 'Cannot run the event loop while another loop is running')\n self._set_coroutine_origin_tracking(self._debug)\n self._thread_id = threading.get_ident()\n try:\n self._old_agen_hooks = sys.get_asyncgen_hooks()\n sys.set_asyncgen_hooks(\n firstiter=self._asyncgen_firstiter_hook,\n finalizer=self._asyncgen_finalizer_hook\n )\n except AttributeError:\n # Python < 3.6 didn't have sys.get_asyncgen_hooks();\n # No action required for those versions.\n pass\n\n events._set_running_loop(self)\n # === END BaseEventLoop.run_forever() setup ===\n\n # Rather than going into a `while True:` loop, we're going to use the\n # Winforms event loop to queue a tick() message that will cause a\n # single iteration of the asyncio event loop to be executed. Each time\n # we do this, we queue *another* tick() message in 5ms time. In this\n # way, we'll get a continuous stream of tick() calls, without blocking\n # the Winforms event loop.\n\n # Queue the first asyncio tick.\n self.enqueue_tick()\n\n # Start the Winforms event loop.\n WinForms.Application.Run(self.app_context)\n\n def enqueue_tick(self):\n # Queue a call to tick in 5ms.\n Task.Delay(5).ContinueWith(Action[Task](self.tick))\n\n def tick(self, *args, **kwargs):\n \"\"\"\n Cause a single iteration of the event loop to run on the main GUI thread.\n \"\"\"\n # Post a userspace message that will trigger running an iteration\n # of the asyncio event loop. This can't be done directly, because the\n # tick() will be executing in a threadpool, and we need the asyncio\n # handling to occur in the main GUI thread. 
However, by positing a\n # message, it will be caught by the MessageFilter we installed on the\n # Application thread.\n\n # The message is sent with:\n # * HWND 0xfff (all windows),\n # * MSG self.msg_id (a message ID in the WM_USER range)\n # * LPARAM and WPARAM empty (no extra details needed; just tick!)\n user32.PostMessageA(0xffff, self.msg_id, None, None)\n\n # FIXME: Once we have a working message filter, this invoke call\n # can be removed.\n # If the app context has a main form, invoke run_once_recurring()\n # on the thread associated with that form.\n if self.app_context.MainForm:\n self.app_context.MainForm.Invoke(Action(self.run_once_recurring))\n\n def run_once_recurring(self):\n \"\"\"\n Run one iteration of the event loop, and enqueue the next iteration\n (if we're not stopping).\n\n This largely duplicates the \"finally\" behavior of the default Proactor\n run_forever implementation.\n \"\"\"\n # Perform one tick of the event loop.\n self._run_once()\n\n if self._stopping:\n # If we're stopping, we can do the \"finally\" handling from\n # the BaseEventLoop run_forever().\n # === START BaseEventLoop.run_forever() finally handling ===\n self._stopping = False\n self._thread_id = None\n events._set_running_loop(None)\n self._set_coroutine_origin_tracking(False)\n try:\n sys.set_asyncgen_hooks(*self._old_agen_hooks)\n except AttributeError:\n # Python < 3.6 didn't have set_asyncgen_hooks.\n # No action required for those versions.\n pass\n # === END BaseEventLoop.run_forever() finally handling ===\n else:\n # Otherwise, live to tick another day. Enqueue the next tick,\n # and make sure there will be *something* to be processed.\n # If you don't ensure there is at least one message on the\n # queue, the select() call will block, locking the app.\n self.enqueue_tick()\n self.call_soon(self._loop_self_reading)\n\n\n# Python 3.7 changed the name of an internal wrapper function.\n# Install an alias for the old name at the new name.\nif sys.version_info < (3, 7):\n WinformsProactorEventLoop._set_coroutine_origin_tracking = WinformsProactorEventLoop._set_coroutine_wrapper\n", "path": "src/winforms/toga_winforms/libs/proactor.py"}]}
| 2,714 | 340 |
gh_patches_debug_21134
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-649
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No response when uploading a new algorithm using the wrong file format
# Recipe
1. Go to https://grand-challenge.org/algorithms/create/
2. Upload, for example, a `.tar.gz` file
# Result
Upload completes, nothing happens.
</issue>
<code>
[start of app/grandchallenge/algorithms/forms.py]
1 from crispy_forms.helper import FormHelper
2 from crispy_forms.layout import Submit
3 from django import forms
4
5 from grandchallenge.algorithms.models import Algorithm, Job
6 from grandchallenge.core.validators import (
7 ExtensionValidator,
8 MimeTypeValidator,
9 )
10 from grandchallenge.jqfileupload.widgets import uploader
11 from grandchallenge.jqfileupload.widgets.uploader import UploadedAjaxFileList
12
13 algorithm_upload_widget = uploader.AjaxUploadWidget(
14 ajax_target_path="ajax/algorithm-upload/", multifile=False
15 )
16
17
18 class AlgorithmForm(forms.ModelForm):
19 ipython_notebook = forms.FileField(
20 validators=[MimeTypeValidator(allowed_types=("text/plain",))],
21 required=False,
22 help_text=(
23 "Please upload an iPython notebook that describes your algorithm"
24 ),
25 )
26 chunked_upload = UploadedAjaxFileList(
27 widget=algorithm_upload_widget,
28 label="Algorithm Image",
29 validators=[ExtensionValidator(allowed_extensions=(".tar",))],
30 help_text=(
31 "Tar archive of the container image produced from the command "
32 "`docker save IMAGE > IMAGE.tar`. See "
33 "https://docs.docker.com/engine/reference/commandline/save/"
34 ),
35 )
36
37 def __init__(self, *args, **kwargs):
38 super().__init__(*args, **kwargs)
39 self.helper = FormHelper(self)
40
41 class Meta:
42 model = Algorithm
43 fields = (
44 "title",
45 "requires_gpu",
46 "ipython_notebook",
47 "chunked_upload",
48 )
49
[end of app/grandchallenge/algorithms/forms.py]
[start of app/grandchallenge/evaluation/forms.py]
1 from crispy_forms.bootstrap import TabHolder, Tab
2 from crispy_forms.helper import FormHelper
3 from crispy_forms.layout import Submit, Layout, ButtonHolder
4 from django import forms
5 from django_summernote.widgets import SummernoteInplaceWidget
6
7 from grandchallenge.core.validators import ExtensionValidator
8 from grandchallenge.core.widgets import JSONEditorWidget
9 from grandchallenge.evaluation.models import (
10 Method,
11 Submission,
12 Config,
13 EXTRA_RESULT_COLUMNS_SCHEMA,
14 )
15 from grandchallenge.jqfileupload.widgets import uploader
16 from grandchallenge.jqfileupload.widgets.uploader import UploadedAjaxFileList
17
18 submission_options = (
19 "submission_page_html",
20 "daily_submission_limit",
21 "allow_submission_comments",
22 "supplementary_file_choice",
23 "supplementary_file_label",
24 "supplementary_file_help_text",
25 "publication_url_choice",
26 )
27
28 scoring_options = (
29 "score_title",
30 "score_jsonpath",
31 "score_error_jsonpath",
32 "score_default_sort",
33 "score_decimal_places",
34 "extra_results_columns",
35 "scoring_method_choice",
36 "auto_publish_new_results",
37 "result_display_choice",
38 )
39
40 leaderboard_options = (
41 "use_teams",
42 "display_submission_comments",
43 "show_supplementary_file_link",
44 "show_publication_url",
45 )
46
47 result_detail_options = ("submission_join_key",)
48
49
50 class ConfigForm(forms.ModelForm):
51 def __init__(self, *args, **kwargs):
52 super().__init__(*args, **kwargs)
53 self.helper = FormHelper(self)
54 self.helper.layout = Layout(
55 TabHolder(
56 Tab("Submission", *submission_options),
57 Tab("Scoring", *scoring_options),
58 Tab("Leaderboard", *leaderboard_options),
59 Tab("Result Detail", *result_detail_options),
60 ),
61 ButtonHolder(Submit("save", "Save")),
62 )
63
64 class Meta:
65 model = Config
66 fields = (
67 *submission_options,
68 *scoring_options,
69 *leaderboard_options,
70 *result_detail_options,
71 )
72 widgets = {
73 "submission_page_html": SummernoteInplaceWidget(),
74 "extra_results_columns": JSONEditorWidget(
75 schema=EXTRA_RESULT_COLUMNS_SCHEMA
76 ),
77 }
78
79
80 method_upload_widget = uploader.AjaxUploadWidget(
81 ajax_target_path="ajax/method-upload/", multifile=False
82 )
83
84
85 class MethodForm(forms.ModelForm):
86 chunked_upload = UploadedAjaxFileList(
87 widget=method_upload_widget,
88 label="Evaluation Method Container",
89 validators=[ExtensionValidator(allowed_extensions=(".tar",))],
90 help_text=(
91 "Tar archive of the container image produced from the command "
92 "`docker save IMAGE > IMAGE.tar`. See "
93 "https://docs.docker.com/engine/reference/commandline/save/"
94 ),
95 )
96
97 def __init__(self, *args, **kwargs):
98 super().__init__(*args, **kwargs)
99 self.helper = FormHelper(self)
100
101 class Meta:
102 model = Method
103 fields = ["chunked_upload"]
104
105
106 submission_upload_widget = uploader.AjaxUploadWidget(
107 ajax_target_path="ajax/submission-upload/", multifile=False
108 )
109
110 submission_fields = (
111 "comment",
112 "supplementary_file",
113 "publication_url",
114 "chunked_upload",
115 )
116
117
118 class SubmissionForm(forms.ModelForm):
119 chunked_upload = UploadedAjaxFileList(
120 widget=submission_upload_widget,
121 label="Predictions File",
122 validators=[ExtensionValidator(allowed_extensions=(".zip", ".csv"))],
123 )
124
125 def __init__(
126 self,
127 *args,
128 display_comment_field=False,
129 supplementary_file_choice=Config.OFF,
130 supplementary_file_label="",
131 supplementary_file_help_text="",
132 publication_url_choice=Config.OFF,
133 **kwargs,
134 ):
135 """
136 Conditionally render the comment field based on the
137 display_comment_field kwarg
138 """
139 super().__init__(*args, **kwargs)
140
141 if not display_comment_field:
142 del self.fields["comment"]
143
144 if supplementary_file_label:
145 self.fields["supplementary_file"].label = supplementary_file_label
146
147 if supplementary_file_help_text:
148 self.fields[
149 "supplementary_file"
150 ].help_text = supplementary_file_help_text
151
152 if supplementary_file_choice == Config.REQUIRED:
153 self.fields["supplementary_file"].required = True
154 elif supplementary_file_choice == Config.OFF:
155 del self.fields["supplementary_file"]
156
157 if publication_url_choice == Config.REQUIRED:
158 self.fields["publication_url"].required = True
159 elif publication_url_choice == Config.OFF:
160 del self.fields["publication_url"]
161
162 self.helper = FormHelper(self)
163
164 class Meta:
165 model = Submission
166 fields = submission_fields
167
168
169 class LegacySubmissionForm(SubmissionForm):
170 class Meta:
171 model = Submission
172 fields = ("creator", *submission_fields)
173
[end of app/grandchallenge/evaluation/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/algorithms/forms.py b/app/grandchallenge/algorithms/forms.py
--- a/app/grandchallenge/algorithms/forms.py
+++ b/app/grandchallenge/algorithms/forms.py
@@ -29,7 +29,7 @@
validators=[ExtensionValidator(allowed_extensions=(".tar",))],
help_text=(
"Tar archive of the container image produced from the command "
- "`docker save IMAGE > IMAGE.tar`. See "
+ "'docker save IMAGE > IMAGE.tar'. See "
"https://docs.docker.com/engine/reference/commandline/save/"
),
)
diff --git a/app/grandchallenge/evaluation/forms.py b/app/grandchallenge/evaluation/forms.py
--- a/app/grandchallenge/evaluation/forms.py
+++ b/app/grandchallenge/evaluation/forms.py
@@ -89,7 +89,7 @@
validators=[ExtensionValidator(allowed_extensions=(".tar",))],
help_text=(
"Tar archive of the container image produced from the command "
- "`docker save IMAGE > IMAGE.tar`. See "
+ "'docker save IMAGE > IMAGE.tar'. See "
"https://docs.docker.com/engine/reference/commandline/save/"
),
)
|
{"golden_diff": "diff --git a/app/grandchallenge/algorithms/forms.py b/app/grandchallenge/algorithms/forms.py\n--- a/app/grandchallenge/algorithms/forms.py\n+++ b/app/grandchallenge/algorithms/forms.py\n@@ -29,7 +29,7 @@\n validators=[ExtensionValidator(allowed_extensions=(\".tar\",))],\n help_text=(\n \"Tar archive of the container image produced from the command \"\n- \"`docker save IMAGE > IMAGE.tar`. See \"\n+ \"'docker save IMAGE > IMAGE.tar'. See \"\n \"https://docs.docker.com/engine/reference/commandline/save/\"\n ),\n )\ndiff --git a/app/grandchallenge/evaluation/forms.py b/app/grandchallenge/evaluation/forms.py\n--- a/app/grandchallenge/evaluation/forms.py\n+++ b/app/grandchallenge/evaluation/forms.py\n@@ -89,7 +89,7 @@\n validators=[ExtensionValidator(allowed_extensions=(\".tar\",))],\n help_text=(\n \"Tar archive of the container image produced from the command \"\n- \"`docker save IMAGE > IMAGE.tar`. See \"\n+ \"'docker save IMAGE > IMAGE.tar'. See \"\n \"https://docs.docker.com/engine/reference/commandline/save/\"\n ),\n )\n", "issue": "No response when uploading a new algorithm using the wrong file format\n# Recipe\r\n\r\n1. Go to https://grand-challenge.org/algorithms/create/\r\n2. Upload, for example, a `.tar.gz` file\r\n\r\n# Result\r\n\r\nUpload completes, nothing happens.\n", "before_files": [{"content": "from crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Submit\nfrom django import forms\n\nfrom grandchallenge.algorithms.models import Algorithm, Job\nfrom grandchallenge.core.validators import (\n ExtensionValidator,\n MimeTypeValidator,\n)\nfrom grandchallenge.jqfileupload.widgets import uploader\nfrom grandchallenge.jqfileupload.widgets.uploader import UploadedAjaxFileList\n\nalgorithm_upload_widget = uploader.AjaxUploadWidget(\n ajax_target_path=\"ajax/algorithm-upload/\", multifile=False\n)\n\n\nclass AlgorithmForm(forms.ModelForm):\n ipython_notebook = forms.FileField(\n validators=[MimeTypeValidator(allowed_types=(\"text/plain\",))],\n required=False,\n help_text=(\n \"Please upload an iPython notebook that describes your algorithm\"\n ),\n )\n chunked_upload = UploadedAjaxFileList(\n widget=algorithm_upload_widget,\n label=\"Algorithm Image\",\n validators=[ExtensionValidator(allowed_extensions=(\".tar\",))],\n help_text=(\n \"Tar archive of the container image produced from the command \"\n \"`docker save IMAGE > IMAGE.tar`. 
See \"\n \"https://docs.docker.com/engine/reference/commandline/save/\"\n ),\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper(self)\n\n class Meta:\n model = Algorithm\n fields = (\n \"title\",\n \"requires_gpu\",\n \"ipython_notebook\",\n \"chunked_upload\",\n )\n", "path": "app/grandchallenge/algorithms/forms.py"}, {"content": "from crispy_forms.bootstrap import TabHolder, Tab\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Submit, Layout, ButtonHolder\nfrom django import forms\nfrom django_summernote.widgets import SummernoteInplaceWidget\n\nfrom grandchallenge.core.validators import ExtensionValidator\nfrom grandchallenge.core.widgets import JSONEditorWidget\nfrom grandchallenge.evaluation.models import (\n Method,\n Submission,\n Config,\n EXTRA_RESULT_COLUMNS_SCHEMA,\n)\nfrom grandchallenge.jqfileupload.widgets import uploader\nfrom grandchallenge.jqfileupload.widgets.uploader import UploadedAjaxFileList\n\nsubmission_options = (\n \"submission_page_html\",\n \"daily_submission_limit\",\n \"allow_submission_comments\",\n \"supplementary_file_choice\",\n \"supplementary_file_label\",\n \"supplementary_file_help_text\",\n \"publication_url_choice\",\n)\n\nscoring_options = (\n \"score_title\",\n \"score_jsonpath\",\n \"score_error_jsonpath\",\n \"score_default_sort\",\n \"score_decimal_places\",\n \"extra_results_columns\",\n \"scoring_method_choice\",\n \"auto_publish_new_results\",\n \"result_display_choice\",\n)\n\nleaderboard_options = (\n \"use_teams\",\n \"display_submission_comments\",\n \"show_supplementary_file_link\",\n \"show_publication_url\",\n)\n\nresult_detail_options = (\"submission_join_key\",)\n\n\nclass ConfigForm(forms.ModelForm):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper(self)\n self.helper.layout = Layout(\n TabHolder(\n Tab(\"Submission\", *submission_options),\n Tab(\"Scoring\", *scoring_options),\n Tab(\"Leaderboard\", *leaderboard_options),\n Tab(\"Result Detail\", *result_detail_options),\n ),\n ButtonHolder(Submit(\"save\", \"Save\")),\n )\n\n class Meta:\n model = Config\n fields = (\n *submission_options,\n *scoring_options,\n *leaderboard_options,\n *result_detail_options,\n )\n widgets = {\n \"submission_page_html\": SummernoteInplaceWidget(),\n \"extra_results_columns\": JSONEditorWidget(\n schema=EXTRA_RESULT_COLUMNS_SCHEMA\n ),\n }\n\n\nmethod_upload_widget = uploader.AjaxUploadWidget(\n ajax_target_path=\"ajax/method-upload/\", multifile=False\n)\n\n\nclass MethodForm(forms.ModelForm):\n chunked_upload = UploadedAjaxFileList(\n widget=method_upload_widget,\n label=\"Evaluation Method Container\",\n validators=[ExtensionValidator(allowed_extensions=(\".tar\",))],\n help_text=(\n \"Tar archive of the container image produced from the command \"\n \"`docker save IMAGE > IMAGE.tar`. 
See \"\n \"https://docs.docker.com/engine/reference/commandline/save/\"\n ),\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper(self)\n\n class Meta:\n model = Method\n fields = [\"chunked_upload\"]\n\n\nsubmission_upload_widget = uploader.AjaxUploadWidget(\n ajax_target_path=\"ajax/submission-upload/\", multifile=False\n)\n\nsubmission_fields = (\n \"comment\",\n \"supplementary_file\",\n \"publication_url\",\n \"chunked_upload\",\n)\n\n\nclass SubmissionForm(forms.ModelForm):\n chunked_upload = UploadedAjaxFileList(\n widget=submission_upload_widget,\n label=\"Predictions File\",\n validators=[ExtensionValidator(allowed_extensions=(\".zip\", \".csv\"))],\n )\n\n def __init__(\n self,\n *args,\n display_comment_field=False,\n supplementary_file_choice=Config.OFF,\n supplementary_file_label=\"\",\n supplementary_file_help_text=\"\",\n publication_url_choice=Config.OFF,\n **kwargs,\n ):\n \"\"\"\n Conditionally render the comment field based on the\n display_comment_field kwarg\n \"\"\"\n super().__init__(*args, **kwargs)\n\n if not display_comment_field:\n del self.fields[\"comment\"]\n\n if supplementary_file_label:\n self.fields[\"supplementary_file\"].label = supplementary_file_label\n\n if supplementary_file_help_text:\n self.fields[\n \"supplementary_file\"\n ].help_text = supplementary_file_help_text\n\n if supplementary_file_choice == Config.REQUIRED:\n self.fields[\"supplementary_file\"].required = True\n elif supplementary_file_choice == Config.OFF:\n del self.fields[\"supplementary_file\"]\n\n if publication_url_choice == Config.REQUIRED:\n self.fields[\"publication_url\"].required = True\n elif publication_url_choice == Config.OFF:\n del self.fields[\"publication_url\"]\n\n self.helper = FormHelper(self)\n\n class Meta:\n model = Submission\n fields = submission_fields\n\n\nclass LegacySubmissionForm(SubmissionForm):\n class Meta:\n model = Submission\n fields = (\"creator\", *submission_fields)\n", "path": "app/grandchallenge/evaluation/forms.py"}]}
| 2,450 | 256 |
gh_patches_debug_7639
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-11557
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IMAP port settings are not updated from Email Domain to Email Account
## Description of the issue
When changing the IMAP port in an existing Email Domain, the Email Accounts using this Domain are not updated accordingly. This can lead to Frappe trying an IMAPS connection (which usually is 993) to the plain IMAP port 143, resulting in misleading error messages like `ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:852)`.
We could track down the root cause to the method `on_update` from the DocType "Email Domain": it simply misses the field `incoming_port` when copying data to all e-mail accounts that use this domain. This leads to the problem if the `incoming_port` is already set in the email account and gets updated/changed afterwards in the email domain.
## Context information (for bug reports)
```
frappe-bench$ bench --version
5.0.0
frappe-bench$ bench version
erpnext 12.11.2
frappe 12.9.1
```
## Steps to reproduce the issue
1. To reproduce this small bug you need to create a "Email Domain" in Frappe and save it with imap-port 143 and no SSL.
2. Create an e-mail account and link it with the domain from step 1 but without `Enable Incoming` and save.
3. Try to `Enable Incoming` and save
4. After "saving" the e-mail account go to the domain and change the imap-port from 143 to 993 and check SSL.
5. The `incoming_port` in Email-account is still 143.
### Observed result
In the database you can see that the `incoming_port` in the e-mail account is still 143 (real domain and mail addresses hidden):
```
select
ea.email_id,
ea.domain,
ea.incoming_port,
ed.incoming_port,
ea.email_server,
ed.email_server
from
`tabEmail Account` ea,
`tabEmail Domain` ed
where ea.domain = ed.name
and ed.name = "example.com";
```
#### Before updating the IMAP port in the domain
```
+------------------+-------------+---------------+---------------+--------------+--------------+
| email_id | domain | incoming_port | incoming_port | email_server | email_server |
+------------------+-------------+---------------+---------------+--------------+--------------+
| [email protected] | example.com | 143 | 143 | example.com | example.com |
+------------------+-------------+---------------+---------------+--------------+--------------+
1 row in set (0.000 sec)
```
#### After updating the IMAP port in the domain
```
+------------------+-------------+---------------+---------------+--------------+--------------+
| email_id | domain | incoming_port | incoming_port | email_server | email_server |
+------------------+-------------+---------------+---------------+--------------+--------------+
| [email protected] | example.com | 143 | 993 | example.com | example.com |
+------------------+-------------+---------------+---------------+--------------+--------------+
1 row in set (0.001 sec)
```
Now it will always trigger an SSL-handshake-error if the scheduler tries to get access.
### Expected result
When the mail domain gets updated all necessary fields related to e-mail account should be updated including the `incoming_port`.
### Stacktrace / full error message
```
Traceback (most recent call last):
File "/home/erpnext/frappe-bench/apps/frappe/frappe/app.py", line 64, in application
response = frappe.api.handle()
File "/home/erpnext/frappe-bench/apps/frappe/frappe/api.py", line 59, in handle
return frappe.handler.handle()
File "/home/erpnext/frappe-bench/apps/frappe/frappe/handler.py", line 24, in handle
data = execute_cmd(cmd)
File "/home/erpnext/frappe-bench/apps/frappe/frappe/handler.py", line 63, in execute_cmd
return frappe.call(method, **frappe.form_dict)
File "/home/erpnext/frappe-bench/apps/frappe/frappe/__init__.py", line 1055, in call
return fn(*args, **newargs)
File "/home/erpnext/frappe-bench/apps/frappe/frappe/desk/form/save.py", line 21, in savedocs
doc.save()
File "/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py", line 273, in save
return self._save(*args, **kwargs)
File "/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py", line 309, in _save
self.run_before_save_methods()
File "/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py", line 896, in run_before_save_methods
self.run_method("validate")
File "/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py", line 797, in run_method
out = Document.hook(fn)(self, *args, **kwargs)
File "/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py", line 1073, in composer
return composed(self, method, *args, **kwargs)
File "/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py", line 1056, in runner
add_to_return_value(self, fn(self, *args, **kwargs))
File "/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py", line 791, in <lambda>
fn = lambda self, *args, **kwargs: getattr(self, method)(*args, **kwargs)
File "/home/erpnext/frappe-bench/apps/frappe/frappe/email/doctype/email_account/email_account.py", line 68, in validate
self.get_incoming_server()
File "/home/erpnext/frappe-bench/apps/frappe/frappe/email/doctype/email_account/email_account.py", line 168, in get_incoming_server
email_server.connect()
File "/home/erpnext/frappe-bench/apps/frappe/frappe/email/receive.py", line 43, in connect
return self.connect_imap()
File "/home/erpnext/frappe-bench/apps/frappe/frappe/email/receive.py", line 51, in connect_imap
self.imap = Timed_IMAP4_SSL(self.settings.host, self.settings.incoming_port, timeout=frappe.conf.get("pop_timeout"))
File "/home/erpnext/frappe-bench/apps/frappe/frappe/email/receive.py", line 564, in __init__
self._super.__init__(self, *args, **kwargs)
File "/usr/lib/python3.6/imaplib.py", line 1288, in __init__
IMAP4.__init__(self, host, port)
File "/usr/lib/python3.6/imaplib.py", line 198, in __init__
self.open(host, port)
File "/usr/lib/python3.6/imaplib.py", line 1301, in open
IMAP4.open(self, host, port)
File "/usr/lib/python3.6/imaplib.py", line 299, in open
self.sock = self._create_socket()
File "/usr/lib/python3.6/imaplib.py", line 1293, in _create_socket
server_hostname=self.host)
File "/usr/lib/python3.6/ssl.py", line 407, in wrap_socket
_context=self, _session=session)
File "/usr/lib/python3.6/ssl.py", line 817, in __init__
self.do_handshake()
File "/usr/lib/python3.6/ssl.py", line 1077, in do_handshake
self._sslobj.do_handshake()
File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:852)
```
## OS
- Linux Ubuntu 18.04
</issue>
<code>
[start of frappe/email/doctype/email_domain/email_domain.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and contributors
3 # For license information, please see license.txt
4
5 from __future__ import unicode_literals
6 import frappe
7 from frappe import _
8 from frappe.model.document import Document
9 from frappe.utils import validate_email_address ,cint, cstr
10 import imaplib,poplib,smtplib
11 from frappe.email.utils import get_port
12
13 class EmailDomain(Document):
14 def autoname(self):
15 if self.domain_name:
16 self.name = self.domain_name
17
18 def validate(self):
19 """Validate email id and check POP3/IMAP and SMTP connections is enabled."""
20 if self.email_id:
21 validate_email_address(self.email_id, True)
22
23 if frappe.local.flags.in_patch or frappe.local.flags.in_test:
24 return
25
26 if not frappe.local.flags.in_install and not frappe.local.flags.in_patch:
27 try:
28 if self.use_imap:
29 if self.use_ssl:
30 test = imaplib.IMAP4_SSL(self.email_server, port=get_port(self))
31 else:
32 test = imaplib.IMAP4(self.email_server, port=get_port(self))
33
34 else:
35 if self.use_ssl:
36 test = poplib.POP3_SSL(self.email_server, port=get_port(self))
37 else:
38 test = poplib.POP3(self.email_server, port=get_port(self))
39
40 except Exception:
41 frappe.throw(_("Incoming email account not correct"))
42 return None
43 finally:
44 try:
45 if self.use_imap:
46 test.logout()
47 else:
48 test.quit()
49 except Exception:
50 pass
51 try:
52 if self.use_tls and not self.smtp_port:
53 self.smtp_port = 587
54 sess = smtplib.SMTP(cstr(self.smtp_server or ""), cint(self.smtp_port) or None)
55 sess.quit()
56 except Exception:
57 frappe.throw(_("Outgoing email account not correct"))
58 return None
59 return
60
61 def on_update(self):
62 """update all email accounts using this domain"""
63 for email_account in frappe.get_all("Email Account",
64 filters={"domain": self.name}):
65
66 try:
67 email_account = frappe.get_doc("Email Account",
68 email_account.name)
69 email_account.set("email_server",self.email_server)
70 email_account.set("use_imap",self.use_imap)
71 email_account.set("use_ssl",self.use_ssl)
72 email_account.set("use_tls",self.use_tls)
73 email_account.set("attachment_limit",self.attachment_limit)
74 email_account.set("smtp_server",self.smtp_server)
75 email_account.set("smtp_port",self.smtp_port)
76 email_account.save()
77 except Exception as e:
78 frappe.msgprint(email_account.name)
79 frappe.throw(e)
80 return None
81
[end of frappe/email/doctype/email_domain/email_domain.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/frappe/email/doctype/email_domain/email_domain.py b/frappe/email/doctype/email_domain/email_domain.py
--- a/frappe/email/doctype/email_domain/email_domain.py
+++ b/frappe/email/doctype/email_domain/email_domain.py
@@ -73,6 +73,7 @@
email_account.set("attachment_limit",self.attachment_limit)
email_account.set("smtp_server",self.smtp_server)
email_account.set("smtp_port",self.smtp_port)
+ email_account.set("incoming_port", self.incoming_port)
email_account.save()
except Exception as e:
frappe.msgprint(email_account.name)
|
{"golden_diff": "diff --git a/frappe/email/doctype/email_domain/email_domain.py b/frappe/email/doctype/email_domain/email_domain.py\n--- a/frappe/email/doctype/email_domain/email_domain.py\n+++ b/frappe/email/doctype/email_domain/email_domain.py\n@@ -73,6 +73,7 @@\n \t\t\t\temail_account.set(\"attachment_limit\",self.attachment_limit)\n \t\t\t\temail_account.set(\"smtp_server\",self.smtp_server)\n \t\t\t\temail_account.set(\"smtp_port\",self.smtp_port)\n+\t\t\t\temail_account.set(\"incoming_port\", self.incoming_port)\n \t\t\t\temail_account.save()\n \t\t\texcept Exception as e:\n \t\t\t\tfrappe.msgprint(email_account.name)\n", "issue": "IMAP port settings are not updated from Email Domain to Email Account\n## Description of the issue\r\n\r\nWhen changing the IMAP port in an existing Email Domain, the Email Accounts using this Domain are not updated accordingly. This can lead to Frappe trying an IMAPS connection (which usually is 993) to the plain IMAP port 143, resulting in misleading error messages like `ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:852)`.\r\n\r\nWe could track down the root cause to the method `on_update` from the DocType \"Email Domain\": it simply misses the field `incoming_port` when copying data to all e-mail accounts that use this domain. This leads to the problem if the `incoming_port` is already set in the email account and gets updated/changed afterwards in the email domain.\r\n## Context information (for bug reports)\r\n\r\n```\r\nfrappe-bench$ bench --version\r\n5.0.0\r\n\r\nfrappe-bench$ bench version\r\nerpnext 12.11.2\r\nfrappe 12.9.1\r\n```\r\n## Steps to reproduce the issue\r\n\r\n1. To reproduce this small bug you need to create a \"Email Domain\" in Frappe and save it with imap-port 143 and no SSL.\r\n2. Create an e-mail account and link it with the domain from step 1 but without `Enable Incoming` and save.\r\n3. Try to `Enable Incoming` and save\r\n4. After \"saving\" the e-mail account go to the domain and change the imap-port from 143 to 993 and check SSL.\r\n5. 
The `incoming_port` in Email-account is still 143.\r\n\r\n### Observed result\r\nIn the database you can see that the `incoming_port` in the e-mail account is still 143 (real domain and mail addresses hidden):\r\n\r\n```\r\nselect\r\n ea.email_id,\r\n ea.domain,\r\n ea.incoming_port,\r\n ed.incoming_port,\r\n ea.email_server,\r\n ed.email_server\r\nfrom \r\n `tabEmail Account` ea,\r\n `tabEmail Domain` ed\r\nwhere ea.domain = ed.name\r\n and ed.name = \"example.com\";\r\n```\r\n\r\n#### Before updating the IMAP port in the domain\r\n```\r\n+------------------+-------------+---------------+---------------+--------------+--------------+\r\n| email_id | domain | incoming_port | incoming_port | email_server | email_server |\r\n+------------------+-------------+---------------+---------------+--------------+--------------+\r\n| [email protected] | example.com | 143 | 143 | example.com | example.com |\r\n+------------------+-------------+---------------+---------------+--------------+--------------+\r\n1 row in set (0.000 sec)\r\n```\r\n#### After updating the IMAP port in the domain\r\n```\r\n+------------------+-------------+---------------+---------------+--------------+--------------+\r\n| email_id | domain | incoming_port | incoming_port | email_server | email_server |\r\n+------------------+-------------+---------------+---------------+--------------+--------------+\r\n| [email protected] | example.com | 143 | 993 | example.com | example.com |\r\n+------------------+-------------+---------------+---------------+--------------+--------------+\r\n1 row in set (0.001 sec)\r\n```\r\nNow it will always trigger an SSL-handshake-error if the scheduler tries to get access.\r\n\r\n### Expected result\r\nWhen the mail domain gets updated all necessary fields related to e-mail account should be updated including the `incoming_port`.\r\n### Stacktrace / full error message\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/app.py\", line 64, in application\r\n response = frappe.api.handle()\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/api.py\", line 59, in handle\r\n return frappe.handler.handle()\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/handler.py\", line 24, in handle\r\n data = execute_cmd(cmd)\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/handler.py\", line 63, in execute_cmd\r\n return frappe.call(method, **frappe.form_dict)\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/__init__.py\", line 1055, in call\r\n return fn(*args, **newargs)\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/desk/form/save.py\", line 21, in savedocs\r\n doc.save()\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py\", line 273, in save\r\n return self._save(*args, **kwargs)\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py\", line 309, in _save\r\n self.run_before_save_methods()\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py\", line 896, in run_before_save_methods\r\n self.run_method(\"validate\")\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py\", line 797, in run_method\r\n out = Document.hook(fn)(self, *args, **kwargs)\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py\", line 1073, in composer\r\n return composed(self, method, *args, **kwargs)\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py\", line 1056, in runner\r\n add_to_return_value(self, fn(self, 
*args, **kwargs))\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/model/document.py\", line 791, in <lambda>\r\n fn = lambda self, *args, **kwargs: getattr(self, method)(*args, **kwargs)\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/email/doctype/email_account/email_account.py\", line 68, in validate\r\n self.get_incoming_server()\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/email/doctype/email_account/email_account.py\", line 168, in get_incoming_server\r\n email_server.connect()\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/email/receive.py\", line 43, in connect\r\n return self.connect_imap()\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/email/receive.py\", line 51, in connect_imap\r\n self.imap = Timed_IMAP4_SSL(self.settings.host, self.settings.incoming_port, timeout=frappe.conf.get(\"pop_timeout\"))\r\n File \"/home/erpnext/frappe-bench/apps/frappe/frappe/email/receive.py\", line 564, in __init__\r\n self._super.__init__(self, *args, **kwargs)\r\n File \"/usr/lib/python3.6/imaplib.py\", line 1288, in __init__\r\n IMAP4.__init__(self, host, port)\r\n File \"/usr/lib/python3.6/imaplib.py\", line 198, in __init__\r\n self.open(host, port)\r\n File \"/usr/lib/python3.6/imaplib.py\", line 1301, in open\r\n IMAP4.open(self, host, port)\r\n File \"/usr/lib/python3.6/imaplib.py\", line 299, in open\r\n self.sock = self._create_socket()\r\n File \"/usr/lib/python3.6/imaplib.py\", line 1293, in _create_socket\r\n server_hostname=self.host)\r\n File \"/usr/lib/python3.6/ssl.py\", line 407, in wrap_socket\r\n _context=self, _session=session)\r\n File \"/usr/lib/python3.6/ssl.py\", line 817, in __init__\r\n self.do_handshake()\r\n File \"/usr/lib/python3.6/ssl.py\", line 1077, in do_handshake\r\n self._sslobj.do_handshake()\r\n File \"/usr/lib/python3.6/ssl.py\", line 689, in do_handshake\r\n self._sslobj.do_handshake()\r\nssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:852)\r\n```\r\n\r\n## OS\r\n- Linux Ubuntu 18.04\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. 
and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nimport frappe\nfrom frappe import _\nfrom frappe.model.document import Document\nfrom frappe.utils import validate_email_address ,cint, cstr\nimport imaplib,poplib,smtplib\nfrom frappe.email.utils import get_port\n\nclass EmailDomain(Document):\n\tdef autoname(self):\n\t\tif self.domain_name:\n\t\t\tself.name = self.domain_name\n\n\tdef validate(self):\n\t\t\"\"\"Validate email id and check POP3/IMAP and SMTP connections is enabled.\"\"\"\n\t\tif self.email_id:\n\t\t\tvalidate_email_address(self.email_id, True)\n\n\t\tif frappe.local.flags.in_patch or frappe.local.flags.in_test:\n\t\t\treturn\n\n\t\tif not frappe.local.flags.in_install and not frappe.local.flags.in_patch:\n\t\t\ttry:\n\t\t\t\tif self.use_imap:\n\t\t\t\t\tif self.use_ssl:\n\t\t\t\t\t\ttest = imaplib.IMAP4_SSL(self.email_server, port=get_port(self))\n\t\t\t\t\telse:\n\t\t\t\t\t\ttest = imaplib.IMAP4(self.email_server, port=get_port(self))\n\n\t\t\t\telse:\n\t\t\t\t\tif self.use_ssl:\n\t\t\t\t\t\ttest = poplib.POP3_SSL(self.email_server, port=get_port(self))\n\t\t\t\t\telse:\n\t\t\t\t\t\ttest = poplib.POP3(self.email_server, port=get_port(self))\n\n\t\t\texcept Exception:\n\t\t\t\tfrappe.throw(_(\"Incoming email account not correct\"))\n\t\t\t\treturn None\n\t\t\tfinally:\n\t\t\t\ttry:\n\t\t\t\t\tif self.use_imap:\n\t\t\t\t\t\ttest.logout()\n\t\t\t\t\telse:\n\t\t\t\t\t\ttest.quit()\n\t\t\t\texcept Exception:\n\t\t\t\t\tpass\n\t\t\ttry:\n\t\t\t\tif self.use_tls and not self.smtp_port:\n\t\t\t\t\tself.smtp_port = 587\n\t\t\t\tsess = smtplib.SMTP(cstr(self.smtp_server or \"\"), cint(self.smtp_port) or None)\n\t\t\t\tsess.quit()\n\t\t\texcept Exception:\n\t\t\t\tfrappe.throw(_(\"Outgoing email account not correct\"))\n\t\t\t\treturn None\n\t\treturn\n\n\tdef on_update(self):\n\t\t\"\"\"update all email accounts using this domain\"\"\"\n\t\tfor email_account in frappe.get_all(\"Email Account\",\n\t\tfilters={\"domain\": self.name}):\n\n\t\t\ttry:\n\t\t\t\temail_account = frappe.get_doc(\"Email Account\",\n\t\t\t\t\temail_account.name)\n\t\t\t\temail_account.set(\"email_server\",self.email_server)\n\t\t\t\temail_account.set(\"use_imap\",self.use_imap)\n\t\t\t\temail_account.set(\"use_ssl\",self.use_ssl)\n\t\t\t\temail_account.set(\"use_tls\",self.use_tls)\n\t\t\t\temail_account.set(\"attachment_limit\",self.attachment_limit)\n\t\t\t\temail_account.set(\"smtp_server\",self.smtp_server)\n\t\t\t\temail_account.set(\"smtp_port\",self.smtp_port)\n\t\t\t\temail_account.save()\n\t\t\texcept Exception as e:\n\t\t\t\tfrappe.msgprint(email_account.name)\n\t\t\t\tfrappe.throw(e)\n\t\t\t\treturn None\n", "path": "frappe/email/doctype/email_domain/email_domain.py"}]}
| 3,170 | 135 |
gh_patches_debug_8873
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-4132
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make this flexible, e.g. allow using S3, or changing the namespace or bucket names.
I suggest making it configurable; I can submit such a PR if we agree it's needed.
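As a rough illustration of what "configurable" could mean, the hard-coded values might instead be read from the environment with sensible defaults. This is only a sketch — the variable names below are made up, not an existing KFP API:
```python
import os

# Illustrative only: pull artifact-store settings from the environment
# instead of hard-coding them in _op_to_template.py.
ARTIFACT_ENDPOINT = os.environ.get('ARTIFACT_STORE_ENDPOINT', 'minio-service.kubeflow:9000')
ARTIFACT_BUCKET = os.environ.get('ARTIFACT_STORE_BUCKET', 'mlpipeline')
```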
flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); this looks like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
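For illustration, the host-resolution logic would then look roughly like the following. This is a standalone sketch, not the actual `_client.py` code — only the env-variable fallback is taken from the suggestion above, and the helper name is made up:
```python
import os

IN_CLUSTER_DNS_NAME = 'ml-pipeline.kubeflow.svc.cluster.local:8888'

def resolve_pipeline_host(host=None):
    """Pick the API host: explicit argument first, then env var, then the in-cluster default."""
    return host or os.environ.get('ML_PIPELINE_DNS_NAME', IN_CLUSTER_DNS_NAME)
```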
</issue>
<code>
[start of sdk/python/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17 from setuptools import setup
18
19 NAME = 'kfp'
20 #VERSION = .... Change the version in kfp/__init__.py
21
22 REQUIRES = [
23 'PyYAML',
24 'google-cloud-storage>=1.13.0',
25 'kubernetes>=8.0.0, <12.0.0',
26 'google-auth>=1.6.1',
27 'requests_toolbelt>=0.8.0',
28 'cloudpickle',
29 # Update the upper version whenever a new major version of the
30 # kfp-server-api package is released.
31 # Update the lower version when kfp sdk depends on new apis/fields in
32 # kfp-server-api.
33 # Note, please also update ./requirements.in
34 'kfp-server-api>=0.2.5, <2.0.0',
35 'jsonschema >= 3.0.1',
36 'tabulate',
37 'click',
38 'Deprecated',
39 'strip-hints',
40 ]
41
42
43 def find_version(*file_path_parts):
44 here = os.path.abspath(os.path.dirname(__file__))
45 with open(os.path.join(here, *file_path_parts), 'r') as fp:
46 version_file_text = fp.read()
47
48 version_match = re.search(
49 r"^__version__ = ['\"]([^'\"]*)['\"]",
50 version_file_text,
51 re.M,
52 )
53 if version_match:
54 return version_match.group(1)
55
56 raise RuntimeError('Unable to find version string.')
57
58
59 setup(
60 name=NAME,
61 version=find_version('kfp', '__init__.py'),
62 description='KubeFlow Pipelines SDK',
63 author='google',
64 install_requires=REQUIRES,
65 packages=[
66 'kfp',
67 'kfp.cli',
68 'kfp.cli.diagnose_me',
69 'kfp.compiler',
70 'kfp.components',
71 'kfp.components.structures',
72 'kfp.components.structures.kubernetes',
73 'kfp.containers',
74 'kfp.dsl',
75 'kfp.dsl.extensions',
76 'kfp.notebook',
77 ],
78 classifiers=[
79 'Intended Audience :: Developers',
80 'Intended Audience :: Education',
81 'Intended Audience :: Science/Research',
82 'License :: OSI Approved :: Apache Software License',
83 'Programming Language :: Python :: 3',
84 'Programming Language :: Python :: 3.5',
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Topic :: Scientific/Engineering',
88 'Topic :: Scientific/Engineering :: Artificial Intelligence',
89 'Topic :: Software Development',
90 'Topic :: Software Development :: Libraries',
91 'Topic :: Software Development :: Libraries :: Python Modules',
92 ],
93 python_requires='>=3.5.3',
94 include_package_data=True,
95 entry_points={
96 'console_scripts': [
97 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'
98 ]
99 })
100
[end of sdk/python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -39,6 +39,10 @@
'strip-hints',
]
+TESTS_REQUIRE = [
+ 'mock',
+]
+
def find_version(*file_path_parts):
here = os.path.abspath(os.path.dirname(__file__))
@@ -62,6 +66,7 @@
description='KubeFlow Pipelines SDK',
author='google',
install_requires=REQUIRES,
+ tests_require=TESTS_REQUIRE,
packages=[
'kfp',
'kfp.cli',
|
{"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -39,6 +39,10 @@\n 'strip-hints',\n ]\n \n+TESTS_REQUIRE = [\n+ 'mock',\n+]\n+\n \n def find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n@@ -62,6 +66,7 @@\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n+ tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.cli',\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <12.0.0',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=0.2.5, <2.0.0',\n 'jsonschema >= 3.0.1',\n 'tabulate',\n 'click',\n 'Deprecated',\n 'strip-hints',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}]}
| 1,850 | 141 |
gh_patches_debug_2772
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-4785
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RecycleBoxLayout NameError
``` python
from random import sample
from string import ascii_lowercase
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.boxlayout import BoxLayout
kv = """
<Row@BoxLayout>:
value: ''
size_hint: None, None
size: self.minimum_size
Label:
text: root.value
size_hint: None, None
size: self.texture_size
<Test>:
rv: rv
orientation: 'vertical'
Button:
text: 'Populate list'
on_press: root.populate()
RecycleView:
id: rv
viewclass: 'Row'
RecycleBoxLayout:
default_size: None, None
size_hint_y: None
height: self.minimum_height
orientation: 'vertical'
"""
Builder.load_string(kv)
class Test(BoxLayout):
def populate(self):
self.rv.data = [{'value': ''.join(sample(ascii_lowercase, 6))}
for x in range(50)]
class TestApp(App):
def build(self):
return Test()
if __name__ == '__main__':
TestApp().run()
```
``` python
Traceback (most recent call last):
File "E:\dev\prs\kivy\examples\widgets\recycleview\basic_data.py", line 49, in <module>
TestApp().run()
File "E:\dev\prs\kivy\kivy\app.py", line 828, in run
runTouchApp()
File "E:\dev\prs\kivy\kivy\base.py", line 487, in runTouchApp
EventLoop.window.mainloop()
File "E:\dev\prs\kivy\kivy\core\window\window_sdl2.py", line 633, in mainloop
self._mainloop()
File "E:\dev\prs\kivy\kivy\core\window\window_sdl2.py", line 388, in _mainloop
EventLoop.idle()
File "E:\dev\prs\kivy\kivy\base.py", line 336, in idle
Clock.tick_draw()
File "E:\dev\prs\kivy\kivy\clock.py", line 528, in tick_draw
self._process_events_before_frame()
File "E:\dev\prs\kivy\kivy\clock.py", line 678, in _process_events_before_frame
event.tick(self._last_tick, remove)
File "E:\dev\prs\kivy\kivy\clock.py", line 412, in tick
ret = callback(self._dt)
File "E:\dev\prs\kivy\kivy\uix\recycleview\__init__.py", line 109, in refresh_views
lm.compute_layout(data, f)
File "E:\dev\prs\kivy\kivy\uix\recycleboxlayout.py", line 88, in compute_layout
changed and not self._update_sizes(changed)):
File "E:\dev\prs\kivy\kivy\uix\recycleboxlayout.py", line 81, in _update_sizes
return relayout
NameError: name 'relayout' is not defined
```
</issue>
<code>
[start of kivy/uix/recycleboxlayout.py]
1 """
2 RecycleBoxLayout
3 ================
4
5 .. versionadded:: 1.9.2
6
7 .. warning::
8 This module is highly experimental, its API may change in the future and
9 the documentation is not complete at this time.
10
11 The RecycleBoxLayout is designed to provide a
12 :class:`~kivy.uix.boxlayout.BoxLayout` type layout when used with the
13 :class:`~kivy.uix.recycleview.RecycleView` widget. Please refer to the
14 :mod:`~kivy.uix.recycleview` module documentation for more information.
15
16 """
17
18 from kivy.uix.recyclelayout import RecycleLayout
19 from kivy.uix.boxlayout import BoxLayout
20
21 __all__ = ('RecycleBoxLayout', )
22
23
24 class RecycleBoxLayout(RecycleLayout, BoxLayout):
25
26 _rv_positions = None
27
28 def __init__(self, **kwargs):
29 super(RecycleBoxLayout, self).__init__(**kwargs)
30 self.funbind('children', self._trigger_layout)
31
32 def _update_sizes(self, changed):
33 horizontal = self.orientation == 'horizontal'
34 padding_left, padding_top, padding_right, padding_bottom = self.padding
35 padding_x = padding_left + padding_right
36 padding_y = padding_top + padding_bottom
37 selfw = self.width
38 selfh = self.height
39 layout_w = max(0, selfw - padding_x)
40 layout_h = max(0, selfh - padding_y)
41 cx = self.x + padding_left
42 cy = self.y + padding_bottom
43 view_opts = self.view_opts
44 remove_view = self.remove_view
45
46 for (index, widget, (w, h), (wn, hn), (shw, shh), (shnw, shnh),
47 (shw_min, shh_min), (shwn_min, shhn_min), (shw_max, shh_max),
48 (shwn_max, shhn_max), ph, phn) in changed:
49 if (horizontal and
50 (shw != shnw or w != wn or shw_min != shwn_min or
51 shw_max != shwn_max) or
52 not horizontal and
53 (shh != shnh or h != hn or shh_min != shhn_min or
54 shh_max != shhn_max)):
55 return True
56
57 remove_view(widget, index)
58 opt = view_opts[index]
59 if horizontal:
60 wo, ho = opt['size']
61 if shnh is not None:
62 _, h = opt['size'] = [wo, shnh * layout_h]
63 else:
64 h = ho
65
66 xo, yo = opt['pos']
67 for key, value in phn.items():
68 posy = value * layout_h
69 if key == 'y':
70 yo = posy + cy
71 elif key == 'top':
72 yo = posy - h
73 elif key == 'center_y':
74 yo = posy - (h / 2.)
75 opt['pos'] = [xo, yo]
76 else:
77 wo, ho = opt['size']
78 if shnw is not None:
79 w, _ = opt['size'] = [shnw * layout_w, ho]
80 else:
81 w = wo
82
83 xo, yo = opt['pos']
84 for key, value in phn.items():
85 posx = value * layout_w
86 if key == 'x':
87 xo = posx + cx
88 elif key == 'right':
89 xo = posx - w
90 elif key == 'center_x':
91 xo = posx - (w / 2.)
92 opt['pos'] = [xo, yo]
93
94 return relayout
95
96 def compute_layout(self, data, flags):
97 super(RecycleBoxLayout, self).compute_layout(data, flags)
98
99 changed = self._changed_views
100 if (changed is None or
101 changed and not self._update_sizes(changed)):
102 return
103
104 self.clear_layout()
105 self._rv_positions = None
106 if not data:
107 l, t, r, b = self.padding
108 self.minimum_size = l + r, t + b
109 return
110
111 view_opts = self.view_opts
112 n = len(view_opts)
113 for i, x, y, w, h in self._iterate_layout(
114 [(opt['size'], opt['size_hint'], opt['pos_hint'],
115 opt['size_hint_min'], opt['size_hint_max']) for
116 opt in reversed(view_opts)]):
117 opt = view_opts[n - i - 1]
118 shw, shh = opt['size_hint']
119 opt['pos'] = x, y
120 wo, ho = opt['size']
121 # layout won't/shouldn't change previous size if size_hint is None
122 # which is what w/h being None means.
123 opt['size'] = [(wo if shw is None else w),
124 (ho if shh is None else h)]
125
126 spacing = self.spacing
127 pos = self._rv_positions = [None, ] * len(data)
128
129 if self.orientation == 'horizontal':
130 pos[0] = self.x
131 last = pos[0] + self.padding[0] + view_opts[0]['size'][0] + \
132 spacing / 2.
133 for i, val in enumerate(view_opts[1:], 1):
134 pos[i] = last
135 last += val['size'][0] + spacing
136 else:
137 last = pos[-1] = \
138 self.y + self.height - self.padding[1] - \
139 view_opts[0]['size'][1] - spacing / 2.
140 n = len(view_opts)
141 for i, val in enumerate(view_opts[1:], 1):
142 last -= spacing + val['size'][1]
143 pos[n - 1 - i] = last
144
145 def get_view_index_at(self, pos):
146 calc_pos = self._rv_positions
147 if not calc_pos:
148 return 0
149
150 x, y = pos
151
152 if self.orientation == 'horizontal':
153 if x >= calc_pos[-1] or len(calc_pos) == 1:
154 return len(calc_pos) - 1
155
156 ix = 0
157 for val in calc_pos[1:]:
158 if x < val:
159 return ix
160 ix += 1
161 else:
162 if y >= calc_pos[-1] or len(calc_pos) == 1:
163 return 0
164
165 iy = 0
166 for val in calc_pos[1:]:
167 if y < val:
168 return len(calc_pos) - iy - 1
169 iy += 1
170
171 assert False
172
173 def compute_visible_views(self, data, viewport):
174 if self._rv_positions is None or not data:
175 return []
176
177 x, y, w, h = viewport
178 at_idx = self.get_view_index_at
179 if self.orientation == 'horizontal':
180 a, b = at_idx((x, y)), at_idx((x + w, y))
181 else:
182 a, b = at_idx((x, y + h)), at_idx((x, y))
183 return list(range(a, b + 1))
184
[end of kivy/uix/recycleboxlayout.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kivy/uix/recycleboxlayout.py b/kivy/uix/recycleboxlayout.py
--- a/kivy/uix/recycleboxlayout.py
+++ b/kivy/uix/recycleboxlayout.py
@@ -91,7 +91,7 @@
xo = posx - (w / 2.)
opt['pos'] = [xo, yo]
- return relayout
+ return False
def compute_layout(self, data, flags):
super(RecycleBoxLayout, self).compute_layout(data, flags)
|
{"golden_diff": "diff --git a/kivy/uix/recycleboxlayout.py b/kivy/uix/recycleboxlayout.py\n--- a/kivy/uix/recycleboxlayout.py\n+++ b/kivy/uix/recycleboxlayout.py\n@@ -91,7 +91,7 @@\n xo = posx - (w / 2.)\n opt['pos'] = [xo, yo]\n \n- return relayout\n+ return False\n \n def compute_layout(self, data, flags):\n super(RecycleBoxLayout, self).compute_layout(data, flags)\n", "issue": "RecycleBoxLayout NameError\n``` python\nfrom random import sample\nfrom string import ascii_lowercase\n\nfrom kivy.app import App\nfrom kivy.lang import Builder\nfrom kivy.uix.boxlayout import BoxLayout\n\n\nkv = \"\"\"\n<Row@BoxLayout>:\n value: ''\n size_hint: None, None\n size: self.minimum_size\n Label:\n text: root.value\n size_hint: None, None\n size: self.texture_size\n\n<Test>:\n rv: rv\n orientation: 'vertical'\n Button:\n text: 'Populate list'\n on_press: root.populate()\n RecycleView:\n id: rv\n viewclass: 'Row'\n RecycleBoxLayout:\n default_size: None, None\n size_hint_y: None\n height: self.minimum_height\n orientation: 'vertical'\n\"\"\"\n\nBuilder.load_string(kv)\n\n\nclass Test(BoxLayout):\n\n def populate(self):\n self.rv.data = [{'value': ''.join(sample(ascii_lowercase, 6))}\n for x in range(50)]\n\n\nclass TestApp(App):\n def build(self):\n return Test()\n\nif __name__ == '__main__':\n TestApp().run()\n```\n\n``` python\n Traceback (most recent call last):\n File \"E:\\dev\\prs\\kivy\\examples\\widgets\\recycleview\\basic_data.py\", line 49, in <module>\n TestApp().run()\n File \"E:\\dev\\prs\\kivy\\kivy\\app.py\", line 828, in run\n runTouchApp()\n File \"E:\\dev\\prs\\kivy\\kivy\\base.py\", line 487, in runTouchApp\n EventLoop.window.mainloop()\n File \"E:\\dev\\prs\\kivy\\kivy\\core\\window\\window_sdl2.py\", line 633, in mainloop\n self._mainloop()\n File \"E:\\dev\\prs\\kivy\\kivy\\core\\window\\window_sdl2.py\", line 388, in _mainloop\n EventLoop.idle()\n File \"E:\\dev\\prs\\kivy\\kivy\\base.py\", line 336, in idle\n Clock.tick_draw()\n File \"E:\\dev\\prs\\kivy\\kivy\\clock.py\", line 528, in tick_draw\n self._process_events_before_frame()\n File \"E:\\dev\\prs\\kivy\\kivy\\clock.py\", line 678, in _process_events_before_frame\n event.tick(self._last_tick, remove)\n File \"E:\\dev\\prs\\kivy\\kivy\\clock.py\", line 412, in tick\n ret = callback(self._dt)\n File \"E:\\dev\\prs\\kivy\\kivy\\uix\\recycleview\\__init__.py\", line 109, in refresh_views\n lm.compute_layout(data, f)\n File \"E:\\dev\\prs\\kivy\\kivy\\uix\\recycleboxlayout.py\", line 88, in compute_layout\n changed and not self._update_sizes(changed)):\n File \"E:\\dev\\prs\\kivy\\kivy\\uix\\recycleboxlayout.py\", line 81, in _update_sizes\n return relayout\n NameError: name 'relayout' is not defined\n```\n\n", "before_files": [{"content": "\"\"\"\nRecycleBoxLayout\n================\n\n.. versionadded:: 1.9.2\n\n.. warning::\n This module is highly experimental, its API may change in the future and\n the documentation is not complete at this time.\n\nThe RecycleBoxLayout is designed to provide a\n:class:`~kivy.uix.boxlayout.BoxLayout` type layout when used with the\n:class:`~kivy.uix.recycleview.RecycleView` widget. 
Please refer to the\n:mod:`~kivy.uix.recycleview` module documentation for more information.\n\n\"\"\"\n\nfrom kivy.uix.recyclelayout import RecycleLayout\nfrom kivy.uix.boxlayout import BoxLayout\n\n__all__ = ('RecycleBoxLayout', )\n\n\nclass RecycleBoxLayout(RecycleLayout, BoxLayout):\n\n _rv_positions = None\n\n def __init__(self, **kwargs):\n super(RecycleBoxLayout, self).__init__(**kwargs)\n self.funbind('children', self._trigger_layout)\n\n def _update_sizes(self, changed):\n horizontal = self.orientation == 'horizontal'\n padding_left, padding_top, padding_right, padding_bottom = self.padding\n padding_x = padding_left + padding_right\n padding_y = padding_top + padding_bottom\n selfw = self.width\n selfh = self.height\n layout_w = max(0, selfw - padding_x)\n layout_h = max(0, selfh - padding_y)\n cx = self.x + padding_left\n cy = self.y + padding_bottom\n view_opts = self.view_opts\n remove_view = self.remove_view\n\n for (index, widget, (w, h), (wn, hn), (shw, shh), (shnw, shnh),\n (shw_min, shh_min), (shwn_min, shhn_min), (shw_max, shh_max),\n (shwn_max, shhn_max), ph, phn) in changed:\n if (horizontal and\n (shw != shnw or w != wn or shw_min != shwn_min or\n shw_max != shwn_max) or\n not horizontal and\n (shh != shnh or h != hn or shh_min != shhn_min or\n shh_max != shhn_max)):\n return True\n\n remove_view(widget, index)\n opt = view_opts[index]\n if horizontal:\n wo, ho = opt['size']\n if shnh is not None:\n _, h = opt['size'] = [wo, shnh * layout_h]\n else:\n h = ho\n\n xo, yo = opt['pos']\n for key, value in phn.items():\n posy = value * layout_h\n if key == 'y':\n yo = posy + cy\n elif key == 'top':\n yo = posy - h\n elif key == 'center_y':\n yo = posy - (h / 2.)\n opt['pos'] = [xo, yo]\n else:\n wo, ho = opt['size']\n if shnw is not None:\n w, _ = opt['size'] = [shnw * layout_w, ho]\n else:\n w = wo\n\n xo, yo = opt['pos']\n for key, value in phn.items():\n posx = value * layout_w\n if key == 'x':\n xo = posx + cx\n elif key == 'right':\n xo = posx - w\n elif key == 'center_x':\n xo = posx - (w / 2.)\n opt['pos'] = [xo, yo]\n\n return relayout\n\n def compute_layout(self, data, flags):\n super(RecycleBoxLayout, self).compute_layout(data, flags)\n\n changed = self._changed_views\n if (changed is None or\n changed and not self._update_sizes(changed)):\n return\n\n self.clear_layout()\n self._rv_positions = None\n if not data:\n l, t, r, b = self.padding\n self.minimum_size = l + r, t + b\n return\n\n view_opts = self.view_opts\n n = len(view_opts)\n for i, x, y, w, h in self._iterate_layout(\n [(opt['size'], opt['size_hint'], opt['pos_hint'],\n opt['size_hint_min'], opt['size_hint_max']) for\n opt in reversed(view_opts)]):\n opt = view_opts[n - i - 1]\n shw, shh = opt['size_hint']\n opt['pos'] = x, y\n wo, ho = opt['size']\n # layout won't/shouldn't change previous size if size_hint is None\n # which is what w/h being None means.\n opt['size'] = [(wo if shw is None else w),\n (ho if shh is None else h)]\n\n spacing = self.spacing\n pos = self._rv_positions = [None, ] * len(data)\n\n if self.orientation == 'horizontal':\n pos[0] = self.x\n last = pos[0] + self.padding[0] + view_opts[0]['size'][0] + \\\n spacing / 2.\n for i, val in enumerate(view_opts[1:], 1):\n pos[i] = last\n last += val['size'][0] + spacing\n else:\n last = pos[-1] = \\\n self.y + self.height - self.padding[1] - \\\n view_opts[0]['size'][1] - spacing / 2.\n n = len(view_opts)\n for i, val in enumerate(view_opts[1:], 1):\n last -= spacing + val['size'][1]\n pos[n - 1 - i] = last\n\n def 
get_view_index_at(self, pos):\n calc_pos = self._rv_positions\n if not calc_pos:\n return 0\n\n x, y = pos\n\n if self.orientation == 'horizontal':\n if x >= calc_pos[-1] or len(calc_pos) == 1:\n return len(calc_pos) - 1\n\n ix = 0\n for val in calc_pos[1:]:\n if x < val:\n return ix\n ix += 1\n else:\n if y >= calc_pos[-1] or len(calc_pos) == 1:\n return 0\n\n iy = 0\n for val in calc_pos[1:]:\n if y < val:\n return len(calc_pos) - iy - 1\n iy += 1\n\n assert False\n\n def compute_visible_views(self, data, viewport):\n if self._rv_positions is None or not data:\n return []\n\n x, y, w, h = viewport\n at_idx = self.get_view_index_at\n if self.orientation == 'horizontal':\n a, b = at_idx((x, y)), at_idx((x + w, y))\n else:\n a, b = at_idx((x, y + h)), at_idx((x, y))\n return list(range(a, b + 1))\n", "path": "kivy/uix/recycleboxlayout.py"}]}
| 3,271 | 120 |
gh_patches_debug_22632
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-1448
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
add PYTHONSTARTUP support to pshell
Currently pshell does not source any `.pystartup` file pointed to by the `PYTHONSTARTUP` environment variable. It'd be nice to support this.
https://docs.python.org/2/tutorial/interpreter.html#the-interactive-startup-file
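In rough terms, the behaviour would be something like the sketch below, run just before the interactive shell starts. This is illustrative only — the function name is invented and `env` stands for whatever namespace dict pshell hands to the shell:
```python
import os

def source_pystartup(env):
    """Execute the file named by PYTHONSTARTUP (if any) into the shell namespace."""
    pystartup = os.environ.get('PYTHONSTARTUP')
    if pystartup and os.path.isfile(pystartup):
        with open(pystartup, 'rb') as fp:
            exec(fp.read().decode('utf-8'), env)
        env.pop('__builtins__', None)  # keep the shell's variable listing clean
```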
</issue>
<code>
[start of pyramid/scripts/pshell.py]
1 from code import interact
2 import optparse
3 import sys
4 import textwrap
5
6 from pyramid.compat import configparser
7 from pyramid.util import DottedNameResolver
8 from pyramid.paster import bootstrap
9
10 from pyramid.paster import setup_logging
11
12 from pyramid.scripts.common import parse_vars
13
14 def main(argv=sys.argv, quiet=False):
15 command = PShellCommand(argv, quiet)
16 return command.run()
17
18 class PShellCommand(object):
19 usage = '%prog config_uri'
20 description = """\
21 Open an interactive shell with a Pyramid app loaded. This command
22 accepts one positional argument named "config_uri" which specifies the
23 PasteDeploy config file to use for the interactive shell. The format is
24 "inifile#name". If the name is left off, the Pyramid default application
25 will be assumed. Example: "pshell myapp.ini#main"
26
27 If you do not point the loader directly at the section of the ini file
28 containing your Pyramid application, the command will attempt to
29 find the app for you. If you are loading a pipeline that contains more
30 than one Pyramid application within it, the loader will use the
31 last one.
32 """
33 bootstrap = (bootstrap,) # for testing
34
35 parser = optparse.OptionParser(
36 usage,
37 description=textwrap.dedent(description)
38 )
39 parser.add_option('-p', '--python-shell',
40 action='store', type='string', dest='python_shell',
41 default='', help='ipython | bpython | python')
42 parser.add_option('--setup',
43 dest='setup',
44 help=("A callable that will be passed the environment "
45 "before it is made available to the shell. This "
46 "option will override the 'setup' key in the "
47 "[pshell] ini section."))
48
49 ConfigParser = configparser.ConfigParser # testing
50
51 loaded_objects = {}
52 object_help = {}
53 setup = None
54
55 def __init__(self, argv, quiet=False):
56 self.quiet = quiet
57 self.options, self.args = self.parser.parse_args(argv[1:])
58
59 def pshell_file_config(self, filename):
60 config = self.ConfigParser()
61 config.read(filename)
62 try:
63 items = config.items('pshell')
64 except configparser.NoSectionError:
65 return
66
67 resolver = DottedNameResolver(None)
68 self.loaded_objects = {}
69 self.object_help = {}
70 self.setup = None
71 for k, v in items:
72 if k == 'setup':
73 self.setup = v
74 else:
75 self.loaded_objects[k] = resolver.maybe_resolve(v)
76 self.object_help[k] = v
77
78 def out(self, msg): # pragma: no cover
79 if not self.quiet:
80 print(msg)
81
82 def run(self, shell=None):
83 if not self.args:
84 self.out('Requires a config file argument')
85 return 2
86 config_uri = self.args[0]
87 config_file = config_uri.split('#', 1)[0]
88 setup_logging(config_file)
89 self.pshell_file_config(config_file)
90
91 # bootstrap the environ
92 env = self.bootstrap[0](config_uri, options=parse_vars(self.args[1:]))
93
94 # remove the closer from the env
95 closer = env.pop('closer')
96
97 # setup help text for default environment
98 env_help = dict(env)
99 env_help['app'] = 'The WSGI application.'
100 env_help['root'] = 'Root of the default resource tree.'
101 env_help['registry'] = 'Active Pyramid registry.'
102 env_help['request'] = 'Active request object.'
103 env_help['root_factory'] = (
104 'Default root factory used to create `root`.')
105
106 # override use_script with command-line options
107 if self.options.setup:
108 self.setup = self.options.setup
109
110 if self.setup:
111 # store the env before muddling it with the script
112 orig_env = env.copy()
113
114 # call the setup callable
115 resolver = DottedNameResolver(None)
116 setup = resolver.maybe_resolve(self.setup)
117 setup(env)
118
119 # remove any objects from default help that were overidden
120 for k, v in env.items():
121 if k not in orig_env or env[k] != orig_env[k]:
122 env_help[k] = v
123
124 # load the pshell section of the ini file
125 env.update(self.loaded_objects)
126
127 # eliminate duplicates from env, allowing custom vars to override
128 for k in self.loaded_objects:
129 if k in env_help:
130 del env_help[k]
131
132 # generate help text
133 help = ''
134 if env_help:
135 help += 'Environment:'
136 for var in sorted(env_help.keys()):
137 help += '\n %-12s %s' % (var, env_help[var])
138
139 if self.object_help:
140 help += '\n\nCustom Variables:'
141 for var in sorted(self.object_help.keys()):
142 help += '\n %-12s %s' % (var, self.object_help[var])
143
144 if shell is None:
145 shell = self.make_shell()
146
147 try:
148 shell(env, help)
149 finally:
150 closer()
151
152 def make_shell(self):
153 shell = None
154 user_shell = self.options.python_shell.lower()
155 if not user_shell:
156 shell = self.make_ipython_shell()
157 if shell is None:
158 shell = self.make_bpython_shell()
159
160 elif user_shell == 'ipython':
161 shell = self.make_ipython_shell()
162
163 elif user_shell == 'bpython':
164 shell = self.make_bpython_shell()
165
166 if shell is None:
167 shell = self.make_default_shell()
168
169 return shell
170
171 def make_default_shell(self, interact=interact):
172 def shell(env, help):
173 cprt = 'Type "help" for more information.'
174 banner = "Python %s on %s\n%s" % (sys.version, sys.platform, cprt)
175 banner += '\n\n' + help + '\n'
176 interact(banner, local=env)
177 return shell
178
179 def make_bpython_shell(self, BPShell=None):
180 if BPShell is None: # pragma: no cover
181 try:
182 from bpython import embed
183 BPShell = embed
184 except ImportError:
185 return None
186 def shell(env, help):
187 BPShell(locals_=env, banner=help + '\n')
188 return shell
189
190 def make_ipython_shell(self):
191 shell = self.make_ipython_v1_1_shell()
192 if shell is None:
193 shell = self.make_ipython_v0_11_shell()
194 if shell is None:
195 shell = self.make_ipython_v0_10_shell()
196 return shell
197
198 def make_ipython_v1_1_shell(self, IPShellFactory=None):
199 if IPShellFactory is None: # pragma: no cover
200 try:
201 from IPython.terminal.embed import (
202 InteractiveShellEmbed)
203 IPShellFactory = InteractiveShellEmbed
204 except ImportError:
205 return None
206 def shell(env, help):
207 IPShell = IPShellFactory(banner2=help + '\n', user_ns=env)
208 IPShell()
209 return shell
210
211 def make_ipython_v0_11_shell(self, IPShellFactory=None):
212 if IPShellFactory is None: # pragma: no cover
213 try:
214 from IPython.frontend.terminal.embed import (
215 InteractiveShellEmbed)
216 IPShellFactory = InteractiveShellEmbed
217 except ImportError:
218 return None
219 def shell(env, help):
220 IPShell = IPShellFactory(banner2=help + '\n', user_ns=env)
221 IPShell()
222 return shell
223
224 def make_ipython_v0_10_shell(self, IPShellFactory=None):
225 if IPShellFactory is None: # pragma: no cover
226 try:
227 from IPython.Shell import IPShellEmbed
228 IPShellFactory = IPShellEmbed
229 except ImportError:
230 return None
231 def shell(env, help):
232 IPShell = IPShellFactory(argv=[], user_ns=env)
233 IPShell.set_banner(IPShell.IP.BANNER + '\n' + help + '\n')
234 IPShell()
235 return shell
236
237 if __name__ == '__main__': # pragma: no cover
238 sys.exit(main() or 0)
239
[end of pyramid/scripts/pshell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyramid/scripts/pshell.py b/pyramid/scripts/pshell.py
--- a/pyramid/scripts/pshell.py
+++ b/pyramid/scripts/pshell.py
@@ -1,9 +1,11 @@
from code import interact
import optparse
+import os
import sys
import textwrap
from pyramid.compat import configparser
+from pyramid.compat import exec_
from pyramid.util import DottedNameResolver
from pyramid.paster import bootstrap
@@ -51,6 +53,7 @@
loaded_objects = {}
object_help = {}
setup = None
+ pystartup = os.environ.get('PYTHONSTARTUP')
def __init__(self, argv, quiet=False):
self.quiet = quiet
@@ -144,6 +147,12 @@
if shell is None:
shell = self.make_shell()
+ if self.pystartup and os.path.isfile(self.pystartup):
+ with open(self.pystartup, 'rb') as fp:
+ exec_(fp.read().decode('utf-8'), env)
+ if '__builtins__' in env:
+ del env['__builtins__']
+
try:
shell(env, help)
finally:
|
{"golden_diff": "diff --git a/pyramid/scripts/pshell.py b/pyramid/scripts/pshell.py\n--- a/pyramid/scripts/pshell.py\n+++ b/pyramid/scripts/pshell.py\n@@ -1,9 +1,11 @@\n from code import interact\n import optparse\n+import os\n import sys\n import textwrap\n \n from pyramid.compat import configparser\n+from pyramid.compat import exec_\n from pyramid.util import DottedNameResolver\n from pyramid.paster import bootstrap\n \n@@ -51,6 +53,7 @@\n loaded_objects = {}\n object_help = {}\n setup = None\n+ pystartup = os.environ.get('PYTHONSTARTUP')\n \n def __init__(self, argv, quiet=False):\n self.quiet = quiet\n@@ -144,6 +147,12 @@\n if shell is None:\n shell = self.make_shell()\n \n+ if self.pystartup and os.path.isfile(self.pystartup):\n+ with open(self.pystartup, 'rb') as fp:\n+ exec_(fp.read().decode('utf-8'), env)\n+ if '__builtins__' in env:\n+ del env['__builtins__']\n+\n try:\n shell(env, help)\n finally:\n", "issue": "add PYTHONSTARTUP support to pshell\nCurrently pshell does not source any `.pystartup` file pointed to by the `PYTHONSTARTUP` environment variable. It'd be nice to support this.\n\nhttps://docs.python.org/2/tutorial/interpreter.html#the-interactive-startup-file\n\n", "before_files": [{"content": "from code import interact\nimport optparse\nimport sys\nimport textwrap\n\nfrom pyramid.compat import configparser\nfrom pyramid.util import DottedNameResolver\nfrom pyramid.paster import bootstrap\n\nfrom pyramid.paster import setup_logging\n\nfrom pyramid.scripts.common import parse_vars\n\ndef main(argv=sys.argv, quiet=False):\n command = PShellCommand(argv, quiet)\n return command.run()\n\nclass PShellCommand(object):\n usage = '%prog config_uri'\n description = \"\"\"\\\n Open an interactive shell with a Pyramid app loaded. This command\n accepts one positional argument named \"config_uri\" which specifies the\n PasteDeploy config file to use for the interactive shell. The format is\n \"inifile#name\". If the name is left off, the Pyramid default application\n will be assumed. Example: \"pshell myapp.ini#main\"\n\n If you do not point the loader directly at the section of the ini file\n containing your Pyramid application, the command will attempt to\n find the app for you. If you are loading a pipeline that contains more\n than one Pyramid application within it, the loader will use the\n last one.\n \"\"\"\n bootstrap = (bootstrap,) # for testing\n\n parser = optparse.OptionParser(\n usage,\n description=textwrap.dedent(description)\n )\n parser.add_option('-p', '--python-shell',\n action='store', type='string', dest='python_shell',\n default='', help='ipython | bpython | python')\n parser.add_option('--setup',\n dest='setup',\n help=(\"A callable that will be passed the environment \"\n \"before it is made available to the shell. 
This \"\n \"option will override the 'setup' key in the \"\n \"[pshell] ini section.\"))\n\n ConfigParser = configparser.ConfigParser # testing\n\n loaded_objects = {}\n object_help = {}\n setup = None\n\n def __init__(self, argv, quiet=False):\n self.quiet = quiet\n self.options, self.args = self.parser.parse_args(argv[1:])\n\n def pshell_file_config(self, filename):\n config = self.ConfigParser()\n config.read(filename)\n try:\n items = config.items('pshell')\n except configparser.NoSectionError:\n return\n\n resolver = DottedNameResolver(None)\n self.loaded_objects = {}\n self.object_help = {}\n self.setup = None\n for k, v in items:\n if k == 'setup':\n self.setup = v\n else:\n self.loaded_objects[k] = resolver.maybe_resolve(v)\n self.object_help[k] = v\n\n def out(self, msg): # pragma: no cover\n if not self.quiet:\n print(msg)\n\n def run(self, shell=None):\n if not self.args:\n self.out('Requires a config file argument')\n return 2\n config_uri = self.args[0]\n config_file = config_uri.split('#', 1)[0]\n setup_logging(config_file)\n self.pshell_file_config(config_file)\n\n # bootstrap the environ\n env = self.bootstrap[0](config_uri, options=parse_vars(self.args[1:]))\n\n # remove the closer from the env\n closer = env.pop('closer')\n\n # setup help text for default environment\n env_help = dict(env)\n env_help['app'] = 'The WSGI application.'\n env_help['root'] = 'Root of the default resource tree.'\n env_help['registry'] = 'Active Pyramid registry.'\n env_help['request'] = 'Active request object.'\n env_help['root_factory'] = (\n 'Default root factory used to create `root`.')\n\n # override use_script with command-line options\n if self.options.setup:\n self.setup = self.options.setup\n\n if self.setup:\n # store the env before muddling it with the script\n orig_env = env.copy()\n\n # call the setup callable\n resolver = DottedNameResolver(None)\n setup = resolver.maybe_resolve(self.setup)\n setup(env)\n\n # remove any objects from default help that were overidden\n for k, v in env.items():\n if k not in orig_env or env[k] != orig_env[k]:\n env_help[k] = v\n\n # load the pshell section of the ini file\n env.update(self.loaded_objects)\n\n # eliminate duplicates from env, allowing custom vars to override\n for k in self.loaded_objects:\n if k in env_help:\n del env_help[k]\n\n # generate help text\n help = ''\n if env_help:\n help += 'Environment:'\n for var in sorted(env_help.keys()):\n help += '\\n %-12s %s' % (var, env_help[var])\n\n if self.object_help:\n help += '\\n\\nCustom Variables:'\n for var in sorted(self.object_help.keys()):\n help += '\\n %-12s %s' % (var, self.object_help[var])\n\n if shell is None:\n shell = self.make_shell()\n\n try:\n shell(env, help)\n finally:\n closer()\n\n def make_shell(self):\n shell = None\n user_shell = self.options.python_shell.lower()\n if not user_shell:\n shell = self.make_ipython_shell()\n if shell is None:\n shell = self.make_bpython_shell()\n\n elif user_shell == 'ipython':\n shell = self.make_ipython_shell()\n\n elif user_shell == 'bpython':\n shell = self.make_bpython_shell()\n\n if shell is None:\n shell = self.make_default_shell()\n\n return shell\n\n def make_default_shell(self, interact=interact):\n def shell(env, help):\n cprt = 'Type \"help\" for more information.'\n banner = \"Python %s on %s\\n%s\" % (sys.version, sys.platform, cprt)\n banner += '\\n\\n' + help + '\\n'\n interact(banner, local=env)\n return shell\n\n def make_bpython_shell(self, BPShell=None):\n if BPShell is None: # pragma: no cover\n try:\n from bpython 
import embed\n BPShell = embed\n except ImportError:\n return None\n def shell(env, help):\n BPShell(locals_=env, banner=help + '\\n')\n return shell\n\n def make_ipython_shell(self):\n shell = self.make_ipython_v1_1_shell()\n if shell is None:\n shell = self.make_ipython_v0_11_shell()\n if shell is None:\n shell = self.make_ipython_v0_10_shell()\n return shell\n\n def make_ipython_v1_1_shell(self, IPShellFactory=None):\n if IPShellFactory is None: # pragma: no cover\n try:\n from IPython.terminal.embed import (\n InteractiveShellEmbed)\n IPShellFactory = InteractiveShellEmbed\n except ImportError:\n return None\n def shell(env, help):\n IPShell = IPShellFactory(banner2=help + '\\n', user_ns=env)\n IPShell()\n return shell\n\n def make_ipython_v0_11_shell(self, IPShellFactory=None):\n if IPShellFactory is None: # pragma: no cover\n try:\n from IPython.frontend.terminal.embed import (\n InteractiveShellEmbed)\n IPShellFactory = InteractiveShellEmbed\n except ImportError:\n return None\n def shell(env, help):\n IPShell = IPShellFactory(banner2=help + '\\n', user_ns=env)\n IPShell()\n return shell\n\n def make_ipython_v0_10_shell(self, IPShellFactory=None):\n if IPShellFactory is None: # pragma: no cover\n try:\n from IPython.Shell import IPShellEmbed\n IPShellFactory = IPShellEmbed\n except ImportError:\n return None\n def shell(env, help):\n IPShell = IPShellFactory(argv=[], user_ns=env)\n IPShell.set_banner(IPShell.IP.BANNER + '\\n' + help + '\\n')\n IPShell()\n return shell\n\nif __name__ == '__main__': # pragma: no cover\n sys.exit(main() or 0)\n", "path": "pyramid/scripts/pshell.py"}]}
| 2,995 | 269 |
gh_patches_debug_11121
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-1853
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Using !Ref in ContentUri of LayerVersion throw error E0001
*cfn-lint version: 0.43.0*
*Description of issue.*
The `ContentUri` property of the resource type [AWS::Serverless::LayerVersion][1] can be a string or a `LayerContent` object. But if we use `!Ref`, the template is marked as an error with the message:
> [cfn-lint] E0001: Error transforming template: Resource with id [CommonDependenciesLayer4ffbb56ae8] is invalid. 'ContentUri' requires Bucket and Key properties to be specified.
Here is an example:
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
EnvironmentName:
Description: An environment name
Type: String
Default: staging
AllowedValues:
- sandbox
- prod
- staging
LayerContentUri:
Type: String
Description: Layer content uri
Default: '../dependencies-layer/nodejs'
Resources:
CommonDependenciesLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: !Sub '${EnvironmentName}-common-dependencies'
Description: 'Common dependencies'
ContentUri: !Ref LayerContentUri
CompatibleRuntimes:
- nodejs12.x
RetentionPolicy: Retain
Metadata:
BuildMethod: nodejs12.x
```
The template deploys fine using `sam deploy`, so it is not a real error. I'm already using the latest version of `cfn-lint` and have already run `cfn-lint -u`.
Hope this gets fixed soon.
PS: nice job!
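For context, once the YAML short form is parsed, `!Ref LayerContentUri` reaches the transform as a mapping rather than a plain string — roughly the shape sketched below — so it falls into the dict branch of `_update_to_s3_uri` and is never swapped for a fake S3 URI. This is an illustrative Python literal, not actual cfn-lint output:
```python
# Parsed 'Properties' of the layer resource as the SAM transform sees them (sketch).
resource_dict = {
    'LayerName': {'Fn::Sub': '${EnvironmentName}-common-dependencies'},
    'ContentUri': {'Ref': 'LayerContentUri'},  # a dict, not an 's3://...' string
    'CompatibleRuntimes': ['nodejs12.x'],
}
```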
[1]: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-layerversion.html
</issue>
<code>
[start of src/cfnlint/transform.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import os
6 import logging
7 import six
8 import samtranslator
9 from samtranslator.parser import parser
10 from samtranslator.translator.translator import Translator
11 from samtranslator.public.exceptions import InvalidDocumentException
12
13 from cfnlint.helpers import load_resource, convert_dict, format_json_string
14 from cfnlint.data import Serverless
15 from cfnlint.rules import Match, TransformError
16 LOGGER = logging.getLogger('cfnlint')
17
18
19 class Transform(object):
20 """
21 Application Serverless Module tranform Wrapper.
22 Based on code from AWS SAM CLI:
23 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py
24 """
25
26 def __init__(self, filename, template, region):
27 """
28 Initialize Transform class
29 """
30 self._filename = filename
31 self._template = template
32 self._region = region
33 self._parameters = {}
34
35 self._managed_policy_map = self.load_managed_policies()
36 self._sam_parser = parser.Parser()
37
38 def template(self):
39 """Get the template"""
40 return self._template
41
42 def load_managed_policies(self):
43 """
44 Load the ManagedPolicies locally, based on the AWS-CLI:
45 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json
46 """
47 return load_resource(Serverless, 'ManagedPolicies.json')
48
49 def _replace_local_codeuri(self):
50 """
51 Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in
52 AWS::Serverless::Api to a fake S3 Uri. This is to support running the
53 SAM Translator with valid values for these fields. If this is not done,
54 the template is invalid in the eyes of SAM Translator (the translator
55 does not support local paths)
56 """
57
58 all_resources = self._template.get('Resources', {})
59
60 template_globals = self._template.get('Globals', {})
61 auto_publish_alias = template_globals.get('Function', {}).get('AutoPublishAlias')
62 if isinstance(auto_publish_alias, dict):
63 if len(auto_publish_alias) == 1:
64 for k, v in auto_publish_alias.items():
65 if k == 'Ref':
66 if v in self._template.get('Parameters'):
67 self._parameters[v] = 'Alias'
68
69
70 for _, resource in all_resources.items():
71
72 resource_type = resource.get('Type')
73 resource_dict = resource.get('Properties')
74
75 if resource_type == 'AWS::Serverless::Function':
76
77 Transform._update_to_s3_uri('CodeUri', resource_dict)
78 auto_publish_alias = resource_dict.get('AutoPublishAlias')
79 if isinstance(auto_publish_alias, dict):
80 if len(auto_publish_alias) == 1:
81 for k, v in auto_publish_alias.items():
82 if k == 'Ref':
83 if v in self._template.get('Parameters'):
84 self._parameters[v] = 'Alias'
85 if resource_type in ['AWS::Serverless::LayerVersion']:
86 if resource_dict.get('ContentUri'):
87 Transform._update_to_s3_uri('ContentUri', resource_dict)
88 if resource_type == 'AWS::Serverless::Application':
89 if resource_dict.get('Location'):
90 resource_dict['Location'] = ''
91 Transform._update_to_s3_uri('Location', resource_dict)
92 if resource_type == 'AWS::Serverless::Api':
93 if ('DefinitionBody' not in resource_dict and
94 'Auth' not in resource_dict and 'Cors' not in resource_dict):
95 Transform._update_to_s3_uri('DefinitionUri', resource_dict)
96 else:
97 resource_dict['DefinitionBody'] = ''
98 if resource_type == 'AWS::Serverless::StateMachine' and resource_dict.get('DefinitionUri'):
99 Transform._update_to_s3_uri('DefinitionUri', resource_dict)
100
101 def transform_template(self):
102 """
103 Transform the Template using the Serverless Application Model.
104 """
105 matches = []
106
107 try:
108 # Output the SAM Translator version in debug mode
109 LOGGER.info('SAM Translator: %s', samtranslator.__version__)
110
111 sam_translator = Translator(
112 managed_policy_map=self._managed_policy_map,
113 sam_parser=self._sam_parser)
114
115 self._replace_local_codeuri()
116
117 # Tell SAM to use the region we're linting in, this has to be
118 # controlled using the default AWS mechanisms, see also:
119 # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py
120 LOGGER.info('Setting AWS_DEFAULT_REGION to %s', self._region)
121 os.environ['AWS_DEFAULT_REGION'] = self._region
122
123 self._template = convert_dict(
124 sam_translator.translate(sam_template=self._template,
125 parameter_values=self._parameters))
126
127 LOGGER.info('Transformed template: \n%s',
128 format_json_string(self._template))
129 except InvalidDocumentException as e:
130 message = 'Error transforming template: {0}'
131 for cause in e.causes:
132 matches.append(Match(
133 1, 1,
134 1, 1,
135 self._filename,
136 TransformError(), message.format(cause.message)))
137 except Exception as e: # pylint: disable=W0703
138 LOGGER.debug('Error transforming template: %s', str(e))
139 LOGGER.debug('Stack trace: %s', e, exc_info=True)
140 message = 'Error transforming template: {0}'
141 matches.append(Match(
142 1, 1,
143 1, 1,
144 self._filename,
145 TransformError(), message.format(str(e))))
146
147 return matches
148
149 @staticmethod
150 def is_s3_uri(uri):
151 """
152 Checks the uri and determines if it is a valid S3 Uri
153 Parameters
154 ----------
155 uri str, required
156 Uri to check
157 Returns
158 -------
159 bool
160 Returns True if the uri given is an S3 uri, otherwise False
161 """
162 return isinstance(uri, six.string_types) and uri.startswith('s3://')
163
164 @staticmethod
165 def _update_to_s3_uri(
166 property_key, resource_property_dict,
167 s3_uri_value='s3://bucket/value'):
168 """
169 Updates the 'property_key' in the 'resource_property_dict' to the
170 value of 's3_uri_value'
171 Note: The function will mutate the resource_property_dict that is pass
172 in Parameters
173 ----------
174 property_key str, required
175 Key in the resource_property_dict
176 resource_property_dict dict, required
177 Property dictionary of a Resource in the template to replace
178 s3_uri_value str, optional
179 Value to update the value of the property_key to
180 """
181 uri_property = resource_property_dict.get(property_key, '.')
182
183 # ignore if dict or already an S3 Uri
184 if isinstance(uri_property, dict) or Transform.is_s3_uri(uri_property):
185 return
186
187 resource_property_dict[property_key] = s3_uri_value
188
[end of src/cfnlint/transform.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/transform.py b/src/cfnlint/transform.py
--- a/src/cfnlint/transform.py
+++ b/src/cfnlint/transform.py
@@ -181,7 +181,13 @@
uri_property = resource_property_dict.get(property_key, '.')
# ignore if dict or already an S3 Uri
- if isinstance(uri_property, dict) or Transform.is_s3_uri(uri_property):
+ if isinstance(uri_property, dict):
+ if len(uri_property) == 1:
+ for k in uri_property.keys():
+ if k == 'Ref':
+ resource_property_dict[property_key] = s3_uri_value
+ return
+ if Transform.is_s3_uri(uri_property):
return
resource_property_dict[property_key] = s3_uri_value
|
{"golden_diff": "diff --git a/src/cfnlint/transform.py b/src/cfnlint/transform.py\n--- a/src/cfnlint/transform.py\n+++ b/src/cfnlint/transform.py\n@@ -181,7 +181,13 @@\n uri_property = resource_property_dict.get(property_key, '.')\n \n # ignore if dict or already an S3 Uri\n- if isinstance(uri_property, dict) or Transform.is_s3_uri(uri_property):\n+ if isinstance(uri_property, dict):\n+ if len(uri_property) == 1:\n+ for k in uri_property.keys():\n+ if k == 'Ref':\n+ resource_property_dict[property_key] = s3_uri_value\n+ return\n+ if Transform.is_s3_uri(uri_property):\n return\n \n resource_property_dict[property_key] = s3_uri_value\n", "issue": "Using !Ref in ContentUri of LayerVersion throw error E0001\n*cfn-lint version: 0.43.0*\r\n\r\n*Description of issue.*\r\n\r\nThe `ContentUri` property of a resource type [AWS::Serverless::LayerVersion][1] can be a string or a `LayerContent` object. But, if we use `!Ref` the template is marked as an error with the message:\r\n>[cfn-lint] E0001: Error transforming template: Resource with id [CommonDependenciesLayer4ffbb56ae8] is invalid. 'ContentUri' requires Bucket and Key properties to be specified.\r\n\r\nHere is an example:\r\n```yaml\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nTransform: AWS::Serverless-2016-10-31\r\n\r\nParameters:\r\n EnvironmentName:\r\n Description: An environment name\r\n Type: String\r\n Default: staging\r\n AllowedValues:\r\n - sandbox\r\n - prod\r\n - staging\r\n\r\n LayerContentUri:\r\n Type: String\r\n Description: Layer content uri\r\n Default: '../dependencies-layer/nodejs'\r\n\r\n\r\nResources:\r\n CommonDependenciesLayer:\r\n Type: AWS::Serverless::LayerVersion\r\n Properties:\r\n LayerName: !Sub '${EnvironmentName}-common-dependencies'\r\n Description: 'Common dependencies'\r\n ContentUri: !Ref LayerContentUri\r\n CompatibleRuntimes:\r\n - nodejs12.x\r\n RetentionPolicy: Retain\r\n Metadata:\r\n BuildMethod: nodejs12.x\r\n```\r\nThe template deploys ok using `sam deploy` so it is not an error. I'm already using the latest version of `cfn-lint` and already did `cfn-lint -u`\r\n\r\nHope this gets fixed soon.\r\n\r\nPS: nice job!\r\n\r\n[1]: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-layerversion.html\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport os\nimport logging\nimport six\nimport samtranslator\nfrom samtranslator.parser import parser\nfrom samtranslator.translator.translator import Translator\nfrom samtranslator.public.exceptions import InvalidDocumentException\n\nfrom cfnlint.helpers import load_resource, convert_dict, format_json_string\nfrom cfnlint.data import Serverless\nfrom cfnlint.rules import Match, TransformError\nLOGGER = logging.getLogger('cfnlint')\n\n\nclass Transform(object):\n \"\"\"\n Application Serverless Module tranform Wrapper.\n Based on code from AWS SAM CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py\n \"\"\"\n\n def __init__(self, filename, template, region):\n \"\"\"\n Initialize Transform class\n \"\"\"\n self._filename = filename\n self._template = template\n self._region = region\n self._parameters = {}\n\n self._managed_policy_map = self.load_managed_policies()\n self._sam_parser = parser.Parser()\n\n def template(self):\n \"\"\"Get the template\"\"\"\n return self._template\n\n def load_managed_policies(self):\n \"\"\"\n Load the ManagedPolicies locally, based on the AWS-CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json\n \"\"\"\n return load_resource(Serverless, 'ManagedPolicies.json')\n\n def _replace_local_codeuri(self):\n \"\"\"\n Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in\n AWS::Serverless::Api to a fake S3 Uri. This is to support running the\n SAM Translator with valid values for these fields. If this is not done,\n the template is invalid in the eyes of SAM Translator (the translator\n does not support local paths)\n \"\"\"\n\n all_resources = self._template.get('Resources', {})\n\n template_globals = self._template.get('Globals', {})\n auto_publish_alias = template_globals.get('Function', {}).get('AutoPublishAlias')\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == 'Ref':\n if v in self._template.get('Parameters'):\n self._parameters[v] = 'Alias'\n\n\n for _, resource in all_resources.items():\n\n resource_type = resource.get('Type')\n resource_dict = resource.get('Properties')\n\n if resource_type == 'AWS::Serverless::Function':\n\n Transform._update_to_s3_uri('CodeUri', resource_dict)\n auto_publish_alias = resource_dict.get('AutoPublishAlias')\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == 'Ref':\n if v in self._template.get('Parameters'):\n self._parameters[v] = 'Alias'\n if resource_type in ['AWS::Serverless::LayerVersion']:\n if resource_dict.get('ContentUri'):\n Transform._update_to_s3_uri('ContentUri', resource_dict)\n if resource_type == 'AWS::Serverless::Application':\n if resource_dict.get('Location'):\n resource_dict['Location'] = ''\n Transform._update_to_s3_uri('Location', resource_dict)\n if resource_type == 'AWS::Serverless::Api':\n if ('DefinitionBody' not in resource_dict and\n 'Auth' not in resource_dict and 'Cors' not in resource_dict):\n Transform._update_to_s3_uri('DefinitionUri', resource_dict)\n else:\n resource_dict['DefinitionBody'] = ''\n if resource_type == 'AWS::Serverless::StateMachine' and resource_dict.get('DefinitionUri'):\n Transform._update_to_s3_uri('DefinitionUri', resource_dict)\n\n def transform_template(self):\n \"\"\"\n Transform the Template using the Serverless 
Application Model.\n \"\"\"\n matches = []\n\n try:\n # Output the SAM Translator version in debug mode\n LOGGER.info('SAM Translator: %s', samtranslator.__version__)\n\n sam_translator = Translator(\n managed_policy_map=self._managed_policy_map,\n sam_parser=self._sam_parser)\n\n self._replace_local_codeuri()\n\n # Tell SAM to use the region we're linting in, this has to be\n # controlled using the default AWS mechanisms, see also:\n # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py\n LOGGER.info('Setting AWS_DEFAULT_REGION to %s', self._region)\n os.environ['AWS_DEFAULT_REGION'] = self._region\n\n self._template = convert_dict(\n sam_translator.translate(sam_template=self._template,\n parameter_values=self._parameters))\n\n LOGGER.info('Transformed template: \\n%s',\n format_json_string(self._template))\n except InvalidDocumentException as e:\n message = 'Error transforming template: {0}'\n for cause in e.causes:\n matches.append(Match(\n 1, 1,\n 1, 1,\n self._filename,\n TransformError(), message.format(cause.message)))\n except Exception as e: # pylint: disable=W0703\n LOGGER.debug('Error transforming template: %s', str(e))\n LOGGER.debug('Stack trace: %s', e, exc_info=True)\n message = 'Error transforming template: {0}'\n matches.append(Match(\n 1, 1,\n 1, 1,\n self._filename,\n TransformError(), message.format(str(e))))\n\n return matches\n\n @staticmethod\n def is_s3_uri(uri):\n \"\"\"\n Checks the uri and determines if it is a valid S3 Uri\n Parameters\n ----------\n uri str, required\n Uri to check\n Returns\n -------\n bool\n Returns True if the uri given is an S3 uri, otherwise False\n \"\"\"\n return isinstance(uri, six.string_types) and uri.startswith('s3://')\n\n @staticmethod\n def _update_to_s3_uri(\n property_key, resource_property_dict,\n s3_uri_value='s3://bucket/value'):\n \"\"\"\n Updates the 'property_key' in the 'resource_property_dict' to the\n value of 's3_uri_value'\n Note: The function will mutate the resource_property_dict that is pass\n in Parameters\n ----------\n property_key str, required\n Key in the resource_property_dict\n resource_property_dict dict, required\n Property dictionary of a Resource in the template to replace\n s3_uri_value str, optional\n Value to update the value of the property_key to\n \"\"\"\n uri_property = resource_property_dict.get(property_key, '.')\n\n # ignore if dict or already an S3 Uri\n if isinstance(uri_property, dict) or Transform.is_s3_uri(uri_property):\n return\n\n resource_property_dict[property_key] = s3_uri_value\n", "path": "src/cfnlint/transform.py"}]}
| 2,930 | 181 |
gh_patches_debug_63849
|
rasdani/github-patches
|
git_diff
|
WeblateOrg__weblate-10794
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Syntax highlighting of search input
### Describe the issue
1. Go to a screenshot
2. Enter "not found" as the search term
3. A lot of strings appear as search results, most of them not containing anything related to "not found"
If I enter "not" or "found" then fewer results are found compared to "not found".
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
1. Go to a screenshot
2. Enter "not found" as the search term
3. A lot of strings appear as search results, most of them not containing anything related to "not found"
### Expected behavior
Search only lists strings containing "not found"
### Screenshots

### Exception traceback
_No response_
### How do you run Weblate?
weblate.org service
### Weblate versions
_No response_
### Weblate deploy checks
_No response_
### Additional context
_No response_
</issue>
<code>
[start of weblate/utils/forms.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from crispy_forms.layout import Div, Field
6 from crispy_forms.utils import TEMPLATE_PACK
7 from django import forms
8 from django.core.exceptions import ValidationError
9 from django.db.models import Q
10 from django.forms.models import ModelChoiceIterator
11 from django.template.loader import render_to_string
12 from django.utils.translation import gettext, gettext_lazy
13
14 from weblate.trans.defines import EMAIL_LENGTH, USERNAME_LENGTH
15 from weblate.trans.filter import FILTERS
16 from weblate.trans.util import sort_unicode
17 from weblate.utils.errors import report_error
18 from weblate.utils.search import parse_query
19 from weblate.utils.validators import validate_email, validate_username
20
21
22 class QueryField(forms.CharField):
23 def __init__(self, parser: str = "unit", **kwargs):
24 if "label" not in kwargs:
25 kwargs["label"] = gettext_lazy("Query")
26 if "required" not in kwargs:
27 kwargs["required"] = False
28 self.parser = parser
29 super().__init__(**kwargs)
30
31 def clean(self, value):
32 if not value:
33 if self.required:
34 raise ValidationError(gettext("Missing query string."))
35 return ""
36 try:
37 parse_query(value, parser=self.parser)
38 except ValueError as error:
39 raise ValidationError(
40 gettext("Could not parse query string: {}").format(error)
41 ) from error
42 except Exception as error:
43 report_error(cause="Error parsing search query")
44 raise ValidationError(
45 gettext("Could not parse query string: {}").format(error)
46 ) from error
47 return value
48
49
50 class UsernameField(forms.CharField):
51 default_validators = [validate_username]
52
53 def __init__(self, *args, **kwargs):
54 params = {
55 "max_length": USERNAME_LENGTH,
56 "help_text": gettext_lazy(
57 "Username may only contain letters, "
58 "numbers or the following characters: @ . + - _"
59 ),
60 "label": gettext_lazy("Username"),
61 "required": True,
62 }
63 params.update(kwargs)
64 self.valid = None
65
66 super().__init__(*args, **params)
67
68
69 class UserField(forms.CharField):
70 def __init__(
71 self,
72 queryset=None,
73 empty_label="---------",
74 to_field_name=None,
75 limit_choices_to=None,
76 blank=None,
77 **kwargs,
78 ):
79 # This swallows some parameters to mimic ModelChoiceField API
80 super().__init__(**kwargs)
81
82 def widget_attrs(self, widget):
83 attrs = super().widget_attrs(widget)
84 attrs["dir"] = "ltr"
85 attrs["class"] = "user-autocomplete"
86 attrs["spellcheck"] = "false"
87 attrs["autocorrect"] = "off"
88 attrs["autocomplete"] = "off"
89 attrs["autocapitalize"] = "off"
90 return attrs
91
92 def clean(self, value):
93 from weblate.auth.models import User
94
95 if not value:
96 if self.required:
97 raise ValidationError(gettext("Missing username or e-mail."))
98 return None
99 try:
100 return User.objects.get(Q(username=value) | Q(email=value))
101 except User.DoesNotExist:
102 raise ValidationError(gettext("Could not find any such user."))
103 except User.MultipleObjectsReturned:
104 raise ValidationError(gettext("More possible users were found."))
105
106
107 class EmailField(forms.EmailField):
108 """
109 Slightly restricted EmailField.
110
111 We blacklist some additional local parts and customize error messages.
112 """
113
114 default_validators = [validate_email]
115
116 def __init__(self, *args, **kwargs):
117 kwargs.setdefault("max_length", EMAIL_LENGTH)
118 super().__init__(*args, **kwargs)
119
120
121 class SortedSelectMixin:
122 """Mixin for Select widgets to sort choices alphabetically."""
123
124 def optgroups(self, name, value, attrs=None):
125 groups = super().optgroups(name, value, attrs)
126 return sort_unicode(groups, lambda val: str(val[1][0]["label"]))
127
128
129 class ColorWidget(forms.RadioSelect):
130 def __init__(self, attrs=None, choices=()):
131 attrs = {**(attrs or {}), "class": "color_edit"}
132 super().__init__(attrs, choices)
133
134
135 class SortedSelectMultiple(SortedSelectMixin, forms.SelectMultiple):
136 """Wrapper class to sort choices alphabetically."""
137
138
139 class SortedSelect(SortedSelectMixin, forms.Select):
140 """Wrapper class to sort choices alphabetically."""
141
142
143 class ContextDiv(Div):
144 def __init__(self, *fields, **kwargs):
145 self.context = kwargs.pop("context", {})
146 super().__init__(*fields, **kwargs)
147
148 def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):
149 template = self.get_template_name(template_pack)
150 return render_to_string(template, self.context)
151
152
153 class SearchField(Field):
154 def __init__(self, *args, **kwargs):
155 kwargs["template"] = "snippets/query-field.html"
156 super().__init__(*args, **kwargs)
157
158 def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):
159 extra_context = {"custom_filter_list": self.get_search_query_choices()}
160 return super().render(form, context, template_pack, extra_context, **kwargs)
161
162 def get_search_query_choices(self):
163 """Return all filtering choices for query field."""
164 filter_keys = [
165 "nottranslated",
166 "todo",
167 "translated",
168 "fuzzy",
169 "suggestions",
170 "variants",
171 "screenshots",
172 "labels",
173 "context",
174 "nosuggestions",
175 "comments",
176 "allchecks",
177 "approved",
178 "unapproved",
179 ]
180 return [
181 (key, FILTERS.get_filter_name(key), FILTERS.get_filter_query(key))
182 for key in filter_keys
183 ]
184
185
186 class CachedQueryIterator(ModelChoiceIterator):
187 """
188 Choice iterator for cached querysets.
189
190 It assumes the queryset is reused and avoids using an iterator or counting queries.
191 """
192
193 def __iter__(self):
194 if self.field.empty_label is not None:
195 yield ("", self.field.empty_label)
196 for obj in self.queryset:
197 yield self.choice(obj)
198
199 def __len__(self):
200 return len(self.queryset) + (1 if self.field.empty_label is not None else 0)
201
202 def __bool__(self):
203 return self.field.empty_label is not None or bool(self.queryset)
204
205
206 class NonCopyingSetQuerysetMixin:
207 iterator = CachedQueryIterator
208
209 def _get_queryset(self):
210 return self._queryset
211
212 def _set_queryset(self, queryset):
213 self._queryset = queryset
214 self.widget.choices = self.choices
215
216 queryset = property(_get_queryset, _set_queryset)
217
218
219 class CachedModelChoiceField(NonCopyingSetQuerysetMixin, forms.ModelChoiceField):
220 pass
221
222
223 class CachedModelMultipleChoiceField(
224 NonCopyingSetQuerysetMixin, forms.ModelMultipleChoiceField
225 ):
226 pass
227
[end of weblate/utils/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/weblate/utils/forms.py b/weblate/utils/forms.py
--- a/weblate/utils/forms.py
+++ b/weblate/utils/forms.py
@@ -25,6 +25,8 @@
kwargs["label"] = gettext_lazy("Query")
if "required" not in kwargs:
kwargs["required"] = False
+ if "widget" not in kwargs:
+ kwargs["widget"] = forms.Textarea(attrs={"cols": None, "rows": 1})
self.parser = parser
super().__init__(**kwargs)
|
{"golden_diff": "diff --git a/weblate/utils/forms.py b/weblate/utils/forms.py\n--- a/weblate/utils/forms.py\n+++ b/weblate/utils/forms.py\n@@ -25,6 +25,8 @@\n kwargs[\"label\"] = gettext_lazy(\"Query\")\n if \"required\" not in kwargs:\n kwargs[\"required\"] = False\n+ if \"widget\" not in kwargs:\n+ kwargs[\"widget\"] = forms.Textarea(attrs={\"cols\": None, \"rows\": 1})\n self.parser = parser\n super().__init__(**kwargs)\n", "issue": "Syntax highlighting of search input\n### Describe the issue\n\n1. Go to a screenshot\r\n2. Enter \"not found\" as the search term\r\n3. A lot of strings appear as search results, most of them not containing anything related to \"not found\"\r\n\r\n\r\nIf I enter \"not\" or \"found\" then fewer results are found compared to \"not found\".\n\n### I already tried\n\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\n- [X] I've searched for similar issues in this repository.\n\n### Steps to reproduce the behavior\n\n1. Go to a screenshot\r\n2. Enter \"not found\" as the search term\r\n3. A lot of strings appear as search results, most of them not containing anything related to \"not found\"\n\n### Expected behavior\n\nSearch only lists strings containing \"not found\"\n\n### Screenshots\n\n\r\n\n\n### Exception traceback\n\n_No response_\n\n### How do you run Weblate?\n\nweblate.org service\n\n### Weblate versions\n\n_No response_\n\n### Weblate deploy checks\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom crispy_forms.layout import Div, Field\nfrom crispy_forms.utils import TEMPLATE_PACK\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Q\nfrom django.forms.models import ModelChoiceIterator\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext, gettext_lazy\n\nfrom weblate.trans.defines import EMAIL_LENGTH, USERNAME_LENGTH\nfrom weblate.trans.filter import FILTERS\nfrom weblate.trans.util import sort_unicode\nfrom weblate.utils.errors import report_error\nfrom weblate.utils.search import parse_query\nfrom weblate.utils.validators import validate_email, validate_username\n\n\nclass QueryField(forms.CharField):\n def __init__(self, parser: str = \"unit\", **kwargs):\n if \"label\" not in kwargs:\n kwargs[\"label\"] = gettext_lazy(\"Query\")\n if \"required\" not in kwargs:\n kwargs[\"required\"] = False\n self.parser = parser\n super().__init__(**kwargs)\n\n def clean(self, value):\n if not value:\n if self.required:\n raise ValidationError(gettext(\"Missing query string.\"))\n return \"\"\n try:\n parse_query(value, parser=self.parser)\n except ValueError as error:\n raise ValidationError(\n gettext(\"Could not parse query string: {}\").format(error)\n ) from error\n except Exception as error:\n report_error(cause=\"Error parsing search query\")\n raise ValidationError(\n gettext(\"Could not parse query string: {}\").format(error)\n ) from error\n return value\n\n\nclass UsernameField(forms.CharField):\n default_validators = [validate_username]\n\n def __init__(self, *args, **kwargs):\n params = {\n \"max_length\": USERNAME_LENGTH,\n \"help_text\": gettext_lazy(\n \"Username may only contain letters, \"\n \"numbers or the following characters: @ . 
+ - _\"\n ),\n \"label\": gettext_lazy(\"Username\"),\n \"required\": True,\n }\n params.update(kwargs)\n self.valid = None\n\n super().__init__(*args, **params)\n\n\nclass UserField(forms.CharField):\n def __init__(\n self,\n queryset=None,\n empty_label=\"---------\",\n to_field_name=None,\n limit_choices_to=None,\n blank=None,\n **kwargs,\n ):\n # This swallows some parameters to mimic ModelChoiceField API\n super().__init__(**kwargs)\n\n def widget_attrs(self, widget):\n attrs = super().widget_attrs(widget)\n attrs[\"dir\"] = \"ltr\"\n attrs[\"class\"] = \"user-autocomplete\"\n attrs[\"spellcheck\"] = \"false\"\n attrs[\"autocorrect\"] = \"off\"\n attrs[\"autocomplete\"] = \"off\"\n attrs[\"autocapitalize\"] = \"off\"\n return attrs\n\n def clean(self, value):\n from weblate.auth.models import User\n\n if not value:\n if self.required:\n raise ValidationError(gettext(\"Missing username or e-mail.\"))\n return None\n try:\n return User.objects.get(Q(username=value) | Q(email=value))\n except User.DoesNotExist:\n raise ValidationError(gettext(\"Could not find any such user.\"))\n except User.MultipleObjectsReturned:\n raise ValidationError(gettext(\"More possible users were found.\"))\n\n\nclass EmailField(forms.EmailField):\n \"\"\"\n Slightly restricted EmailField.\n\n We blacklist some additional local parts and customize error messages.\n \"\"\"\n\n default_validators = [validate_email]\n\n def __init__(self, *args, **kwargs):\n kwargs.setdefault(\"max_length\", EMAIL_LENGTH)\n super().__init__(*args, **kwargs)\n\n\nclass SortedSelectMixin:\n \"\"\"Mixin for Select widgets to sort choices alphabetically.\"\"\"\n\n def optgroups(self, name, value, attrs=None):\n groups = super().optgroups(name, value, attrs)\n return sort_unicode(groups, lambda val: str(val[1][0][\"label\"]))\n\n\nclass ColorWidget(forms.RadioSelect):\n def __init__(self, attrs=None, choices=()):\n attrs = {**(attrs or {}), \"class\": \"color_edit\"}\n super().__init__(attrs, choices)\n\n\nclass SortedSelectMultiple(SortedSelectMixin, forms.SelectMultiple):\n \"\"\"Wrapper class to sort choices alphabetically.\"\"\"\n\n\nclass SortedSelect(SortedSelectMixin, forms.Select):\n \"\"\"Wrapper class to sort choices alphabetically.\"\"\"\n\n\nclass ContextDiv(Div):\n def __init__(self, *fields, **kwargs):\n self.context = kwargs.pop(\"context\", {})\n super().__init__(*fields, **kwargs)\n\n def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):\n template = self.get_template_name(template_pack)\n return render_to_string(template, self.context)\n\n\nclass SearchField(Field):\n def __init__(self, *args, **kwargs):\n kwargs[\"template\"] = \"snippets/query-field.html\"\n super().__init__(*args, **kwargs)\n\n def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):\n extra_context = {\"custom_filter_list\": self.get_search_query_choices()}\n return super().render(form, context, template_pack, extra_context, **kwargs)\n\n def get_search_query_choices(self):\n \"\"\"Return all filtering choices for query field.\"\"\"\n filter_keys = [\n \"nottranslated\",\n \"todo\",\n \"translated\",\n \"fuzzy\",\n \"suggestions\",\n \"variants\",\n \"screenshots\",\n \"labels\",\n \"context\",\n \"nosuggestions\",\n \"comments\",\n \"allchecks\",\n \"approved\",\n \"unapproved\",\n ]\n return [\n (key, FILTERS.get_filter_name(key), FILTERS.get_filter_query(key))\n for key in filter_keys\n ]\n\n\nclass CachedQueryIterator(ModelChoiceIterator):\n \"\"\"\n Choice iterator for cached querysets.\n\n It assumes the 
queryset is reused and avoids using an iterator or counting queries.\n \"\"\"\n\n def __iter__(self):\n if self.field.empty_label is not None:\n yield (\"\", self.field.empty_label)\n for obj in self.queryset:\n yield self.choice(obj)\n\n def __len__(self):\n return len(self.queryset) + (1 if self.field.empty_label is not None else 0)\n\n def __bool__(self):\n return self.field.empty_label is not None or bool(self.queryset)\n\n\nclass NonCopyingSetQuerysetMixin:\n iterator = CachedQueryIterator\n\n def _get_queryset(self):\n return self._queryset\n\n def _set_queryset(self, queryset):\n self._queryset = queryset\n self.widget.choices = self.choices\n\n queryset = property(_get_queryset, _set_queryset)\n\n\nclass CachedModelChoiceField(NonCopyingSetQuerysetMixin, forms.ModelChoiceField):\n pass\n\n\nclass CachedModelMultipleChoiceField(\n NonCopyingSetQuerysetMixin, forms.ModelMultipleChoiceField\n):\n pass\n", "path": "weblate/utils/forms.py"}]}
| 2,916 | 122 |
gh_patches_debug_23594
|
rasdani/github-patches
|
git_diff
|
canonical__microk8s-3054
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
latest/edge: server cert generation failure too verbose
```
Restarting cluster-agent to load new server certificate
error: error running snapctl: snap "microk8s" has "service-control" change in progress
Traceback (most recent call last):
File "/snap/microk8s/3125/scripts/cluster/add_token.py", line 190, in <module>
subprocess.check_call(["snapctl", "restart", "microk8s.daemon-cluster-agent"])
File "/snap/microk8s/3125/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['snapctl', 'restart', 'microk8s.daemon-cluster-agent']' returned non-zero exit status 1.
```
The action failing is fine, but the call stack being printed is unnecessary
</issue>
<code>
[start of scripts/cluster/add_token.py]
1 import json
2 import yaml
3 import os
4 import sys
5 import time
6 import argparse
7 import subprocess
8 import hashlib
9 import ssl
10 import http.client
11
12 from common.utils import is_node_running_dqlite
13
14 try:
15 from secrets import token_hex
16 except ImportError:
17 from os import urandom
18
19 def token_hex(nbytes=None):
20 return urandom(nbytes).hex()
21
22
23 cluster_tokens_file = os.path.expandvars("${SNAP_DATA}/credentials/cluster-tokens.txt")
24 utils_sh_file = os.path.expandvars("${SNAP}/actions/common/utils.sh")
25 token_with_expiry = "{}|{}\n"
26 token_without_expiry = "{}\n"
27
28
29 def add_token_with_expiry(token, file, ttl):
30 """
31 This method will add a token to the token file with or without expiry
32 Expiry time is in seconds.
33
34 Format of the item in the file: <token>|<expiry in seconds>
35
36 :param str token: The token to add to the file
37 :param str file: The file name for which the token will be written to
38 :param ttl: How long the token should last before expiry, represented in seconds.
39 """
40
41 with open(file, "a+") as fp:
42 if ttl != -1:
43 expiry = int(round(time.time())) + ttl
44 fp.write(token_with_expiry.format(token, expiry))
45 else:
46 fp.write(token_without_expiry.format(token))
47
48
49 def run_util(*args, debug=False):
50 env = os.environ.copy()
51 prog = ["bash", utils_sh_file]
52 prog.extend(args)
53
54 if debug:
55 print("\033[;1;32m+ %s\033[;0;0m" % " ".join(prog))
56
57 result = subprocess.run(
58 prog,
59 stdin=subprocess.PIPE,
60 stdout=subprocess.PIPE,
61 stderr=subprocess.PIPE,
62 env=env,
63 )
64
65 try:
66 result.check_returncode()
67 except subprocess.CalledProcessError:
68 print("Failed to call utility function.")
69 sys.exit(1)
70
71 return result.stdout.decode("utf-8").strip()
72
73
74 def get_network_info():
75 """
76 Obtain machine IP address(es) and cluster agent port.
77 :return: tuple of default IP, all IPs, and cluster agent port
78 """
79 default_ip = run_util("get_default_ip")
80 all_ips = run_util("get_ips").split(" ")
81 port = run_util("cluster_agent_port")
82
83 return (default_ip, all_ips, port)
84
85
86 def print_pretty(token, check):
87 default_ip, all_ips, port = get_network_info()
88
89 print("From the node you wish to join to this cluster, run the following:")
90 print(f"microk8s join {default_ip}:{port}/{token}/{check}\n")
91
92 if is_node_running_dqlite():
93 print(
94 "Use the '--worker' flag to join a node as a worker not running the control plane, eg:"
95 )
96 print(f"microk8s join {default_ip}:{port}/{token}/{check} --worker\n")
97
98 print(
99 "If the node you are adding is not reachable through the default interface you can use one of the following:"
100 )
101 for ip in all_ips:
102 print(f"microk8s join {ip}:{port}/{token}/{check}")
103
104
105 def get_output_dict(token, check):
106 _, all_ips, port = get_network_info()
107 info = {
108 "token": f"{token}/{check}",
109 "urls": [f"{ip}:{port}/{token}/{check}" for ip in all_ips],
110 }
111 return info
112
113
114 def print_json(token, check):
115 info = get_output_dict(token, check)
116 print(json.dumps(info, indent=2))
117
118
119 def print_yaml(token, check):
120 info = get_output_dict(token, check)
121 print(yaml.dump(info, indent=2))
122
123
124 def print_short(token, check):
125 default_ip, all_ips, port = get_network_info()
126
127 print(f"microk8s join {default_ip}:{port}/{token}/{check}")
128 for ip in all_ips:
129 print(f"microk8s join {ip}:{port}/{token}/{check}")
130
131
132 if __name__ == "__main__":
133
134 # initiate the parser with a description
135 parser = argparse.ArgumentParser(
136 description="Produce a connection string for a node to join the cluster.",
137 prog="microk8s add-node",
138 )
139 parser.add_argument(
140 "--token-ttl",
141 "-l",
142 help="Specify how long the token is valid, before it expires. "
143 'Value of "-1" indicates that the token is usable only once '
144 "(i.e. after joining a node, the token becomes invalid)",
145 type=int,
146 default="-1",
147 )
148 parser.add_argument(
149 "--token",
150 "-t",
151 help="Specify the bootstrap token to add, must be 32 characters long. "
152 "Auto generates when empty.",
153 )
154 parser.add_argument(
155 "--format",
156 help="Format the output of the token in pretty, short, token, or token-check",
157 default="pretty",
158 choices={"pretty", "short", "token", "token-check", "json", "yaml"},
159 )
160
161 # read arguments from the command line
162 args = parser.parse_args()
163
164 ttl = args.token_ttl
165
166 if args.token is not None:
167 token = args.token
168 else:
169 token = token_hex(16)
170
171 if len(token) < 32:
172 print("Invalid token size. It must be 32 characters long.")
173 exit(1)
174
175 add_token_with_expiry(token, cluster_tokens_file, ttl)
176 check = run_util("server_cert_check")
177
178 # NOTE(akolaitis): if the server certificate has changed after the cluster-agent
179 # service started, the server hash will be out of date. make sure to restart
180 # the cluster-agent in this case, otherwise the joining nodes will be unable
181 # to verify.
182 context = ssl._create_unverified_context()
183 conn = http.client.HTTPSConnection("127.0.0.1:25000", context=context)
184 conn.connect()
185 der_cert_bin = conn.sock.getpeercert(True)
186 conn.close()
187 peer_cert_hash = hashlib.sha256(der_cert_bin).hexdigest()
188 if not peer_cert_hash.startswith(check):
189 print("Restarting cluster-agent to load new server certificate")
190 subprocess.check_call(["snapctl", "restart", "microk8s.daemon-cluster-agent"])
191
192 if args.format == "pretty":
193 print_pretty(token, check)
194 elif args.format == "short":
195 print_short(token, check)
196 elif args.format == "token-check":
197 print(f"{token}/{check}")
198 elif args.format == "json":
199 print_json(token, check)
200 elif args.format == "yaml":
201 print_yaml(token, check)
202 else:
203 print(token)
204
[end of scripts/cluster/add_token.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/cluster/add_token.py b/scripts/cluster/add_token.py
--- a/scripts/cluster/add_token.py
+++ b/scripts/cluster/add_token.py
@@ -5,9 +5,6 @@
import time
import argparse
import subprocess
-import hashlib
-import ssl
-import http.client
from common.utils import is_node_running_dqlite
@@ -175,20 +172,6 @@
add_token_with_expiry(token, cluster_tokens_file, ttl)
check = run_util("server_cert_check")
- # NOTE(akolaitis): if the server certificate has changed after the cluster-agent
- # service started, the server hash will be out of date. make sure to restart
- # the cluster-agent in this case, otherwise the joining nodes will be unable
- # to verify.
- context = ssl._create_unverified_context()
- conn = http.client.HTTPSConnection("127.0.0.1:25000", context=context)
- conn.connect()
- der_cert_bin = conn.sock.getpeercert(True)
- conn.close()
- peer_cert_hash = hashlib.sha256(der_cert_bin).hexdigest()
- if not peer_cert_hash.startswith(check):
- print("Restarting cluster-agent to load new server certificate")
- subprocess.check_call(["snapctl", "restart", "microk8s.daemon-cluster-agent"])
-
if args.format == "pretty":
print_pretty(token, check)
elif args.format == "short":
|
{"golden_diff": "diff --git a/scripts/cluster/add_token.py b/scripts/cluster/add_token.py\n--- a/scripts/cluster/add_token.py\n+++ b/scripts/cluster/add_token.py\n@@ -5,9 +5,6 @@\n import time\n import argparse\n import subprocess\n-import hashlib\n-import ssl\n-import http.client\n \n from common.utils import is_node_running_dqlite\n \n@@ -175,20 +172,6 @@\n add_token_with_expiry(token, cluster_tokens_file, ttl)\n check = run_util(\"server_cert_check\")\n \n- # NOTE(akolaitis): if the server certificate has changed after the cluster-agent\n- # service started, the server hash will be out of date. make sure to restart\n- # the cluster-agent in this case, otherwise the joining nodes will be unable\n- # to verify.\n- context = ssl._create_unverified_context()\n- conn = http.client.HTTPSConnection(\"127.0.0.1:25000\", context=context)\n- conn.connect()\n- der_cert_bin = conn.sock.getpeercert(True)\n- conn.close()\n- peer_cert_hash = hashlib.sha256(der_cert_bin).hexdigest()\n- if not peer_cert_hash.startswith(check):\n- print(\"Restarting cluster-agent to load new server certificate\")\n- subprocess.check_call([\"snapctl\", \"restart\", \"microk8s.daemon-cluster-agent\"])\n-\n if args.format == \"pretty\":\n print_pretty(token, check)\n elif args.format == \"short\":\n", "issue": "latest/edge: server cert generation failure too verbose\n```\r\nRestarting cluster-agent to load new server certificate\r\n\r\nerror: error running snapctl: snap \"microk8s\" has \"service-control\" change in progress\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/snap/microk8s/3125/scripts/cluster/add_token.py\", line 190, in <module>\r\n\r\n subprocess.check_call([\"snapctl\", \"restart\", \"microk8s.daemon-cluster-agent\"])\r\n\r\n File \"/snap/microk8s/3125/usr/lib/python3.6/subprocess.py\", line 311, in check_call\r\n\r\n raise CalledProcessError(retcode, cmd)\r\n\r\nsubprocess.CalledProcessError: Command '['snapctl', 'restart', 'microk8s.daemon-cluster-agent']' returned non-zero exit status 1.\r\n\r\n```\r\n\r\nThe action failing is fine, but the call stack being printed is un-necessary \n", "before_files": [{"content": "import json\nimport yaml\nimport os\nimport sys\nimport time\nimport argparse\nimport subprocess\nimport hashlib\nimport ssl\nimport http.client\n\nfrom common.utils import is_node_running_dqlite\n\ntry:\n from secrets import token_hex\nexcept ImportError:\n from os import urandom\n\n def token_hex(nbytes=None):\n return urandom(nbytes).hex()\n\n\ncluster_tokens_file = os.path.expandvars(\"${SNAP_DATA}/credentials/cluster-tokens.txt\")\nutils_sh_file = os.path.expandvars(\"${SNAP}/actions/common/utils.sh\")\ntoken_with_expiry = \"{}|{}\\n\"\ntoken_without_expiry = \"{}\\n\"\n\n\ndef add_token_with_expiry(token, file, ttl):\n \"\"\"\n This method will add a token to the token file with or without expiry\n Expiry time is in seconds.\n\n Format of the item in the file: <token>|<expiry in seconds>\n\n :param str token: The token to add to the file\n :param str file: The file name for which the token will be written to\n :param ttl: How long the token should last before expiry, represented in seconds.\n \"\"\"\n\n with open(file, \"a+\") as fp:\n if ttl != -1:\n expiry = int(round(time.time())) + ttl\n fp.write(token_with_expiry.format(token, expiry))\n else:\n fp.write(token_without_expiry.format(token))\n\n\ndef run_util(*args, debug=False):\n env = os.environ.copy()\n prog = [\"bash\", utils_sh_file]\n prog.extend(args)\n\n if debug:\n print(\"\\033[;1;32m+ %s\\033[;0;0m\" % \" 
\".join(prog))\n\n result = subprocess.run(\n prog,\n stdin=subprocess.PIPE,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n env=env,\n )\n\n try:\n result.check_returncode()\n except subprocess.CalledProcessError:\n print(\"Failed to call utility function.\")\n sys.exit(1)\n\n return result.stdout.decode(\"utf-8\").strip()\n\n\ndef get_network_info():\n \"\"\"\n Obtain machine IP address(es) and cluster agent port.\n :return: tuple of default IP, all IPs, and cluster agent port\n \"\"\"\n default_ip = run_util(\"get_default_ip\")\n all_ips = run_util(\"get_ips\").split(\" \")\n port = run_util(\"cluster_agent_port\")\n\n return (default_ip, all_ips, port)\n\n\ndef print_pretty(token, check):\n default_ip, all_ips, port = get_network_info()\n\n print(\"From the node you wish to join to this cluster, run the following:\")\n print(f\"microk8s join {default_ip}:{port}/{token}/{check}\\n\")\n\n if is_node_running_dqlite():\n print(\n \"Use the '--worker' flag to join a node as a worker not running the control plane, eg:\"\n )\n print(f\"microk8s join {default_ip}:{port}/{token}/{check} --worker\\n\")\n\n print(\n \"If the node you are adding is not reachable through the default interface you can use one of the following:\"\n )\n for ip in all_ips:\n print(f\"microk8s join {ip}:{port}/{token}/{check}\")\n\n\ndef get_output_dict(token, check):\n _, all_ips, port = get_network_info()\n info = {\n \"token\": f\"{token}/{check}\",\n \"urls\": [f\"{ip}:{port}/{token}/{check}\" for ip in all_ips],\n }\n return info\n\n\ndef print_json(token, check):\n info = get_output_dict(token, check)\n print(json.dumps(info, indent=2))\n\n\ndef print_yaml(token, check):\n info = get_output_dict(token, check)\n print(yaml.dump(info, indent=2))\n\n\ndef print_short(token, check):\n default_ip, all_ips, port = get_network_info()\n\n print(f\"microk8s join {default_ip}:{port}/{token}/{check}\")\n for ip in all_ips:\n print(f\"microk8s join {ip}:{port}/{token}/{check}\")\n\n\nif __name__ == \"__main__\":\n\n # initiate the parser with a description\n parser = argparse.ArgumentParser(\n description=\"Produce a connection string for a node to join the cluster.\",\n prog=\"microk8s add-node\",\n )\n parser.add_argument(\n \"--token-ttl\",\n \"-l\",\n help=\"Specify how long the token is valid, before it expires. \"\n 'Value of \"-1\" indicates that the token is usable only once '\n \"(i.e. after joining a node, the token becomes invalid)\",\n type=int,\n default=\"-1\",\n )\n parser.add_argument(\n \"--token\",\n \"-t\",\n help=\"Specify the bootstrap token to add, must be 32 characters long. \"\n \"Auto generates when empty.\",\n )\n parser.add_argument(\n \"--format\",\n help=\"Format the output of the token in pretty, short, token, or token-check\",\n default=\"pretty\",\n choices={\"pretty\", \"short\", \"token\", \"token-check\", \"json\", \"yaml\"},\n )\n\n # read arguments from the command line\n args = parser.parse_args()\n\n ttl = args.token_ttl\n\n if args.token is not None:\n token = args.token\n else:\n token = token_hex(16)\n\n if len(token) < 32:\n print(\"Invalid token size. It must be 32 characters long.\")\n exit(1)\n\n add_token_with_expiry(token, cluster_tokens_file, ttl)\n check = run_util(\"server_cert_check\")\n\n # NOTE(akolaitis): if the server certificate has changed after the cluster-agent\n # service started, the server hash will be out of date. 
make sure to restart\n # the cluster-agent in this case, otherwise the joining nodes will be unable\n # to verify.\n context = ssl._create_unverified_context()\n conn = http.client.HTTPSConnection(\"127.0.0.1:25000\", context=context)\n conn.connect()\n der_cert_bin = conn.sock.getpeercert(True)\n conn.close()\n peer_cert_hash = hashlib.sha256(der_cert_bin).hexdigest()\n if not peer_cert_hash.startswith(check):\n print(\"Restarting cluster-agent to load new server certificate\")\n subprocess.check_call([\"snapctl\", \"restart\", \"microk8s.daemon-cluster-agent\"])\n\n if args.format == \"pretty\":\n print_pretty(token, check)\n elif args.format == \"short\":\n print_short(token, check)\n elif args.format == \"token-check\":\n print(f\"{token}/{check}\")\n elif args.format == \"json\":\n print_json(token, check)\n elif args.format == \"yaml\":\n print_yaml(token, check)\n else:\n print(token)\n", "path": "scripts/cluster/add_token.py"}]}
| 2,747 | 333 |
gh_patches_debug_7280
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-4428
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FAQ audience confusion
This FAQ item is unfortunately causing some local support problems for our institution:
https://github.com/jupyterhub/jupyterhub/blob/38ba275367936c9a5a081a7b5acf326134705a0a/docs/source/faq/faq.md?plain=1#L3
As I understand, this FAQ item is written for a sophisticated end-user who is setting up their own jupyterhub on their own servers. They have sysadmin access to the filesystems to configure it so that users on their system can easily share notebooks using these instructions, e.g. with [Jupyterlab "copy shareable link"](https://jupyterlab.readthedocs.io/en/stable/user/files.html).
However, these instructions cause support problems downstream when our faculty assume they can use these instructions to allow their students to share links to their notebooks. Of course, this depends on how the institution's jupyterhub is configured on the backend.
If you could clarify this FAQ item for these audiences, I think that would help reduce frustration for general end users who search google for how to share a notebook link and stumble onto this entry. I think what most of them are searching for is [RTC (Real Time Collaboration) link sharing](https://jupyterlab.readthedocs.io/en/stable/user/rtc.html) which (as of March 2023) is not yet ready for prime-time and depends on the jupyterhub server environment configured for it, which most are not. Sometimes what they are trying to do can be accomplished with [nbgitpuller link generator](https://hub.jupyter.org/nbgitpuller/link), but I think most people are looking for RTC.
Thanks!
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for Sphinx to build our documentation to HTML.
2 #
3 # Configuration reference: https://www.sphinx-doc.org/en/master/usage/configuration.html
4 #
5 import contextlib
6 import datetime
7 import io
8 import os
9 import subprocess
10
11 from docutils import nodes
12 from sphinx.directives.other import SphinxDirective
13
14 import jupyterhub
15 from jupyterhub.app import JupyterHub
16
17 # -- Project information -----------------------------------------------------
18 # ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
19 #
20 project = "JupyterHub"
21 author = "Project Jupyter Contributors"
22 copyright = f"{datetime.date.today().year}, {author}"
23
24
25 # -- General Sphinx configuration --------------------------------------------
26 # ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
27 #
28 extensions = [
29 "sphinx.ext.autodoc",
30 "sphinx.ext.intersphinx",
31 "sphinx.ext.napoleon",
32 "autodoc_traits",
33 "sphinx_copybutton",
34 "sphinx-jsonschema",
35 "sphinxext.opengraph",
36 "sphinxext.rediraffe",
37 "jupyterhub_sphinx_theme",
38 "myst_parser",
39 ]
40 root_doc = "index"
41 source_suffix = [".md"]
42 # default_role let's use use `foo` instead of ``foo`` in rST
43 default_role = "literal"
44
45
46 # -- MyST configuration ------------------------------------------------------
47 # ref: https://myst-parser.readthedocs.io/en/latest/configuration.html
48 #
49 myst_heading_anchors = 2
50
51 myst_enable_extensions = [
52 # available extensions: https://myst-parser.readthedocs.io/en/latest/syntax/optional.html
53 "attrs_inline",
54 "colon_fence",
55 "deflist",
56 "fieldlist",
57 "substitution",
58 ]
59
60 myst_substitutions = {
61 # date example: Dev 07, 2022
62 "date": datetime.date.today().strftime("%b %d, %Y").title(),
63 "version": jupyterhub.__version__,
64 }
65
66
67 # -- Custom directives to generate documentation -----------------------------
68 # ref: https://myst-parser.readthedocs.io/en/latest/syntax/roles-and-directives.html
69 #
70 # We define custom directives to help us generate documentation using Python on
71 # demand when referenced from our documentation files.
72 #
73
74 # Create a temp instance of JupyterHub for use by two separate directive classes
75 # to get the output from using the "--generate-config" and "--help-all" CLI
76 # flags respectively.
77 #
78 jupyterhub_app = JupyterHub()
79
80
81 class ConfigDirective(SphinxDirective):
82 """Generate the configuration file output for use in the documentation."""
83
84 has_content = False
85 required_arguments = 0
86 optional_arguments = 0
87 final_argument_whitespace = False
88 option_spec = {}
89
90 def run(self):
91 # The generated configuration file for this version
92 generated_config = jupyterhub_app.generate_config_file()
93 # post-process output
94 home_dir = os.environ["HOME"]
95 generated_config = generated_config.replace(home_dir, "$HOME", 1)
96 par = nodes.literal_block(text=generated_config)
97 return [par]
98
99
100 class HelpAllDirective(SphinxDirective):
101 """Print the output of jupyterhub help --all for use in the documentation."""
102
103 has_content = False
104 required_arguments = 0
105 optional_arguments = 0
106 final_argument_whitespace = False
107 option_spec = {}
108
109 def run(self):
110 # The output of the help command for this version
111 buffer = io.StringIO()
112 with contextlib.redirect_stdout(buffer):
113 jupyterhub_app.print_help("--help-all")
114 all_help = buffer.getvalue()
115 # post-process output
116 home_dir = os.environ["HOME"]
117 all_help = all_help.replace(home_dir, "$HOME", 1)
118 par = nodes.literal_block(text=all_help)
119 return [par]
120
121
122 def setup(app):
123 app.add_css_file("custom.css")
124 app.add_directive("jupyterhub-generate-config", ConfigDirective)
125 app.add_directive("jupyterhub-help-all", HelpAllDirective)
126
127
128 # -- Read The Docs -----------------------------------------------------------
129 #
130 # Since RTD runs sphinx-build directly without running "make html", we run the
131 # pre-requisite steps for "make html" from here if needed.
132 #
133 if os.environ.get("READTHEDOCS"):
134 docs = os.path.dirname(os.path.dirname(__file__))
135 subprocess.check_call(["make", "metrics", "scopes"], cwd=docs)
136
137
138 # -- Spell checking ----------------------------------------------------------
139 # ref: https://sphinxcontrib-spelling.readthedocs.io/en/latest/customize.html#configuration-options
140 #
141 # The "sphinxcontrib.spelling" extension is optionally enabled if its available.
142 #
143 try:
144 import sphinxcontrib.spelling # noqa
145 except ImportError:
146 pass
147 else:
148 extensions.append("sphinxcontrib.spelling")
149 spelling_word_list_filename = "spelling_wordlist.txt"
150
151
152 # -- Options for HTML output -------------------------------------------------
153 # ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
154 #
155 html_logo = "_static/images/logo/logo.png"
156 html_favicon = "_static/images/logo/favicon.ico"
157 html_static_path = ["_static"]
158
159 html_theme = "jupyterhub_sphinx_theme"
160 html_theme_options = {
161 "icon_links": [
162 {
163 "name": "GitHub",
164 "url": "https://github.com/jupyterhub/jupyterhub",
165 "icon": "fa-brands fa-github",
166 },
167 ],
168 "use_edit_page_button": True,
169 "navbar_align": "left",
170 }
171 html_context = {
172 "github_user": "jupyterhub",
173 "github_repo": "jupyterhub",
174 "github_version": "main",
175 "doc_path": "docs/source",
176 }
177
178
179 # -- Options for linkcheck builder -------------------------------------------
180 # ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder
181 #
182 linkcheck_ignore = [
183 r"(.*)github\.com(.*)#", # javascript based anchors
184 r"(.*)/#%21(.*)/(.*)", # /#!forum/jupyter - encoded anchor edge case
185 r"https://github.com/[^/]*$", # too many github usernames / searches in changelog
186 "https://github.com/jupyterhub/jupyterhub/pull/", # too many PRs in changelog
187 "https://github.com/jupyterhub/jupyterhub/compare/", # too many comparisons in changelog
188 r"https?://(localhost|127.0.0.1).*", # ignore localhost references in auto-links
189 r".*/rest-api.html#.*", # ignore javascript-resolved internal rest-api links
190 r"https://jupyter.chameleoncloud.org", # FIXME: ignore (presumably) short-term SSL issue
191 ]
192 linkcheck_anchors_ignore = [
193 "/#!",
194 "/#%21",
195 ]
196
197 # -- Intersphinx -------------------------------------------------------------
198 # ref: https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#configuration
199 #
200 intersphinx_mapping = {
201 "python": ("https://docs.python.org/3/", None),
202 "tornado": ("https://www.tornadoweb.org/en/stable/", None),
203 "jupyter-server": ("https://jupyter-server.readthedocs.io/en/stable/", None),
204 }
205
206 # -- Options for the opengraph extension -------------------------------------
207 # ref: https://github.com/wpilibsuite/sphinxext-opengraph#options
208 #
209 # ogp_site_url is set automatically by RTD
210 ogp_image = "_static/logo.png"
211 ogp_use_first_image = True
212
213
214 # -- Options for the rediraffe extension -------------------------------------
215 # ref: https://github.com/wpilibsuite/sphinxext-rediraffe#readme
216 #
217 # This extension helps us relocate content without breaking links. If a
218 # document is moved internally, a redirect link should be configured as below to
219 # help us not break links.
220 #
221 # The workflow for adding redirects can be as follows:
222 # 1. Change "rediraffe_branch" below to point to the commit/ branch you
223 # want to base off the changes.
224 # 2. Option 1: run "make rediraffecheckdiff"
225 # a. Analyze the output of this command.
226 # b. Manually add the redirect entries to the "redirects.txt" file.
227 # Option 2: run "make rediraffewritediff"
228 # a. rediraffe will then automatically add the obvious redirects to redirects.txt.
229 # b. Analyze the output of the command for broken links.
230 # c. Check the "redirects.txt" file for any files that were moved/ renamed but are not listed.
231 # d. Manually add the redirects that have been mised by the automatic builder to "redirects.txt".
232 # Option 3: Do not use the commands above and, instead, do everything manually - by taking
233 # note of the files you have moved or renamed and adding them to the "redirects.txt" file.
234 #
235 # If you are basing changes off another branch/ commit, always change back
236 # rediraffe_branch to main before pushing your changes upstream.
237 #
238 rediraffe_branch = os.environ.get("REDIRAFFE_BRANCH", "main")
239 rediraffe_redirects = "redirects.txt"
240
241 # allow 80% match for autogenerated redirects
242 rediraffe_auto_redirect_perc = 80
243
244 # rediraffe_redirects = {
245 # "old-file": "new-folder/new-file-name",
246 # }
247
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -201,6 +201,7 @@
"python": ("https://docs.python.org/3/", None),
"tornado": ("https://www.tornadoweb.org/en/stable/", None),
"jupyter-server": ("https://jupyter-server.readthedocs.io/en/stable/", None),
+ "nbgitpuller": ("https://nbgitpuller.readthedocs.io/en/latest", None),
}
# -- Options for the opengraph extension -------------------------------------
|
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -201,6 +201,7 @@\n \"python\": (\"https://docs.python.org/3/\", None),\n \"tornado\": (\"https://www.tornadoweb.org/en/stable/\", None),\n \"jupyter-server\": (\"https://jupyter-server.readthedocs.io/en/stable/\", None),\n+ \"nbgitpuller\": (\"https://nbgitpuller.readthedocs.io/en/latest\", None),\n }\n \n # -- Options for the opengraph extension -------------------------------------\n", "issue": "FAQ audience confusion\nThis FAQ item is unfortunately causing some local support problems for our institution:\r\n\r\nhttps://github.com/jupyterhub/jupyterhub/blob/38ba275367936c9a5a081a7b5acf326134705a0a/docs/source/faq/faq.md?plain=1#L3\r\n\r\nAs I understand, this FAQ item is written for a sophisticated end-user who is setting up their own jupterhub on their own servers. They have sysadmin access to the filesystems to configure it so that users on their system can easily share notebooks using these instructions, e.g. with [Jupyterlab \"copy shareable link\"](https://jupyterlab.readthedocs.io/en/stable/user/files.html).\r\n\r\nHowever, these instructions cause support problems downstream when our faculty assume they can use these instructions to allow their students to share links to their notebooks. Of course, this depends on how the institution's jupyterhub is configured on the backend.\r\n\r\nIf you could clarify this FAQ item for these audiences, I think that would help reduce frustration for general end users who search google for how to share a notebook link and stumble onto this entry. I think what most of them are searching for is [RTC (Real Time Collaboration) link sharing](https://jupyterlab.readthedocs.io/en/stable/user/rtc.html) which (as of March 2023) is not yet ready for prime-time and depends on the jupyterhub server environment configured for it, which most are not. 
Sometimes what they are trying to do can be accomplished with [nbgitpuller link generator](https://hub.jupyter.org/nbgitpuller/link), but I think most people are looking for RTC.\r\n\r\nThanks!\n", "before_files": [{"content": "# Configuration file for Sphinx to build our documentation to HTML.\n#\n# Configuration reference: https://www.sphinx-doc.org/en/master/usage/configuration.html\n#\nimport contextlib\nimport datetime\nimport io\nimport os\nimport subprocess\n\nfrom docutils import nodes\nfrom sphinx.directives.other import SphinxDirective\n\nimport jupyterhub\nfrom jupyterhub.app import JupyterHub\n\n# -- Project information -----------------------------------------------------\n# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\n#\nproject = \"JupyterHub\"\nauthor = \"Project Jupyter Contributors\"\ncopyright = f\"{datetime.date.today().year}, {author}\"\n\n\n# -- General Sphinx configuration --------------------------------------------\n# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n#\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"autodoc_traits\",\n \"sphinx_copybutton\",\n \"sphinx-jsonschema\",\n \"sphinxext.opengraph\",\n \"sphinxext.rediraffe\",\n \"jupyterhub_sphinx_theme\",\n \"myst_parser\",\n]\nroot_doc = \"index\"\nsource_suffix = [\".md\"]\n# default_role let's use use `foo` instead of ``foo`` in rST\ndefault_role = \"literal\"\n\n\n# -- MyST configuration ------------------------------------------------------\n# ref: https://myst-parser.readthedocs.io/en/latest/configuration.html\n#\nmyst_heading_anchors = 2\n\nmyst_enable_extensions = [\n # available extensions: https://myst-parser.readthedocs.io/en/latest/syntax/optional.html\n \"attrs_inline\",\n \"colon_fence\",\n \"deflist\",\n \"fieldlist\",\n \"substitution\",\n]\n\nmyst_substitutions = {\n # date example: Dev 07, 2022\n \"date\": datetime.date.today().strftime(\"%b %d, %Y\").title(),\n \"version\": jupyterhub.__version__,\n}\n\n\n# -- Custom directives to generate documentation -----------------------------\n# ref: https://myst-parser.readthedocs.io/en/latest/syntax/roles-and-directives.html\n#\n# We define custom directives to help us generate documentation using Python on\n# demand when referenced from our documentation files.\n#\n\n# Create a temp instance of JupyterHub for use by two separate directive classes\n# to get the output from using the \"--generate-config\" and \"--help-all\" CLI\n# flags respectively.\n#\njupyterhub_app = JupyterHub()\n\n\nclass ConfigDirective(SphinxDirective):\n \"\"\"Generate the configuration file output for use in the documentation.\"\"\"\n\n has_content = False\n required_arguments = 0\n optional_arguments = 0\n final_argument_whitespace = False\n option_spec = {}\n\n def run(self):\n # The generated configuration file for this version\n generated_config = jupyterhub_app.generate_config_file()\n # post-process output\n home_dir = os.environ[\"HOME\"]\n generated_config = generated_config.replace(home_dir, \"$HOME\", 1)\n par = nodes.literal_block(text=generated_config)\n return [par]\n\n\nclass HelpAllDirective(SphinxDirective):\n \"\"\"Print the output of jupyterhub help --all for use in the documentation.\"\"\"\n\n has_content = False\n required_arguments = 0\n optional_arguments = 0\n final_argument_whitespace = False\n option_spec = {}\n\n def run(self):\n # The output of the help command for this version\n buffer = io.StringIO()\n 
with contextlib.redirect_stdout(buffer):\n jupyterhub_app.print_help(\"--help-all\")\n all_help = buffer.getvalue()\n # post-process output\n home_dir = os.environ[\"HOME\"]\n all_help = all_help.replace(home_dir, \"$HOME\", 1)\n par = nodes.literal_block(text=all_help)\n return [par]\n\n\ndef setup(app):\n app.add_css_file(\"custom.css\")\n app.add_directive(\"jupyterhub-generate-config\", ConfigDirective)\n app.add_directive(\"jupyterhub-help-all\", HelpAllDirective)\n\n\n# -- Read The Docs -----------------------------------------------------------\n#\n# Since RTD runs sphinx-build directly without running \"make html\", we run the\n# pre-requisite steps for \"make html\" from here if needed.\n#\nif os.environ.get(\"READTHEDOCS\"):\n docs = os.path.dirname(os.path.dirname(__file__))\n subprocess.check_call([\"make\", \"metrics\", \"scopes\"], cwd=docs)\n\n\n# -- Spell checking ----------------------------------------------------------\n# ref: https://sphinxcontrib-spelling.readthedocs.io/en/latest/customize.html#configuration-options\n#\n# The \"sphinxcontrib.spelling\" extension is optionally enabled if its available.\n#\ntry:\n import sphinxcontrib.spelling # noqa\nexcept ImportError:\n pass\nelse:\n extensions.append(\"sphinxcontrib.spelling\")\nspelling_word_list_filename = \"spelling_wordlist.txt\"\n\n\n# -- Options for HTML output -------------------------------------------------\n# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n#\nhtml_logo = \"_static/images/logo/logo.png\"\nhtml_favicon = \"_static/images/logo/favicon.ico\"\nhtml_static_path = [\"_static\"]\n\nhtml_theme = \"jupyterhub_sphinx_theme\"\nhtml_theme_options = {\n \"icon_links\": [\n {\n \"name\": \"GitHub\",\n \"url\": \"https://github.com/jupyterhub/jupyterhub\",\n \"icon\": \"fa-brands fa-github\",\n },\n ],\n \"use_edit_page_button\": True,\n \"navbar_align\": \"left\",\n}\nhtml_context = {\n \"github_user\": \"jupyterhub\",\n \"github_repo\": \"jupyterhub\",\n \"github_version\": \"main\",\n \"doc_path\": \"docs/source\",\n}\n\n\n# -- Options for linkcheck builder -------------------------------------------\n# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder\n#\nlinkcheck_ignore = [\n r\"(.*)github\\.com(.*)#\", # javascript based anchors\n r\"(.*)/#%21(.*)/(.*)\", # /#!forum/jupyter - encoded anchor edge case\n r\"https://github.com/[^/]*$\", # too many github usernames / searches in changelog\n \"https://github.com/jupyterhub/jupyterhub/pull/\", # too many PRs in changelog\n \"https://github.com/jupyterhub/jupyterhub/compare/\", # too many comparisons in changelog\n r\"https?://(localhost|127.0.0.1).*\", # ignore localhost references in auto-links\n r\".*/rest-api.html#.*\", # ignore javascript-resolved internal rest-api links\n r\"https://jupyter.chameleoncloud.org\", # FIXME: ignore (presumably) short-term SSL issue\n]\nlinkcheck_anchors_ignore = [\n \"/#!\",\n \"/#%21\",\n]\n\n# -- Intersphinx -------------------------------------------------------------\n# ref: https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#configuration\n#\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n \"tornado\": (\"https://www.tornadoweb.org/en/stable/\", None),\n \"jupyter-server\": (\"https://jupyter-server.readthedocs.io/en/stable/\", None),\n}\n\n# -- Options for the opengraph extension -------------------------------------\n# ref: 
https://github.com/wpilibsuite/sphinxext-opengraph#options\n#\n# ogp_site_url is set automatically by RTD\nogp_image = \"_static/logo.png\"\nogp_use_first_image = True\n\n\n# -- Options for the rediraffe extension -------------------------------------\n# ref: https://github.com/wpilibsuite/sphinxext-rediraffe#readme\n#\n# This extension helps us relocate content without breaking links. If a\n# document is moved internally, a redirect link should be configured as below to\n# help us not break links.\n#\n# The workflow for adding redirects can be as follows:\n# 1. Change \"rediraffe_branch\" below to point to the commit/ branch you\n# want to base off the changes.\n# 2. Option 1: run \"make rediraffecheckdiff\"\n# a. Analyze the output of this command.\n# b. Manually add the redirect entries to the \"redirects.txt\" file.\n# Option 2: run \"make rediraffewritediff\"\n# a. rediraffe will then automatically add the obvious redirects to redirects.txt.\n# b. Analyze the output of the command for broken links.\n# c. Check the \"redirects.txt\" file for any files that were moved/ renamed but are not listed.\n# d. Manually add the redirects that have been mised by the automatic builder to \"redirects.txt\".\n# Option 3: Do not use the commands above and, instead, do everything manually - by taking\n# note of the files you have moved or renamed and adding them to the \"redirects.txt\" file.\n#\n# If you are basing changes off another branch/ commit, always change back\n# rediraffe_branch to main before pushing your changes upstream.\n#\nrediraffe_branch = os.environ.get(\"REDIRAFFE_BRANCH\", \"main\")\nrediraffe_redirects = \"redirects.txt\"\n\n# allow 80% match for autogenerated redirects\nrediraffe_auto_redirect_perc = 80\n\n# rediraffe_redirects = {\n# \"old-file\": \"new-folder/new-file-name\",\n# }\n", "path": "docs/source/conf.py"}]}
| 3,603 | 138 |
gh_patches_debug_29733
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4879
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to Evaluate Final Result from condition
**Describe the issue**
CKV_GCP_43: "Ensure KMS encryption keys are rotated within a period of 90 days"
**Examples**
Check: CKV_GCP_43: "Ensure KMS encryption keys are rotated within a period of 90 days"
FAILED for resource: module.kms.google_kms_crypto_key.key
File: /main.tf:11-29
Calling File: /example/production/main.tf:1-6
Guide: https://docs.bridgecrew.io/docs/bc_gcp_general_4
11 | resource "google_kms_crypto_key" "key" {
12 | count = var.prevent_destroy ? length(var.keys) : 0
13 | name = var.keys[count.index]
14 | key_ring = google_kms_key_ring.key_ring.id
15 | rotation_period = contains(["ASYMMETRIC_SIGN", "ASYMMETRIC_DECRYPT"], var.purpose) ? null : var.key_rotation_period
16 | #rotation_period = var.key_rotation_period
17 | purpose = var.purpose
18 |
19 | lifecycle {
20 | prevent_destroy = true
21 | }
22 |
23 | version_template {
24 | algorithm = var.key_algorithm
25 | protection_level = var.key_protection_level
26 | }
27 |
28 | labels = var.labels
29 | }
Checkov should raise an error only for ASYMMETRIC key creation, not for KMS keys with the ENCRYPT_DECRYPT purpose. Even after setting the purpose to ENCRYPT_DECRYPT and the key_rotation_period variable to 90 days (7776000s), the check is failing.
**Version (please complete the following information):**
- Checkov Version 2.3.156
**Additional context**
`contains(["ASYMMETRIC_SIGN", "ASYMMETRIC_DECRYPT"], var.purpose) ? null : var.key_rotation_period`
The line above should be evaluated and the check marked as passed for GCP KMS, since ASYMMETRIC keys do not support automatic rotation.
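A quick arithmetic sketch (constants mirrored from the check implementation below; illustrative only, not repository code) of why a resolved value of `7776000s` sits inside the accepted window:

```python
# Mirrors GoogleKMSRotationPeriod's bounds check; not repository code.
ONE_DAY = 24 * 60 * 60
NINETY_DAYS = 90 * ONE_DAY             # 7776000 seconds
rotation = "7776000s"
time = int(rotation[:-1])              # the check strips the trailing "s"
print(ONE_DAY <= time <= NINETY_DAYS)  # True -> a resolved 7776000s should pass
```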
</issue>
<code>
[start of checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py]
1 from typing import Dict, List, Any
2
3 from checkov.common.util.type_forcers import force_int
4
5 from checkov.common.models.enums import CheckResult, CheckCategories
6 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
7
8 # rotation_period time unit is seconds
9 ONE_DAY = 24 * 60 * 60
10 NINETY_DAYS = 90 * ONE_DAY
11
12
13 class GoogleKMSKeyRotationPeriod(BaseResourceCheck):
14 def __init__(self) -> None:
15 name = "Ensure KMS encryption keys are rotated within a period of 90 days"
16 id = "CKV_GCP_43"
17 supported_resources = ["google_kms_crypto_key"]
18 categories = [CheckCategories.GENERAL_SECURITY]
19 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
20
21 def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
22 self.evaluated_keys = ["rotation_period"]
23 rotation = conf.get("rotation_period")
24 if rotation and rotation[0]:
25 time = force_int(rotation[0][:-1])
26 if time and ONE_DAY <= time <= NINETY_DAYS:
27 return CheckResult.PASSED
28 return CheckResult.FAILED
29
30
31 check = GoogleKMSKeyRotationPeriod()
32
[end of checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
--- a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
+++ b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
@@ -5,6 +5,7 @@
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
+ASYMMETRIC_KEYS = {"ASYMMETRIC_DECRYPT", "ASYMMETRIC_SIGN"}
# rotation_period time unit is seconds
ONE_DAY = 24 * 60 * 60
NINETY_DAYS = 90 * ONE_DAY
@@ -14,11 +15,17 @@
def __init__(self) -> None:
name = "Ensure KMS encryption keys are rotated within a period of 90 days"
id = "CKV_GCP_43"
- supported_resources = ["google_kms_crypto_key"]
- categories = [CheckCategories.GENERAL_SECURITY]
+ supported_resources = ("google_kms_crypto_key",)
+ categories = (CheckCategories.GENERAL_SECURITY,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
+ purpose = conf.get("purpose")
+ if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:
+ # https://cloud.google.com/kms/docs/key-rotation#asymmetric
+ # automatic key rotation is not supported for asymmetric keys
+ return CheckResult.UNKNOWN
+
self.evaluated_keys = ["rotation_period"]
rotation = conf.get("rotation_period")
if rotation and rotation[0]:
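
For context, a minimal sketch (not part of the repository) of exercising the patched check directly; the `conf` dictionaries imitate what Checkov hands to `scan_resource_conf` after parsing the HCL, and the literal values are illustrative:

```python
from checkov.common.models.enums import CheckResult
from checkov.terraform.checks.resource.gcp.GoogleKMSRotationPeriod import check

symmetric = {"purpose": ["ENCRYPT_DECRYPT"], "rotation_period": ["7776000s"]}
asymmetric = {"purpose": ["ASYMMETRIC_SIGN"]}  # rotation_period legitimately absent

# A rotated symmetric key passes as before.
assert check.scan_resource_conf(symmetric) == CheckResult.PASSED
# With the patch applied, the asymmetric key is reported as UNKNOWN instead of FAILED.
assert check.scan_resource_conf(asymmetric) == CheckResult.UNKNOWN
```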
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n--- a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n+++ b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n@@ -5,6 +5,7 @@\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n+ASYMMETRIC_KEYS = {\"ASYMMETRIC_DECRYPT\", \"ASYMMETRIC_SIGN\"}\n # rotation_period time unit is seconds\n ONE_DAY = 24 * 60 * 60\n NINETY_DAYS = 90 * ONE_DAY\n@@ -14,11 +15,17 @@\n def __init__(self) -> None:\n name = \"Ensure KMS encryption keys are rotated within a period of 90 days\"\n id = \"CKV_GCP_43\"\n- supported_resources = [\"google_kms_crypto_key\"]\n- categories = [CheckCategories.GENERAL_SECURITY]\n+ supported_resources = (\"google_kms_crypto_key\",)\n+ categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n+ purpose = conf.get(\"purpose\")\n+ if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:\n+ # https://cloud.google.com/kms/docs/key-rotation#asymmetric\n+ # automatic key rotation is not supported for asymmetric keys\n+ return CheckResult.UNKNOWN\n+\n self.evaluated_keys = [\"rotation_period\"]\n rotation = conf.get(\"rotation_period\")\n if rotation and rotation[0]:\n", "issue": "Unable to Evaluate Final Result from condition \n**Describe the issue**\r\nCKV_GCP_43: \"Ensure KMS encryption keys are rotated within a period of 90 days\"\r\n\r\n**Examples**\r\nCheck: CKV_GCP_43: \"Ensure KMS encryption keys are rotated within a period of 90 days\"\r\n\tFAILED for resource: module.kms.google_kms_crypto_key.key\r\n\tFile: /main.tf:11-29\r\n\tCalling File: /example/production/main.tf:1-6\r\n\tGuide: https://docs.bridgecrew.io/docs/bc_gcp_general_4\r\n\r\n\t\t11 | resource \"google_kms_crypto_key\" \"key\" {\r\n\t\t12 | count = var.prevent_destroy ? length(var.keys) : 0\r\n\t\t13 | name = var.keys[count.index]\r\n\t\t14 | key_ring = google_kms_key_ring.key_ring.id\r\n\t\t15 | rotation_period = contains([\"ASYMMETRIC_SIGN\", \"ASYMMETRIC_DECRYPT\"], var.purpose) ? null : var.key_rotation_period\r\n\t\t16 | #rotation_period = var.key_rotation_period\r\n\t\t17 | purpose = var.purpose\r\n\t\t18 |\r\n\t\t19 | lifecycle {\r\n\t\t20 | prevent_destroy = true\r\n\t\t21 | }\r\n\t\t22 |\r\n\t\t23 | version_template {\r\n\t\t24 | algorithm = var.key_algorithm\r\n\t\t25 | protection_level = var.key_protection_level\r\n\t\t26 | }\r\n\t\t27 |\r\n\t\t28 | labels = var.labels\r\n\t\t29 | }\r\n\r\nCheckov should providing error only in ASYMMETRIC key creation not the ENCRYPT_DCRYPT purpose for KMS key. Even after setting the purpose to ENCRYPT_DCRYPT and key_rotation_period variable to 90 days(7776000s), check is failing.\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.3.156\r\n\r\n**Additional context**\r\n`contains([\"ASYMMETRIC_SIGN\", \"ASYMMETRIC_DECRYPT\"], var.purpose) ? 
null : var.key_rotation_period`\r\nAbove line should be evaluated and marked as passed for GCP KMS as ASYMMETRIC key is not supporting Automatic rotation.\r\n\n", "before_files": [{"content": "from typing import Dict, List, Any\n\nfrom checkov.common.util.type_forcers import force_int\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n# rotation_period time unit is seconds\nONE_DAY = 24 * 60 * 60\nNINETY_DAYS = 90 * ONE_DAY\n\n\nclass GoogleKMSKeyRotationPeriod(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure KMS encryption keys are rotated within a period of 90 days\"\n id = \"CKV_GCP_43\"\n supported_resources = [\"google_kms_crypto_key\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n self.evaluated_keys = [\"rotation_period\"]\n rotation = conf.get(\"rotation_period\")\n if rotation and rotation[0]:\n time = force_int(rotation[0][:-1])\n if time and ONE_DAY <= time <= NINETY_DAYS:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = GoogleKMSKeyRotationPeriod()\n", "path": "checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py"}]}
| 1,408 | 417 |
gh_patches_debug_50437
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-5470
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Build List Screen Performance Issue
While working on #5464, I discovered a performance issue on the [build list screen](http://readthedocs.org/projects/requests/builds/).
There appear to be a couple of duplicated queries in the build list screen. One gets the project (probably from `Build.get_absolute_url`) for the build and the other gets the version (probably from `build_list_detailed.html`). This results in O(n) SQL queries, where n is the number of builds displayed. It should be possible to get the project and version for each build using `select_related`.
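As a rough sketch (using the model and manager names from the view code included below; not repository code), chaining `select_related` lets each build's project and version be read without per-row queries:

```python
from readthedocs.builds.models import Build

def builds_with_related(user, project):
    # One SELECT with JOINs; iterating the result and touching build.project or
    # build.version afterwards no longer issues an extra query per build.
    return Build.objects.public(user=user, project=project).select_related(
        "project", "version"
    )
```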
<img width="1155" alt="Screen Shot 2019-03-15 at 11 00 13 AM" src="https://user-images.githubusercontent.com/185043/54452149-a0d76e80-4711-11e9-82f4-763418863f94.png">
</issue>
<code>
[start of readthedocs/builds/views.py]
1 # -*- coding: utf-8 -*-
2
3 """Views for builds app."""
4
5 import logging
6 import textwrap
7
8 from django.contrib import messages
9 from django.contrib.auth.decorators import login_required
10 from django.http import (
11 HttpResponseForbidden,
12 HttpResponsePermanentRedirect,
13 HttpResponseRedirect,
14 )
15 from django.shortcuts import get_object_or_404
16 from django.urls import reverse
17 from django.utils.decorators import method_decorator
18 from django.views.generic import DetailView, ListView
19 from requests.utils import quote
20 from urllib.parse import urlparse
21
22 from readthedocs.doc_builder.exceptions import BuildEnvironmentError
23 from readthedocs.builds.models import Build, Version
24 from readthedocs.core.permissions import AdminPermission
25 from readthedocs.core.utils import trigger_build
26 from readthedocs.projects.models import Project
27
28
29 log = logging.getLogger(__name__)
30
31
32 class BuildBase:
33 model = Build
34
35 def get_queryset(self):
36 self.project_slug = self.kwargs.get('project_slug', None)
37 self.project = get_object_or_404(
38 Project.objects.protected(self.request.user),
39 slug=self.project_slug,
40 )
41 queryset = Build.objects.public(
42 user=self.request.user,
43 project=self.project,
44 )
45
46 return queryset
47
48
49 class BuildTriggerMixin:
50
51 @method_decorator(login_required)
52 def post(self, request, project_slug):
53 project = get_object_or_404(Project, slug=project_slug)
54
55 if not AdminPermission.is_admin(request.user, project):
56 return HttpResponseForbidden()
57
58 version_slug = request.POST.get('version_slug')
59 version = get_object_or_404(
60 Version,
61 project=project,
62 slug=version_slug,
63 )
64
65 update_docs_task, build = trigger_build(
66 project=project,
67 version=version,
68 )
69 if (update_docs_task, build) == (None, None):
70 # Build was skipped
71 messages.add_message(
72 request,
73 messages.WARNING,
74 "This project is currently disabled and can't trigger new builds.",
75 )
76 return HttpResponseRedirect(
77 reverse('builds_project_list', args=[project.slug]),
78 )
79
80 return HttpResponseRedirect(
81 reverse('builds_detail', args=[project.slug, build.pk]),
82 )
83
84
85 class BuildList(BuildBase, BuildTriggerMixin, ListView):
86
87 def get_context_data(self, **kwargs):
88 context = super().get_context_data(**kwargs)
89
90 active_builds = self.get_queryset().exclude(
91 state='finished',
92 ).values('id')
93
94 context['project'] = self.project
95 context['active_builds'] = active_builds
96 context['versions'] = Version.objects.public(
97 user=self.request.user,
98 project=self.project,
99 )
100 context['build_qs'] = self.get_queryset()
101
102 return context
103
104
105 class BuildDetail(BuildBase, DetailView):
106 pk_url_kwarg = 'build_pk'
107
108 def get_context_data(self, **kwargs):
109 context = super().get_context_data(**kwargs)
110 context['project'] = self.project
111
112 build = self.get_object()
113 if build.error != BuildEnvironmentError.GENERIC_WITH_BUILD_ID.format(build_id=build.pk):
114 # Do not suggest to open an issue if the error is not generic
115 return context
116
117 scheme = (
118 'https://github.com/rtfd/readthedocs.org/issues/new'
119 '?title={title}{build_id}'
120 '&body={body}'
121 )
122
123 # TODO: we could use ``.github/ISSUE_TEMPLATE.md`` here, but we would
124 # need to add some variables to it which could impact in the UX when
125 # filling an issue from the web
126 body = """
127 ## Details:
128
129 * Project URL: https://readthedocs.org/projects/{project_slug}/
130 * Build URL(if applicable): https://readthedocs.org{build_path}
131 * Read the Docs username(if applicable): {username}
132
133 ## Expected Result
134
135 *A description of what you wanted to happen*
136
137 ## Actual Result
138
139 *A description of what actually happened*""".format(
140 project_slug=self.project,
141 build_path=self.request.path,
142 username=self.request.user,
143 )
144
145 scheme_dict = {
146 'title': quote('Build error with build id #'),
147 'build_id': context['build'].id,
148 'body': quote(textwrap.dedent(body)),
149 }
150
151 issue_url = scheme.format(**scheme_dict)
152 issue_url = urlparse(issue_url).geturl()
153 context['issue_url'] = issue_url
154 return context
155
156
157 # Old build view redirects
158
159
160 def builds_redirect_list(request, project_slug): # pylint: disable=unused-argument
161 return HttpResponsePermanentRedirect(
162 reverse('builds_project_list', args=[project_slug]),
163 )
164
165
166 def builds_redirect_detail(request, project_slug, pk): # pylint: disable=unused-argument
167 return HttpResponsePermanentRedirect(
168 reverse('builds_detail', args=[project_slug, pk]),
169 )
170
[end of readthedocs/builds/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/builds/views.py b/readthedocs/builds/views.py
--- a/readthedocs/builds/views.py
+++ b/readthedocs/builds/views.py
@@ -41,7 +41,7 @@
queryset = Build.objects.public(
user=self.request.user,
project=self.project,
- )
+ ).select_related('project', 'version')
return queryset
|
{"golden_diff": "diff --git a/readthedocs/builds/views.py b/readthedocs/builds/views.py\n--- a/readthedocs/builds/views.py\n+++ b/readthedocs/builds/views.py\n@@ -41,7 +41,7 @@\n queryset = Build.objects.public(\n user=self.request.user,\n project=self.project,\n- )\n+ ).select_related('project', 'version')\n \n return queryset\n", "issue": "Build List Screen Performance Issue\nWhile working on #5464, I discovered a performance issue on the [build list screen](http://readthedocs.org/projects/requests/builds/).\r\n\r\nThere appears to be a couple duplicated queries in the build list screen. One is to get the project (probably from `Build.get_absolute_url`) for the build and the other gets the version (probably from `build_list_detailed.html`). This results in O(n) SQL queries where n is the number of builds displayed. It should be possible to get the project and version for each build using `select_related`.\r\n\r\n<img width=\"1155\" alt=\"Screen Shot 2019-03-15 at 11 00 13 AM\" src=\"https://user-images.githubusercontent.com/185043/54452149-a0d76e80-4711-11e9-82f4-763418863f94.png\">\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Views for builds app.\"\"\"\n\nimport logging\nimport textwrap\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import (\n HttpResponseForbidden,\n HttpResponsePermanentRedirect,\n HttpResponseRedirect,\n)\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import DetailView, ListView\nfrom requests.utils import quote\nfrom urllib.parse import urlparse\n\nfrom readthedocs.doc_builder.exceptions import BuildEnvironmentError\nfrom readthedocs.builds.models import Build, Version\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.utils import trigger_build\nfrom readthedocs.projects.models import Project\n\n\nlog = logging.getLogger(__name__)\n\n\nclass BuildBase:\n model = Build\n\n def get_queryset(self):\n self.project_slug = self.kwargs.get('project_slug', None)\n self.project = get_object_or_404(\n Project.objects.protected(self.request.user),\n slug=self.project_slug,\n )\n queryset = Build.objects.public(\n user=self.request.user,\n project=self.project,\n )\n\n return queryset\n\n\nclass BuildTriggerMixin:\n\n @method_decorator(login_required)\n def post(self, request, project_slug):\n project = get_object_or_404(Project, slug=project_slug)\n\n if not AdminPermission.is_admin(request.user, project):\n return HttpResponseForbidden()\n\n version_slug = request.POST.get('version_slug')\n version = get_object_or_404(\n Version,\n project=project,\n slug=version_slug,\n )\n\n update_docs_task, build = trigger_build(\n project=project,\n version=version,\n )\n if (update_docs_task, build) == (None, None):\n # Build was skipped\n messages.add_message(\n request,\n messages.WARNING,\n \"This project is currently disabled and can't trigger new builds.\",\n )\n return HttpResponseRedirect(\n reverse('builds_project_list', args=[project.slug]),\n )\n\n return HttpResponseRedirect(\n reverse('builds_detail', args=[project.slug, build.pk]),\n )\n\n\nclass BuildList(BuildBase, BuildTriggerMixin, ListView):\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n active_builds = self.get_queryset().exclude(\n state='finished',\n ).values('id')\n\n context['project'] = self.project\n context['active_builds'] 
= active_builds\n context['versions'] = Version.objects.public(\n user=self.request.user,\n project=self.project,\n )\n context['build_qs'] = self.get_queryset()\n\n return context\n\n\nclass BuildDetail(BuildBase, DetailView):\n pk_url_kwarg = 'build_pk'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['project'] = self.project\n\n build = self.get_object()\n if build.error != BuildEnvironmentError.GENERIC_WITH_BUILD_ID.format(build_id=build.pk):\n # Do not suggest to open an issue if the error is not generic\n return context\n\n scheme = (\n 'https://github.com/rtfd/readthedocs.org/issues/new'\n '?title={title}{build_id}'\n '&body={body}'\n )\n\n # TODO: we could use ``.github/ISSUE_TEMPLATE.md`` here, but we would\n # need to add some variables to it which could impact in the UX when\n # filling an issue from the web\n body = \"\"\"\n ## Details:\n\n * Project URL: https://readthedocs.org/projects/{project_slug}/\n * Build URL(if applicable): https://readthedocs.org{build_path}\n * Read the Docs username(if applicable): {username}\n\n ## Expected Result\n\n *A description of what you wanted to happen*\n\n ## Actual Result\n\n *A description of what actually happened*\"\"\".format(\n project_slug=self.project,\n build_path=self.request.path,\n username=self.request.user,\n )\n\n scheme_dict = {\n 'title': quote('Build error with build id #'),\n 'build_id': context['build'].id,\n 'body': quote(textwrap.dedent(body)),\n }\n\n issue_url = scheme.format(**scheme_dict)\n issue_url = urlparse(issue_url).geturl()\n context['issue_url'] = issue_url\n return context\n\n\n# Old build view redirects\n\n\ndef builds_redirect_list(request, project_slug): # pylint: disable=unused-argument\n return HttpResponsePermanentRedirect(\n reverse('builds_project_list', args=[project_slug]),\n )\n\n\ndef builds_redirect_detail(request, project_slug, pk): # pylint: disable=unused-argument\n return HttpResponsePermanentRedirect(\n reverse('builds_detail', args=[project_slug, pk]),\n )\n", "path": "readthedocs/builds/views.py"}]}
| 2,227 | 89 |
gh_patches_debug_4687
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-258
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reduce Cython compile errors
We get a bunch of compile-time warnings from the Cython code. I think we could reduce these.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # Two environmental variables influence this script.
4 #
5 # GDAL_CONFIG: the path to a gdal-config program that points to GDAL headers,
6 # libraries, and data files.
7 #
8 # PACKAGE_DATA: if defined, GDAL and PROJ4 data files will be copied into the
9 # source or binary distribution. This is essential when creating self-contained
10 # binary wheels.
11
12 import logging
13 import os
14 import pprint
15 import shutil
16 import subprocess
17 import sys
18
19 from setuptools import setup
20 from setuptools.extension import Extension
21
22 logging.basicConfig()
23 log = logging.getLogger()
24
25 # python -W all setup.py ...
26 if 'all' in sys.warnoptions:
27 log.level = logging.DEBUG
28
29 # Parse the version from the fiona module.
30 with open('rasterio/__init__.py') as f:
31 for line in f:
32 if line.find("__version__") >= 0:
33 version = line.split("=")[1].strip()
34 version = version.strip('"')
35 version = version.strip("'")
36 continue
37
38 with open('VERSION.txt', 'w') as f:
39 f.write(version)
40
41 # Use Cython if available.
42 try:
43 from Cython.Build import cythonize
44 except ImportError:
45 cythonize = None
46
47 # By default we'll try to get options via gdal-config. On systems without,
48 # options will need to be set in setup.cfg or on the setup command line.
49 include_dirs = []
50 library_dirs = []
51 libraries = []
52 extra_link_args = []
53
54 try:
55 import numpy
56 include_dirs.append(numpy.get_include())
57 except ImportError:
58 log.critical("Numpy and its headers are required to run setup(). Exiting.")
59 sys.exit(1)
60
61 try:
62 gdal_config = os.environ.get('GDAL_CONFIG', 'gdal-config')
63 with open("gdal-config.txt", "w") as gcfg:
64 subprocess.call([gdal_config, "--cflags"], stdout=gcfg)
65 subprocess.call([gdal_config, "--libs"], stdout=gcfg)
66 subprocess.call([gdal_config, "--datadir"], stdout=gcfg)
67 with open("gdal-config.txt", "r") as gcfg:
68 cflags = gcfg.readline().strip()
69 libs = gcfg.readline().strip()
70 datadir = gcfg.readline().strip()
71 for item in cflags.split():
72 if item.startswith("-I"):
73 include_dirs.extend(item[2:].split(":"))
74 for item in libs.split():
75 if item.startswith("-L"):
76 library_dirs.extend(item[2:].split(":"))
77 elif item.startswith("-l"):
78 libraries.append(item[2:])
79 else:
80 # e.g. -framework GDAL
81 extra_link_args.append(item)
82
83 # Conditionally copy the GDAL data. To be used in conjunction with
84 # the bdist_wheel command to make self-contained binary wheels.
85 if os.environ.get('PACKAGE_DATA'):
86 try:
87 shutil.rmtree('rasterio/gdal_data')
88 except OSError:
89 pass
90 shutil.copytree(datadir, 'rasterio/gdal_data')
91
92 except Exception as e:
93 log.warning("Failed to get options via gdal-config: %s", str(e))
94
95 # Conditionally copy PROJ.4 data.
96 if os.environ.get('PACKAGE_DATA'):
97 projdatadir = os.environ.get('PROJ_LIB', '/usr/local/share/proj')
98 if os.path.exists(projdatadir):
99 try:
100 shutil.rmtree('rasterio/proj_data')
101 except OSError:
102 pass
103 shutil.copytree(projdatadir, 'rasterio/proj_data')
104
105 ext_options = dict(
106 include_dirs=include_dirs,
107 library_dirs=library_dirs,
108 libraries=libraries,
109 extra_link_args=extra_link_args)
110
111 log.debug('ext_options:\n%s', pprint.pformat(ext_options))
112
113 # When building from a repo, Cython is required.
114 if os.path.exists("MANIFEST.in") and "clean" not in sys.argv:
115 log.info("MANIFEST.in found, presume a repo, cythonizing...")
116 if not cythonize:
117 log.critical(
118 "Cython.Build.cythonize not found. "
119 "Cython is required to build from a repo.")
120 sys.exit(1)
121 ext_modules = cythonize([
122 Extension(
123 'rasterio._base', ['rasterio/_base.pyx'], **ext_options),
124 Extension(
125 'rasterio._io', ['rasterio/_io.pyx'], **ext_options),
126 Extension(
127 'rasterio._copy', ['rasterio/_copy.pyx'], **ext_options),
128 Extension(
129 'rasterio._features', ['rasterio/_features.pyx'], **ext_options),
130 Extension(
131 'rasterio._drivers', ['rasterio/_drivers.pyx'], **ext_options),
132 Extension(
133 'rasterio._warp', ['rasterio/_warp.pyx'], **ext_options),
134 Extension(
135 'rasterio._err', ['rasterio/_err.pyx'], **ext_options),
136 Extension(
137 'rasterio._example', ['rasterio/_example.pyx'], **ext_options),
138 ])
139
140 # If there's no manifest template, as in an sdist, we just specify .c files.
141 else:
142 ext_modules = [
143 Extension(
144 'rasterio._base', ['rasterio/_base.c'], **ext_options),
145 Extension(
146 'rasterio._io', ['rasterio/_io.c'], **ext_options),
147 Extension(
148 'rasterio._copy', ['rasterio/_copy.c'], **ext_options),
149 Extension(
150 'rasterio._features', ['rasterio/_features.c'], **ext_options),
151 Extension(
152 'rasterio._drivers', ['rasterio/_drivers.c'], **ext_options),
153 Extension(
154 'rasterio._warp', ['rasterio/_warp.cpp'], **ext_options),
155 Extension(
156 'rasterio._err', ['rasterio/_err.c'], **ext_options),
157 Extension(
158 'rasterio._example', ['rasterio/_example.c'], **ext_options),
159 ]
160
161 with open('README.rst') as f:
162 readme = f.read()
163
164 # Runtime requirements.
165 inst_reqs = [
166 'affine>=1.0',
167 'cligj',
168 'Numpy>=1.7' ]
169
170 if sys.version_info < (3, 4):
171 inst_reqs.append('enum34')
172
173 setup_args = dict(
174 name='rasterio',
175 version=version,
176 description="Fast and direct raster I/O for use with Numpy and SciPy",
177 long_description=readme,
178 classifiers=[
179 'Development Status :: 4 - Beta',
180 'Intended Audience :: Developers',
181 'Intended Audience :: Information Technology',
182 'Intended Audience :: Science/Research',
183 'License :: OSI Approved :: BSD License',
184 'Programming Language :: C',
185 'Programming Language :: Python :: 2.6',
186 'Programming Language :: Python :: 2.7',
187 'Programming Language :: Python :: 3.3',
188 'Programming Language :: Python :: 3.4',
189 'Topic :: Multimedia :: Graphics :: Graphics Conversion',
190 'Topic :: Scientific/Engineering :: GIS'],
191 keywords='raster gdal',
192 author='Sean Gillies',
193 author_email='[email protected]',
194 url='https://github.com/mapbox/rasterio',
195 license='BSD',
196 package_dir={'': '.'},
197 packages=['rasterio', 'rasterio.rio'],
198 entry_points='''
199 [console_scripts]
200 rio=rasterio.rio.main:cli
201 ''',
202 include_package_data=True,
203 ext_modules=ext_modules,
204 zip_safe=False,
205 install_requires=inst_reqs)
206
207 if os.environ.get('PACKAGE_DATA'):
208 setup_args['package_data'] = {'rasterio': ['gdal_data/*', 'proj_data/*']}
209
210 setup(**setup_args)
211
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -135,7 +135,7 @@
'rasterio._err', ['rasterio/_err.pyx'], **ext_options),
Extension(
'rasterio._example', ['rasterio/_example.pyx'], **ext_options),
- ])
+ ], quiet=True)
# If there's no manifest template, as in an sdist, we just specify .c files.
else:
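
For reference, a minimal sketch of the resulting call shape (one real extension name from `setup.py` shown, other options trimmed); `quiet=True` simply tames the messages `cythonize` prints for each module while building:

```python
from setuptools.extension import Extension
from Cython.Build import cythonize

ext_modules = cythonize(
    [Extension("rasterio._example", ["rasterio/_example.pyx"])],
    quiet=True,
)
```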
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -135,7 +135,7 @@\n 'rasterio._err', ['rasterio/_err.pyx'], **ext_options),\n Extension(\n 'rasterio._example', ['rasterio/_example.pyx'], **ext_options),\n- ])\n+ ], quiet=True)\n \n # If there's no manifest template, as in an sdist, we just specify .c files.\n else:\n", "issue": "Reduce cython compile errors\nWe get a bunch of compile-time warnings from the cython code. I think we could reduce these.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Two environmental variables influence this script.\n#\n# GDAL_CONFIG: the path to a gdal-config program that points to GDAL headers,\n# libraries, and data files.\n#\n# PACKAGE_DATA: if defined, GDAL and PROJ4 data files will be copied into the\n# source or binary distribution. This is essential when creating self-contained\n# binary wheels.\n\nimport logging\nimport os\nimport pprint\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup\nfrom setuptools.extension import Extension\n\nlogging.basicConfig()\nlog = logging.getLogger()\n\n# python -W all setup.py ...\nif 'all' in sys.warnoptions:\n log.level = logging.DEBUG\n\n# Parse the version from the fiona module.\nwith open('rasterio/__init__.py') as f:\n for line in f:\n if line.find(\"__version__\") >= 0:\n version = line.split(\"=\")[1].strip()\n version = version.strip('\"')\n version = version.strip(\"'\")\n continue\n\nwith open('VERSION.txt', 'w') as f:\n f.write(version)\n\n# Use Cython if available.\ntry:\n from Cython.Build import cythonize\nexcept ImportError:\n cythonize = None\n\n# By default we'll try to get options via gdal-config. On systems without,\n# options will need to be set in setup.cfg or on the setup command line.\ninclude_dirs = []\nlibrary_dirs = []\nlibraries = []\nextra_link_args = []\n\ntry:\n import numpy\n include_dirs.append(numpy.get_include())\nexcept ImportError:\n log.critical(\"Numpy and its headers are required to run setup(). Exiting.\")\n sys.exit(1)\n\ntry:\n gdal_config = os.environ.get('GDAL_CONFIG', 'gdal-config')\n with open(\"gdal-config.txt\", \"w\") as gcfg:\n subprocess.call([gdal_config, \"--cflags\"], stdout=gcfg)\n subprocess.call([gdal_config, \"--libs\"], stdout=gcfg)\n subprocess.call([gdal_config, \"--datadir\"], stdout=gcfg)\n with open(\"gdal-config.txt\", \"r\") as gcfg:\n cflags = gcfg.readline().strip()\n libs = gcfg.readline().strip()\n datadir = gcfg.readline().strip()\n for item in cflags.split():\n if item.startswith(\"-I\"):\n include_dirs.extend(item[2:].split(\":\"))\n for item in libs.split():\n if item.startswith(\"-L\"):\n library_dirs.extend(item[2:].split(\":\"))\n elif item.startswith(\"-l\"):\n libraries.append(item[2:])\n else:\n # e.g. -framework GDAL\n extra_link_args.append(item)\n\n # Conditionally copy the GDAL data. 
To be used in conjunction with\n # the bdist_wheel command to make self-contained binary wheels.\n if os.environ.get('PACKAGE_DATA'):\n try:\n shutil.rmtree('rasterio/gdal_data')\n except OSError:\n pass\n shutil.copytree(datadir, 'rasterio/gdal_data')\n\nexcept Exception as e:\n log.warning(\"Failed to get options via gdal-config: %s\", str(e))\n\n# Conditionally copy PROJ.4 data.\nif os.environ.get('PACKAGE_DATA'):\n projdatadir = os.environ.get('PROJ_LIB', '/usr/local/share/proj')\n if os.path.exists(projdatadir):\n try:\n shutil.rmtree('rasterio/proj_data')\n except OSError:\n pass\n shutil.copytree(projdatadir, 'rasterio/proj_data')\n\next_options = dict(\n include_dirs=include_dirs,\n library_dirs=library_dirs,\n libraries=libraries,\n extra_link_args=extra_link_args)\n\nlog.debug('ext_options:\\n%s', pprint.pformat(ext_options))\n\n# When building from a repo, Cython is required.\nif os.path.exists(\"MANIFEST.in\") and \"clean\" not in sys.argv:\n log.info(\"MANIFEST.in found, presume a repo, cythonizing...\")\n if not cythonize:\n log.critical(\n \"Cython.Build.cythonize not found. \"\n \"Cython is required to build from a repo.\")\n sys.exit(1)\n ext_modules = cythonize([\n Extension(\n 'rasterio._base', ['rasterio/_base.pyx'], **ext_options),\n Extension(\n 'rasterio._io', ['rasterio/_io.pyx'], **ext_options),\n Extension(\n 'rasterio._copy', ['rasterio/_copy.pyx'], **ext_options),\n Extension(\n 'rasterio._features', ['rasterio/_features.pyx'], **ext_options),\n Extension(\n 'rasterio._drivers', ['rasterio/_drivers.pyx'], **ext_options),\n Extension(\n 'rasterio._warp', ['rasterio/_warp.pyx'], **ext_options),\n Extension(\n 'rasterio._err', ['rasterio/_err.pyx'], **ext_options),\n Extension(\n 'rasterio._example', ['rasterio/_example.pyx'], **ext_options),\n ])\n\n# If there's no manifest template, as in an sdist, we just specify .c files.\nelse:\n ext_modules = [\n Extension(\n 'rasterio._base', ['rasterio/_base.c'], **ext_options),\n Extension(\n 'rasterio._io', ['rasterio/_io.c'], **ext_options),\n Extension(\n 'rasterio._copy', ['rasterio/_copy.c'], **ext_options),\n Extension(\n 'rasterio._features', ['rasterio/_features.c'], **ext_options),\n Extension(\n 'rasterio._drivers', ['rasterio/_drivers.c'], **ext_options),\n Extension(\n 'rasterio._warp', ['rasterio/_warp.cpp'], **ext_options),\n Extension(\n 'rasterio._err', ['rasterio/_err.c'], **ext_options),\n Extension(\n 'rasterio._example', ['rasterio/_example.c'], **ext_options),\n ]\n\nwith open('README.rst') as f:\n readme = f.read()\n\n# Runtime requirements.\ninst_reqs = [\n 'affine>=1.0',\n 'cligj',\n 'Numpy>=1.7' ]\n\nif sys.version_info < (3, 4):\n inst_reqs.append('enum34')\n\nsetup_args = dict(\n name='rasterio',\n version=version,\n description=\"Fast and direct raster I/O for use with Numpy and SciPy\",\n long_description=readme,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: C',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Multimedia :: Graphics :: Graphics Conversion',\n 'Topic :: Scientific/Engineering :: GIS'],\n keywords='raster gdal',\n author='Sean Gillies',\n author_email='[email protected]',\n url='https://github.com/mapbox/rasterio',\n license='BSD',\n 
package_dir={'': '.'},\n packages=['rasterio', 'rasterio.rio'],\n entry_points='''\n [console_scripts]\n rio=rasterio.rio.main:cli\n ''',\n include_package_data=True,\n ext_modules=ext_modules,\n zip_safe=False,\n install_requires=inst_reqs)\n\nif os.environ.get('PACKAGE_DATA'):\n setup_args['package_data'] = {'rasterio': ['gdal_data/*', 'proj_data/*']}\n\nsetup(**setup_args)\n", "path": "setup.py"}]}
| 2,802 | 112 |
gh_patches_debug_12541
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-1038
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plot_posterior writes auto instead of point estimate name
**Describe the bug**
`plot_posterior` should write the name of the point estimate; however, it currently writes `auto` when `auto` is passed as the point estimate. This can be seen in the [docs example](https://arviz-devs.github.io/arviz/examples/matplotlib/mpl_plot_posterior.html)
**Expected behavior**
It should write the name of the point estimate used (taken from rcParams).
**Additional context**
ArviZ master
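A minimal sketch of the expected resolution (illustrative only; the exact rcParams key name, `plot.point_estimate`, is an assumption rather than something shown in the file below):

```python
from arviz.rcparams import rcParams

point_estimate = "auto"
if point_estimate == "auto":
    # Resolve the configured default before the label is drawn, instead of printing "auto".
    point_estimate = rcParams["plot.point_estimate"]  # assumed key name
print(point_estimate)  # e.g. "mean"
```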
</issue>
<code>
[start of arviz/plots/posteriorplot.py]
1 """Plot posterior densities."""
2 from typing import Optional
3
4 from ..data import convert_to_dataset
5 from .plot_utils import (
6 xarray_var_iter,
7 _scale_fig_size,
8 default_grid,
9 get_coords,
10 filter_plotters_list,
11 get_plotting_function,
12 )
13 from ..utils import _var_names
14 from ..rcparams import rcParams
15
16
17 def plot_posterior(
18 data,
19 var_names=None,
20 coords=None,
21 figsize=None,
22 textsize=None,
23 credible_interval=None,
24 multimodal=False,
25 round_to: Optional[int] = None,
26 point_estimate="auto",
27 group="posterior",
28 rope=None,
29 ref_val=None,
30 kind="kde",
31 bw=4.5,
32 bins=None,
33 ax=None,
34 backend=None,
35 backend_kwargs=None,
36 show=None,
37 **kwargs
38 ):
39 """Plot Posterior densities in the style of John K. Kruschke's book.
40
41 Parameters
42 ----------
43 data : obj
44 Any object that can be converted to an az.InferenceData object
45 Refer to documentation of az.convert_to_dataset for details
46 var_names : list of variable names
47 Variables to be plotted, two variables are required.
48 coords : mapping, optional
49 Coordinates of var_names to be plotted. Passed to `Dataset.sel`
50 figsize : tuple
51 Figure size. If None it will be defined automatically.
52 textsize: float
53 Text size scaling factor for labels, titles and lines. If None it will be autoscaled based
54 on figsize.
55 credible_interval : float, optional
56 Credible intervals. Defaults to 0.94. Use None to hide the credible interval
57 multimodal : bool
58 If true (default) it may compute more than one credible interval if the distribution is
59 multimodal and the modes are well separated.
60 round_to : int, optional
61 Controls formatting of floats. Defaults to 2 or the integer part, whichever is bigger.
62 point_estimate : Optional[str]
63 Plot point estimate per variable. Values should be 'mean', 'median', 'mode' or None.
64 Defaults to 'auto' i.e. it falls back to default set in rcParams.
65 group : str, optional
66 Specifies which InferenceData group should be plotted. Defaults to ‘posterior’.
67 rope: tuple or dictionary of tuples
68 Lower and upper values of the Region Of Practical Equivalence. If a list is provided, its
69 length should match the number of variables.
70 ref_val: float or dictionary of floats
71 display the percentage below and above the values in ref_val. Must be None (default),
72 a constant, a list or a dictionary like see an example below. If a list is provided, its
73 length should match the number of variables.
74 kind: str
75 Type of plot to display (kde or hist) For discrete variables this argument is ignored and
76 a histogram is always used.
77 bw : float
78 Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the
79 smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule
80 of thumb (the default rule used by SciPy). Only works if `kind == kde`.
81 bins : integer or sequence or 'auto', optional
82 Controls the number of bins, accepts the same keywords `matplotlib.hist()` does. Only works
83 if `kind == hist`. If None (default) it will use `auto` for continuous variables and
84 `range(xmin, xmax + 1)` for discrete variables.
85 ax: axes, optional
86 Matplotlib axes or bokeh figures.
87 backend: str, optional
88 Select plotting backend {"matplotlib","bokeh"}. Default "matplotlib".
89 backend_kwargs: bool, optional
90 These are kwargs specific to the backend being used. For additional documentation
91 check the plotting method of the backend.
92 show : bool, optional
93 Call backend show function.
94 **kwargs
95 Passed as-is to plt.hist() or plt.plot() function depending on the value of `kind`.
96
97 Returns
98 -------
99 axes : matplotlib axes or bokeh figures
100
101 Examples
102 --------
103 Show a default kernel density plot following style of John Kruschke
104
105 .. plot::
106 :context: close-figs
107
108 >>> import arviz as az
109 >>> data = az.load_arviz_data('centered_eight')
110 >>> az.plot_posterior(data)
111
112 Plot subset variables by specifying variable name exactly
113
114 .. plot::
115 :context: close-figs
116
117 >>> az.plot_posterior(data, var_names=['mu'])
118
119 Plot Region of Practical Equivalence (rope) for all distributions
120
121 .. plot::
122 :context: close-figs
123
124 >>> az.plot_posterior(data, var_names=['mu', 'theta'], rope=(-1, 1))
125
126 Plot Region of Practical Equivalence for selected distributions
127
128 .. plot::
129 :context: close-figs
130
131 >>> rope = {'mu': [{'rope': (-2, 2)}], 'theta': [{'school': 'Choate', 'rope': (2, 4)}]}
132 >>> az.plot_posterior(data, var_names=['mu', 'theta'], rope=rope)
133
134
135 Add reference lines
136
137 .. plot::
138 :context: close-figs
139
140 >>> az.plot_posterior(data, var_names=['mu', 'theta'], ref_val=0)
141
142 Show point estimate of distribution
143
144 .. plot::
145 :context: close-figs
146
147 >>> az.plot_posterior(data, var_names=['mu', 'theta'], point_estimate='mode')
148
149 Show reference values using variable names and coordinates
150
151 .. plot::
152 :context: close-figs
153
154 >>> az.plot_posterior(data, ref_val= {"theta": [{"school": "Deerfield", "ref_val": 4},
155 ... {"school": "Choate", "ref_val": 3}]})
156
157 Show reference values using a list
158
159 .. plot::
160 :context: close-figs
161
162 >>> az.plot_posterior(data, ref_val=[1] + [5] * 8 + [1])
163
164
165 Plot posterior as a histogram
166
167 .. plot::
168 :context: close-figs
169
170 >>> az.plot_posterior(data, var_names=['mu'], kind='hist')
171
172 Change size of credible interval
173
174 .. plot::
175 :context: close-figs
176
177 >>> az.plot_posterior(data, var_names=['mu'], credible_interval=.75)
178 """
179 data = convert_to_dataset(data, group=group)
180 var_names = _var_names(var_names, data)
181
182 if coords is None:
183 coords = {}
184
185 if credible_interval is None:
186 credible_interval = rcParams["stats.credible_interval"]
187 else:
188 if not 1 >= credible_interval > 0:
189 raise ValueError("The value of credible_interval should be in the interval (0, 1]")
190
191 plotters = filter_plotters_list(
192 list(xarray_var_iter(get_coords(data, coords), var_names=var_names, combined=True)),
193 "plot_posterior",
194 )
195 length_plotters = len(plotters)
196 rows, cols = default_grid(length_plotters)
197
198 (figsize, ax_labelsize, titlesize, xt_labelsize, _linewidth, _) = _scale_fig_size(
199 figsize, textsize, rows, cols
200 )
201 kwargs.setdefault("linewidth", _linewidth)
202
203 posteriorplot_kwargs = dict(
204 ax=ax,
205 length_plotters=length_plotters,
206 rows=rows,
207 cols=cols,
208 figsize=figsize,
209 plotters=plotters,
210 bw=bw,
211 bins=bins,
212 kind=kind,
213 point_estimate=point_estimate,
214 round_to=round_to,
215 credible_interval=credible_interval,
216 multimodal=multimodal,
217 ref_val=ref_val,
218 rope=rope,
219 ax_labelsize=ax_labelsize,
220 xt_labelsize=xt_labelsize,
221 kwargs=kwargs,
222 titlesize=titlesize,
223 backend_kwargs=backend_kwargs,
224 show=show,
225 )
226
227 if backend == "bokeh":
228
229 posteriorplot_kwargs.pop("xt_labelsize")
230 posteriorplot_kwargs.pop("titlesize")
231
232 # TODO: Add backend kwargs
233 plot = get_plotting_function("plot_posterior", "posteriorplot", backend)
234 ax = plot(**posteriorplot_kwargs)
235 return ax
236
[end of arviz/plots/posteriorplot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/arviz/plots/posteriorplot.py b/arviz/plots/posteriorplot.py
--- a/arviz/plots/posteriorplot.py
+++ b/arviz/plots/posteriorplot.py
@@ -188,6 +188,11 @@
if not 1 >= credible_interval > 0:
raise ValueError("The value of credible_interval should be in the interval (0, 1]")
+ if point_estimate == "auto":
+ point_estimate = rcParams["plot.point_estimate"]
+ elif point_estimate not in {"mean", "median", "mode", None}:
+ raise ValueError("The value of point_estimate must be either mean, median, mode or None.")
+
plotters = filter_plotters_list(
list(xarray_var_iter(get_coords(data, coords), var_names=var_names, combined=True)),
"plot_posterior",
|
{"golden_diff": "diff --git a/arviz/plots/posteriorplot.py b/arviz/plots/posteriorplot.py\n--- a/arviz/plots/posteriorplot.py\n+++ b/arviz/plots/posteriorplot.py\n@@ -188,6 +188,11 @@\n if not 1 >= credible_interval > 0:\n raise ValueError(\"The value of credible_interval should be in the interval (0, 1]\")\n \n+ if point_estimate == \"auto\":\n+ point_estimate = rcParams[\"plot.point_estimate\"]\n+ elif point_estimate not in {\"mean\", \"median\", \"mode\", None}:\n+ raise ValueError(\"The value of point_estimate must be either mean, median, mode or None.\")\n+\n plotters = filter_plotters_list(\n list(xarray_var_iter(get_coords(data, coords), var_names=var_names, combined=True)),\n \"plot_posterior\",\n", "issue": "plot_posterior writes auto instead of point estimate name\n**Describe the bug**\r\n`plot_posterior` should write the name of the point estimate, however it currently writes `auto` if `auto` is passed as point estimate. This can be seen in the [docs example](https://arviz-devs.github.io/arviz/examples/matplotlib/mpl_plot_posterior.html)\r\n\r\n**Expected behavior**\r\nIt should write the name of the point estimate used (taken from rcParams).\r\n\r\n**Additional context**\r\nArviZ master\r\n\n", "before_files": [{"content": "\"\"\"Plot posterior densities.\"\"\"\nfrom typing import Optional\n\nfrom ..data import convert_to_dataset\nfrom .plot_utils import (\n xarray_var_iter,\n _scale_fig_size,\n default_grid,\n get_coords,\n filter_plotters_list,\n get_plotting_function,\n)\nfrom ..utils import _var_names\nfrom ..rcparams import rcParams\n\n\ndef plot_posterior(\n data,\n var_names=None,\n coords=None,\n figsize=None,\n textsize=None,\n credible_interval=None,\n multimodal=False,\n round_to: Optional[int] = None,\n point_estimate=\"auto\",\n group=\"posterior\",\n rope=None,\n ref_val=None,\n kind=\"kde\",\n bw=4.5,\n bins=None,\n ax=None,\n backend=None,\n backend_kwargs=None,\n show=None,\n **kwargs\n):\n \"\"\"Plot Posterior densities in the style of John K. Kruschke's book.\n\n Parameters\n ----------\n data : obj\n Any object that can be converted to an az.InferenceData object\n Refer to documentation of az.convert_to_dataset for details\n var_names : list of variable names\n Variables to be plotted, two variables are required.\n coords : mapping, optional\n Coordinates of var_names to be plotted. Passed to `Dataset.sel`\n figsize : tuple\n Figure size. If None it will be defined automatically.\n textsize: float\n Text size scaling factor for labels, titles and lines. If None it will be autoscaled based\n on figsize.\n credible_interval : float, optional\n Credible intervals. Defaults to 0.94. Use None to hide the credible interval\n multimodal : bool\n If true (default) it may compute more than one credible interval if the distribution is\n multimodal and the modes are well separated.\n round_to : int, optional\n Controls formatting of floats. Defaults to 2 or the integer part, whichever is bigger.\n point_estimate : Optional[str]\n Plot point estimate per variable. Values should be 'mean', 'median', 'mode' or None.\n Defaults to 'auto' i.e. it falls back to default set in rcParams.\n group : str, optional\n Specifies which InferenceData group should be plotted. Defaults to \u2018posterior\u2019.\n rope: tuple or dictionary of tuples\n Lower and upper values of the Region Of Practical Equivalence. 
If a list is provided, its\n length should match the number of variables.\n ref_val: float or dictionary of floats\n display the percentage below and above the values in ref_val. Must be None (default),\n a constant, a list or a dictionary like see an example below. If a list is provided, its\n length should match the number of variables.\n kind: str\n Type of plot to display (kde or hist) For discrete variables this argument is ignored and\n a histogram is always used.\n bw : float\n Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the\n smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule\n of thumb (the default rule used by SciPy). Only works if `kind == kde`.\n bins : integer or sequence or 'auto', optional\n Controls the number of bins, accepts the same keywords `matplotlib.hist()` does. Only works\n if `kind == hist`. If None (default) it will use `auto` for continuous variables and\n `range(xmin, xmax + 1)` for discrete variables.\n ax: axes, optional\n Matplotlib axes or bokeh figures.\n backend: str, optional\n Select plotting backend {\"matplotlib\",\"bokeh\"}. Default \"matplotlib\".\n backend_kwargs: bool, optional\n These are kwargs specific to the backend being used. For additional documentation\n check the plotting method of the backend.\n show : bool, optional\n Call backend show function.\n **kwargs\n Passed as-is to plt.hist() or plt.plot() function depending on the value of `kind`.\n\n Returns\n -------\n axes : matplotlib axes or bokeh figures\n\n Examples\n --------\n Show a default kernel density plot following style of John Kruschke\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> data = az.load_arviz_data('centered_eight')\n >>> az.plot_posterior(data)\n\n Plot subset variables by specifying variable name exactly\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, var_names=['mu'])\n\n Plot Region of Practical Equivalence (rope) for all distributions\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, var_names=['mu', 'theta'], rope=(-1, 1))\n\n Plot Region of Practical Equivalence for selected distributions\n\n .. plot::\n :context: close-figs\n\n >>> rope = {'mu': [{'rope': (-2, 2)}], 'theta': [{'school': 'Choate', 'rope': (2, 4)}]}\n >>> az.plot_posterior(data, var_names=['mu', 'theta'], rope=rope)\n\n\n Add reference lines\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, var_names=['mu', 'theta'], ref_val=0)\n\n Show point estimate of distribution\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, var_names=['mu', 'theta'], point_estimate='mode')\n\n Show reference values using variable names and coordinates\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, ref_val= {\"theta\": [{\"school\": \"Deerfield\", \"ref_val\": 4},\n ... {\"school\": \"Choate\", \"ref_val\": 3}]})\n\n Show reference values using a list\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, ref_val=[1] + [5] * 8 + [1])\n\n\n Plot posterior as a histogram\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, var_names=['mu'], kind='hist')\n\n Change size of credible interval\n\n .. 
plot::\n :context: close-figs\n\n >>> az.plot_posterior(data, var_names=['mu'], credible_interval=.75)\n \"\"\"\n data = convert_to_dataset(data, group=group)\n var_names = _var_names(var_names, data)\n\n if coords is None:\n coords = {}\n\n if credible_interval is None:\n credible_interval = rcParams[\"stats.credible_interval\"]\n else:\n if not 1 >= credible_interval > 0:\n raise ValueError(\"The value of credible_interval should be in the interval (0, 1]\")\n\n plotters = filter_plotters_list(\n list(xarray_var_iter(get_coords(data, coords), var_names=var_names, combined=True)),\n \"plot_posterior\",\n )\n length_plotters = len(plotters)\n rows, cols = default_grid(length_plotters)\n\n (figsize, ax_labelsize, titlesize, xt_labelsize, _linewidth, _) = _scale_fig_size(\n figsize, textsize, rows, cols\n )\n kwargs.setdefault(\"linewidth\", _linewidth)\n\n posteriorplot_kwargs = dict(\n ax=ax,\n length_plotters=length_plotters,\n rows=rows,\n cols=cols,\n figsize=figsize,\n plotters=plotters,\n bw=bw,\n bins=bins,\n kind=kind,\n point_estimate=point_estimate,\n round_to=round_to,\n credible_interval=credible_interval,\n multimodal=multimodal,\n ref_val=ref_val,\n rope=rope,\n ax_labelsize=ax_labelsize,\n xt_labelsize=xt_labelsize,\n kwargs=kwargs,\n titlesize=titlesize,\n backend_kwargs=backend_kwargs,\n show=show,\n )\n\n if backend == \"bokeh\":\n\n posteriorplot_kwargs.pop(\"xt_labelsize\")\n posteriorplot_kwargs.pop(\"titlesize\")\n\n # TODO: Add backend kwargs\n plot = get_plotting_function(\"plot_posterior\", \"posteriorplot\", backend)\n ax = plot(**posteriorplot_kwargs)\n return ax\n", "path": "arviz/plots/posteriorplot.py"}]}
| 3,104 | 192 |
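The patch in the record above works by resolving the `"auto"` point-estimate placeholder against arviz's `rcParams` before the value reaches the plot title, and by rejecting anything outside the supported set. A minimal, self-contained sketch of that resolution step follows; the `_DEMO_RCPARAMS` dict is a stand-in for the real `rcParams` registry and exists only for illustration.

```python
# Stand-in for arviz's rcParams registry -- an assumption for this sketch only.
_DEMO_RCPARAMS = {"plot.point_estimate": "mean"}


def resolve_point_estimate(point_estimate="auto"):
    """Resolve the 'auto' placeholder and reject unsupported values."""
    if point_estimate == "auto":
        return _DEMO_RCPARAMS["plot.point_estimate"]
    if point_estimate not in {"mean", "median", "mode", None}:
        raise ValueError(
            "The value of point_estimate must be either mean, median, mode or None."
        )
    return point_estimate


print(resolve_point_estimate())        # -> mean, taken from the stand-in registry
print(resolve_point_estimate("mode"))  # -> mode, passed through unchanged
```

With this resolution in place the plot title shows the concrete estimate name (for example "mean") instead of the literal string "auto".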
gh_patches_debug_29450
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4476
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AWS_CKV_7 False Positive on asymmetric key check in CloudFormation
**Describe the issue**
In terraform, the check avoids false positives with an extra check against symmetric keys before checking whether rotation is enabled. This same check hasn't been configured for cloudformation:
```
def scan_resource_conf(self, conf):
# Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
spec = conf.get('customer_master_key_spec')
if not spec or 'SYMMETRIC_DEFAULT' in spec:
return super().scan_resource_conf(conf)
else:
return CheckResult.PASSED
```
**Examples**
```
RSASigningKey:
Type: 'AWS::KMS::Key'
Properties:
Description: RSA-3072 asymmetric KMS key for signing and verification
KeySpec: RSA_3072
KeyUsage: SIGN_VERIFY
KeyPolicy:
Version: 2012-10-17
Id: key-default-1
Statement:
- Sid: Enable IAM User Permissions
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:root'
Action: 'kms:*'
Resource: '*'
- Sid: Allow administration of the key
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:role/Admin'
Action:
- 'kms:Create*'
- 'kms:Describe*'
- 'kms:Enable*'
- 'kms:List*'
- 'kms:Put*'
- 'kms:Update*'
- 'kms:Revoke*'
- 'kms:Disable*'
- 'kms:Get*'
- 'kms:Delete*'
- 'kms:ScheduleKeyDeletion'
- 'kms:CancelKeyDeletion'
Resource: '*'
- Sid: Allow use of the key
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:role/Developer'
Action:
- 'kms:Sign'
- 'kms:Verify'
- 'kms:DescribeKey'
Resource: '*'
```
**Version (please complete the following information):**
- Checkov Version [e.g. 2.3.0]
**Additional context**
This blocks checkov from working for asymmetric keys in CFN.
</issue>
<code>
[start of checkov/cloudformation/checks/resource/aws/KMSRotation.py]
1 from checkov.common.models.enums import CheckCategories
2 from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class KMSRotation(BaseResourceValueCheck):
6 def __init__(self) -> None:
7 name = "Ensure rotation for customer created CMKs is enabled"
8 id = "CKV_AWS_7"
9 supported_resources = ("AWS::KMS::Key",)
10 categories = (CheckCategories.ENCRYPTION,)
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self) -> str:
14 return "Properties/EnableKeyRotation"
15
16
17 check = KMSRotation()
18
[end of checkov/cloudformation/checks/resource/aws/KMSRotation.py]
[start of checkov/terraform/checks/resource/aws/KMSRotation.py]
1 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
2 from checkov.common.models.enums import CheckCategories, CheckResult
3
4
5 class KMSRotation(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure rotation for customer created CMKs is enabled"
8 id = "CKV_AWS_7"
9 supported_resources = ['aws_kms_key']
10 categories = [CheckCategories.ENCRYPTION]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "enable_key_rotation"
15
16 def scan_resource_conf(self, conf):
17 # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
18 spec = conf.get('customer_master_key_spec')
19 if not spec or 'SYMMETRIC_DEFAULT' in spec:
20 return super().scan_resource_conf(conf)
21 else:
22 return CheckResult.PASSED
23
24
25 check = KMSRotation()
26
[end of checkov/terraform/checks/resource/aws/KMSRotation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/cloudformation/checks/resource/aws/KMSRotation.py b/checkov/cloudformation/checks/resource/aws/KMSRotation.py
--- a/checkov/cloudformation/checks/resource/aws/KMSRotation.py
+++ b/checkov/cloudformation/checks/resource/aws/KMSRotation.py
@@ -1,4 +1,4 @@
-from checkov.common.models.enums import CheckCategories
+from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
@@ -13,5 +13,15 @@
def get_inspected_key(self) -> str:
return "Properties/EnableKeyRotation"
+ def scan_resource_conf(self, conf):
+ # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
+ properties = conf.get("Properties")
+ if properties and isinstance(properties, dict):
+ spec = properties.get("KeySpec")
+ if spec and isinstance(spec, str):
+ if 'SYMMETRIC_DEFAULT' not in spec and 'HMAC' not in spec:
+ return CheckResult.UNKNOWN
+ return super().scan_resource_conf(conf)
+
check = KMSRotation()
diff --git a/checkov/terraform/checks/resource/aws/KMSRotation.py b/checkov/terraform/checks/resource/aws/KMSRotation.py
--- a/checkov/terraform/checks/resource/aws/KMSRotation.py
+++ b/checkov/terraform/checks/resource/aws/KMSRotation.py
@@ -16,10 +16,10 @@
def scan_resource_conf(self, conf):
# Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
spec = conf.get('customer_master_key_spec')
- if not spec or 'SYMMETRIC_DEFAULT' in spec:
+ if not spec or 'SYMMETRIC_DEFAULT' in spec or 'HMAC' in spec:
return super().scan_resource_conf(conf)
else:
- return CheckResult.PASSED
+ return CheckResult.UNKNOWN
check = KMSRotation()
|
{"golden_diff": "diff --git a/checkov/cloudformation/checks/resource/aws/KMSRotation.py b/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n--- a/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n+++ b/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n@@ -1,4 +1,4 @@\n-from checkov.common.models.enums import CheckCategories\n+from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n \n \n@@ -13,5 +13,15 @@\n def get_inspected_key(self) -> str:\n return \"Properties/EnableKeyRotation\"\n \n+ def scan_resource_conf(self, conf):\n+ # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n+ properties = conf.get(\"Properties\")\n+ if properties and isinstance(properties, dict):\n+ spec = properties.get(\"KeySpec\")\n+ if spec and isinstance(spec, str):\n+ if 'SYMMETRIC_DEFAULT' not in spec and 'HMAC' not in spec:\n+ return CheckResult.UNKNOWN\n+ return super().scan_resource_conf(conf)\n+\n \n check = KMSRotation()\ndiff --git a/checkov/terraform/checks/resource/aws/KMSRotation.py b/checkov/terraform/checks/resource/aws/KMSRotation.py\n--- a/checkov/terraform/checks/resource/aws/KMSRotation.py\n+++ b/checkov/terraform/checks/resource/aws/KMSRotation.py\n@@ -16,10 +16,10 @@\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n spec = conf.get('customer_master_key_spec')\n- if not spec or 'SYMMETRIC_DEFAULT' in spec:\n+ if not spec or 'SYMMETRIC_DEFAULT' in spec or 'HMAC' in spec:\n return super().scan_resource_conf(conf)\n else:\n- return CheckResult.PASSED\n+ return CheckResult.UNKNOWN\n \n \n check = KMSRotation()\n", "issue": "AWS_CKV_7 False Positive on assymetric key check in Cloudformation\n**Describe the issue**\r\nIn terraform, the check avoids false positives with an extra check against symmetric keys before checking whether rotation is enabled. This same check hasn't been configured for cloudformation:\r\n\r\n```\r\ndef scan_resource_conf(self, conf):\r\n # Only symmetric keys support auto rotation. 
The attribute is optional and defaults to symmetric.\r\n spec = conf.get('customer_master_key_spec')\r\n if not spec or 'SYMMETRIC_DEFAULT' in spec:\r\n return super().scan_resource_conf(conf)\r\n else:\r\n return CheckResult.PASSED\r\n```\r\n\r\n**Examples**\r\n\r\n```\r\nRSASigningKey:\r\n Type: 'AWS::KMS::Key'\r\n Properties:\r\n Description: RSA-3072 asymmetric KMS key for signing and verification\r\n KeySpec: RSA_3072\r\n KeyUsage: SIGN_VERIFY\r\n KeyPolicy:\r\n Version: 2012-10-17\r\n Id: key-default-1\r\n Statement:\r\n - Sid: Enable IAM User Permissions\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:root'\r\n Action: 'kms:*'\r\n Resource: '*'\r\n - Sid: Allow administration of the key\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:role/Admin'\r\n Action:\r\n - 'kms:Create*'\r\n - 'kms:Describe*'\r\n - 'kms:Enable*'\r\n - 'kms:List*'\r\n - 'kms:Put*'\r\n - 'kms:Update*'\r\n - 'kms:Revoke*'\r\n - 'kms:Disable*'\r\n - 'kms:Get*'\r\n - 'kms:Delete*'\r\n - 'kms:ScheduleKeyDeletion'\r\n - 'kms:CancelKeyDeletion'\r\n Resource: '*'\r\n - Sid: Allow use of the key\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:role/Developer'\r\n Action:\r\n - 'kms:Sign'\r\n - 'kms:Verify'\r\n - 'kms:DescribeKey'\r\n Resource: '*'\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version [e.g. 2.3.0]\r\n\r\n**Additional context**\r\n\r\nThis blocks checkov working for assymetric keys in CFN.\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = (\"AWS::KMS::Key\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"Properties/EnableKeyRotation\"\n\n\ncheck = KMSRotation()\n", "path": "checkov/cloudformation/checks/resource/aws/KMSRotation.py"}, {"content": "from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = ['aws_kms_key']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"enable_key_rotation\"\n\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n spec = conf.get('customer_master_key_spec')\n if not spec or 'SYMMETRIC_DEFAULT' in spec:\n return super().scan_resource_conf(conf)\n else:\n return CheckResult.PASSED\n\n\ncheck = KMSRotation()\n", "path": "checkov/terraform/checks/resource/aws/KMSRotation.py"}]}
| 1,582 | 447 |
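The CloudFormation fix above adds a guard that inspects `Properties/KeySpec` and skips the rotation requirement for asymmetric (non-`SYMMETRIC_DEFAULT`, non-HMAC) keys, since AWS only auto-rotates symmetric keys. A minimal sketch of that guard as a standalone function is below; the real check subclasses `BaseResourceValueCheck` and returns `CheckResult.UNKNOWN`, which is simplified to a boolean here for illustration.

```python
def rotation_check_applies(conf):
    """Return False for asymmetric KMS keys, which cannot auto-rotate."""
    properties = conf.get("Properties")
    if properties and isinstance(properties, dict):
        spec = properties.get("KeySpec")
        if spec and isinstance(spec, str):
            if "SYMMETRIC_DEFAULT" not in spec and "HMAC" not in spec:
                return False
    return True


print(rotation_check_applies({"Properties": {"KeySpec": "RSA_3072"}}))           # False
print(rotation_check_applies({"Properties": {"KeySpec": "SYMMETRIC_DEFAULT"}}))  # True
print(rotation_check_applies({"Properties": {}}))                                # True, defaults to symmetric
```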
gh_patches_debug_1522
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-1736
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CA: email issue
State: CA
(via contact)
I’d like to call your attention to an error on your email address listings for State Senators in California. Your database lists an address in the form of [email protected], whereas the correct email is [email protected].
It does appear that Senate staffers are [email protected].
We’ve been using your service, and our advocates’ emails are bouncing back.
</issue>
<code>
[start of openstates/ca/legislators.py]
1 import re
2 import collections
3 import unicodedata
4 from operator import methodcaller
5
6 import lxml.html
7
8 from billy.scrape.legislators import LegislatorScraper, Legislator
9
10
11 def parse_address(s, split=re.compile(r'[;,]\s{,3}').split):
12 '''
13 Extract address fields from text.
14 '''
15 # If the address isn't formatted correctly, skip for now.
16 if ';' not in s:
17 return []
18
19 fields = 'city state_zip phone'.split()
20 vals = split(s)
21 res = []
22 while True:
23 try:
24 _field = fields.pop()
25 _value = vals.pop()
26 except IndexError:
27 break
28 else:
29 if _value.strip():
30 res.append((_field, _value))
31 if vals:
32 res.append(('street', ', '.join(vals)))
33 return res
34
35
36 class CALegislatorScraper(LegislatorScraper):
37
38 jurisdiction = 'ca'
39
40 urls = {'upper': 'http://senate.ca.gov/senators',
41 'lower': 'http://assembly.ca.gov/assemblymembers'}
42
43 def scrape(self, chamber, term):
44
45 url = self.urls[chamber]
46 html = self.get(url).text
47 doc = lxml.html.fromstring(html)
48
49 if chamber == 'lower':
50 rows = doc.xpath('//table/tbody/tr')
51 parse = self.parse_assembly
52 else:
53 rows = doc.xpath('//div[contains(@class, "views-row")]')
54 parse = self.parse_senate
55
56 for tr in rows:
57 legislator = parse(tr, term, chamber)
58 if legislator is None:
59 continue
60 if 'Vacant' in legislator['full_name']:
61 continue
62
63 legislator.add_source(url)
64 legislator['full_name'] = legislator['full_name'].strip()
65 self.save_legislator(legislator)
66
67 def parse_senate(self, div, term, chamber):
68 name = div.xpath('.//h3/text()')[0]
69 if name.endswith(' (R)'):
70 party = 'Republican'
71 elif name.endswith(' (D)'):
72 party = 'Democratic'
73 else:
74 self.warning('skipping ' + name)
75 return None
76 name = name.split(' (')[0]
77
78 district = div.xpath(
79 './/div[contains(@class, "senator-district")]/div/text()'
80 )[0].strip().lstrip('0')
81 photo_url = div.xpath('.//img/@src')[0]
82 url = div.xpath('.//a/@href')[0]
83
84 leg = Legislator(term, chamber, full_name=name, party=party, district=district,
85 photo_url=photo_url, url=url)
86
87 # CA senators have working emails, but they're not putting them on
88 # their public pages anymore
89 email = self._construct_email(chamber, name)
90
91 for addr in div.xpath('.//div[contains(@class, "views-field-field-senator-capitol-office")]//p'):
92 addr, phone = addr.text_content().split('; ')
93 leg.add_office(
94 'capitol', 'Senate Office',
95 address=addr.strip(), phone=phone.strip(), email=email)
96
97 for addr in div.xpath('.//div[contains(@class, "views-field-field-senator-district-office")]//p'):
98 for addr in addr.text_content().strip().splitlines():
99 try:
100 addr, phone = addr.strip().replace(u'\xa0', ' ').split('; ')
101 leg.add_office(
102 'district', 'District Office',
103 address=addr.strip(), phone=phone.strip())
104 except ValueError:
105 addr = addr.strip().replace(u'\xa0', ' ')
106 leg.add_office('district', 'District Office', address=addr)
107
108 return leg
109
110 def parse_assembly(self, tr, term, chamber):
111 '''
112 Given a tr element, get specific data from it.
113 '''
114
115 strip = methodcaller('strip')
116
117 xpath = 'td[contains(@class, "views-field-field-%s-%s")]%s'
118
119 xp = {
120 'url': [('lname-sort', '/a[not(contains(text(), "edit"))]/@href')],
121 'district': [('district', '/text()')],
122 'party': [('party', '/text()')],
123 'full_name': [('office-information', '/a[not(contains(text(), "edit"))]/text()')],
124 'address': [('office-information', '/h3/following-sibling::text()'),
125 ('office-information', '/p/text()')]
126 }
127
128 titles = {'upper': 'senator', 'lower': 'member'}
129
130 funcs = {
131 'full_name': lambda s: re.sub( # "Assembly" is misspelled once
132 r'Contact Assembl?y Member', '', s).strip(),
133 'address': parse_address,
134 }
135
136 rubberstamp = lambda _: _
137 tr_xpath = tr.xpath
138 res = collections.defaultdict(list)
139 for k, xpath_info in xp.items():
140 for vals in xpath_info:
141 f = funcs.get(k, rubberstamp)
142 vals = (titles[chamber],) + vals
143 vals = map(f, map(strip, tr_xpath(xpath % vals)))
144 res[k].extend(vals)
145
146 # Photo.
147 try:
148 res['photo_url'] = tr_xpath('td/p/img/@src')[0]
149 except IndexError:
150 pass
151
152 # Remove junk from assembly member names.
153 junk = 'Contact Assembly Member '
154
155 try:
156 res['full_name'] = res['full_name'].pop().replace(junk, '')
157 except IndexError:
158 return
159
160 # Addresses.
161 addresses = res['address']
162 try:
163 addresses = map(dict, filter(None, addresses))
164 except ValueError:
165 # Sometimes legislators only have one address, in which
166 # case this awful hack is helpful.
167 addresses = map(dict, filter(None, [addresses]))
168
169 for address in addresses[:]:
170
171 # Toss results that don't have required keys.
172 if not set(['street', 'city', 'state_zip']) < set(address):
173 if address in addresses:
174 addresses.remove(address)
175
176 # Re-key the addresses
177 offices = []
178 if addresses:
179 # Mariko Yamada's addresses wouldn't parse correctly as of
180 # 3/23/2013, so here we're forced to test whether any
181 # addresses were even found.
182 addresses[0].update(type='capitol', name='Capitol Office')
183 offices.append(addresses[0])
184
185 # CA reps have working emails, but they're not putting them on
186 # their public pages anymore
187 offices[0]['email'] = \
188 self._construct_email(chamber, res['full_name'])
189
190 for office in addresses[1:]:
191 office.update(type='district', name='District Office')
192 offices.append(office)
193
194 for office in offices:
195 street = office['street']
196 state_zip = re.sub(r'\s+', ' ', office['state_zip'])
197 street = '%s\n%s, %s' % (street, office['city'], state_zip)
198 office['address'] = street
199 office['fax'] = None
200 if 'email' not in office:
201 office['email'] = None
202
203 del office['street'], office['city'], office['state_zip']
204
205 res['offices'] = offices
206 del res['address']
207
208 # Normalize party.
209 for party in res['party'][:]:
210 if party:
211 if party == 'Democrat':
212 party = 'Democratic'
213 res['party'] = party
214 break
215 else:
216 res['party'] = None
217
218 # Mariko Yamada also didn't have a url that lxml would parse
219 # as of 3/22/2013.
220 if res['url']:
221 res['url'] = res['url'].pop()
222 else:
223 del res['url']
224
225 # strip leading zero
226 res['district'] = str(int(res['district'].pop()))
227
228 # Add a source for the url.
229 leg = Legislator(term, chamber, **res)
230 leg.update(**res)
231
232 return leg
233
234 def _construct_email(self, chamber, full_name):
235 last_name = re.split(r'\s+', full_name)[-1].lower()
236
237 # translate accents to non-accented versions for use in an
238 # email and drop apostrophes
239 last_name = ''.join(c for c in
240 unicodedata.normalize('NFD', unicode(last_name))
241 if unicodedata.category(c) != 'Mn')
242 last_name = last_name.replace("'", "")
243
244 if chamber == 'lower':
245 return 'assemblymember.' + last_name + '@assembly.ca.gov'
246 else:
247 return 'senator.' + last_name + '@sen.ca.gov'
248
249
[end of openstates/ca/legislators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openstates/ca/legislators.py b/openstates/ca/legislators.py
--- a/openstates/ca/legislators.py
+++ b/openstates/ca/legislators.py
@@ -244,5 +244,4 @@
if chamber == 'lower':
return 'assemblymember.' + last_name + '@assembly.ca.gov'
else:
- return 'senator.' + last_name + '@sen.ca.gov'
-
+ return 'senator.' + last_name + '@senator.ca.gov'
|
{"golden_diff": "diff --git a/openstates/ca/legislators.py b/openstates/ca/legislators.py\n--- a/openstates/ca/legislators.py\n+++ b/openstates/ca/legislators.py\n@@ -244,5 +244,4 @@\n if chamber == 'lower':\n return 'assemblymember.' + last_name + '@assembly.ca.gov'\n else:\n- return 'senator.' + last_name + '@sen.ca.gov'\n- \n+ return 'senator.' + last_name + '@senator.ca.gov'\n", "issue": "CA: email issue\nState: CA\r\n\r\n(via contact)\r\nI\u2019d like to call your attention to an error on your email address listings for State Senators in California. Your database lists an address in the form of [email protected], whereas the correct email is [email protected].\r\n \r\nIt does appear that Senate staffers are [email protected].\r\n \r\nWe\u2019ve been using your service, and our advocates\u2019 emails are bouncing back.\r\n\r\n\n", "before_files": [{"content": "import re\nimport collections\nimport unicodedata\nfrom operator import methodcaller\n\nimport lxml.html\n\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\n\n\ndef parse_address(s, split=re.compile(r'[;,]\\s{,3}').split):\n '''\n Extract address fields from text.\n '''\n # If the address isn't formatted correctly, skip for now.\n if ';' not in s:\n return []\n\n fields = 'city state_zip phone'.split()\n vals = split(s)\n res = []\n while True:\n try:\n _field = fields.pop()\n _value = vals.pop()\n except IndexError:\n break\n else:\n if _value.strip():\n res.append((_field, _value))\n if vals:\n res.append(('street', ', '.join(vals)))\n return res\n\n\nclass CALegislatorScraper(LegislatorScraper):\n\n jurisdiction = 'ca'\n\n urls = {'upper': 'http://senate.ca.gov/senators',\n 'lower': 'http://assembly.ca.gov/assemblymembers'}\n\n def scrape(self, chamber, term):\n\n url = self.urls[chamber]\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n\n if chamber == 'lower':\n rows = doc.xpath('//table/tbody/tr')\n parse = self.parse_assembly\n else:\n rows = doc.xpath('//div[contains(@class, \"views-row\")]')\n parse = self.parse_senate\n\n for tr in rows:\n legislator = parse(tr, term, chamber)\n if legislator is None:\n continue\n if 'Vacant' in legislator['full_name']:\n continue\n\n legislator.add_source(url)\n legislator['full_name'] = legislator['full_name'].strip()\n self.save_legislator(legislator)\n\n def parse_senate(self, div, term, chamber):\n name = div.xpath('.//h3/text()')[0]\n if name.endswith(' (R)'):\n party = 'Republican'\n elif name.endswith(' (D)'):\n party = 'Democratic'\n else:\n self.warning('skipping ' + name)\n return None\n name = name.split(' (')[0]\n\n district = div.xpath(\n './/div[contains(@class, \"senator-district\")]/div/text()'\n )[0].strip().lstrip('0')\n photo_url = div.xpath('.//img/@src')[0]\n url = div.xpath('.//a/@href')[0]\n\n leg = Legislator(term, chamber, full_name=name, party=party, district=district,\n photo_url=photo_url, url=url)\n\n # CA senators have working emails, but they're not putting them on\n # their public pages anymore\n email = self._construct_email(chamber, name)\n\n for addr in div.xpath('.//div[contains(@class, \"views-field-field-senator-capitol-office\")]//p'):\n addr, phone = addr.text_content().split('; ')\n leg.add_office(\n 'capitol', 'Senate Office',\n address=addr.strip(), phone=phone.strip(), email=email)\n\n for addr in div.xpath('.//div[contains(@class, \"views-field-field-senator-district-office\")]//p'):\n for addr in addr.text_content().strip().splitlines():\n try:\n addr, phone = addr.strip().replace(u'\\xa0', ' ').split('; 
')\n leg.add_office(\n 'district', 'District Office',\n address=addr.strip(), phone=phone.strip())\n except ValueError:\n addr = addr.strip().replace(u'\\xa0', ' ')\n leg.add_office('district', 'District Office', address=addr)\n\n return leg\n\n def parse_assembly(self, tr, term, chamber):\n '''\n Given a tr element, get specific data from it.\n '''\n\n strip = methodcaller('strip')\n\n xpath = 'td[contains(@class, \"views-field-field-%s-%s\")]%s'\n\n xp = {\n 'url': [('lname-sort', '/a[not(contains(text(), \"edit\"))]/@href')],\n 'district': [('district', '/text()')],\n 'party': [('party', '/text()')],\n 'full_name': [('office-information', '/a[not(contains(text(), \"edit\"))]/text()')],\n 'address': [('office-information', '/h3/following-sibling::text()'),\n ('office-information', '/p/text()')]\n }\n\n titles = {'upper': 'senator', 'lower': 'member'}\n\n funcs = {\n 'full_name': lambda s: re.sub( # \"Assembly\" is misspelled once\n r'Contact Assembl?y Member', '', s).strip(),\n 'address': parse_address,\n }\n\n rubberstamp = lambda _: _\n tr_xpath = tr.xpath\n res = collections.defaultdict(list)\n for k, xpath_info in xp.items():\n for vals in xpath_info:\n f = funcs.get(k, rubberstamp)\n vals = (titles[chamber],) + vals\n vals = map(f, map(strip, tr_xpath(xpath % vals)))\n res[k].extend(vals)\n\n # Photo.\n try:\n res['photo_url'] = tr_xpath('td/p/img/@src')[0]\n except IndexError:\n pass\n\n # Remove junk from assembly member names.\n junk = 'Contact Assembly Member '\n\n try:\n res['full_name'] = res['full_name'].pop().replace(junk, '')\n except IndexError:\n return\n\n # Addresses.\n addresses = res['address']\n try:\n addresses = map(dict, filter(None, addresses))\n except ValueError:\n # Sometimes legislators only have one address, in which\n # case this awful hack is helpful.\n addresses = map(dict, filter(None, [addresses]))\n\n for address in addresses[:]:\n\n # Toss results that don't have required keys.\n if not set(['street', 'city', 'state_zip']) < set(address):\n if address in addresses:\n addresses.remove(address)\n\n # Re-key the addresses\n offices = []\n if addresses:\n # Mariko Yamada's addresses wouldn't parse correctly as of\n # 3/23/2013, so here we're forced to test whether any\n # addresses were even found.\n addresses[0].update(type='capitol', name='Capitol Office')\n offices.append(addresses[0])\n\n # CA reps have working emails, but they're not putting them on\n # their public pages anymore\n offices[0]['email'] = \\\n self._construct_email(chamber, res['full_name'])\n\n for office in addresses[1:]:\n office.update(type='district', name='District Office')\n offices.append(office)\n\n for office in offices:\n street = office['street']\n state_zip = re.sub(r'\\s+', ' ', office['state_zip'])\n street = '%s\\n%s, %s' % (street, office['city'], state_zip)\n office['address'] = street\n office['fax'] = None\n if 'email' not in office:\n office['email'] = None\n\n del office['street'], office['city'], office['state_zip']\n\n res['offices'] = offices\n del res['address']\n\n # Normalize party.\n for party in res['party'][:]:\n if party:\n if party == 'Democrat':\n party = 'Democratic'\n res['party'] = party\n break\n else:\n res['party'] = None\n\n # Mariko Yamada also didn't have a url that lxml would parse\n # as of 3/22/2013.\n if res['url']:\n res['url'] = res['url'].pop()\n else:\n del res['url']\n\n # strip leading zero\n res['district'] = str(int(res['district'].pop()))\n\n # Add a source for the url.\n leg = Legislator(term, chamber, **res)\n 
leg.update(**res)\n\n return leg\n\n def _construct_email(self, chamber, full_name):\n last_name = re.split(r'\\s+', full_name)[-1].lower()\n\n # translate accents to non-accented versions for use in an\n # email and drop apostrophes\n last_name = ''.join(c for c in\n unicodedata.normalize('NFD', unicode(last_name))\n if unicodedata.category(c) != 'Mn')\n last_name = last_name.replace(\"'\", \"\")\n\n if chamber == 'lower':\n return 'assemblymember.' + last_name + '@assembly.ca.gov'\n else:\n return 'senator.' + last_name + '@sen.ca.gov'\n \n", "path": "openstates/ca/legislators.py"}]}
| 3,208 | 121 |
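The one-line fix above swaps the Senate e-mail domain from `sen.ca.gov` to `senator.ca.gov`. A small standalone sketch of the corrected address construction is below; the real scraper also strips accents and apostrophes from the surname, which is omitted here, and the names used are made up for the example.

```python
import re


def construct_email(chamber, full_name):
    """Build a California legislator address using the corrected Senate domain."""
    last_name = re.split(r"\s+", full_name)[-1].lower()
    if chamber == "lower":
        return "assemblymember." + last_name + "@assembly.ca.gov"
    return "senator." + last_name + "@senator.ca.gov"


print(construct_email("upper", "Jane Doe"))    # [email protected]
print(construct_email("lower", "John Smith"))  # [email protected]
```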
gh_patches_debug_24365
|
rasdani/github-patches
|
git_diff
|
canonical__snapcraft-4370
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remote-build: add control logic when the project is in a git repo
### What needs to get done
This adds control logic to determine whether to execute the new or legacy remote-build code.
If the project is not part of a git repository, then execute the legacy remote-build code. Otherwise, execute the new remote-build code.
This check can be done by using the GitPython wrapper class (#4320).
### Why it needs to get done
This check exists to minimize changes for existing workflows.
</issue>
<code>
[start of snapcraft/commands/remote.py]
1 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
2 #
3 # Copyright 2022-2023 Canonical Ltd.
4 #
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License version 3 as
7 # published by the Free Software Foundation.
8 #
9 # This program is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with this program. If not, see <http://www.gnu.org/licenses/>.
16
17 """Snapcraft remote build command."""
18
19 import argparse
20 import os
21 import textwrap
22 from enum import Enum
23 from typing import Optional
24
25 from craft_cli import BaseCommand, emit
26 from craft_cli.helptexts import HIDDEN
27 from overrides import overrides
28
29 from snapcraft.errors import MaintenanceBase, SnapcraftError
30 from snapcraft.legacy_cli import run_legacy
31 from snapcraft.parts import yaml_utils
32 from snapcraft.utils import confirm_with_user, humanize_list
33 from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError
34
35 _CONFIRMATION_PROMPT = (
36 "All data sent to remote builders will be publicly available. "
37 "Are you sure you want to continue?"
38 )
39
40
41 _STRATEGY_ENVVAR = "SNAPCRAFT_REMOTE_BUILD_STRATEGY"
42
43
44 class _Strategies(Enum):
45 """Possible values of the build strategy."""
46
47 DISABLE_FALLBACK = "disable-fallback"
48 FORCE_FALLBACK = "force-fallback"
49
50
51 class RemoteBuildCommand(BaseCommand):
52 """Command passthrough for the remote-build command."""
53
54 name = "remote-build"
55 help_msg = "Dispatch a snap for remote build"
56 overview = textwrap.dedent(
57 """
58 Command remote-build sends the current project to be built
59 remotely. After the build is complete, packages for each
60 architecture are retrieved and will be available in the
61 local filesystem.
62
63 If not specified in the snapcraft.yaml file, the list of
64 architectures to build can be set using the --build-on option.
65 If both are specified, an error will occur.
66
67 Interrupted remote builds can be resumed using the --recover
68 option, followed by the build number informed when the remote
69 build was originally dispatched. The current state of the
70 remote build for each architecture can be checked using the
71 --status option."""
72 )
73
74 @overrides
75 def fill_parser(self, parser: argparse.ArgumentParser) -> None:
76 parser.add_argument(
77 "--recover", action="store_true", help="recover an interrupted build"
78 )
79 parser.add_argument(
80 "--status", action="store_true", help="display remote build status"
81 )
82 parser_target = parser.add_mutually_exclusive_group()
83 parser_target.add_argument(
84 "--build-on",
85 metavar="arch",
86 nargs="+",
87 help=HIDDEN,
88 )
89 parser_target.add_argument(
90 "--build-for",
91 metavar="arch",
92 nargs="+",
93 help="architecture to build for",
94 )
95 parser.add_argument(
96 "--build-id", metavar="build-id", help="specific build id to retrieve"
97 )
98 parser.add_argument(
99 "--launchpad-accept-public-upload",
100 action="store_true",
101 help="acknowledge that uploaded code will be publicly available.",
102 )
103
104 def _get_build_strategy(self) -> Optional[_Strategies]:
105 """Get the build strategy from the envvar `SNAPCRAFT_REMOTE_BUILD_STRATEGY`.
106
107 :returns: The strategy or None.
108
109 :raises SnapcraftError: If the variable is set to an invalid value.
110 """
111 strategy = os.getenv(_STRATEGY_ENVVAR)
112
113 if not strategy:
114 return None
115
116 try:
117 return _Strategies(strategy)
118 except ValueError as err:
119 valid_strategies = humanize_list(
120 (strategy.value for strategy in _Strategies), "and"
121 )
122 raise SnapcraftError(
123 f"Unknown value {strategy!r} in environment variable "
124 f"{_STRATEGY_ENVVAR!r}. Valid values are {valid_strategies}."
125 ) from err
126
127 def _get_effective_base(self) -> str:
128 """Get a valid effective base from the project's snapcraft.yaml.
129
130 :returns: The project's effective base.
131
132 :raises SnapcraftError: If the base is unknown or missing or if the
133 snapcraft.yaml cannot be loaded.
134 :raises MaintenanceBase: If the base is not supported
135 """
136 snapcraft_yaml = yaml_utils.get_snap_project().project_file
137
138 with open(snapcraft_yaml, encoding="utf-8") as file:
139 base = yaml_utils.get_base(file)
140
141 if base is None:
142 raise SnapcraftError(
143 f"Could not determine base from {str(snapcraft_yaml)!r}."
144 )
145
146 emit.debug(f"Got base {base!r} from {str(snapcraft_yaml)!r}.")
147
148 if base in yaml_utils.ESM_BASES:
149 raise MaintenanceBase(base)
150
151 if base not in yaml_utils.BASES:
152 raise SnapcraftError(f"Unknown base {base!r} in {str(snapcraft_yaml)!r}.")
153
154 return base
155
156 def _run_remote_build(self, base: str) -> None:
157 # bases newer than core22 must use the new remote-build
158 if base in yaml_utils.CURRENT_BASES - {"core22"}:
159 emit.debug(
160 "Using fallback remote-build because new remote-build is not available."
161 )
162 # TODO: use new remote-build code (#4323)
163 run_legacy()
164 return
165
166 strategy = self._get_build_strategy()
167
168 if strategy == _Strategies.DISABLE_FALLBACK:
169 emit.debug(
170 f"Environment variable {_STRATEGY_ENVVAR!r} is "
171 f"{_Strategies.DISABLE_FALLBACK.value!r} but running fallback "
172 "remote-build because new remote-build is not available."
173 )
174 run_legacy()
175 return
176
177 if strategy == _Strategies.FORCE_FALLBACK:
178 emit.debug(
179 "Running fallback remote-build because environment variable "
180 f"{_STRATEGY_ENVVAR!r} is {_Strategies.FORCE_FALLBACK.value!r}."
181 )
182 run_legacy()
183 return
184
185 emit.debug("Running fallback remote-build.")
186 run_legacy()
187
188 @overrides
189 def run(self, parsed_args) -> None:
190 if os.getenv("SUDO_USER") and os.geteuid() == 0:
191 emit.message(
192 "Running with 'sudo' may cause permission errors and is discouraged."
193 )
194
195 emit.message(
196 "snapcraft remote-build is experimental and is subject to change "
197 "- use with caution."
198 )
199
200 if parsed_args.build_on:
201 emit.message("Use --build-for instead of --build-on")
202 parsed_args.build_for = parsed_args.build_on
203
204 if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(
205 _CONFIRMATION_PROMPT
206 ):
207 raise AcceptPublicUploadError()
208
209 base = self._get_effective_base()
210 self._run_remote_build(base)
211
[end of snapcraft/commands/remote.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py
--- a/snapcraft/commands/remote.py
+++ b/snapcraft/commands/remote.py
@@ -20,6 +20,7 @@
import os
import textwrap
from enum import Enum
+from pathlib import Path
from typing import Optional
from craft_cli import BaseCommand, emit
@@ -29,6 +30,7 @@
from snapcraft.errors import MaintenanceBase, SnapcraftError
from snapcraft.legacy_cli import run_legacy
from snapcraft.parts import yaml_utils
+from snapcraft.remote import is_repo
from snapcraft.utils import confirm_with_user, humanize_list
from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError
@@ -182,6 +184,14 @@
run_legacy()
return
+ if is_repo(Path().absolute()):
+ emit.debug(
+ "Project is in a git repository but running fallback remote-build "
+ "because new remote-build is not available."
+ )
+ run_legacy()
+ return
+
emit.debug("Running fallback remote-build.")
run_legacy()
|
{"golden_diff": "diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py\n--- a/snapcraft/commands/remote.py\n+++ b/snapcraft/commands/remote.py\n@@ -20,6 +20,7 @@\n import os\n import textwrap\n from enum import Enum\n+from pathlib import Path\n from typing import Optional\n \n from craft_cli import BaseCommand, emit\n@@ -29,6 +30,7 @@\n from snapcraft.errors import MaintenanceBase, SnapcraftError\n from snapcraft.legacy_cli import run_legacy\n from snapcraft.parts import yaml_utils\n+from snapcraft.remote import is_repo\n from snapcraft.utils import confirm_with_user, humanize_list\n from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n \n@@ -182,6 +184,14 @@\n run_legacy()\n return\n \n+ if is_repo(Path().absolute()):\n+ emit.debug(\n+ \"Project is in a git repository but running fallback remote-build \"\n+ \"because new remote-build is not available.\"\n+ )\n+ run_legacy()\n+ return\n+\n emit.debug(\"Running fallback remote-build.\")\n run_legacy()\n", "issue": "remote-build: add control logic when the project is in a git repo\n### What needs to get done\n\nThis adds control logic to determine whether to execute the new or legacy remote-build code.\r\n\r\nIf the project is not part of a git repository, then execute the legacy remote-build code. Otherwise, execute the new remote-build code.\r\n\r\nThis check can be done by using the GitPython wrapper class (#4320).\r\n\r\n\n\n### Why it needs to get done\n\nThis check exists to minimize changes for existing workflows.\n", "before_files": [{"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright 2022-2023 Canonical Ltd.\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Snapcraft remote build command.\"\"\"\n\nimport argparse\nimport os\nimport textwrap\nfrom enum import Enum\nfrom typing import Optional\n\nfrom craft_cli import BaseCommand, emit\nfrom craft_cli.helptexts import HIDDEN\nfrom overrides import overrides\n\nfrom snapcraft.errors import MaintenanceBase, SnapcraftError\nfrom snapcraft.legacy_cli import run_legacy\nfrom snapcraft.parts import yaml_utils\nfrom snapcraft.utils import confirm_with_user, humanize_list\nfrom snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n\n_CONFIRMATION_PROMPT = (\n \"All data sent to remote builders will be publicly available. \"\n \"Are you sure you want to continue?\"\n)\n\n\n_STRATEGY_ENVVAR = \"SNAPCRAFT_REMOTE_BUILD_STRATEGY\"\n\n\nclass _Strategies(Enum):\n \"\"\"Possible values of the build strategy.\"\"\"\n\n DISABLE_FALLBACK = \"disable-fallback\"\n FORCE_FALLBACK = \"force-fallback\"\n\n\nclass RemoteBuildCommand(BaseCommand):\n \"\"\"Command passthrough for the remote-build command.\"\"\"\n\n name = \"remote-build\"\n help_msg = \"Dispatch a snap for remote build\"\n overview = textwrap.dedent(\n \"\"\"\n Command remote-build sends the current project to be built\n remotely. 
After the build is complete, packages for each\n architecture are retrieved and will be available in the\n local filesystem.\n\n If not specified in the snapcraft.yaml file, the list of\n architectures to build can be set using the --build-on option.\n If both are specified, an error will occur.\n\n Interrupted remote builds can be resumed using the --recover\n option, followed by the build number informed when the remote\n build was originally dispatched. The current state of the\n remote build for each architecture can be checked using the\n --status option.\"\"\"\n )\n\n @overrides\n def fill_parser(self, parser: argparse.ArgumentParser) -> None:\n parser.add_argument(\n \"--recover\", action=\"store_true\", help=\"recover an interrupted build\"\n )\n parser.add_argument(\n \"--status\", action=\"store_true\", help=\"display remote build status\"\n )\n parser_target = parser.add_mutually_exclusive_group()\n parser_target.add_argument(\n \"--build-on\",\n metavar=\"arch\",\n nargs=\"+\",\n help=HIDDEN,\n )\n parser_target.add_argument(\n \"--build-for\",\n metavar=\"arch\",\n nargs=\"+\",\n help=\"architecture to build for\",\n )\n parser.add_argument(\n \"--build-id\", metavar=\"build-id\", help=\"specific build id to retrieve\"\n )\n parser.add_argument(\n \"--launchpad-accept-public-upload\",\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n\n def _get_build_strategy(self) -> Optional[_Strategies]:\n \"\"\"Get the build strategy from the envvar `SNAPCRAFT_REMOTE_BUILD_STRATEGY`.\n\n :returns: The strategy or None.\n\n :raises SnapcraftError: If the variable is set to an invalid value.\n \"\"\"\n strategy = os.getenv(_STRATEGY_ENVVAR)\n\n if not strategy:\n return None\n\n try:\n return _Strategies(strategy)\n except ValueError as err:\n valid_strategies = humanize_list(\n (strategy.value for strategy in _Strategies), \"and\"\n )\n raise SnapcraftError(\n f\"Unknown value {strategy!r} in environment variable \"\n f\"{_STRATEGY_ENVVAR!r}. 
Valid values are {valid_strategies}.\"\n ) from err\n\n def _get_effective_base(self) -> str:\n \"\"\"Get a valid effective base from the project's snapcraft.yaml.\n\n :returns: The project's effective base.\n\n :raises SnapcraftError: If the base is unknown or missing or if the\n snapcraft.yaml cannot be loaded.\n :raises MaintenanceBase: If the base is not supported\n \"\"\"\n snapcraft_yaml = yaml_utils.get_snap_project().project_file\n\n with open(snapcraft_yaml, encoding=\"utf-8\") as file:\n base = yaml_utils.get_base(file)\n\n if base is None:\n raise SnapcraftError(\n f\"Could not determine base from {str(snapcraft_yaml)!r}.\"\n )\n\n emit.debug(f\"Got base {base!r} from {str(snapcraft_yaml)!r}.\")\n\n if base in yaml_utils.ESM_BASES:\n raise MaintenanceBase(base)\n\n if base not in yaml_utils.BASES:\n raise SnapcraftError(f\"Unknown base {base!r} in {str(snapcraft_yaml)!r}.\")\n\n return base\n\n def _run_remote_build(self, base: str) -> None:\n # bases newer than core22 must use the new remote-build\n if base in yaml_utils.CURRENT_BASES - {\"core22\"}:\n emit.debug(\n \"Using fallback remote-build because new remote-build is not available.\"\n )\n # TODO: use new remote-build code (#4323)\n run_legacy()\n return\n\n strategy = self._get_build_strategy()\n\n if strategy == _Strategies.DISABLE_FALLBACK:\n emit.debug(\n f\"Environment variable {_STRATEGY_ENVVAR!r} is \"\n f\"{_Strategies.DISABLE_FALLBACK.value!r} but running fallback \"\n \"remote-build because new remote-build is not available.\"\n )\n run_legacy()\n return\n\n if strategy == _Strategies.FORCE_FALLBACK:\n emit.debug(\n \"Running fallback remote-build because environment variable \"\n f\"{_STRATEGY_ENVVAR!r} is {_Strategies.FORCE_FALLBACK.value!r}.\"\n )\n run_legacy()\n return\n\n emit.debug(\"Running fallback remote-build.\")\n run_legacy()\n\n @overrides\n def run(self, parsed_args) -> None:\n if os.getenv(\"SUDO_USER\") and os.geteuid() == 0:\n emit.message(\n \"Running with 'sudo' may cause permission errors and is discouraged.\"\n )\n\n emit.message(\n \"snapcraft remote-build is experimental and is subject to change \"\n \"- use with caution.\"\n )\n\n if parsed_args.build_on:\n emit.message(\"Use --build-for instead of --build-on\")\n parsed_args.build_for = parsed_args.build_on\n\n if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(\n _CONFIRMATION_PROMPT\n ):\n raise AcceptPublicUploadError()\n\n base = self._get_effective_base()\n self._run_remote_build(base)\n", "path": "snapcraft/commands/remote.py"}]}
| 2,802 | 254 |
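The control logic added above routes remote builds through the legacy path whenever the project directory sits inside a git repository, using the project's `snapcraft.remote.is_repo` helper. A minimal stand-in for that helper, written against GitPython (the wrapper mentioned in the linked issue), is sketched below; the real helper's internals may differ.

```python
from pathlib import Path

import git  # GitPython


def is_repo(path: Path) -> bool:
    """Return True when `path` sits inside a git working tree."""
    try:
        git.Repo(path, search_parent_directories=True)
        return True
    except git.InvalidGitRepositoryError:
        return False


if is_repo(Path.cwd()):
    print("project is in a git repository")
else:
    print("project is not in a git repository")
```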
gh_patches_debug_3608
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-5620
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Correctly handle data values <= 0 on a log scale
This is a continuation from issue #5389, partially addressed by PR #5477. There persists an issue where negative data is not handled correctly. All data <= 0 should be discarded before generating the plot.
As is, if `values = np.linspace(-0.1, 0.9)`, a JS error complains that it "could not set initial ranges", probably because `log(n)` for `n<=0` is not defined.
</issue>
<code>
[start of sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py]
1 from bokeh.plotting import figure, output_file, show
2
3 x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
4 y = [10**xx for xx in x]
5
6 output_file("log.html")
7
8 # create a new plot with a log axis type
9 p = figure(plot_width=400, plot_height=400,
10 y_axis_type="log", y_range=(10**-1, 10**4))
11
12 p.line(x, y, line_width=2)
13 p.circle(x, y, fill_color="white", size=8)
14
15 show(p)
16
[end of sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
--- a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
+++ b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
@@ -6,8 +6,7 @@
output_file("log.html")
# create a new plot with a log axis type
-p = figure(plot_width=400, plot_height=400,
- y_axis_type="log", y_range=(10**-1, 10**4))
+p = figure(plot_width=400, plot_height=400, y_axis_type="log")
p.line(x, y, line_width=2)
p.circle(x, y, fill_color="white", size=8)
|
{"golden_diff": "diff --git a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n--- a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n+++ b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n@@ -6,8 +6,7 @@\n output_file(\"log.html\")\n \n # create a new plot with a log axis type\n-p = figure(plot_width=400, plot_height=400,\n- y_axis_type=\"log\", y_range=(10**-1, 10**4))\n+p = figure(plot_width=400, plot_height=400, y_axis_type=\"log\")\n \n p.line(x, y, line_width=2)\n p.circle(x, y, fill_color=\"white\", size=8)\n", "issue": "Correctly handle data values <= 0 on a log scale\nThis is a continuation from issue #5389, partially adressed by PR #5477. There persists an issue where negative data is not handled correctly. All data <= 0 should be discarded before generating the plot.\r\n\r\nAs is, if `values = np.linspace(-0.1, 0.9), a JS error complains that it \"could not set initial ranges\", probably because `log(n)` for `n<=0` is not defined.\n", "before_files": [{"content": "from bokeh.plotting import figure, output_file, show\n\nx = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]\ny = [10**xx for xx in x]\n\noutput_file(\"log.html\")\n\n# create a new plot with a log axis type\np = figure(plot_width=400, plot_height=400,\n y_axis_type=\"log\", y_range=(10**-1, 10**4))\n\np.line(x, y, line_width=2)\np.circle(x, y, fill_color=\"white\", size=8)\n\nshow(p)\n", "path": "sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py"}]}
| 839 | 186 |
gh_patches_debug_17576
|
rasdani/github-patches
|
git_diff
|
nipy__nipype-2429
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mrtrix3 `dwi2response` - bad algorithm argument position
### Summary
The Mrtrix3 `dwi2response` CL wrapper generates the following runtime error:
```shell
dwi2response:
mrinfo: [ERROR] no diffusion encoding information found in image "<DWI_FILE>"
dwi2response: [ERROR] Script requires diffusion gradient table: either in image header, or using -grad / -fslgrad option
```
It turns out that the command generated by `nipype` does not respect (my version of) the Mrtrix3 CL format.
### Actual behavior
Generated command (not runnable):
```shell
dwi2response -fslgrad <BVEC_FILE> <BVAL_FILE> -mask <MASK_FILE> tournier <WM_FILE>
```
### Expected behavior
Runnable command:
```shell
dwi2response tournier -fslgrad <BVEC_FILE> <BVAL_FILE> -mask <MASK_FILE> <WM_FILE>
```
### Environment
- `MRtrix 3.0_RC2-117-gf098f097 dwi2response bin version: 3.0_RC2-117-gf098f097`
- `Python 2.7.12`
- `nipype v1.0.0`
### Quick and dirty solution
I'm really not sure how clean it is, but it worked for me; in the `ResponseSDInputSpec` class, I changed `position=-6` to `position=1` in the `algorithm` traits.
</issue>
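For readers who want to try the workaround described above, a minimal sketch of the changed trait is below; it assumes nothing else in the spec needs to move and simply pins the algorithm to position 1 so it lands right after the `dwi2response` executable name:
```python
from nipype.interfaces.base import traits
from nipype.interfaces.mrtrix3.base import MRTrix3BaseInputSpec


class PatchedResponseSDInputSpec(MRTrix3BaseInputSpec):
    # position=1 places the algorithm right after the executable name, so the
    # generated call reads "dwi2response tournier -fslgrad ..." as MRtrix3 expects.
    algorithm = traits.Enum(
        'msmt_5tt',
        'dhollander',
        'tournier',
        'tax',
        argstr='%s',
        position=1,  # the released wrapper uses position=-6
        mandatory=True,
        desc='response estimation algorithm (multi-tissue)')
```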
<code>
[start of nipype/interfaces/mrtrix3/preprocess.py]
1 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
2 # vi: set ft=python sts=4 ts=4 sw=4 et:
3 # -*- coding: utf-8 -*-
4 """
5 Change directory to provide relative paths for doctests
6 >>> import os
7 >>> filepath = os.path.dirname(os.path.realpath(__file__ ))
8 >>> datadir = os.path.realpath(os.path.join(filepath,
9 ... '../../testing/data'))
10 >>> os.chdir(datadir)
11
12 """
13 from __future__ import (print_function, division, unicode_literals,
14 absolute_import)
15
16 import os.path as op
17
18 from ..base import (CommandLineInputSpec, CommandLine, traits, TraitedSpec,
19 File, isdefined, Undefined)
20 from .base import MRTrix3BaseInputSpec, MRTrix3Base
21
22
23 class ResponseSDInputSpec(MRTrix3BaseInputSpec):
24 algorithm = traits.Enum(
25 'msmt_5tt',
26 'dhollander',
27 'tournier',
28 'tax',
29 argstr='%s',
30 position=-6,
31 mandatory=True,
32 desc='response estimation algorithm (multi-tissue)')
33 in_file = File(
34 exists=True,
35 argstr='%s',
36 position=-5,
37 mandatory=True,
38 desc='input DWI image')
39 mtt_file = File(argstr='%s', position=-4, desc='input 5tt image')
40 wm_file = File(
41 'wm.txt',
42 argstr='%s',
43 position=-3,
44 usedefault=True,
45 desc='output WM response text file')
46 gm_file = File(
47 argstr='%s', position=-2, desc='output GM response text file')
48 csf_file = File(
49 argstr='%s', position=-1, desc='output CSF response text file')
50 in_mask = File(
51 exists=True, argstr='-mask %s', desc='provide initial mask image')
52 max_sh = traits.Int(
53 8,
54 argstr='-lmax %d',
55 desc='maximum harmonic degree of response function')
56
57
58 class ResponseSDOutputSpec(TraitedSpec):
59 wm_file = File(argstr='%s', desc='output WM response text file')
60 gm_file = File(argstr='%s', desc='output GM response text file')
61 csf_file = File(argstr='%s', desc='output CSF response text file')
62
63
64 class ResponseSD(MRTrix3Base):
65 """
66 Estimate response function(s) for spherical deconvolution using the specified algorithm.
67
68 Example
69 -------
70
71 >>> import nipype.interfaces.mrtrix3 as mrt
72 >>> resp = mrt.ResponseSD()
73 >>> resp.inputs.in_file = 'dwi.mif'
74 >>> resp.inputs.algorithm = 'tournier'
75 >>> resp.inputs.grad_fsl = ('bvecs', 'bvals')
76 >>> resp.cmdline # doctest: +ELLIPSIS
77 'dwi2response -fslgrad bvecs bvals tournier dwi.mif wm.txt'
78 >>> resp.run() # doctest: +SKIP
79 """
80
81 _cmd = 'dwi2response'
82 input_spec = ResponseSDInputSpec
83 output_spec = ResponseSDOutputSpec
84
85 def _list_outputs(self):
86 outputs = self.output_spec().get()
87 outputs['wm_file'] = op.abspath(self.inputs.wm_file)
88 if self.inputs.gm_file != Undefined:
89 outputs['gm_file'] = op.abspath(self.inputs.gm_file)
90 if self.inputs.csf_file != Undefined:
91 outputs['csf_file'] = op.abspath(self.inputs.csf_file)
92 return outputs
93
94
95 class ACTPrepareFSLInputSpec(CommandLineInputSpec):
96 in_file = File(
97 exists=True,
98 argstr='%s',
99 mandatory=True,
100 position=-2,
101 desc='input anatomical image')
102
103 out_file = File(
104 'act_5tt.mif',
105 argstr='%s',
106 mandatory=True,
107 position=-1,
108 usedefault=True,
109 desc='output file after processing')
110
111
112 class ACTPrepareFSLOutputSpec(TraitedSpec):
113 out_file = File(exists=True, desc='the output response file')
114
115
116 class ACTPrepareFSL(CommandLine):
117 """
118 Generate anatomical information necessary for Anatomically
119 Constrained Tractography (ACT).
120
121 Example
122 -------
123
124 >>> import nipype.interfaces.mrtrix3 as mrt
125 >>> prep = mrt.ACTPrepareFSL()
126 >>> prep.inputs.in_file = 'T1.nii.gz'
127 >>> prep.cmdline # doctest: +ELLIPSIS
128 'act_anat_prepare_fsl T1.nii.gz act_5tt.mif'
129 >>> prep.run() # doctest: +SKIP
130 """
131
132 _cmd = 'act_anat_prepare_fsl'
133 input_spec = ACTPrepareFSLInputSpec
134 output_spec = ACTPrepareFSLOutputSpec
135
136 def _list_outputs(self):
137 outputs = self.output_spec().get()
138 outputs['out_file'] = op.abspath(self.inputs.out_file)
139 return outputs
140
141
142 class ReplaceFSwithFIRSTInputSpec(CommandLineInputSpec):
143 in_file = File(
144 exists=True,
145 argstr='%s',
146 mandatory=True,
147 position=-4,
148 desc='input anatomical image')
149 in_t1w = File(
150 exists=True,
151 argstr='%s',
152 mandatory=True,
153 position=-3,
154 desc='input T1 image')
155 in_config = File(
156 exists=True,
157 argstr='%s',
158 position=-2,
159 desc='connectome configuration file')
160
161 out_file = File(
162 'aparc+first.mif',
163 argstr='%s',
164 mandatory=True,
165 position=-1,
166 usedefault=True,
167 desc='output file after processing')
168
169
170 class ReplaceFSwithFIRSTOutputSpec(TraitedSpec):
171 out_file = File(exists=True, desc='the output response file')
172
173
174 class ReplaceFSwithFIRST(CommandLine):
175 """
176 Replace deep gray matter structures segmented with FSL FIRST in a
177 FreeSurfer parcellation.
178
179 Example
180 -------
181
182 >>> import nipype.interfaces.mrtrix3 as mrt
183 >>> prep = mrt.ReplaceFSwithFIRST()
184 >>> prep.inputs.in_file = 'aparc+aseg.nii'
185 >>> prep.inputs.in_t1w = 'T1.nii.gz'
186 >>> prep.inputs.in_config = 'mrtrix3_labelconfig.txt'
187 >>> prep.cmdline # doctest: +ELLIPSIS
188 'fs_parc_replace_sgm_first aparc+aseg.nii T1.nii.gz \
189 mrtrix3_labelconfig.txt aparc+first.mif'
190 >>> prep.run() # doctest: +SKIP
191 """
192
193 _cmd = 'fs_parc_replace_sgm_first'
194 input_spec = ReplaceFSwithFIRSTInputSpec
195 output_spec = ReplaceFSwithFIRSTOutputSpec
196
197 def _list_outputs(self):
198 outputs = self.output_spec().get()
199 outputs['out_file'] = op.abspath(self.inputs.out_file)
200 return outputs
201
[end of nipype/interfaces/mrtrix3/preprocess.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nipype/interfaces/mrtrix3/preprocess.py b/nipype/interfaces/mrtrix3/preprocess.py
--- a/nipype/interfaces/mrtrix3/preprocess.py
+++ b/nipype/interfaces/mrtrix3/preprocess.py
@@ -27,7 +27,7 @@
'tournier',
'tax',
argstr='%s',
- position=-6,
+ position=1,
mandatory=True,
desc='response estimation algorithm (multi-tissue)')
in_file = File(
@@ -74,7 +74,7 @@
>>> resp.inputs.algorithm = 'tournier'
>>> resp.inputs.grad_fsl = ('bvecs', 'bvals')
>>> resp.cmdline # doctest: +ELLIPSIS
- 'dwi2response -fslgrad bvecs bvals tournier dwi.mif wm.txt'
+ 'dwi2response tournier -fslgrad bvecs bvals dwi.mif wm.txt'
>>> resp.run() # doctest: +SKIP
"""
|
{"golden_diff": "diff --git a/nipype/interfaces/mrtrix3/preprocess.py b/nipype/interfaces/mrtrix3/preprocess.py\n--- a/nipype/interfaces/mrtrix3/preprocess.py\n+++ b/nipype/interfaces/mrtrix3/preprocess.py\n@@ -27,7 +27,7 @@\n 'tournier',\n 'tax',\n argstr='%s',\n- position=-6,\n+ position=1,\n mandatory=True,\n desc='response estimation algorithm (multi-tissue)')\n in_file = File(\n@@ -74,7 +74,7 @@\n >>> resp.inputs.algorithm = 'tournier'\n >>> resp.inputs.grad_fsl = ('bvecs', 'bvals')\n >>> resp.cmdline # doctest: +ELLIPSIS\n- 'dwi2response -fslgrad bvecs bvals tournier dwi.mif wm.txt'\n+ 'dwi2response tournier -fslgrad bvecs bvals dwi.mif wm.txt'\n >>> resp.run() # doctest: +SKIP\n \"\"\"\n", "issue": "Mrtrix3 `dwi2response` - bad algorithm argument position\n### Summary\r\n\r\nTh Mrtrix3 `dwi2response` CL wrapper generates the following runtime error:\r\n```shell\r\ndwi2response:\r\nmrinfo: [ERROR] no diffusion encoding information found in image \"<DWI_FILE>\"\r\ndwi2response: [ERROR] Script requires diffusion gradient table: either in image header, or using -grad / -fslgrad option\r\n```\r\nIt turns out that the command generated by `nipype` does not respect (my version of) the Mrtrix3 CL format.\r\n\r\n### Actual behavior\r\n\r\nGenerated command (not runnable):\r\n```shell\r\ndwi2response -fslgrad <BVEC_FILE> <BVAL_FILE> -mask <MASK_FILE> tournier <WM_FILE>\r\n```\r\n\r\n### Expected behavior\r\n\r\nRunnable command:\r\n```shell\r\ndwi2response tournier -fslgrad <BVEC_FILE> <BVAL_FILE> -mask <MASK_FILE> <WM_FILE>\r\n```\r\n\r\n### Environment\r\n\r\n- `MRtrix 3.0_RC2-117-gf098f097 dwi2response bin version: 3.0_RC2-117-gf098f097`\r\n- `Python 2.7.12`\r\n- `nipype v1.0.0`\r\n \r\n\r\n### Quick and dirty solution\r\n\r\nI'm really not sure how clean it is, but it worked for me; in the `ResponseSDInputSpec` class, I changed `position=-6` to `position=1` in the `algorithm` traits.\r\n\r\n\n", "before_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n# -*- coding: utf-8 -*-\n\"\"\"\n Change directory to provide relative paths for doctests\n >>> import os\n >>> filepath = os.path.dirname(os.path.realpath(__file__ ))\n >>> datadir = os.path.realpath(os.path.join(filepath,\n ... 
'../../testing/data'))\n >>> os.chdir(datadir)\n\n\"\"\"\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\n\nimport os.path as op\n\nfrom ..base import (CommandLineInputSpec, CommandLine, traits, TraitedSpec,\n File, isdefined, Undefined)\nfrom .base import MRTrix3BaseInputSpec, MRTrix3Base\n\n\nclass ResponseSDInputSpec(MRTrix3BaseInputSpec):\n algorithm = traits.Enum(\n 'msmt_5tt',\n 'dhollander',\n 'tournier',\n 'tax',\n argstr='%s',\n position=-6,\n mandatory=True,\n desc='response estimation algorithm (multi-tissue)')\n in_file = File(\n exists=True,\n argstr='%s',\n position=-5,\n mandatory=True,\n desc='input DWI image')\n mtt_file = File(argstr='%s', position=-4, desc='input 5tt image')\n wm_file = File(\n 'wm.txt',\n argstr='%s',\n position=-3,\n usedefault=True,\n desc='output WM response text file')\n gm_file = File(\n argstr='%s', position=-2, desc='output GM response text file')\n csf_file = File(\n argstr='%s', position=-1, desc='output CSF response text file')\n in_mask = File(\n exists=True, argstr='-mask %s', desc='provide initial mask image')\n max_sh = traits.Int(\n 8,\n argstr='-lmax %d',\n desc='maximum harmonic degree of response function')\n\n\nclass ResponseSDOutputSpec(TraitedSpec):\n wm_file = File(argstr='%s', desc='output WM response text file')\n gm_file = File(argstr='%s', desc='output GM response text file')\n csf_file = File(argstr='%s', desc='output CSF response text file')\n\n\nclass ResponseSD(MRTrix3Base):\n \"\"\"\n Estimate response function(s) for spherical deconvolution using the specified algorithm.\n\n Example\n -------\n\n >>> import nipype.interfaces.mrtrix3 as mrt\n >>> resp = mrt.ResponseSD()\n >>> resp.inputs.in_file = 'dwi.mif'\n >>> resp.inputs.algorithm = 'tournier'\n >>> resp.inputs.grad_fsl = ('bvecs', 'bvals')\n >>> resp.cmdline # doctest: +ELLIPSIS\n 'dwi2response -fslgrad bvecs bvals tournier dwi.mif wm.txt'\n >>> resp.run() # doctest: +SKIP\n \"\"\"\n\n _cmd = 'dwi2response'\n input_spec = ResponseSDInputSpec\n output_spec = ResponseSDOutputSpec\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n outputs['wm_file'] = op.abspath(self.inputs.wm_file)\n if self.inputs.gm_file != Undefined:\n outputs['gm_file'] = op.abspath(self.inputs.gm_file)\n if self.inputs.csf_file != Undefined:\n outputs['csf_file'] = op.abspath(self.inputs.csf_file)\n return outputs\n\n\nclass ACTPrepareFSLInputSpec(CommandLineInputSpec):\n in_file = File(\n exists=True,\n argstr='%s',\n mandatory=True,\n position=-2,\n desc='input anatomical image')\n\n out_file = File(\n 'act_5tt.mif',\n argstr='%s',\n mandatory=True,\n position=-1,\n usedefault=True,\n desc='output file after processing')\n\n\nclass ACTPrepareFSLOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='the output response file')\n\n\nclass ACTPrepareFSL(CommandLine):\n \"\"\"\n Generate anatomical information necessary for Anatomically\n Constrained Tractography (ACT).\n\n Example\n -------\n\n >>> import nipype.interfaces.mrtrix3 as mrt\n >>> prep = mrt.ACTPrepareFSL()\n >>> prep.inputs.in_file = 'T1.nii.gz'\n >>> prep.cmdline # doctest: +ELLIPSIS\n 'act_anat_prepare_fsl T1.nii.gz act_5tt.mif'\n >>> prep.run() # doctest: +SKIP\n \"\"\"\n\n _cmd = 'act_anat_prepare_fsl'\n input_spec = ACTPrepareFSLInputSpec\n output_spec = ACTPrepareFSLOutputSpec\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n outputs['out_file'] = op.abspath(self.inputs.out_file)\n return outputs\n\n\nclass 
ReplaceFSwithFIRSTInputSpec(CommandLineInputSpec):\n in_file = File(\n exists=True,\n argstr='%s',\n mandatory=True,\n position=-4,\n desc='input anatomical image')\n in_t1w = File(\n exists=True,\n argstr='%s',\n mandatory=True,\n position=-3,\n desc='input T1 image')\n in_config = File(\n exists=True,\n argstr='%s',\n position=-2,\n desc='connectome configuration file')\n\n out_file = File(\n 'aparc+first.mif',\n argstr='%s',\n mandatory=True,\n position=-1,\n usedefault=True,\n desc='output file after processing')\n\n\nclass ReplaceFSwithFIRSTOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='the output response file')\n\n\nclass ReplaceFSwithFIRST(CommandLine):\n \"\"\"\n Replace deep gray matter structures segmented with FSL FIRST in a\n FreeSurfer parcellation.\n\n Example\n -------\n\n >>> import nipype.interfaces.mrtrix3 as mrt\n >>> prep = mrt.ReplaceFSwithFIRST()\n >>> prep.inputs.in_file = 'aparc+aseg.nii'\n >>> prep.inputs.in_t1w = 'T1.nii.gz'\n >>> prep.inputs.in_config = 'mrtrix3_labelconfig.txt'\n >>> prep.cmdline # doctest: +ELLIPSIS\n 'fs_parc_replace_sgm_first aparc+aseg.nii T1.nii.gz \\\nmrtrix3_labelconfig.txt aparc+first.mif'\n >>> prep.run() # doctest: +SKIP\n \"\"\"\n\n _cmd = 'fs_parc_replace_sgm_first'\n input_spec = ReplaceFSwithFIRSTInputSpec\n output_spec = ReplaceFSwithFIRSTOutputSpec\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n outputs['out_file'] = op.abspath(self.inputs.out_file)\n return outputs\n", "path": "nipype/interfaces/mrtrix3/preprocess.py"}]}
| 2,923 | 240 |
gh_patches_debug_20688
|
rasdani/github-patches
|
git_diff
|
uccser__cs-unplugged-887
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove admin application
</issue>
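In practice the removal amounts to two small deletions in the files shown next; a compact sketch (unrelated settings elided) is:
```python
# config/settings/base.py -- "django.contrib.admin" is simply dropped from the
# app list; config/urls.py likewise loses the "from django.contrib import admin"
# import and its url(r"^admin/", ...) pattern.
DJANGO_APPS = [
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "django.contrib.postgres",
    "django.contrib.humanize",
]
```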
<code>
[start of csunplugged/config/settings/base.py]
1 # -*- coding: utf-8 -*-
2 """
3 Base Django settings for CS Unplugged project.
4
5 For more information on this file, see
6 https://docs.djangoproject.com/en/dev/topics/settings/
7
8 For the full list of settings and their values, see
9 https://docs.djangoproject.com/en/dev/ref/settings/
10 """
11
12 import environ
13 import os.path
14
15 # Add custom languages not provided by Django
16 import django.conf.locale
17 from django.conf import global_settings
18 from django.utils.translation import ugettext_lazy as _
19
20 # cs-unplugged/csunplugged/config/settings/base.py - 3 = csunplugged/
21 ROOT_DIR = environ.Path(__file__) - 3
22
23 # Load operating system environment variables and then prepare to use them
24 env = environ.Env()
25
26 # APP CONFIGURATION
27 # ----------------------------------------------------------------------------
28 DJANGO_APPS = [
29 # Default Django apps:
30 "django.contrib.auth",
31 "django.contrib.contenttypes",
32 "django.contrib.sessions",
33 "django.contrib.messages",
34 "django.contrib.staticfiles",
35 "django.contrib.postgres",
36
37 # Useful template tags
38 "django.contrib.humanize",
39
40 # Admin
41 "django.contrib.admin",
42 ]
43
44 THIRD_PARTY_APPS = [
45 "django_bootstrap_breadcrumbs",
46 "haystack",
47 "widget_tweaks",
48 "modeltranslation",
49 "bidiutils",
50 ]
51
52 # Apps specific for this project go here.
53 LOCAL_APPS = [
54 "general.apps.GeneralConfig",
55 "topics.apps.TopicsConfig",
56 "resources.apps.ResourcesConfig",
57 "search.apps.SearchConfig",
58 ]
59
60 # See: https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps
61 INSTALLED_APPS = DJANGO_APPS + LOCAL_APPS + THIRD_PARTY_APPS
62
63 # MIDDLEWARE CONFIGURATION
64 # ----------------------------------------------------------------------------
65 MIDDLEWARE = [
66 "django.middleware.security.SecurityMiddleware",
67 "django.contrib.sessions.middleware.SessionMiddleware",
68 "django.middleware.locale.LocaleMiddleware",
69 "django.middleware.common.CommonMiddleware",
70 "django.middleware.csrf.CsrfViewMiddleware",
71 "django.contrib.auth.middleware.AuthenticationMiddleware",
72 "django.contrib.messages.middleware.MessageMiddleware",
73 "django.middleware.clickjacking.XFrameOptionsMiddleware",
74 ]
75
76 # DEBUG
77 # ----------------------------------------------------------------------------
78 # See: https://docs.djangoproject.com/en/dev/ref/settings/#debug
79 DEBUG = env.bool("DJANGO_DEBUG", False)
80
81 # FIXTURE CONFIGURATION
82 # ----------------------------------------------------------------------------
83 # See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-FIXTURE_DIRS
84 FIXTURE_DIRS = (
85 str(ROOT_DIR.path("fixtures")),
86 )
87
88 # EMAIL CONFIGURATION
89 # -----------------------------------------------------------------------------
90 # EMAIL_BACKEND = env("DJANGO_EMAIL_BACKEND",
91 # default="django.core.mail.backends.smtp.EmailBackend")
92
93 # MANAGER CONFIGURATION
94 # ----------------------------------------------------------------------------
95 # See: https://docs.djangoproject.com/en/dev/ref/settings/#admins
96 # ADMINS = [
97 # ("University of Canterbury Computer Science Research Group",
98 # "[email protected]"),
99 # ]
100
101 # See: https://docs.djangoproject.com/en/dev/ref/settings/#managers
102 # MANAGERS = ADMINS
103
104 # GENERAL CONFIGURATION
105 # ----------------------------------------------------------------------------
106 # Local time zone for this installation. Choices can be found here:
107 # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
108 # although not all choices may be available on all operating systems.
109 # In a Windows environment this must be set to your system time zone.
110 TIME_ZONE = "UTC"
111
112 # See: https://docs.djangoproject.com/en/dev/ref/settings/#language-code
113 LANGUAGE_CODE = "en"
114
115 INCONTEXT_L10N_PSEUDOLANGUAGE = "xx-lr"
116 INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI = "yy-rl"
117 INCONTEXT_L10N_PSEUDOLANGUAGES = (
118 INCONTEXT_L10N_PSEUDOLANGUAGE,
119 INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI
120 )
121
122 LANGUAGES = (
123 ("en", "English"),
124 )
125
126 if env.bool("INCLUDE_INCONTEXT_L10N", False):
127 EXTRA_LANGUAGES = [
128 (INCONTEXT_L10N_PSEUDOLANGUAGE, "Translation mode"),
129 (INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI, "Translation mode (Bi-directional)"),
130 ]
131
132 EXTRA_LANG_INFO = {
133 INCONTEXT_L10N_PSEUDOLANGUAGE: {
134 'bidi': False,
135 'code': INCONTEXT_L10N_PSEUDOLANGUAGE,
136 'name': "Translation mode",
137 'name_local': _("Translation mode"),
138 },
139 INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI: {
140 'bidi': True,
141 'code': INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI,
142 'name': "Translation mode (Bi-directional)",
143 'name_local': _("Translation mode (Bi-directional)"),
144 }
145 }
146
147 django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)
148 # Add new languages to the list of all django languages
149 global_settings.LANGUAGES = global_settings.LANGUAGES + EXTRA_LANGUAGES
150 global_settings.LANGUAGES_BIDI = (global_settings.LANGUAGES_BIDI +
151 [INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI.split('-')[0]])
152 # Add new languages to the list of languages used for this project
153 LANGUAGES += tuple(EXTRA_LANGUAGES)
154 LANGUAGES_BIDI = global_settings.LANGUAGES_BIDI
155
156
157 # See: https://docs.djangoproject.com/en/dev/ref/settings/#site-id
158 SITE_ID = 1
159
160 # See: https://docs.djangoproject.com/en/dev/ref/settings/#use-i18n
161 USE_I18N = True
162
163 # See: https://docs.djangoproject.com/en/dev/ref/settings/#use-l10n
164 USE_L10N = True
165
166 # See: https://docs.djangoproject.com/en/dev/ref/settings/#use-tz
167 USE_TZ = True
168
169 # See: https://docs.djangoproject.com/en/dev/ref/settings/#locale-paths
170 LOCALE_PATHS = ["locale"]
171
172 # TEMPLATE CONFIGURATION
173 # ----------------------------------------------------------------------------
174 # See: https://docs.djangoproject.com/en/dev/ref/settings/#templates
175 TEMPLATES = [
176 {
177 # See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND
178 "BACKEND": "django.template.backends.django.DjangoTemplates",
179 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs
180 "DIRS": [
181 str(ROOT_DIR.path("templates")),
182 ],
183 "OPTIONS": {
184 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-debug
185 "debug": DEBUG,
186 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders
187 # https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types
188 "loaders": [
189 "django.template.loaders.filesystem.Loader",
190 "django.template.loaders.app_directories.Loader",
191 ],
192 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors
193 "context_processors": [
194 "django.template.context_processors.debug",
195 "django.template.context_processors.request",
196 "django.contrib.auth.context_processors.auth",
197 "django.template.context_processors.i18n",
198 "django.template.context_processors.media",
199 "django.template.context_processors.static",
200 "django.template.context_processors.tz",
201 "django.contrib.messages.context_processors.messages",
202 "config.context_processors.version_number.version_number",
203 "config.context_processors.deployed.deployed",
204 "bidiutils.context_processors.bidi",
205 ],
206 "libraries": {
207 "render_html_field": "config.templatetags.render_html_field",
208 "translate_url": "config.templatetags.translate_url",
209 "query_replace": "config.templatetags.query_replace",
210 },
211 },
212 },
213 ]
214
215 # STATIC FILE CONFIGURATION
216 # ------------------------------------------------------------------------------
217 # See: https://docs.djangoproject.com/en/dev/ref/settings/#static-root
218 STATIC_ROOT = os.path.join(str(ROOT_DIR.path("staticfiles")), "")
219
220 # See: https://docs.djangoproject.com/en/dev/ref/settings/#static-url
221 BUILD_ROOT = os.path.join(str(ROOT_DIR.path("build")), "")
222 STATIC_URL = "/staticfiles/"
223
224 # See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS
225 STATICFILES_DIRS = [
226 BUILD_ROOT,
227 ]
228
229 # See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#staticfiles-finders
230 STATICFILES_FINDERS = [
231 "django.contrib.staticfiles.finders.FileSystemFinder",
232 "django.contrib.staticfiles.finders.AppDirectoriesFinder",
233 ]
234
235 # MEDIA CONFIGURATION
236 # ------------------------------------------------------------------------------
237 # See: https://docs.djangoproject.com/en/dev/ref/settings/#media-root
238 MEDIA_ROOT = str(ROOT_DIR("media"))
239
240 # See: https://docs.djangoproject.com/en/dev/ref/settings/#media-url
241 MEDIA_URL = "/media/"
242
243 # URL Configuration
244 # ------------------------------------------------------------------------------
245 ROOT_URLCONF = "config.urls"
246
247 # See: https://docs.djangoproject.com/en/dev/ref/settings/#wsgi-application
248 WSGI_APPLICATION = "config.wsgi.application"
249
250 # PASSWORD VALIDATION
251 # https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators
252 # ------------------------------------------------------------------------------
253
254 AUTH_PASSWORD_VALIDATORS = [
255 {
256 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
257 },
258 {
259 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
260 },
261 {
262 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
263 },
264 {
265 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
266 },
267 ]
268
269 # SEARCH CONFIGURATION
270 # ------------------------------------------------------------------------------
271 # See: http://django-haystack.readthedocs.io/en/v2.6.0/settings.html
272 HAYSTACK_CONNECTIONS = {
273 "default": {
274 "ENGINE": "haystack.backends.whoosh_backend.WhooshEngine",
275 "PATH": str(ROOT_DIR.path("whoosh_index")),
276 },
277 }
278 HAYSTACK_SEARCH_RESULTS_PER_PAGE = 10
279
280 # OTHER SETTINGS
281 # ------------------------------------------------------------------------------
282 DJANGO_PRODUCTION = env.bool("DJANGO_PRODUCTION")
283 TOPICS_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path("topics")), "content")
284 RESOURCES_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path("resources")), "content")
285 RESOURCE_GENERATION_LOCATION = os.path.join(str(ROOT_DIR.path("staticfiles")), "resources")
286 RESOURCE_GENERATORS_PACKAGE = "resources.generators"
287 RESOURCE_COPY_AMOUNT = 20
288 SCRATCH_GENERATION_LOCATION = str(ROOT_DIR.path("temp"))
289 CUSTOM_VERTO_TEMPLATES = os.path.join(str(ROOT_DIR.path("utils")), "custom_converter_templates", "")
290 MODELTRANSLATION_CUSTOM_FIELDS = ("JSONField",)
291
[end of csunplugged/config/settings/base.py]
[start of csunplugged/config/urls.py]
1 """URL configuration for the Django system.
2
3 The `urlpatterns` list routes URLs to views. For more information please see:
4 https://docs.djangoproject.com/en/dev/topics/http/urls/
5 """
6
7 from django.conf import settings
8 from django.conf.urls import include, url
9 from django.conf.urls.i18n import i18n_patterns
10 from django.contrib import admin
11 from general import views
12
13 urlpatterns = i18n_patterns(
14 url(r"", include("general.urls", namespace="general")),
15 url(r"^topics/", include("topics.urls", namespace="topics")),
16 url(r"^resources/", include("resources.urls", namespace="resources")),
17 url(r"^admin/", include(admin.site.urls)),
18 url(r"^search/", include("search.urls", namespace="search")),
19 )
20
21 urlpatterns += [
22 url(r"", include("classic.urls")),
23 url(r"^_ah/health", views.health_check),
24 ]
25
26 if settings.DEBUG: # pragma: no cover
27 import debug_toolbar
28 urlpatterns += [
29 url(r"^__debug__/", include(debug_toolbar.urls)),
30 ]
31 urlpatterns += i18n_patterns(
32 url(r"^__dev__/", include("dev.urls", namespace="dev")),
33 )
34 # These patterns allows these error pages to be debugged during development.
35 from django.views import defaults
36 urlpatterns += [
37 url(r'^400/$', defaults.bad_request, kwargs={'exception': Exception("Bad request")}),
38 url(r'^403/$', defaults.permission_denied, kwargs={'exception': Exception("Permissin denied")}),
39 url(r'^404/$', defaults.page_not_found, kwargs={'exception': Exception("Page not found")}),
40 url(r'^500/$', defaults.server_error),
41 ]
42
[end of csunplugged/config/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/csunplugged/config/settings/base.py b/csunplugged/config/settings/base.py
--- a/csunplugged/config/settings/base.py
+++ b/csunplugged/config/settings/base.py
@@ -36,9 +36,6 @@
# Useful template tags
"django.contrib.humanize",
-
- # Admin
- "django.contrib.admin",
]
THIRD_PARTY_APPS = [
diff --git a/csunplugged/config/urls.py b/csunplugged/config/urls.py
--- a/csunplugged/config/urls.py
+++ b/csunplugged/config/urls.py
@@ -7,14 +7,12 @@
from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.i18n import i18n_patterns
-from django.contrib import admin
from general import views
urlpatterns = i18n_patterns(
url(r"", include("general.urls", namespace="general")),
url(r"^topics/", include("topics.urls", namespace="topics")),
url(r"^resources/", include("resources.urls", namespace="resources")),
- url(r"^admin/", include(admin.site.urls)),
url(r"^search/", include("search.urls", namespace="search")),
)
|
{"golden_diff": "diff --git a/csunplugged/config/settings/base.py b/csunplugged/config/settings/base.py\n--- a/csunplugged/config/settings/base.py\n+++ b/csunplugged/config/settings/base.py\n@@ -36,9 +36,6 @@\n \n # Useful template tags\n \"django.contrib.humanize\",\n-\n- # Admin\n- \"django.contrib.admin\",\n ]\n \n THIRD_PARTY_APPS = [\ndiff --git a/csunplugged/config/urls.py b/csunplugged/config/urls.py\n--- a/csunplugged/config/urls.py\n+++ b/csunplugged/config/urls.py\n@@ -7,14 +7,12 @@\n from django.conf import settings\n from django.conf.urls import include, url\n from django.conf.urls.i18n import i18n_patterns\n-from django.contrib import admin\n from general import views\n \n urlpatterns = i18n_patterns(\n url(r\"\", include(\"general.urls\", namespace=\"general\")),\n url(r\"^topics/\", include(\"topics.urls\", namespace=\"topics\")),\n url(r\"^resources/\", include(\"resources.urls\", namespace=\"resources\")),\n- url(r\"^admin/\", include(admin.site.urls)),\n url(r\"^search/\", include(\"search.urls\", namespace=\"search\")),\n )\n", "issue": "Remove admin application\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nBase Django settings for CS Unplugged project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport environ\nimport os.path\n\n# Add custom languages not provided by Django\nimport django.conf.locale\nfrom django.conf import global_settings\nfrom django.utils.translation import ugettext_lazy as _\n\n# cs-unplugged/csunplugged/config/settings/base.py - 3 = csunplugged/\nROOT_DIR = environ.Path(__file__) - 3\n\n# Load operating system environment variables and then prepare to use them\nenv = environ.Env()\n\n# APP CONFIGURATION\n# ----------------------------------------------------------------------------\nDJANGO_APPS = [\n # Default Django apps:\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.postgres\",\n\n # Useful template tags\n \"django.contrib.humanize\",\n\n # Admin\n \"django.contrib.admin\",\n]\n\nTHIRD_PARTY_APPS = [\n \"django_bootstrap_breadcrumbs\",\n \"haystack\",\n \"widget_tweaks\",\n \"modeltranslation\",\n \"bidiutils\",\n]\n\n# Apps specific for this project go here.\nLOCAL_APPS = [\n \"general.apps.GeneralConfig\",\n \"topics.apps.TopicsConfig\",\n \"resources.apps.ResourcesConfig\",\n \"search.apps.SearchConfig\",\n]\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps\nINSTALLED_APPS = DJANGO_APPS + LOCAL_APPS + THIRD_PARTY_APPS\n\n# MIDDLEWARE CONFIGURATION\n# ----------------------------------------------------------------------------\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\n# DEBUG\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#debug\nDEBUG = env.bool(\"DJANGO_DEBUG\", False)\n\n# FIXTURE 
CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-FIXTURE_DIRS\nFIXTURE_DIRS = (\n str(ROOT_DIR.path(\"fixtures\")),\n)\n\n# EMAIL CONFIGURATION\n# -----------------------------------------------------------------------------\n# EMAIL_BACKEND = env(\"DJANGO_EMAIL_BACKEND\",\n# default=\"django.core.mail.backends.smtp.EmailBackend\")\n\n# MANAGER CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#admins\n# ADMINS = [\n# (\"University of Canterbury Computer Science Research Group\",\n# \"[email protected]\"),\n# ]\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#managers\n# MANAGERS = ADMINS\n\n# GENERAL CONFIGURATION\n# ----------------------------------------------------------------------------\n# Local time zone for this installation. Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# In a Windows environment this must be set to your system time zone.\nTIME_ZONE = \"UTC\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#language-code\nLANGUAGE_CODE = \"en\"\n\nINCONTEXT_L10N_PSEUDOLANGUAGE = \"xx-lr\"\nINCONTEXT_L10N_PSEUDOLANGUAGE_BIDI = \"yy-rl\"\nINCONTEXT_L10N_PSEUDOLANGUAGES = (\n INCONTEXT_L10N_PSEUDOLANGUAGE,\n INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI\n)\n\nLANGUAGES = (\n (\"en\", \"English\"),\n)\n\nif env.bool(\"INCLUDE_INCONTEXT_L10N\", False):\n EXTRA_LANGUAGES = [\n (INCONTEXT_L10N_PSEUDOLANGUAGE, \"Translation mode\"),\n (INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI, \"Translation mode (Bi-directional)\"),\n ]\n\n EXTRA_LANG_INFO = {\n INCONTEXT_L10N_PSEUDOLANGUAGE: {\n 'bidi': False,\n 'code': INCONTEXT_L10N_PSEUDOLANGUAGE,\n 'name': \"Translation mode\",\n 'name_local': _(\"Translation mode\"),\n },\n INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI: {\n 'bidi': True,\n 'code': INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI,\n 'name': \"Translation mode (Bi-directional)\",\n 'name_local': _(\"Translation mode (Bi-directional)\"),\n }\n }\n\n django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)\n # Add new languages to the list of all django languages\n global_settings.LANGUAGES = global_settings.LANGUAGES + EXTRA_LANGUAGES\n global_settings.LANGUAGES_BIDI = (global_settings.LANGUAGES_BIDI +\n [INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI.split('-')[0]])\n # Add new languages to the list of languages used for this project\n LANGUAGES += tuple(EXTRA_LANGUAGES)\n LANGUAGES_BIDI = global_settings.LANGUAGES_BIDI\n\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#site-id\nSITE_ID = 1\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-i18n\nUSE_I18N = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-l10n\nUSE_L10N = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-tz\nUSE_TZ = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#locale-paths\nLOCALE_PATHS = [\"locale\"]\n\n# TEMPLATE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#templates\nTEMPLATES = [\n {\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs\n 
\"DIRS\": [\n str(ROOT_DIR.path(\"templates\")),\n ],\n \"OPTIONS\": {\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-debug\n \"debug\": DEBUG,\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders\n # https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types\n \"loaders\": [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.static\",\n \"django.template.context_processors.tz\",\n \"django.contrib.messages.context_processors.messages\",\n \"config.context_processors.version_number.version_number\",\n \"config.context_processors.deployed.deployed\",\n \"bidiutils.context_processors.bidi\",\n ],\n \"libraries\": {\n \"render_html_field\": \"config.templatetags.render_html_field\",\n \"translate_url\": \"config.templatetags.translate_url\",\n \"query_replace\": \"config.templatetags.query_replace\",\n },\n },\n },\n]\n\n# STATIC FILE CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#static-root\nSTATIC_ROOT = os.path.join(str(ROOT_DIR.path(\"staticfiles\")), \"\")\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#static-url\nBUILD_ROOT = os.path.join(str(ROOT_DIR.path(\"build\")), \"\")\nSTATIC_URL = \"/staticfiles/\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS\nSTATICFILES_DIRS = [\n BUILD_ROOT,\n]\n\n# See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#staticfiles-finders\nSTATICFILES_FINDERS = [\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n]\n\n# MEDIA CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-root\nMEDIA_ROOT = str(ROOT_DIR(\"media\"))\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-url\nMEDIA_URL = \"/media/\"\n\n# URL Configuration\n# ------------------------------------------------------------------------------\nROOT_URLCONF = \"config.urls\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#wsgi-application\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# PASSWORD VALIDATION\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\n# ------------------------------------------------------------------------------\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# SEARCH CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: http://django-haystack.readthedocs.io/en/v2.6.0/settings.html\nHAYSTACK_CONNECTIONS = {\n \"default\": {\n 
\"ENGINE\": \"haystack.backends.whoosh_backend.WhooshEngine\",\n \"PATH\": str(ROOT_DIR.path(\"whoosh_index\")),\n },\n}\nHAYSTACK_SEARCH_RESULTS_PER_PAGE = 10\n\n# OTHER SETTINGS\n# ------------------------------------------------------------------------------\nDJANGO_PRODUCTION = env.bool(\"DJANGO_PRODUCTION\")\nTOPICS_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path(\"topics\")), \"content\")\nRESOURCES_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path(\"resources\")), \"content\")\nRESOURCE_GENERATION_LOCATION = os.path.join(str(ROOT_DIR.path(\"staticfiles\")), \"resources\")\nRESOURCE_GENERATORS_PACKAGE = \"resources.generators\"\nRESOURCE_COPY_AMOUNT = 20\nSCRATCH_GENERATION_LOCATION = str(ROOT_DIR.path(\"temp\"))\nCUSTOM_VERTO_TEMPLATES = os.path.join(str(ROOT_DIR.path(\"utils\")), \"custom_converter_templates\", \"\")\nMODELTRANSLATION_CUSTOM_FIELDS = (\"JSONField\",)\n", "path": "csunplugged/config/settings/base.py"}, {"content": "\"\"\"URL configuration for the Django system.\n\nThe `urlpatterns` list routes URLs to views. For more information please see:\n https://docs.djangoproject.com/en/dev/topics/http/urls/\n\"\"\"\n\nfrom django.conf import settings\nfrom django.conf.urls import include, url\nfrom django.conf.urls.i18n import i18n_patterns\nfrom django.contrib import admin\nfrom general import views\n\nurlpatterns = i18n_patterns(\n url(r\"\", include(\"general.urls\", namespace=\"general\")),\n url(r\"^topics/\", include(\"topics.urls\", namespace=\"topics\")),\n url(r\"^resources/\", include(\"resources.urls\", namespace=\"resources\")),\n url(r\"^admin/\", include(admin.site.urls)),\n url(r\"^search/\", include(\"search.urls\", namespace=\"search\")),\n)\n\nurlpatterns += [\n url(r\"\", include(\"classic.urls\")),\n url(r\"^_ah/health\", views.health_check),\n]\n\nif settings.DEBUG: # pragma: no cover\n import debug_toolbar\n urlpatterns += [\n url(r\"^__debug__/\", include(debug_toolbar.urls)),\n ]\n urlpatterns += i18n_patterns(\n url(r\"^__dev__/\", include(\"dev.urls\", namespace=\"dev\")),\n )\n # These patterns allows these error pages to be debugged during development.\n from django.views import defaults\n urlpatterns += [\n url(r'^400/$', defaults.bad_request, kwargs={'exception': Exception(\"Bad request\")}),\n url(r'^403/$', defaults.permission_denied, kwargs={'exception': Exception(\"Permissin denied\")}),\n url(r'^404/$', defaults.page_not_found, kwargs={'exception': Exception(\"Page not found\")}),\n url(r'^500/$', defaults.server_error),\n ]\n", "path": "csunplugged/config/urls.py"}]}
| 4,046 | 268 |
gh_patches_debug_8762
|
rasdani/github-patches
|
git_diff
|
aws__sagemaker-python-sdk-1848
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: AttributeError: 'NoneType' object has no attribute 'module' when applying the v2 upgrade tool
**Describe the bug**
When applying the `sagemaker-upgrade-v2` automated CLI tool, earlier ast modification leads to an `AttributeError` exception.
**To reproduce**
1. Craft a script that will be modified by a SerDe import renamer
1. Apply the V2 upgrade CLI tool:
An `AttributeError` exception is raised.
**Expected behavior**
The V2 upgrade CLI would properly upgrade the script.
**Screenshots or logs**
```
❯ cat v1.py
import sagemaker
from sagemaker.predictor import csv_serializer
csv_serializer.__doc___
❯ sagemaker-upgrade-v2 --in-file v1.py --out-file v2.py
Traceback (most recent call last):
File "~/testvenv/bin/sagemaker-upgrade-v2", line 8, in <module>
sys.exit(main())
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/sagemaker_upgrade_v2.py", line 78, in main
_update_file(args.in_file, args.out_file)
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/sagemaker_upgrade_v2.py", line 50, in _update_file
updater_cls(input_path=input_file, output_path=output_file).update()
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/files.py", line 72, in update
output = self._update_ast(self._read_input_file())
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/files.py", line 86, in _update_ast
return ASTTransformer().visit(input_ast)
File "/usr/lib/python3.8/ast.py", line 363, in visit
return visitor(node)
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/ast_transformer.py", line 136, in visit_Module
self.generic_visit(node)
File "/usr/lib/python3.8/ast.py", line 439, in generic_visit
value = self.visit(value)
File "/usr/lib/python3.8/ast.py", line 363, in visit
return visitor(node)
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/ast_transformer.py", line 155, in visit_ImportFrom
node = import_checker.check_and_modify_node(node)
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/modifiers/modifier.py", line 26, in check_and_modify_node
if self.node_should_be_modified(node):
File "~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/modifiers/image_uris.py", line 115, in node_should_be_modified
return node.module in GET_IMAGE_URI_NAMESPACES and any(
AttributeError: 'NoneType' object has no attribute 'module'
```
**System information**
A description of your system. Please provide:
- **SageMaker Python SDK version**: 2.4.1
- **Framework name (eg. PyTorch) or algorithm (eg. KMeans)**:
- **Framework version**:
- **Python version**:
- **CPU or GPU**:
- **Custom Docker image (Y/N)**:
**Additional context**
The problem comes from [the ordering](https://github.com/aws/sagemaker-python-sdk/blob/v2.4.1/src/sagemaker/cli/compatibility/v2/ast_transformer.py#L59-L60), which the existing, isolated unit tests do not cover. The earlier renamer modifies the ast, and the later renamer cannot handle this situation:
```
59 modifiers.serde.SerdeImportFromPredictorRenamer(),
60 modifiers.image_uris.ImageURIRetrieveImportFromRenamer(),
```
</issue>
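Reading the traceback together with the ordering note, the crash happens because a previously applied modifier can hand the next one a `None` node. A minimal guard in `ImageURIRetrieveImportFromRenamer.node_should_be_modified` (sketched below, reusing the module-level constants shown in the code that follows) avoids dereferencing it:
```python
def node_should_be_modified(self, node):
    # The SerDe import renamer that runs first can drop the ImportFrom node
    # (returning None), so tolerate None before reading node.module.
    return (
        node is not None
        and node.module in GET_IMAGE_URI_NAMESPACES
        and any(name.name == GET_IMAGE_URI_NAME for name in node.names)
    )
```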
<code>
[start of src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py]
1 # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """Classes to modify image uri retrieve methods for Python SDK v2.0 and later."""
14 from __future__ import absolute_import
15
16 import ast
17
18 from sagemaker.cli.compatibility.v2.modifiers import matching
19 from sagemaker.cli.compatibility.v2.modifiers.modifier import Modifier
20
21 GET_IMAGE_URI_NAME = "get_image_uri"
22 GET_IMAGE_URI_NAMESPACES = (
23 "sagemaker",
24 "sagemaker.amazon_estimator",
25 "sagemaker.amazon.amazon_estimator",
26 "amazon_estimator",
27 "amazon.amazon_estimator",
28 )
29
30
31 class ImageURIRetrieveRefactor(Modifier):
32 """A class to refactor *get_image_uri() method."""
33
34 def node_should_be_modified(self, node):
35 """Checks if the ``ast.Call`` node calls a function of interest.
36
37 This looks for the following calls:
38
39 - ``sagemaker.get_image_uri``
40 - ``sagemaker.amazon_estimator.get_image_uri``
41 - ``get_image_uri``
42
43 Args:
44 node (ast.Call): a node that represents a function call. For more,
45 see https://docs.python.org/3/library/ast.html#abstract-grammar.
46
47 Returns:
48 bool: If the ``ast.Call`` instantiates a class of interest.
49 """
50 return matching.matches_name_or_namespaces(
51 node, GET_IMAGE_URI_NAME, GET_IMAGE_URI_NAMESPACES
52 )
53
54 def modify_node(self, node):
55 """Modifies the ``ast.Call`` node to call ``image_uris.retrieve`` instead.
56 And switch the first two parameters from (region, repo) to (framework, region)
57
58 Args:
59 node (ast.Call): a node that represents a *image_uris.retrieve call.
60 """
61 original_args = [None] * 3
62 for kw in node.keywords:
63 if kw.arg == "repo_name":
64 original_args[0] = ast.Str(kw.value.s)
65 elif kw.arg == "repo_region":
66 original_args[1] = ast.Str(kw.value.s)
67 elif kw.arg == "repo_version":
68 original_args[2] = ast.Str(kw.value.s)
69
70 if len(node.args) > 0:
71 original_args[1] = ast.Str(node.args[0].s)
72 if len(node.args) > 1:
73 original_args[0] = ast.Str(node.args[1].s)
74 if len(node.args) > 2:
75 original_args[2] = ast.Str(node.args[2].s)
76
77 args = []
78 for arg in original_args:
79 if arg:
80 args.append(arg)
81
82 func = node.func
83 has_sagemaker = False
84 while hasattr(func, "value"):
85 if hasattr(func.value, "id") and func.value.id == "sagemaker":
86 has_sagemaker = True
87 break
88 func = func.value
89
90 if has_sagemaker:
91 node.func = ast.Attribute(
92 value=ast.Attribute(attr="image_uris", value=ast.Name(id="sagemaker")),
93 attr="retrieve",
94 )
95 else:
96 node.func = ast.Attribute(value=ast.Name(id="image_uris"), attr="retrieve")
97 node.args = args
98 node.keywords = []
99 return node
100
101
102 class ImageURIRetrieveImportFromRenamer(Modifier):
103 """A class to update import statements of ``get_image_uri``."""
104
105 def node_should_be_modified(self, node):
106 """Checks if the import statement imports ``get_image_uri`` from the correct module.
107
108 Args:
109 node (ast.ImportFrom): a node that represents a ``from ... import ... `` statement.
110 For more, see https://docs.python.org/3/library/ast.html#abstract-grammar.
111
112 Returns:
113 bool: If the import statement imports ``get_image_uri`` from the correct module.
114 """
115 return node.module in GET_IMAGE_URI_NAMESPACES and any(
116 name.name == GET_IMAGE_URI_NAME for name in node.names
117 )
118
119 def modify_node(self, node):
120 """Changes the ``ast.ImportFrom`` node's name from ``get_image_uri`` to ``image_uris``.
121
122 Args:
123 node (ast.ImportFrom): a node that represents a ``from ... import ... `` statement.
124 For more, see https://docs.python.org/3/library/ast.html#abstract-grammar.
125
126 Returns:
127 ast.AST: the original node, which has been potentially modified.
128 """
129 for name in node.names:
130 if name.name == GET_IMAGE_URI_NAME:
131 name.name = "image_uris"
132 if node.module in GET_IMAGE_URI_NAMESPACES:
133 node.module = "sagemaker"
134 return node
135
[end of src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py b/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py
--- a/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py
+++ b/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py
@@ -112,8 +112,10 @@
Returns:
bool: If the import statement imports ``get_image_uri`` from the correct module.
"""
- return node.module in GET_IMAGE_URI_NAMESPACES and any(
- name.name == GET_IMAGE_URI_NAME for name in node.names
+ return (
+ node is not None
+ and node.module in GET_IMAGE_URI_NAMESPACES
+ and any(name.name == GET_IMAGE_URI_NAME for name in node.names)
)
def modify_node(self, node):
|
{"golden_diff": "diff --git a/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py b/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py\n--- a/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py\n+++ b/src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py\n@@ -112,8 +112,10 @@\n Returns:\n bool: If the import statement imports ``get_image_uri`` from the correct module.\n \"\"\"\n- return node.module in GET_IMAGE_URI_NAMESPACES and any(\n- name.name == GET_IMAGE_URI_NAME for name in node.names\n+ return (\n+ node is not None\n+ and node.module in GET_IMAGE_URI_NAMESPACES\n+ and any(name.name == GET_IMAGE_URI_NAME for name in node.names)\n )\n \n def modify_node(self, node):\n", "issue": "bug: AttributeError: 'NoneType' object has no attribute 'module' when applying the v2 upgrade tool\n**Describe the bug**\r\n\r\nWhen applying the `sagemaker-upgrade-v2` automated CLI tool, earlier ast modification leads to an `AttributeError` exception.\r\n\r\n**To reproduce**\r\n\r\n1. Craft a script that will be modified by a SerDe import renamer\r\n1. Apply the V2 upgrade CLI tool: \r\n\r\nAn `AttributeError` exception is raised.\r\n\r\n**Expected behavior**\r\n\r\nThe V2 upgrade CLI would properly upgrade the script.\r\n\r\n**Screenshots or logs**\r\n\r\n```\r\n\u276f cat v1.py\r\nimport sagemaker\r\n\r\nfrom sagemaker.predictor import csv_serializer\r\n\r\ncsv_serializer.__doc___\r\n\r\n\u276f sagemaker-upgrade-v2 --in-file v1.py --out-file v2.py\r\nTraceback (most recent call last):\r\n File \"~/testvenv/bin/sagemaker-upgrade-v2\", line 8, in <module>\r\n sys.exit(main())\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/sagemaker_upgrade_v2.py\", line 78, in main\r\n _update_file(args.in_file, args.out_file)\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/sagemaker_upgrade_v2.py\", line 50, in _update_file\r\n updater_cls(input_path=input_file, output_path=output_file).update()\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/files.py\", line 72, in update\r\n output = self._update_ast(self._read_input_file())\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/files.py\", line 86, in _update_ast\r\n return ASTTransformer().visit(input_ast)\r\n File \"/usr/lib/python3.8/ast.py\", line 363, in visit\r\n return visitor(node)\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/ast_transformer.py\", line 136, in visit_Module\r\n self.generic_visit(node)\r\n File \"/usr/lib/python3.8/ast.py\", line 439, in generic_visit\r\n value = self.visit(value)\r\n File \"/usr/lib/python3.8/ast.py\", line 363, in visit\r\n return visitor(node)\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/ast_transformer.py\", line 155, in visit_ImportFrom\r\n node = import_checker.check_and_modify_node(node)\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/modifiers/modifier.py\", line 26, in check_and_modify_node\r\n if self.node_should_be_modified(node):\r\n File \"~/testvenv/lib/python3.8/site-packages/sagemaker/cli/compatibility/v2/modifiers/image_uris.py\", line 115, in node_should_be_modified\r\n return node.module in GET_IMAGE_URI_NAMESPACES and any(\r\nAttributeError: 'NoneType' object has no attribute 'module'\r\n```\r\n\r\n**System information**\r\nA description of your system. Please provide:\r\n- **SageMaker Python SDK version**: 2.4.1\r\n- **Framework name (eg. 
PyTorch) or algorithm (eg. KMeans)**:\r\n- **Framework version**:\r\n- **Python version**:\r\n- **CPU or GPU**:\r\n- **Custom Docker image (Y/N)**:\r\n\r\n**Additional context**\r\n\r\nThe problem comes from [the ordering](https://github.com/aws/sagemaker-python-sdk/blob/v2.4.1/src/sagemaker/cli/compatibility/v2/ast_transformer.py#L59-L60), which the existing, isolated unit tests do not cover. The earlier renamer modifies the ast, and the later renamer cannot handle this situation:\r\n\r\n```\r\n 59 modifiers.serde.SerdeImportFromPredictorRenamer(),\r\n 60 modifiers.image_uris.ImageURIRetrieveImportFromRenamer(),\r\n```\n", "before_files": [{"content": "# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"Classes to modify image uri retrieve methods for Python SDK v2.0 and later.\"\"\"\nfrom __future__ import absolute_import\n\nimport ast\n\nfrom sagemaker.cli.compatibility.v2.modifiers import matching\nfrom sagemaker.cli.compatibility.v2.modifiers.modifier import Modifier\n\nGET_IMAGE_URI_NAME = \"get_image_uri\"\nGET_IMAGE_URI_NAMESPACES = (\n \"sagemaker\",\n \"sagemaker.amazon_estimator\",\n \"sagemaker.amazon.amazon_estimator\",\n \"amazon_estimator\",\n \"amazon.amazon_estimator\",\n)\n\n\nclass ImageURIRetrieveRefactor(Modifier):\n \"\"\"A class to refactor *get_image_uri() method.\"\"\"\n\n def node_should_be_modified(self, node):\n \"\"\"Checks if the ``ast.Call`` node calls a function of interest.\n\n This looks for the following calls:\n\n - ``sagemaker.get_image_uri``\n - ``sagemaker.amazon_estimator.get_image_uri``\n - ``get_image_uri``\n\n Args:\n node (ast.Call): a node that represents a function call. 
For more,\n see https://docs.python.org/3/library/ast.html#abstract-grammar.\n\n Returns:\n bool: If the ``ast.Call`` instantiates a class of interest.\n \"\"\"\n return matching.matches_name_or_namespaces(\n node, GET_IMAGE_URI_NAME, GET_IMAGE_URI_NAMESPACES\n )\n\n def modify_node(self, node):\n \"\"\"Modifies the ``ast.Call`` node to call ``image_uris.retrieve`` instead.\n And switch the first two parameters from (region, repo) to (framework, region)\n\n Args:\n node (ast.Call): a node that represents a *image_uris.retrieve call.\n \"\"\"\n original_args = [None] * 3\n for kw in node.keywords:\n if kw.arg == \"repo_name\":\n original_args[0] = ast.Str(kw.value.s)\n elif kw.arg == \"repo_region\":\n original_args[1] = ast.Str(kw.value.s)\n elif kw.arg == \"repo_version\":\n original_args[2] = ast.Str(kw.value.s)\n\n if len(node.args) > 0:\n original_args[1] = ast.Str(node.args[0].s)\n if len(node.args) > 1:\n original_args[0] = ast.Str(node.args[1].s)\n if len(node.args) > 2:\n original_args[2] = ast.Str(node.args[2].s)\n\n args = []\n for arg in original_args:\n if arg:\n args.append(arg)\n\n func = node.func\n has_sagemaker = False\n while hasattr(func, \"value\"):\n if hasattr(func.value, \"id\") and func.value.id == \"sagemaker\":\n has_sagemaker = True\n break\n func = func.value\n\n if has_sagemaker:\n node.func = ast.Attribute(\n value=ast.Attribute(attr=\"image_uris\", value=ast.Name(id=\"sagemaker\")),\n attr=\"retrieve\",\n )\n else:\n node.func = ast.Attribute(value=ast.Name(id=\"image_uris\"), attr=\"retrieve\")\n node.args = args\n node.keywords = []\n return node\n\n\nclass ImageURIRetrieveImportFromRenamer(Modifier):\n \"\"\"A class to update import statements of ``get_image_uri``.\"\"\"\n\n def node_should_be_modified(self, node):\n \"\"\"Checks if the import statement imports ``get_image_uri`` from the correct module.\n\n Args:\n node (ast.ImportFrom): a node that represents a ``from ... import ... `` statement.\n For more, see https://docs.python.org/3/library/ast.html#abstract-grammar.\n\n Returns:\n bool: If the import statement imports ``get_image_uri`` from the correct module.\n \"\"\"\n return node.module in GET_IMAGE_URI_NAMESPACES and any(\n name.name == GET_IMAGE_URI_NAME for name in node.names\n )\n\n def modify_node(self, node):\n \"\"\"Changes the ``ast.ImportFrom`` node's name from ``get_image_uri`` to ``image_uris``.\n\n Args:\n node (ast.ImportFrom): a node that represents a ``from ... import ... `` statement.\n For more, see https://docs.python.org/3/library/ast.html#abstract-grammar.\n\n Returns:\n ast.AST: the original node, which has been potentially modified.\n \"\"\"\n for name in node.names:\n if name.name == GET_IMAGE_URI_NAME:\n name.name = \"image_uris\"\n if node.module in GET_IMAGE_URI_NAMESPACES:\n node.module = \"sagemaker\"\n return node\n", "path": "src/sagemaker/cli/compatibility/v2/modifiers/image_uris.py"}]}
| 2,891 | 196 |
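The guard added in the golden diff of the record above is worth seeing in isolation: an earlier AST pass can hand the checker a `None` node, so `node.module` must not be dereferenced unconditionally. A minimal sketch of that pattern against the standard `ast` module — the `GET_IMAGE_URI_*` constants are abridged from the record, and the `isinstance` check and demo lines are illustrative additions, not the project's actual code:

```python
import ast

GET_IMAGE_URI_NAME = "get_image_uri"
GET_IMAGE_URI_NAMESPACES = ("sagemaker", "sagemaker.amazon_estimator")  # abridged


def node_should_be_modified(node):
    # Guard first: an earlier AST pass may have replaced or dropped the node,
    # so dereferencing node.module unconditionally raises AttributeError.
    return (
        node is not None
        and isinstance(node, ast.ImportFrom)
        and node.module in GET_IMAGE_URI_NAMESPACES
        and any(alias.name == GET_IMAGE_URI_NAME for alias in node.names)
    )


import_node = ast.parse("from sagemaker import get_image_uri").body[0]
print(node_should_be_modified(import_node))  # True
print(node_should_be_modified(None))         # False, instead of AttributeError
```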
gh_patches_debug_26426
|
rasdani/github-patches
|
git_diff
|
weni-ai__bothub-engine-106
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Disallow samples without intent or entities
Disallow sample creation without an intent or at least one entity.
</issue>
<code>
[start of bothub/api/validators.py]
1 from django.utils.translation import gettext as _
2 from rest_framework.exceptions import PermissionDenied
3 from rest_framework.exceptions import ValidationError
4
5 from bothub.common.models import RepositoryTranslatedExample
6
7
8 class CanContributeInRepositoryValidator(object):
9 def __call__(self, value):
10 user_authorization = value.get_user_authorization(
11 self.request.user)
12 if not user_authorization.can_contribute:
13 raise PermissionDenied(
14 _('You can\'t contribute in this repository'))
15
16 def set_context(self, serializer):
17 self.request = serializer.context.get('request')
18
19
20 class CanContributeInRepositoryExampleValidator(object):
21 def __call__(self, value):
22 repository = value.repository_update.repository
23 user_authorization = repository.get_user_authorization(
24 self.request.user)
25 if not user_authorization.can_contribute:
26 raise PermissionDenied(
27 _('You can\'t contribute in this repository'))
28
29 def set_context(self, serializer):
30 self.request = serializer.context.get('request')
31
32
33 class CanContributeInRepositoryTranslatedExampleValidator(object):
34 def __call__(self, value):
35 repository = value.original_example.repository_update.repository
36 user_authorization = repository.get_user_authorization(
37 self.request.user)
38 if not user_authorization.can_contribute:
39 raise PermissionDenied(
40 _('You can\'t contribute in this repository'))
41
42 def set_context(self, serializer):
43 self.request = serializer.context.get('request')
44
45
46 class TranslatedExampleEntitiesValidator(object):
47 def __call__(self, attrs):
48 original_example = attrs.get('original_example')
49 entities_valid = RepositoryTranslatedExample.same_entities_validator(
50 list(map(lambda x: dict(x), attrs.get('entities'))),
51 list(map(lambda x: x.to_dict, original_example.entities.all())))
52 if not entities_valid:
53 raise ValidationError({'entities': _('Invalid entities')})
54
55
56 class TranslatedExampleLanguageValidator(object):
57 def __call__(self, attrs):
58 original_example = attrs.get('original_example')
59 language = attrs.get('language')
60 if original_example.repository_update.language == language:
61 raise ValidationError({'language': _(
62 'Can\'t translate to the same language')})
63
[end of bothub/api/validators.py]
[start of bothub/api/serializers/example.py]
1 from rest_framework import serializers
2
3 from django.utils.translation import gettext as _
4
5 from bothub.common.models import Repository
6 from bothub.common.models import RepositoryExample
7 from bothub.common.models import RepositoryExampleEntity
8
9 from ..fields import EntityText
10 from ..validators import CanContributeInRepositoryExampleValidator
11 from ..validators import CanContributeInRepositoryValidator
12 from .translate import RepositoryTranslatedExampleSerializer
13
14
15 class RepositoryExampleEntitySerializer(serializers.ModelSerializer):
16 class Meta:
17 model = RepositoryExampleEntity
18 fields = [
19 'id',
20 'repository_example',
21 'start',
22 'end',
23 'entity',
24 'created_at',
25 'value',
26 ]
27
28 repository_example = serializers.PrimaryKeyRelatedField(
29 queryset=RepositoryExample.objects,
30 validators=[
31 CanContributeInRepositoryExampleValidator(),
32 ],
33 help_text=_('Example\'s ID'))
34 value = serializers.SerializerMethodField()
35
36 def get_value(self, obj):
37 return obj.value
38
39
40 class NewRepositoryExampleEntitySerializer(serializers.ModelSerializer):
41 class Meta:
42 model = RepositoryExampleEntity
43 fields = [
44 'repository_example',
45 'start',
46 'end',
47 'entity',
48 ]
49
50
51 class RepositoryExampleSerializer(serializers.ModelSerializer):
52 class Meta:
53 model = RepositoryExample
54 fields = [
55 'id',
56 'repository_update',
57 'deleted_in',
58 'text',
59 'intent',
60 'language',
61 'created_at',
62 'entities',
63 'translations',
64 ]
65 read_only_fields = [
66 'repository_update',
67 'deleted_in',
68 ]
69
70 entities = RepositoryExampleEntitySerializer(
71 many=True,
72 read_only=True)
73 translations = RepositoryTranslatedExampleSerializer(
74 many=True,
75 read_only=True)
76 language = serializers.SerializerMethodField()
77
78 def get_language(self, obj):
79 return obj.language
80
81
82 class NewRepositoryExampleSerializer(serializers.ModelSerializer):
83 class Meta:
84 model = RepositoryExample
85 fields = [
86 'id',
87 'repository',
88 'repository_update',
89 'text',
90 'entities',
91 'intent',
92 ]
93
94 id = serializers.PrimaryKeyRelatedField(
95 read_only=True,
96 style={'show': False})
97 text = EntityText(style={'entities_field': 'entities'})
98 repository = serializers.PrimaryKeyRelatedField(
99 queryset=Repository.objects,
100 validators=[
101 CanContributeInRepositoryValidator(),
102 ],
103 source='repository_update',
104 style={'show': False})
105 repository_update = serializers.PrimaryKeyRelatedField(
106 read_only=True,
107 style={'show': False})
108 entities = NewRepositoryExampleEntitySerializer(
109 many=True,
110 style={'text_field': 'text'})
111
112 def validate_repository(self, repository):
113 return repository.current_update()
114
115 def create(self, validated_data):
116 entities_data = validated_data.pop('entities')
117 example = self.Meta.model.objects.create(**validated_data)
118 for entity_data in entities_data:
119 RepositoryExampleEntity.objects.create(
120 repository_example=example,
121 **entity_data)
122 return example
123
[end of bothub/api/serializers/example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bothub/api/serializers/example.py b/bothub/api/serializers/example.py
--- a/bothub/api/serializers/example.py
+++ b/bothub/api/serializers/example.py
@@ -9,6 +9,7 @@
from ..fields import EntityText
from ..validators import CanContributeInRepositoryExampleValidator
from ..validators import CanContributeInRepositoryValidator
+from ..validators import ExampleWithIntentOrEntityValidator
from .translate import RepositoryTranslatedExampleSerializer
@@ -109,6 +110,10 @@
many=True,
style={'text_field': 'text'})
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.validators.append(ExampleWithIntentOrEntityValidator())
+
def validate_repository(self, repository):
return repository.current_update()
diff --git a/bothub/api/validators.py b/bothub/api/validators.py
--- a/bothub/api/validators.py
+++ b/bothub/api/validators.py
@@ -60,3 +60,12 @@
if original_example.repository_update.language == language:
raise ValidationError({'language': _(
'Can\'t translate to the same language')})
+
+
+class ExampleWithIntentOrEntityValidator(object):
+ def __call__(self, attrs):
+ intent = attrs.get('intent')
+ entities = attrs.get('entities')
+
+ if not intent and not entities:
+ raise ValidationError(_('Define a intent or one entity'))
|
{"golden_diff": "diff --git a/bothub/api/serializers/example.py b/bothub/api/serializers/example.py\n--- a/bothub/api/serializers/example.py\n+++ b/bothub/api/serializers/example.py\n@@ -9,6 +9,7 @@\n from ..fields import EntityText\n from ..validators import CanContributeInRepositoryExampleValidator\n from ..validators import CanContributeInRepositoryValidator\n+from ..validators import ExampleWithIntentOrEntityValidator\n from .translate import RepositoryTranslatedExampleSerializer\n \n \n@@ -109,6 +110,10 @@\n many=True,\n style={'text_field': 'text'})\n \n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.validators.append(ExampleWithIntentOrEntityValidator())\n+\n def validate_repository(self, repository):\n return repository.current_update()\n \ndiff --git a/bothub/api/validators.py b/bothub/api/validators.py\n--- a/bothub/api/validators.py\n+++ b/bothub/api/validators.py\n@@ -60,3 +60,12 @@\n if original_example.repository_update.language == language:\n raise ValidationError({'language': _(\n 'Can\\'t translate to the same language')})\n+\n+\n+class ExampleWithIntentOrEntityValidator(object):\n+ def __call__(self, attrs):\n+ intent = attrs.get('intent')\n+ entities = attrs.get('entities')\n+\n+ if not intent and not entities:\n+ raise ValidationError(_('Define a intent or one entity'))\n", "issue": "Disallow samples without intent or entities\nDisallow samples creation without an intent or one entity at least.\n", "before_files": [{"content": "from django.utils.translation import gettext as _\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.exceptions import ValidationError\n\nfrom bothub.common.models import RepositoryTranslatedExample\n\n\nclass CanContributeInRepositoryValidator(object):\n def __call__(self, value):\n user_authorization = value.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryExampleValidator(object):\n def __call__(self, value):\n repository = value.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryTranslatedExampleValidator(object):\n def __call__(self, value):\n repository = value.original_example.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass TranslatedExampleEntitiesValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n entities_valid = RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: dict(x), attrs.get('entities'))),\n list(map(lambda x: x.to_dict, original_example.entities.all())))\n if not entities_valid:\n raise ValidationError({'entities': _('Invalid entities')})\n\n\nclass TranslatedExampleLanguageValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n language = 
attrs.get('language')\n if original_example.repository_update.language == language:\n raise ValidationError({'language': _(\n 'Can\\'t translate to the same language')})\n", "path": "bothub/api/validators.py"}, {"content": "from rest_framework import serializers\n\nfrom django.utils.translation import gettext as _\n\nfrom bothub.common.models import Repository\nfrom bothub.common.models import RepositoryExample\nfrom bothub.common.models import RepositoryExampleEntity\n\nfrom ..fields import EntityText\nfrom ..validators import CanContributeInRepositoryExampleValidator\nfrom ..validators import CanContributeInRepositoryValidator\nfrom .translate import RepositoryTranslatedExampleSerializer\n\n\nclass RepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'id',\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n 'created_at',\n 'value',\n ]\n\n repository_example = serializers.PrimaryKeyRelatedField(\n queryset=RepositoryExample.objects,\n validators=[\n CanContributeInRepositoryExampleValidator(),\n ],\n help_text=_('Example\\'s ID'))\n value = serializers.SerializerMethodField()\n\n def get_value(self, obj):\n return obj.value\n\n\nclass NewRepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n ]\n\n\nclass RepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExample\n fields = [\n 'id',\n 'repository_update',\n 'deleted_in',\n 'text',\n 'intent',\n 'language',\n 'created_at',\n 'entities',\n 'translations',\n ]\n read_only_fields = [\n 'repository_update',\n 'deleted_in',\n ]\n\n entities = RepositoryExampleEntitySerializer(\n many=True,\n read_only=True)\n translations = RepositoryTranslatedExampleSerializer(\n many=True,\n read_only=True)\n language = serializers.SerializerMethodField()\n\n def get_language(self, obj):\n return obj.language\n\n\nclass NewRepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExample\n fields = [\n 'id',\n 'repository',\n 'repository_update',\n 'text',\n 'entities',\n 'intent',\n ]\n\n id = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n text = EntityText(style={'entities_field': 'entities'})\n repository = serializers.PrimaryKeyRelatedField(\n queryset=Repository.objects,\n validators=[\n CanContributeInRepositoryValidator(),\n ],\n source='repository_update',\n style={'show': False})\n repository_update = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n entities = NewRepositoryExampleEntitySerializer(\n many=True,\n style={'text_field': 'text'})\n\n def validate_repository(self, repository):\n return repository.current_update()\n\n def create(self, validated_data):\n entities_data = validated_data.pop('entities')\n example = self.Meta.model.objects.create(**validated_data)\n for entity_data in entities_data:\n RepositoryExampleEntity.objects.create(\n repository_example=example,\n **entity_data)\n return example\n", "path": "bothub/api/serializers/example.py"}]}
| 2,039 | 338 |
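The golden diff above boils down to an object-level validator that rejects a training sample unless it carries an intent or at least one entity. A framework-free sketch of the same check — the real code raises DRF's `ValidationError` and uses gettext, so a local stand-in exception is used here purely so the snippet runs on its own:

```python
class ValidationError(Exception):
    """Stand-in for rest_framework.exceptions.ValidationError."""


class ExampleWithIntentOrEntityValidator(object):
    def __call__(self, attrs):
        intent = attrs.get("intent")
        entities = attrs.get("entities")
        # Reject samples that carry neither an intent nor any entity.
        if not intent and not entities:
            raise ValidationError("Define an intent or at least one entity")


validator = ExampleWithIntentOrEntityValidator()
validator({"intent": "greet", "entities": []})   # passes: intent present
try:
    validator({"intent": "", "entities": []})    # neither intent nor entities
except ValidationError as exc:
    print(exc)                                   # Define an intent or at least one entity
```

In the actual serializer the validator is wired up in `__init__` with `self.validators.append(ExampleWithIntentOrEntityValidator())`, as the diff shows.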
gh_patches_debug_57917
|
rasdani/github-patches
|
git_diff
|
dj-stripe__dj-stripe-701
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
1.1.0 not compatible with python 2
First of all, thanks a lot for your hard work! We've been using dj-stripe for a long time and it has served us well. Now we're ready to upgrade! Let the fun begin ;).
We're using Python 2 and Django 1.11. Am I correct that 1.2 should support that? Anyway, we have to move to 1.1 first for the migrations. There is one problem, though, in the 1.1 release.
In commit https://github.com/dj-stripe/dj-stripe/commit/6a6f048a3a432a3ba40fba8bf90f8789139daec4 `StripeEnumField` was added with the non python 2 compatible `super()` call:
```name, path, args, kwargs = super().deconstruct()```
What do you guys think? Can we backport a hotfix to 1.1.1 or something?
</issue>
<code>
[start of djstripe/fields.py]
1 # -*- coding: utf-8 -*-
2 """
3 .. module:: djstripe.fields.
4
5 :synopsis: dj-stripe Custom Field Definitions
6
7 .. moduleauthor:: Bill Huneke (@wahuneke)
8 """
9 from __future__ import absolute_import, division, print_function, unicode_literals
10
11 import decimal
12
13 from django.core.exceptions import FieldError, ImproperlyConfigured
14 from django.core.validators import MaxValueValidator, MinValueValidator
15 from django.db import models
16
17 from .settings import USE_NATIVE_JSONFIELD
18 from .utils import convert_tstamp, dict_nested_accessor
19
20
21 if USE_NATIVE_JSONFIELD:
22 from django.contrib.postgres.fields import JSONField
23 else:
24 from jsonfield import JSONField
25
26
27 class PaymentMethodForeignKey(models.ForeignKey):
28 def __init__(self, **kwargs):
29 kwargs.setdefault("to", "PaymentMethod")
30 super(PaymentMethodForeignKey, self).__init__(**kwargs)
31
32
33 class StripeFieldMixin(object):
34 """
35 Custom fields for all Stripe data.
36
37 This allows keeping track of which database fields are suitable for
38 sending to or receiving from Stripe. Also, allows a few handy extra parameters.
39 """
40
41 # Used if the name at stripe is different from the name in our database
42 # Include a . in name if value is nested in dict in Stripe's object
43 # (e.g. stripe_name = "data.id" --> obj["data"]["id"])
44 stripe_name = None
45
46 # If stripe_name is None, this can also be used to specify a nested value, but
47 # the final value is assumed to be the database field name
48 # (e.g. nested_name = "data" --> obj["data"][db_field_name]
49 nested_name = None
50
51 # This indicates that this field will always appear in a stripe object. It will be
52 # an Exception if we try to parse a stripe object that does not include this field
53 # in the data. If set to False then null=True attribute will be automatically set
54 stripe_required = True
55
56 # If a field was populated in previous API versions but we don't want to drop the old
57 # data for some reason, mark it as deprecated. This will make sure we never try to send
58 # it to Stripe or expect in Stripe data received
59 # This setting automatically implies Null=True
60 deprecated = False
61
62 def __init__(self, *args, **kwargs):
63 """
64 Assign class instance variables based on kwargs.
65
66 Assign extra class instance variables if stripe_required is defined or
67 if deprecated is defined.
68 """
69 self.stripe_name = kwargs.pop('stripe_name', self.stripe_name)
70 self.nested_name = kwargs.pop('nested_name', self.nested_name)
71 self.stripe_required = kwargs.pop('stripe_required', self.stripe_required)
72 self.deprecated = kwargs.pop('deprecated', self.deprecated)
73 if not self.stripe_required:
74 kwargs["null"] = True
75
76 if self.deprecated:
77 kwargs["null"] = True
78 kwargs["default"] = None
79 super(StripeFieldMixin, self).__init__(*args, **kwargs)
80
81 def stripe_to_db(self, data):
82 """Try converting stripe fields to defined database fields."""
83 if not self.deprecated:
84 try:
85 if self.stripe_name:
86 result = dict_nested_accessor(data, self.stripe_name)
87 elif self.nested_name:
88 result = dict_nested_accessor(data, self.nested_name + "." + self.name)
89 else:
90 result = data[self.name]
91 except (KeyError, TypeError):
92 if self.stripe_required:
93 model_name = self.model._meta.object_name if hasattr(self, "model") else ""
94 raise FieldError("Required stripe field '{field_name}' was not"
95 " provided in {model_name} data object.".format(field_name=self.name,
96 model_name=model_name))
97 else:
98 result = None
99
100 return result
101
102
103 class StripePercentField(StripeFieldMixin, models.DecimalField):
104 """A field used to define a percent according to djstripe logic."""
105
106 def __init__(self, *args, **kwargs):
107 """Assign default args to this field."""
108 defaults = {
109 'decimal_places': 2,
110 'max_digits': 5,
111 'validators': [MinValueValidator(1.00), MaxValueValidator(100.00)]
112 }
113 defaults.update(kwargs)
114 super(StripePercentField, self).__init__(*args, **defaults)
115
116
117 class StripeCurrencyField(StripeFieldMixin, models.DecimalField):
118 """
119 A field used to define currency according to djstripe logic.
120
121 Stripe is always in cents. djstripe stores everything in dollars.
122 """
123
124 def __init__(self, *args, **kwargs):
125 """Assign default args to this field."""
126 defaults = {
127 'decimal_places': 2,
128 'max_digits': 8,
129 }
130 defaults.update(kwargs)
131 super(StripeCurrencyField, self).__init__(*args, **defaults)
132
133 def stripe_to_db(self, data):
134 """Convert the raw value to decimal representation."""
135 val = super(StripeCurrencyField, self).stripe_to_db(data)
136
137 # Note: 0 is a possible return value, which is 'falseish'
138 if val is not None:
139 return val / decimal.Decimal("100")
140
141
142 class StripeBooleanField(StripeFieldMixin, models.BooleanField):
143 """A field used to define a boolean value according to djstripe logic."""
144
145 def __init__(self, *args, **kwargs):
146 """Throw an error when a user tries to deprecate."""
147 if kwargs.get("deprecated", False):
148 raise ImproperlyConfigured("Boolean field cannot be deprecated. Change field type to "
149 "StripeNullBooleanField")
150 super(StripeBooleanField, self).__init__(*args, **kwargs)
151
152
153 class StripeNullBooleanField(StripeFieldMixin, models.NullBooleanField):
154 """A field used to define a NullBooleanField value according to djstripe logic."""
155
156 pass
157
158
159 class StripeCharField(StripeFieldMixin, models.CharField):
160 """A field used to define a CharField value according to djstripe logic."""
161
162 pass
163
164
165 class StripeEnumField(StripeCharField):
166 def __init__(self, enum, *args, **kwargs):
167 self.enum = enum
168 choices = enum.choices
169 defaults = {
170 "choices": choices,
171 "max_length": max(len(k) for k, v in choices)
172 }
173 defaults.update(kwargs)
174 super(StripeEnumField, self).__init__(*args, **defaults)
175
176 def deconstruct(self):
177 name, path, args, kwargs = super().deconstruct()
178 kwargs["enum"] = self.enum
179 del kwargs["choices"]
180 return name, path, args, kwargs
181
182
183 class StripeIdField(StripeCharField):
184 """A field with enough space to hold any stripe ID."""
185
186 def __init__(self, *args, **kwargs):
187 """
188 Assign default args to this field.
189
190 As per: https://stripe.com/docs/upgrades
191 You can safely assume object IDs we generate will never exceed 255
192 characters, but you should be able to handle IDs of up to that
193 length.
194 """
195 defaults = {
196 'max_length': 255,
197 'blank': False,
198 'null': False,
199 }
200 defaults.update(kwargs)
201 super(StripeIdField, self).__init__(*args, **defaults)
202
203
204 class StripeTextField(StripeFieldMixin, models.TextField):
205 """A field used to define a TextField value according to djstripe logic."""
206
207 pass
208
209
210 class StripeDateTimeField(StripeFieldMixin, models.DateTimeField):
211 """A field used to define a DateTimeField value according to djstripe logic."""
212
213 def stripe_to_db(self, data):
214 """Convert the raw timestamp value to a DateTime representation."""
215 val = super(StripeDateTimeField, self).stripe_to_db(data)
216
217 # Note: 0 is a possible return value, which is 'falseish'
218 if val is not None:
219 return convert_tstamp(val)
220
221
222 class StripeIntegerField(StripeFieldMixin, models.IntegerField):
223 """A field used to define a IntegerField value according to djstripe logic."""
224
225 pass
226
227
228 class StripePositiveIntegerField(StripeFieldMixin, models.PositiveIntegerField):
229 """A field used to define a PositiveIntegerField value according to djstripe logic."""
230
231 pass
232
233
234 class StripeJSONField(StripeFieldMixin, JSONField):
235 """A field used to define a JSONField value according to djstripe logic."""
236
237 pass
238
[end of djstripe/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/djstripe/fields.py b/djstripe/fields.py
--- a/djstripe/fields.py
+++ b/djstripe/fields.py
@@ -174,7 +174,7 @@
super(StripeEnumField, self).__init__(*args, **defaults)
def deconstruct(self):
- name, path, args, kwargs = super().deconstruct()
+ name, path, args, kwargs = super(StripeEnumField, self).deconstruct()
kwargs["enum"] = self.enum
del kwargs["choices"]
return name, path, args, kwargs
|
{"golden_diff": "diff --git a/djstripe/fields.py b/djstripe/fields.py\n--- a/djstripe/fields.py\n+++ b/djstripe/fields.py\n@@ -174,7 +174,7 @@\n super(StripeEnumField, self).__init__(*args, **defaults)\n \n def deconstruct(self):\n- name, path, args, kwargs = super().deconstruct()\n+ name, path, args, kwargs = super(StripeEnumField, self).deconstruct()\n kwargs[\"enum\"] = self.enum\n del kwargs[\"choices\"]\n return name, path, args, kwargs\n", "issue": "1.1.0 not compatible with python 2\nFirst of all, thanks a lot for your hard work! We've been using dj-stripe for a long time and it has served us well. Now we're ready to upgrade! Let the fun begin ;).\r\n\r\nWe're using python 2 and Django 1.11. I'm correct that 1.2 should support that right? Anyway we have to move to 1.1 first for the migrations. There is one problem though in the 1.1 release.\r\n\r\nIn commit https://github.com/dj-stripe/dj-stripe/commit/6a6f048a3a432a3ba40fba8bf90f8789139daec4 `StripeEnumField` was added with the non python 2 compatible `super()` call:\r\n\r\n```name, path, args, kwargs = super().deconstruct()```\r\n\r\nWhat do you guys think? Can we backport a hotfix fix to 1.1.1 or something?\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n.. module:: djstripe.fields.\n\n :synopsis: dj-stripe Custom Field Definitions\n\n.. moduleauthor:: Bill Huneke (@wahuneke)\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport decimal\n\nfrom django.core.exceptions import FieldError, ImproperlyConfigured\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.db import models\n\nfrom .settings import USE_NATIVE_JSONFIELD\nfrom .utils import convert_tstamp, dict_nested_accessor\n\n\nif USE_NATIVE_JSONFIELD:\n from django.contrib.postgres.fields import JSONField\nelse:\n from jsonfield import JSONField\n\n\nclass PaymentMethodForeignKey(models.ForeignKey):\n def __init__(self, **kwargs):\n kwargs.setdefault(\"to\", \"PaymentMethod\")\n super(PaymentMethodForeignKey, self).__init__(**kwargs)\n\n\nclass StripeFieldMixin(object):\n \"\"\"\n Custom fields for all Stripe data.\n\n This allows keeping track of which database fields are suitable for\n sending to or receiving from Stripe. Also, allows a few handy extra parameters.\n \"\"\"\n\n # Used if the name at stripe is different from the name in our database\n # Include a . in name if value is nested in dict in Stripe's object\n # (e.g. stripe_name = \"data.id\" --> obj[\"data\"][\"id\"])\n stripe_name = None\n\n # If stripe_name is None, this can also be used to specify a nested value, but\n # the final value is assumed to be the database field name\n # (e.g. nested_name = \"data\" --> obj[\"data\"][db_field_name]\n nested_name = None\n\n # This indicates that this field will always appear in a stripe object. It will be\n # an Exception if we try to parse a stripe object that does not include this field\n # in the data. If set to False then null=True attribute will be automatically set\n stripe_required = True\n\n # If a field was populated in previous API versions but we don't want to drop the old\n # data for some reason, mark it as deprecated. 
This will make sure we never try to send\n # it to Stripe or expect in Stripe data received\n # This setting automatically implies Null=True\n deprecated = False\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Assign class instance variables based on kwargs.\n\n Assign extra class instance variables if stripe_required is defined or\n if deprecated is defined.\n \"\"\"\n self.stripe_name = kwargs.pop('stripe_name', self.stripe_name)\n self.nested_name = kwargs.pop('nested_name', self.nested_name)\n self.stripe_required = kwargs.pop('stripe_required', self.stripe_required)\n self.deprecated = kwargs.pop('deprecated', self.deprecated)\n if not self.stripe_required:\n kwargs[\"null\"] = True\n\n if self.deprecated:\n kwargs[\"null\"] = True\n kwargs[\"default\"] = None\n super(StripeFieldMixin, self).__init__(*args, **kwargs)\n\n def stripe_to_db(self, data):\n \"\"\"Try converting stripe fields to defined database fields.\"\"\"\n if not self.deprecated:\n try:\n if self.stripe_name:\n result = dict_nested_accessor(data, self.stripe_name)\n elif self.nested_name:\n result = dict_nested_accessor(data, self.nested_name + \".\" + self.name)\n else:\n result = data[self.name]\n except (KeyError, TypeError):\n if self.stripe_required:\n model_name = self.model._meta.object_name if hasattr(self, \"model\") else \"\"\n raise FieldError(\"Required stripe field '{field_name}' was not\"\n \" provided in {model_name} data object.\".format(field_name=self.name,\n model_name=model_name))\n else:\n result = None\n\n return result\n\n\nclass StripePercentField(StripeFieldMixin, models.DecimalField):\n \"\"\"A field used to define a percent according to djstripe logic.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\n 'decimal_places': 2,\n 'max_digits': 5,\n 'validators': [MinValueValidator(1.00), MaxValueValidator(100.00)]\n }\n defaults.update(kwargs)\n super(StripePercentField, self).__init__(*args, **defaults)\n\n\nclass StripeCurrencyField(StripeFieldMixin, models.DecimalField):\n \"\"\"\n A field used to define currency according to djstripe logic.\n\n Stripe is always in cents. djstripe stores everything in dollars.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\n 'decimal_places': 2,\n 'max_digits': 8,\n }\n defaults.update(kwargs)\n super(StripeCurrencyField, self).__init__(*args, **defaults)\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw value to decimal representation.\"\"\"\n val = super(StripeCurrencyField, self).stripe_to_db(data)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return val / decimal.Decimal(\"100\")\n\n\nclass StripeBooleanField(StripeFieldMixin, models.BooleanField):\n \"\"\"A field used to define a boolean value according to djstripe logic.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Throw an error when a user tries to deprecate.\"\"\"\n if kwargs.get(\"deprecated\", False):\n raise ImproperlyConfigured(\"Boolean field cannot be deprecated. 
Change field type to \"\n \"StripeNullBooleanField\")\n super(StripeBooleanField, self).__init__(*args, **kwargs)\n\n\nclass StripeNullBooleanField(StripeFieldMixin, models.NullBooleanField):\n \"\"\"A field used to define a NullBooleanField value according to djstripe logic.\"\"\"\n\n pass\n\n\nclass StripeCharField(StripeFieldMixin, models.CharField):\n \"\"\"A field used to define a CharField value according to djstripe logic.\"\"\"\n\n pass\n\n\nclass StripeEnumField(StripeCharField):\n def __init__(self, enum, *args, **kwargs):\n self.enum = enum\n choices = enum.choices\n defaults = {\n \"choices\": choices,\n \"max_length\": max(len(k) for k, v in choices)\n }\n defaults.update(kwargs)\n super(StripeEnumField, self).__init__(*args, **defaults)\n\n def deconstruct(self):\n name, path, args, kwargs = super().deconstruct()\n kwargs[\"enum\"] = self.enum\n del kwargs[\"choices\"]\n return name, path, args, kwargs\n\n\nclass StripeIdField(StripeCharField):\n \"\"\"A field with enough space to hold any stripe ID.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Assign default args to this field.\n\n As per: https://stripe.com/docs/upgrades\n You can safely assume object IDs we generate will never exceed 255\n characters, but you should be able to handle IDs of up to that\n length.\n \"\"\"\n defaults = {\n 'max_length': 255,\n 'blank': False,\n 'null': False,\n }\n defaults.update(kwargs)\n super(StripeIdField, self).__init__(*args, **defaults)\n\n\nclass StripeTextField(StripeFieldMixin, models.TextField):\n \"\"\"A field used to define a TextField value according to djstripe logic.\"\"\"\n\n pass\n\n\nclass StripeDateTimeField(StripeFieldMixin, models.DateTimeField):\n \"\"\"A field used to define a DateTimeField value according to djstripe logic.\"\"\"\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw timestamp value to a DateTime representation.\"\"\"\n val = super(StripeDateTimeField, self).stripe_to_db(data)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return convert_tstamp(val)\n\n\nclass StripeIntegerField(StripeFieldMixin, models.IntegerField):\n \"\"\"A field used to define a IntegerField value according to djstripe logic.\"\"\"\n\n pass\n\n\nclass StripePositiveIntegerField(StripeFieldMixin, models.PositiveIntegerField):\n \"\"\"A field used to define a PositiveIntegerField value according to djstripe logic.\"\"\"\n\n pass\n\n\nclass StripeJSONField(StripeFieldMixin, JSONField):\n \"\"\"A field used to define a JSONField value according to djstripe logic.\"\"\"\n\n pass\n", "path": "djstripe/fields.py"}]}
| 3,222 | 134 |
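The one-line fix above is the classic Python 2/3 `super()` difference: the zero-argument form only exists on Python 3, so code that must import on both interpreters has to spell out the class and instance. A small sketch of both spellings (the class names here are illustrative, not dj-stripe's real ones):

```python
from __future__ import print_function


class Base(object):
    def deconstruct(self):
        return "name", "path", [], {"choices": [("a", "A")]}


class EnumField(Base):
    def deconstruct(self):
        # Py3-only form -- raises TypeError under Python 2:
        #   name, path, args, kwargs = super().deconstruct()
        # Portable form -- works on both Python 2 and Python 3:
        name, path, args, kwargs = super(EnumField, self).deconstruct()
        kwargs["enum"] = "MyEnum"
        del kwargs["choices"]
        return name, path, args, kwargs


print(EnumField().deconstruct())  # ('name', 'path', [], {'enum': 'MyEnum'})
```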
gh_patches_debug_28488
|
rasdani/github-patches
|
git_diff
|
microsoft__ptvsd-443
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Execution when debugging is incorrect
From developer community
https://developercommunity.visualstudio.com/content/problem/254447/python-execute-problem.html?childToView=255082#comment-255082
```
import pymysql
db = pymysql.connect(host="localhost",user="root",password="",database="zhidao",charset='utf8')
cursor_select = db.cursor()
sql_select = "select question_id from zhidao"
size_select = cursor_select.execute(sql_select)
for i in range(size_select):
data = cursor_select.fetchone()
print(data[0])
db.close()
```
Table zhidao is a small one with about 1400 rows. The code works fine if you execute it directly, but if you set a breakpoint and execute "size_select = cursor_select.execute(sql_select)" using F10, data somehow becomes NoneType. If you look into the problem more deeply, you can see that stepping with F10 somehow changes cursor_select.rownumber to a nonzero value; the related pymysql code is here: https://github.com/PyMySQL/PyMySQL/blob/master/pymysql/cursors.py
I hope this is helpful.
Version:
Win10 1803
VS Community 2017 - 15.7.1
Python - 15.7.18116.1
Python 3.6(64-bit)
</issue>
<code>
[start of ptvsd/safe_repr.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7
8 # Py3 compat - alias unicode to str, and xrange to range
9 try:
10 unicode # noqa
11 except NameError:
12 unicode = str
13 try:
14 xrange # noqa
15 except NameError:
16 xrange = range
17
18
19 class SafeRepr(object):
20 # String types are truncated to maxstring_outer when at the outer-
21 # most level, and truncated to maxstring_inner characters inside
22 # collections.
23 maxstring_outer = 2 ** 16
24 maxstring_inner = 30
25 if sys.version_info >= (3, 0):
26 string_types = (str, bytes)
27 set_info = (set, '{', '}', False)
28 frozenset_info = (frozenset, 'frozenset({', '})', False)
29 int_types = (int,)
30 else:
31 string_types = (str, unicode)
32 set_info = (set, 'set([', '])', False)
33 frozenset_info = (frozenset, 'frozenset([', '])', False)
34 int_types = (int, long) # noqa
35
36 # Collection types are recursively iterated for each limit in
37 # maxcollection.
38 maxcollection = (15, 10)
39
40 # Specifies type, prefix string, suffix string, and whether to include a
41 # comma if there is only one element. (Using a sequence rather than a
42 # mapping because we use isinstance() to determine the matching type.)
43 collection_types = [
44 (tuple, '(', ')', True),
45 (list, '[', ']', False),
46 frozenset_info,
47 set_info,
48 ]
49 try:
50 from collections import deque
51 collection_types.append((deque, 'deque([', '])', False))
52 except Exception:
53 pass
54
55 # type, prefix string, suffix string, item prefix string,
56 # item key/value separator, item suffix string
57 dict_types = [(dict, '{', '}', '', ': ', '')]
58 try:
59 from collections import OrderedDict
60 dict_types.append((OrderedDict, 'OrderedDict([', '])', '(', ', ', ')'))
61 except Exception:
62 pass
63
64 # All other types are treated identically to strings, but using
65 # different limits.
66 maxother_outer = 2 ** 16
67 maxother_inner = 30
68
69 convert_to_hex = False
70 raw_value = False
71
72 def __call__(self, obj):
73 try:
74 return ''.join(self._repr(obj, 0))
75 except Exception:
76 try:
77 return 'An exception was raised: %r' % sys.exc_info()[1]
78 except Exception:
79 return 'An exception was raised'
80
81 def _repr(self, obj, level):
82 '''Returns an iterable of the parts in the final repr string.'''
83
84 try:
85 obj_repr = type(obj).__repr__
86 except Exception:
87 obj_repr = None
88
89 def has_obj_repr(t):
90 r = t.__repr__
91 try:
92 return obj_repr == r
93 except Exception:
94 return obj_repr is r
95
96 for t, prefix, suffix, comma in self.collection_types:
97 if isinstance(obj, t) and has_obj_repr(t):
98 return self._repr_iter(obj, level, prefix, suffix, comma)
99
100 for t, prefix, suffix, item_prefix, item_sep, item_suffix in self.dict_types: # noqa
101 if isinstance(obj, t) and has_obj_repr(t):
102 return self._repr_dict(obj, level, prefix, suffix,
103 item_prefix, item_sep, item_suffix)
104
105 for t in self.string_types:
106 if isinstance(obj, t) and has_obj_repr(t):
107 return self._repr_str(obj, level)
108
109 if self._is_long_iter(obj):
110 return self._repr_long_iter(obj)
111
112 return self._repr_other(obj, level)
113
114 # Determines whether an iterable exceeds the limits set in
115 # maxlimits, and is therefore unsafe to repr().
116 def _is_long_iter(self, obj, level=0):
117 try:
118 # Strings have their own limits (and do not nest). Because
119 # they don't have __iter__ in 2.x, this check goes before
120 # the next one.
121 if isinstance(obj, self.string_types):
122 return len(obj) > self.maxstring_inner
123
124 # If it's not an iterable (and not a string), it's fine.
125 if not hasattr(obj, '__iter__'):
126 return False
127
128 # Iterable is its own iterator - this is a one-off iterable
129 # like generator or enumerate(). We can't really count that,
130 # but repr() for these should not include any elements anyway,
131 # so we can treat it the same as non-iterables.
132 if obj is iter(obj):
133 return False
134
135 # xrange reprs fine regardless of length.
136 if isinstance(obj, xrange):
137 return False
138
139 # numpy and scipy collections (ndarray etc) have
140 # self-truncating repr, so they're always safe.
141 try:
142 module = type(obj).__module__.partition('.')[0]
143 if module in ('numpy', 'scipy'):
144 return False
145 except Exception:
146 pass
147
148 # Iterables that nest too deep are considered long.
149 if level >= len(self.maxcollection):
150 return True
151
152 # It is too long if the length exceeds the limit, or any
153 # of its elements are long iterables.
154 if hasattr(obj, '__len__'):
155 try:
156 size = len(obj)
157 except Exception:
158 size = None
159 if size is not None and size > self.maxcollection[level]:
160 return True
161 return any((self._is_long_iter(item, level + 1) for item in obj)) # noqa
162 return any(i > self.maxcollection[level] or self._is_long_iter(item, level + 1) for i, item in enumerate(obj)) # noqa
163
164 except Exception:
165 # If anything breaks, assume the worst case.
166 return True
167
168 def _repr_iter(self, obj, level, prefix, suffix,
169 comma_after_single_element=False):
170 yield prefix
171
172 if level >= len(self.maxcollection):
173 yield '...'
174 else:
175 count = self.maxcollection[level]
176 yield_comma = False
177 for item in obj:
178 if yield_comma:
179 yield ', '
180 yield_comma = True
181
182 count -= 1
183 if count <= 0:
184 yield '...'
185 break
186
187 for p in self._repr(item, 100 if item is obj else level + 1):
188 yield p
189 else:
190 if comma_after_single_element:
191 if count == self.maxcollection[level] - 1:
192 yield ','
193 yield suffix
194
195 def _repr_long_iter(self, obj):
196 try:
197 length = hex(len(obj)) if self.convert_to_hex else len(obj)
198 obj_repr = '<%s, len() = %s>' % (type(obj).__name__, length)
199 except Exception:
200 try:
201 obj_repr = '<' + type(obj).__name__ + '>'
202 except Exception:
203 obj_repr = '<no repr available for object>'
204 yield obj_repr
205
206 def _repr_dict(self, obj, level, prefix, suffix,
207 item_prefix, item_sep, item_suffix):
208 if not obj:
209 yield prefix + suffix
210 return
211 if level >= len(self.maxcollection):
212 yield prefix + '...' + suffix
213 return
214
215 yield prefix
216
217 count = self.maxcollection[level]
218 yield_comma = False
219
220 try:
221 sorted_keys = sorted(obj)
222 except Exception:
223 sorted_keys = list(obj)
224
225 for key in sorted_keys:
226 if yield_comma:
227 yield ', '
228 yield_comma = True
229
230 count -= 1
231 if count <= 0:
232 yield '...'
233 break
234
235 yield item_prefix
236 for p in self._repr(key, level + 1):
237 yield p
238
239 yield item_sep
240
241 try:
242 item = obj[key]
243 except Exception:
244 yield '<?>'
245 else:
246 for p in self._repr(item, 100 if item is obj else level + 1):
247 yield p
248 yield item_suffix
249
250 yield suffix
251
252 def _repr_str(self, obj, level):
253 return self._repr_obj(obj, level,
254 self.maxstring_inner, self.maxstring_outer)
255
256 def _repr_other(self, obj, level):
257 return self._repr_obj(obj, level,
258 self.maxother_inner, self.maxother_outer)
259
260 def _repr_obj(self, obj, level, limit_inner, limit_outer):
261 try:
262 if self.raw_value:
263 # For raw value retrieval, ignore all limits.
264 try:
265 mv = memoryview(obj)
266 except Exception:
267 yield unicode(obj)
268 return
269 else:
270 # Map bytes to Unicode codepoints with same values.
271 yield mv.tobytes().decode('latin-1')
272 return
273 elif self.convert_to_hex and isinstance(obj, self.int_types):
274 obj_repr = hex(obj)
275 else:
276 obj_repr = repr(obj)
277 except Exception:
278 try:
279 obj_repr = object.__repr__(obj)
280 except Exception:
281 try:
282 obj_repr = '<no repr available for ' + type(obj).__name__ + '>' # noqa
283 except Exception:
284 obj_repr = '<no repr available for object>'
285
286 limit = limit_inner if level > 0 else limit_outer
287
288 if limit >= len(obj_repr):
289 yield obj_repr
290 return
291
292 # Slightly imprecise calculations - we may end up with a string that is
293 # up to 3 characters longer than limit. If you need precise formatting,
294 # you are using the wrong class.
295 left_count, right_count = max(1, int(2 * limit / 3)), max(1, int(limit / 3)) # noqa
296
297 yield obj_repr[:left_count]
298 yield '...'
299 yield obj_repr[-right_count:]
300
[end of ptvsd/safe_repr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ptvsd/safe_repr.py b/ptvsd/safe_repr.py
--- a/ptvsd/safe_repr.py
+++ b/ptvsd/safe_repr.py
@@ -27,11 +27,15 @@
set_info = (set, '{', '}', False)
frozenset_info = (frozenset, 'frozenset({', '})', False)
int_types = (int,)
+ long_iter_types = (list, tuple, bytearray, range,
+ dict, set, frozenset)
else:
string_types = (str, unicode)
set_info = (set, 'set([', '])', False)
frozenset_info = (frozenset, 'frozenset([', '])', False)
int_types = (int, long) # noqa
+ long_iter_types = (list, tuple, bytearray, xrange,
+ dict, set, frozenset, buffer) # noqa
# Collection types are recursively iterated for each limit in
# maxcollection.
@@ -125,6 +129,12 @@
if not hasattr(obj, '__iter__'):
return False
+ # If it's not an instance of these collection types then it
+ # is fine. Note: this is a fix for
+ # https://github.com/Microsoft/ptvsd/issues/406
+ if not isinstance(obj, self.long_iter_types):
+ return False
+
# Iterable is its own iterator - this is a one-off iterable
# like generator or enumerate(). We can't really count that,
# but repr() for these should not include any elements anyway,
|
{"golden_diff": "diff --git a/ptvsd/safe_repr.py b/ptvsd/safe_repr.py\n--- a/ptvsd/safe_repr.py\n+++ b/ptvsd/safe_repr.py\n@@ -27,11 +27,15 @@\n set_info = (set, '{', '}', False)\n frozenset_info = (frozenset, 'frozenset({', '})', False)\n int_types = (int,)\n+ long_iter_types = (list, tuple, bytearray, range,\n+ dict, set, frozenset)\n else:\n string_types = (str, unicode)\n set_info = (set, 'set([', '])', False)\n frozenset_info = (frozenset, 'frozenset([', '])', False)\n int_types = (int, long) # noqa\n+ long_iter_types = (list, tuple, bytearray, xrange,\n+ dict, set, frozenset, buffer) # noqa\n \n # Collection types are recursively iterated for each limit in\n # maxcollection.\n@@ -125,6 +129,12 @@\n if not hasattr(obj, '__iter__'):\n return False\n \n+ # If it's not an instance of these collection types then it\n+ # is fine. Note: this is a fix for\n+ # https://github.com/Microsoft/ptvsd/issues/406\n+ if not isinstance(obj, self.long_iter_types):\n+ return False\n+\n # Iterable is its own iterator - this is a one-off iterable\n # like generator or enumerate(). We can't really count that,\n # but repr() for these should not include any elements anyway,\n", "issue": "Execution when debugging is incorrect\nFrom developer community\r\nhttps://developercommunity.visualstudio.com/content/problem/254447/python-execute-problem.html?childToView=255082#comment-255082\r\n\r\n```\r\nimport pymysql\r\ndb = pymysql.connect(host=\"localhost\",user=\"root\",password=\"\",database=\"zhidao\",charset='utf8')\r\ncursor_select = db.cursor()\r\nsql_select = \"select question_id from zhidao\"\r\nsize_select = cursor_select.execute(sql_select)\r\nfor i in range(size_select):\r\n\tdata = cursor_select.fetchone()\r\n\tprint(data[0])\r\ndb.close()\r\n```\r\n\r\nTable zhidao is a small one, which has about 1400 rows, it works fine if you execute the code directly, but if you set a breakpoint and execute \"size_select = cursor_select.execute(sql_select)\" using F10, data will become NoneType somehow, if you check this problem deeper, you can find execution using F10 will somehow change cursor_select.rownumber to nonzero, related code of pymysql is here: https://github.com/PyMySQL/PyMySQL/blob/master/pymysql/cursors.py\r\nI hope this is helpful.\r\nVersion:\r\nWin10 1803\r\nVS Community 2017 - 15.7.1\r\nPython - 15.7.18116.1\r\nPython 3.6(64-bit) \r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n# Py3 compat - alias unicode to str, and xrange to range\ntry:\n unicode # noqa\nexcept NameError:\n unicode = str\ntry:\n xrange # noqa\nexcept NameError:\n xrange = range\n\n\nclass SafeRepr(object):\n # String types are truncated to maxstring_outer when at the outer-\n # most level, and truncated to maxstring_inner characters inside\n # collections.\n maxstring_outer = 2 ** 16\n maxstring_inner = 30\n if sys.version_info >= (3, 0):\n string_types = (str, bytes)\n set_info = (set, '{', '}', False)\n frozenset_info = (frozenset, 'frozenset({', '})', False)\n int_types = (int,)\n else:\n string_types = (str, unicode)\n set_info = (set, 'set([', '])', False)\n frozenset_info = (frozenset, 'frozenset([', '])', False)\n int_types = (int, long) # noqa\n\n # Collection types are recursively iterated for each limit in\n # maxcollection.\n maxcollection = (15, 10)\n\n # Specifies type, prefix string, suffix string, and whether to include a\n # comma if there is only one element. (Using a sequence rather than a\n # mapping because we use isinstance() to determine the matching type.)\n collection_types = [\n (tuple, '(', ')', True),\n (list, '[', ']', False),\n frozenset_info,\n set_info,\n ]\n try:\n from collections import deque\n collection_types.append((deque, 'deque([', '])', False))\n except Exception:\n pass\n\n # type, prefix string, suffix string, item prefix string,\n # item key/value separator, item suffix string\n dict_types = [(dict, '{', '}', '', ': ', '')]\n try:\n from collections import OrderedDict\n dict_types.append((OrderedDict, 'OrderedDict([', '])', '(', ', ', ')'))\n except Exception:\n pass\n\n # All other types are treated identically to strings, but using\n # different limits.\n maxother_outer = 2 ** 16\n maxother_inner = 30\n\n convert_to_hex = False\n raw_value = False\n\n def __call__(self, obj):\n try:\n return ''.join(self._repr(obj, 0))\n except Exception:\n try:\n return 'An exception was raised: %r' % sys.exc_info()[1]\n except Exception:\n return 'An exception was raised'\n\n def _repr(self, obj, level):\n '''Returns an iterable of the parts in the final repr string.'''\n\n try:\n obj_repr = type(obj).__repr__\n except Exception:\n obj_repr = None\n\n def has_obj_repr(t):\n r = t.__repr__\n try:\n return obj_repr == r\n except Exception:\n return obj_repr is r\n\n for t, prefix, suffix, comma in self.collection_types:\n if isinstance(obj, t) and has_obj_repr(t):\n return self._repr_iter(obj, level, prefix, suffix, comma)\n\n for t, prefix, suffix, item_prefix, item_sep, item_suffix in self.dict_types: # noqa\n if isinstance(obj, t) and has_obj_repr(t):\n return self._repr_dict(obj, level, prefix, suffix,\n item_prefix, item_sep, item_suffix)\n\n for t in self.string_types:\n if isinstance(obj, t) and has_obj_repr(t):\n return self._repr_str(obj, level)\n\n if self._is_long_iter(obj):\n return self._repr_long_iter(obj)\n\n return self._repr_other(obj, level)\n\n # Determines whether an iterable exceeds the limits set in\n # maxlimits, and is therefore unsafe to repr().\n def _is_long_iter(self, obj, level=0):\n try:\n # Strings have their own limits (and do not nest). 
Because\n # they don't have __iter__ in 2.x, this check goes before\n # the next one.\n if isinstance(obj, self.string_types):\n return len(obj) > self.maxstring_inner\n\n # If it's not an iterable (and not a string), it's fine.\n if not hasattr(obj, '__iter__'):\n return False\n\n # Iterable is its own iterator - this is a one-off iterable\n # like generator or enumerate(). We can't really count that,\n # but repr() for these should not include any elements anyway,\n # so we can treat it the same as non-iterables.\n if obj is iter(obj):\n return False\n\n # xrange reprs fine regardless of length.\n if isinstance(obj, xrange):\n return False\n\n # numpy and scipy collections (ndarray etc) have\n # self-truncating repr, so they're always safe.\n try:\n module = type(obj).__module__.partition('.')[0]\n if module in ('numpy', 'scipy'):\n return False\n except Exception:\n pass\n\n # Iterables that nest too deep are considered long.\n if level >= len(self.maxcollection):\n return True\n\n # It is too long if the length exceeds the limit, or any\n # of its elements are long iterables.\n if hasattr(obj, '__len__'):\n try:\n size = len(obj)\n except Exception:\n size = None\n if size is not None and size > self.maxcollection[level]:\n return True\n return any((self._is_long_iter(item, level + 1) for item in obj)) # noqa\n return any(i > self.maxcollection[level] or self._is_long_iter(item, level + 1) for i, item in enumerate(obj)) # noqa\n\n except Exception:\n # If anything breaks, assume the worst case.\n return True\n\n def _repr_iter(self, obj, level, prefix, suffix,\n comma_after_single_element=False):\n yield prefix\n\n if level >= len(self.maxcollection):\n yield '...'\n else:\n count = self.maxcollection[level]\n yield_comma = False\n for item in obj:\n if yield_comma:\n yield ', '\n yield_comma = True\n\n count -= 1\n if count <= 0:\n yield '...'\n break\n\n for p in self._repr(item, 100 if item is obj else level + 1):\n yield p\n else:\n if comma_after_single_element:\n if count == self.maxcollection[level] - 1:\n yield ','\n yield suffix\n\n def _repr_long_iter(self, obj):\n try:\n length = hex(len(obj)) if self.convert_to_hex else len(obj)\n obj_repr = '<%s, len() = %s>' % (type(obj).__name__, length)\n except Exception:\n try:\n obj_repr = '<' + type(obj).__name__ + '>'\n except Exception:\n obj_repr = '<no repr available for object>'\n yield obj_repr\n\n def _repr_dict(self, obj, level, prefix, suffix,\n item_prefix, item_sep, item_suffix):\n if not obj:\n yield prefix + suffix\n return\n if level >= len(self.maxcollection):\n yield prefix + '...' 
+ suffix\n return\n\n yield prefix\n\n count = self.maxcollection[level]\n yield_comma = False\n\n try:\n sorted_keys = sorted(obj)\n except Exception:\n sorted_keys = list(obj)\n\n for key in sorted_keys:\n if yield_comma:\n yield ', '\n yield_comma = True\n\n count -= 1\n if count <= 0:\n yield '...'\n break\n\n yield item_prefix\n for p in self._repr(key, level + 1):\n yield p\n\n yield item_sep\n\n try:\n item = obj[key]\n except Exception:\n yield '<?>'\n else:\n for p in self._repr(item, 100 if item is obj else level + 1):\n yield p\n yield item_suffix\n\n yield suffix\n\n def _repr_str(self, obj, level):\n return self._repr_obj(obj, level,\n self.maxstring_inner, self.maxstring_outer)\n\n def _repr_other(self, obj, level):\n return self._repr_obj(obj, level,\n self.maxother_inner, self.maxother_outer)\n\n def _repr_obj(self, obj, level, limit_inner, limit_outer):\n try:\n if self.raw_value:\n # For raw value retrieval, ignore all limits.\n try:\n mv = memoryview(obj)\n except Exception:\n yield unicode(obj)\n return\n else:\n # Map bytes to Unicode codepoints with same values.\n yield mv.tobytes().decode('latin-1')\n return\n elif self.convert_to_hex and isinstance(obj, self.int_types):\n obj_repr = hex(obj)\n else:\n obj_repr = repr(obj)\n except Exception:\n try:\n obj_repr = object.__repr__(obj)\n except Exception:\n try:\n obj_repr = '<no repr available for ' + type(obj).__name__ + '>' # noqa\n except Exception:\n obj_repr = '<no repr available for object>'\n\n limit = limit_inner if level > 0 else limit_outer\n\n if limit >= len(obj_repr):\n yield obj_repr\n return\n\n # Slightly imprecise calculations - we may end up with a string that is\n # up to 3 characters longer than limit. If you need precise formatting,\n # you are using the wrong class.\n left_count, right_count = max(1, int(2 * limit / 3)), max(1, int(limit / 3)) # noqa\n\n yield obj_repr[:left_count]\n yield '...'\n yield obj_repr[-right_count:]\n", "path": "ptvsd/safe_repr.py"}]}
| 3,863 | 380 |
gh_patches_debug_14973
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-104
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove `chainer-cuda-requirements` that is deprecated
`pip install chainer-cuda-deps` is recommended, and `chainer-cuda-requirements` is now deprecated. It will be removed in a future minor release.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from setuptools import setup
3
4 setup(
5 name='chainer',
6 version='1.0.1',
7 description='A flexible framework of neural networks',
8 author='Seiya Tokui',
9 author_email='[email protected]',
10 url='http://chainer.org/',
11 packages=['chainer',
12 'chainer.cudnn',
13 'chainer.functions',
14 'chainer.optimizers',
15 'chainer.requirements',
16 'chainer.utils'],
17 package_data={'chainer.requirements': ['cuda-requirements.txt']},
18 install_requires=['numpy',
19 'six>=1.9.0'],
20 scripts=['scripts/chainer-cuda-requirements'],
21 tests_require=['nose'],
22 )
23
[end of setup.py]
[start of chainer/requirements/__init__.py]
1 import os
2
3
4 def get_cuda_requirements_path():
5 return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')
6
7
8 def get_cuda_requirements():
9 with open(get_cuda_requirements_path()) as f:
10 return f.read()
11
[end of chainer/requirements/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/requirements/__init__.py b/chainer/requirements/__init__.py
deleted file mode 100644
--- a/chainer/requirements/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import os
-
-
-def get_cuda_requirements_path():
- return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')
-
-
-def get_cuda_requirements():
- with open(get_cuda_requirements_path()) as f:
- return f.read()
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,11 +12,8 @@
'chainer.cudnn',
'chainer.functions',
'chainer.optimizers',
- 'chainer.requirements',
'chainer.utils'],
- package_data={'chainer.requirements': ['cuda-requirements.txt']},
install_requires=['numpy',
'six>=1.9.0'],
- scripts=['scripts/chainer-cuda-requirements'],
tests_require=['nose'],
)
|
{"golden_diff": "diff --git a/chainer/requirements/__init__.py b/chainer/requirements/__init__.py\ndeleted file mode 100644\n--- a/chainer/requirements/__init__.py\n+++ /dev/null\n@@ -1,10 +0,0 @@\n-import os\n-\n-\n-def get_cuda_requirements_path():\n- return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')\n-\n-\n-def get_cuda_requirements():\n- with open(get_cuda_requirements_path()) as f:\n- return f.read()\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,11 +12,8 @@\n 'chainer.cudnn',\n 'chainer.functions',\n 'chainer.optimizers',\n- 'chainer.requirements',\n 'chainer.utils'],\n- package_data={'chainer.requirements': ['cuda-requirements.txt']},\n install_requires=['numpy',\n 'six>=1.9.0'],\n- scripts=['scripts/chainer-cuda-requirements'],\n tests_require=['nose'],\n )\n", "issue": "Remove `chainer-cuda-requirements` that is deprecated\n`pip install chainer-cuda-deps` is recommended, and `chainer-cuda-requirements` is deprecated now. It will be removed in the future minor release.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import setup\n\nsetup(\n name='chainer',\n version='1.0.1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='http://chainer.org/',\n packages=['chainer',\n 'chainer.cudnn',\n 'chainer.functions',\n 'chainer.optimizers',\n 'chainer.requirements',\n 'chainer.utils'],\n package_data={'chainer.requirements': ['cuda-requirements.txt']},\n install_requires=['numpy',\n 'six>=1.9.0'],\n scripts=['scripts/chainer-cuda-requirements'],\n tests_require=['nose'],\n)\n", "path": "setup.py"}, {"content": "import os\n\n\ndef get_cuda_requirements_path():\n return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')\n\n\ndef get_cuda_requirements():\n with open(get_cuda_requirements_path()) as f:\n return f.read()\n", "path": "chainer/requirements/__init__.py"}]}
| 859 | 234 |
gh_patches_debug_6793
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-6674
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve test coverage for clique.py
There is a line left uncovered in https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/approximation/clique.py and I am looking into it. There is a bit of an issue, though, when I try `G = None`: an `AttributeError` is raised, not a `ValueError`.
Steps to Reproduce
import networkx as nx
from networkx.algorithms.approximation.clique import maximum_independent_set, max_clique
G=nx.Graph()
G.add_nodes_from([(2,3),(5,6),(3,6)])
max_clique(G=None)
output:
AttributeError Traceback (most recent call last)
Cell In[84], line 1
----> 1 max_clique(G)
File <class 'networkx.utils.decorators.argmap'> compilation 32:3, in argmap_max_clique_28(G)
1 import bz2
2 import collections
----> 3 import gzip
4 import inspect
5 import itertools
File ~\anaconda3\lib\site-packages\networkx\utils\decorators.py:83, in not_implemented_for.<locals>._not_implemented_for(g)
82 def _not_implemented_for(g):
---> 83 if (mval is None or mval == g.is_multigraph()) and (
84 dval is None or dval == g.is_directed()
85 ):
86 raise nx.NetworkXNotImplemented(errmsg)
88 return g
AttributeError: 'NoneType' object has no attribute 'is_multigraph'
</issue>
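The traceback in the issue above comes from the `not_implemented_for` decorator, which inspects the graph argument before the decorated function body ever runs. A minimal sketch of that ordering (hypothetical names, not the actual NetworkX source):

```
import networkx as nx


def undirected_simple_only(func):
    # Sketch of a not_implemented_for-style guard: it calls methods on G up
    # front, so G=None fails here with AttributeError before the wrapped
    # function (and any "if G is None" check inside it) is ever reached.
    def wrapper(G):
        if G.is_multigraph() or G.is_directed():
            raise nx.NetworkXNotImplemented("not implemented for this graph type")
        return func(G)
    return wrapper


@undirected_simple_only
def max_clique_like(G):
    if G is None:  # unreachable for G=None; the decorator raises first
        raise ValueError("Expected NetworkX graph!")
    return set()
```

Calling `max_clique_like(None)` reproduces the `AttributeError` shown above, which is why the uncovered `ValueError` branch in `max_clique` cannot be hit through a normal call.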
<code>
[start of networkx/algorithms/approximation/clique.py]
1 """Functions for computing large cliques and maximum independent sets."""
2 import networkx as nx
3 from networkx.algorithms.approximation import ramsey
4 from networkx.utils import not_implemented_for
5
6 __all__ = [
7 "clique_removal",
8 "max_clique",
9 "large_clique_size",
10 "maximum_independent_set",
11 ]
12
13
14 @not_implemented_for("directed")
15 @not_implemented_for("multigraph")
16 def maximum_independent_set(G):
17 """Returns an approximate maximum independent set.
18
19 Independent set or stable set is a set of vertices in a graph, no two of
20 which are adjacent. That is, it is a set I of vertices such that for every
21 two vertices in I, there is no edge connecting the two. Equivalently, each
22 edge in the graph has at most one endpoint in I. The size of an independent
23 set is the number of vertices it contains [1]_.
24
25 A maximum independent set is a largest independent set for a given graph G
26 and its size is denoted $\\alpha(G)$. The problem of finding such a set is called
27 the maximum independent set problem and is an NP-hard optimization problem.
28 As such, it is unlikely that there exists an efficient algorithm for finding
29 a maximum independent set of a graph.
30
31 The Independent Set algorithm is based on [2]_.
32
33 Parameters
34 ----------
35 G : NetworkX graph
36 Undirected graph
37
38 Returns
39 -------
40 iset : Set
41 The apx-maximum independent set
42
43 Examples
44 --------
45 >>> G = nx.path_graph(10)
46 >>> nx.approximation.maximum_independent_set(G)
47 {0, 2, 4, 6, 9}
48
49 Raises
50 ------
51 NetworkXNotImplemented
52 If the graph is directed or is a multigraph.
53
54 Notes
55 -----
56 Finds the $O(|V|/(log|V|)^2)$ apx of independent set in the worst case.
57
58 References
59 ----------
60 .. [1] `Wikipedia: Independent set
61 <https://en.wikipedia.org/wiki/Independent_set_(graph_theory)>`_
62 .. [2] Boppana, R., & Halldórsson, M. M. (1992).
63 Approximating maximum independent sets by excluding subgraphs.
64 BIT Numerical Mathematics, 32(2), 180–196. Springer.
65 """
66 iset, _ = clique_removal(G)
67 return iset
68
69
70 @not_implemented_for("directed")
71 @not_implemented_for("multigraph")
72 def max_clique(G):
73 r"""Find the Maximum Clique
74
75 Finds the $O(|V|/(log|V|)^2)$ apx of maximum clique/independent set
76 in the worst case.
77
78 Parameters
79 ----------
80 G : NetworkX graph
81 Undirected graph
82
83 Returns
84 -------
85 clique : set
86 The apx-maximum clique of the graph
87
88 Examples
89 --------
90 >>> G = nx.path_graph(10)
91 >>> nx.approximation.max_clique(G)
92 {8, 9}
93
94 Raises
95 ------
96 NetworkXNotImplemented
97 If the graph is directed or is a multigraph.
98
99 Notes
100 -----
101 A clique in an undirected graph G = (V, E) is a subset of the vertex set
102 `C \subseteq V` such that for every two vertices in C there exists an edge
103 connecting the two. This is equivalent to saying that the subgraph
104 induced by C is complete (in some cases, the term clique may also refer
105 to the subgraph).
106
107 A maximum clique is a clique of the largest possible size in a given graph.
108 The clique number `\omega(G)` of a graph G is the number of
109 vertices in a maximum clique in G. The intersection number of
110 G is the smallest number of cliques that together cover all edges of G.
111
112 https://en.wikipedia.org/wiki/Maximum_clique
113
114 References
115 ----------
116 .. [1] Boppana, R., & Halldórsson, M. M. (1992).
117 Approximating maximum independent sets by excluding subgraphs.
118 BIT Numerical Mathematics, 32(2), 180–196. Springer.
119 doi:10.1007/BF01994876
120 """
121 if G is None:
122 raise ValueError("Expected NetworkX graph!")
123
124 # finding the maximum clique in a graph is equivalent to finding
125 # the independent set in the complementary graph
126 cgraph = nx.complement(G)
127 iset, _ = clique_removal(cgraph)
128 return iset
129
130
131 @not_implemented_for("directed")
132 @not_implemented_for("multigraph")
133 def clique_removal(G):
134 r"""Repeatedly remove cliques from the graph.
135
136 Results in a $O(|V|/(\log |V|)^2)$ approximation of maximum clique
137 and independent set. Returns the largest independent set found, along
138 with found maximal cliques.
139
140 Parameters
141 ----------
142 G : NetworkX graph
143 Undirected graph
144
145 Returns
146 -------
147 max_ind_cliques : (set, list) tuple
148 2-tuple of Maximal Independent Set and list of maximal cliques (sets).
149
150 Examples
151 --------
152 >>> G = nx.path_graph(10)
153 >>> nx.approximation.clique_removal(G)
154 ({0, 2, 4, 6, 9}, [{0, 1}, {2, 3}, {4, 5}, {6, 7}, {8, 9}])
155
156 Raises
157 ------
158 NetworkXNotImplemented
159 If the graph is directed or is a multigraph.
160
161 References
162 ----------
163 .. [1] Boppana, R., & Halldórsson, M. M. (1992).
164 Approximating maximum independent sets by excluding subgraphs.
165 BIT Numerical Mathematics, 32(2), 180–196. Springer.
166 """
167 graph = G.copy()
168 c_i, i_i = ramsey.ramsey_R2(graph)
169 cliques = [c_i]
170 isets = [i_i]
171 while graph:
172 graph.remove_nodes_from(c_i)
173 c_i, i_i = ramsey.ramsey_R2(graph)
174 if c_i:
175 cliques.append(c_i)
176 if i_i:
177 isets.append(i_i)
178 # Determine the largest independent set as measured by cardinality.
179 maxiset = max(isets, key=len)
180 return maxiset, cliques
181
182
183 @not_implemented_for("directed")
184 @not_implemented_for("multigraph")
185 def large_clique_size(G):
186 """Find the size of a large clique in a graph.
187
188 A *clique* is a subset of nodes in which each pair of nodes is
189 adjacent. This function is a heuristic for finding the size of a
190 large clique in the graph.
191
192 Parameters
193 ----------
194 G : NetworkX graph
195
196 Returns
197 -------
198 k: integer
199 The size of a large clique in the graph.
200
201 Examples
202 --------
203 >>> G = nx.path_graph(10)
204 >>> nx.approximation.large_clique_size(G)
205 2
206
207 Raises
208 ------
209 NetworkXNotImplemented
210 If the graph is directed or is a multigraph.
211
212 Notes
213 -----
214 This implementation is from [1]_. Its worst case time complexity is
215 :math:`O(n d^2)`, where *n* is the number of nodes in the graph and
216 *d* is the maximum degree.
217
218 This function is a heuristic, which means it may work well in
219 practice, but there is no rigorous mathematical guarantee on the
220 ratio between the returned number and the actual largest clique size
221 in the graph.
222
223 References
224 ----------
225 .. [1] Pattabiraman, Bharath, et al.
226 "Fast Algorithms for the Maximum Clique Problem on Massive Graphs
227 with Applications to Overlapping Community Detection."
228 *Internet Mathematics* 11.4-5 (2015): 421--448.
229 <https://doi.org/10.1080/15427951.2014.986778>
230
231 See also
232 --------
233
234 :func:`networkx.algorithms.approximation.clique.max_clique`
235 A function that returns an approximate maximum clique with a
236 guarantee on the approximation ratio.
237
238 :mod:`networkx.algorithms.clique`
239 Functions for finding the exact maximum clique in a graph.
240
241 """
242 degrees = G.degree
243
244 def _clique_heuristic(G, U, size, best_size):
245 if not U:
246 return max(best_size, size)
247 u = max(U, key=degrees)
248 U.remove(u)
249 N_prime = {v for v in G[u] if degrees[v] >= best_size}
250 return _clique_heuristic(G, U & N_prime, size + 1, best_size)
251
252 best_size = 0
253 nodes = (u for u in G if degrees[u] >= best_size)
254 for u in nodes:
255 neighbors = {v for v in G[u] if degrees[v] >= best_size}
256 best_size = _clique_heuristic(G, neighbors, 1, best_size)
257 return best_size
258
[end of networkx/algorithms/approximation/clique.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/algorithms/approximation/clique.py b/networkx/algorithms/approximation/clique.py
--- a/networkx/algorithms/approximation/clique.py
+++ b/networkx/algorithms/approximation/clique.py
@@ -118,9 +118,6 @@
BIT Numerical Mathematics, 32(2), 180–196. Springer.
doi:10.1007/BF01994876
"""
- if G is None:
- raise ValueError("Expected NetworkX graph!")
-
# finding the maximum clique in a graph is equivalent to finding
# the independent set in the complementary graph
cgraph = nx.complement(G)
|
{"golden_diff": "diff --git a/networkx/algorithms/approximation/clique.py b/networkx/algorithms/approximation/clique.py\n--- a/networkx/algorithms/approximation/clique.py\n+++ b/networkx/algorithms/approximation/clique.py\n@@ -118,9 +118,6 @@\n BIT Numerical Mathematics, 32(2), 180\u2013196. Springer.\n doi:10.1007/BF01994876\n \"\"\"\n- if G is None:\n- raise ValueError(\"Expected NetworkX graph!\")\n-\n # finding the maximum clique in a graph is equivalent to finding\n # the independent set in the complementary graph\n cgraph = nx.complement(G)\n", "issue": "Improve test coverage for clique.py\nThere is a line left uncovered in https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/approximation/clique.py and I am looking into it. There is a bit of an issue though when I try G = None. An attribute error is raised not a value error. \r\n\r\nSteps to Reproduce\r\n\r\nfrom networkx.algorithms.approximation.clique import maximum_independent_set,max_clique\r\nG=nx.Graph()\r\nG.add_nodes_from([(2,3),(5,6),(3,6)])\r\nmax_clique(G=None)\r\n\r\noutput:\r\n\r\nAttributeError Traceback (most recent call last)\r\nCell In[84], line 1\r\n----> 1 max_clique(G)\r\n\r\nFile <class 'networkx.utils.decorators.argmap'> compilation 32:3, in argmap_max_clique_28(G)\r\n 1 import bz2\r\n 2 import collections\r\n----> 3 import gzip\r\n 4 import inspect\r\n 5 import itertools\r\n\r\nFile ~\\anaconda3\\lib\\site-packages\\networkx\\utils\\decorators.py:83, in not_implemented_for.<locals>._not_implemented_for(g)\r\n 82 def _not_implemented_for(g):\r\n---> 83 if (mval is None or mval == g.is_multigraph()) and (\r\n 84 dval is None or dval == g.is_directed()\r\n 85 ):\r\n 86 raise nx.NetworkXNotImplemented(errmsg)\r\n 88 return g\r\n\r\nAttributeError: 'NoneType' object has no attribute 'is_multigraph'\n", "before_files": [{"content": "\"\"\"Functions for computing large cliques and maximum independent sets.\"\"\"\nimport networkx as nx\nfrom networkx.algorithms.approximation import ramsey\nfrom networkx.utils import not_implemented_for\n\n__all__ = [\n \"clique_removal\",\n \"max_clique\",\n \"large_clique_size\",\n \"maximum_independent_set\",\n]\n\n\n@not_implemented_for(\"directed\")\n@not_implemented_for(\"multigraph\")\ndef maximum_independent_set(G):\n \"\"\"Returns an approximate maximum independent set.\n\n Independent set or stable set is a set of vertices in a graph, no two of\n which are adjacent. That is, it is a set I of vertices such that for every\n two vertices in I, there is no edge connecting the two. Equivalently, each\n edge in the graph has at most one endpoint in I. The size of an independent\n set is the number of vertices it contains [1]_.\n\n A maximum independent set is a largest independent set for a given graph G\n and its size is denoted $\\\\alpha(G)$. 
The problem of finding such a set is called\n the maximum independent set problem and is an NP-hard optimization problem.\n As such, it is unlikely that there exists an efficient algorithm for finding\n a maximum independent set of a graph.\n\n The Independent Set algorithm is based on [2]_.\n\n Parameters\n ----------\n G : NetworkX graph\n Undirected graph\n\n Returns\n -------\n iset : Set\n The apx-maximum independent set\n\n Examples\n --------\n >>> G = nx.path_graph(10)\n >>> nx.approximation.maximum_independent_set(G)\n {0, 2, 4, 6, 9}\n\n Raises\n ------\n NetworkXNotImplemented\n If the graph is directed or is a multigraph.\n\n Notes\n -----\n Finds the $O(|V|/(log|V|)^2)$ apx of independent set in the worst case.\n\n References\n ----------\n .. [1] `Wikipedia: Independent set\n <https://en.wikipedia.org/wiki/Independent_set_(graph_theory)>`_\n .. [2] Boppana, R., & Halld\u00f3rsson, M. M. (1992).\n Approximating maximum independent sets by excluding subgraphs.\n BIT Numerical Mathematics, 32(2), 180\u2013196. Springer.\n \"\"\"\n iset, _ = clique_removal(G)\n return iset\n\n\n@not_implemented_for(\"directed\")\n@not_implemented_for(\"multigraph\")\ndef max_clique(G):\n r\"\"\"Find the Maximum Clique\n\n Finds the $O(|V|/(log|V|)^2)$ apx of maximum clique/independent set\n in the worst case.\n\n Parameters\n ----------\n G : NetworkX graph\n Undirected graph\n\n Returns\n -------\n clique : set\n The apx-maximum clique of the graph\n\n Examples\n --------\n >>> G = nx.path_graph(10)\n >>> nx.approximation.max_clique(G)\n {8, 9}\n\n Raises\n ------\n NetworkXNotImplemented\n If the graph is directed or is a multigraph.\n\n Notes\n -----\n A clique in an undirected graph G = (V, E) is a subset of the vertex set\n `C \\subseteq V` such that for every two vertices in C there exists an edge\n connecting the two. This is equivalent to saying that the subgraph\n induced by C is complete (in some cases, the term clique may also refer\n to the subgraph).\n\n A maximum clique is a clique of the largest possible size in a given graph.\n The clique number `\\omega(G)` of a graph G is the number of\n vertices in a maximum clique in G. The intersection number of\n G is the smallest number of cliques that together cover all edges of G.\n\n https://en.wikipedia.org/wiki/Maximum_clique\n\n References\n ----------\n .. [1] Boppana, R., & Halld\u00f3rsson, M. M. (1992).\n Approximating maximum independent sets by excluding subgraphs.\n BIT Numerical Mathematics, 32(2), 180\u2013196. Springer.\n doi:10.1007/BF01994876\n \"\"\"\n if G is None:\n raise ValueError(\"Expected NetworkX graph!\")\n\n # finding the maximum clique in a graph is equivalent to finding\n # the independent set in the complementary graph\n cgraph = nx.complement(G)\n iset, _ = clique_removal(cgraph)\n return iset\n\n\n@not_implemented_for(\"directed\")\n@not_implemented_for(\"multigraph\")\ndef clique_removal(G):\n r\"\"\"Repeatedly remove cliques from the graph.\n\n Results in a $O(|V|/(\\log |V|)^2)$ approximation of maximum clique\n and independent set. 
Returns the largest independent set found, along\n with found maximal cliques.\n\n Parameters\n ----------\n G : NetworkX graph\n Undirected graph\n\n Returns\n -------\n max_ind_cliques : (set, list) tuple\n 2-tuple of Maximal Independent Set and list of maximal cliques (sets).\n\n Examples\n --------\n >>> G = nx.path_graph(10)\n >>> nx.approximation.clique_removal(G)\n ({0, 2, 4, 6, 9}, [{0, 1}, {2, 3}, {4, 5}, {6, 7}, {8, 9}])\n\n Raises\n ------\n NetworkXNotImplemented\n If the graph is directed or is a multigraph.\n\n References\n ----------\n .. [1] Boppana, R., & Halld\u00f3rsson, M. M. (1992).\n Approximating maximum independent sets by excluding subgraphs.\n BIT Numerical Mathematics, 32(2), 180\u2013196. Springer.\n \"\"\"\n graph = G.copy()\n c_i, i_i = ramsey.ramsey_R2(graph)\n cliques = [c_i]\n isets = [i_i]\n while graph:\n graph.remove_nodes_from(c_i)\n c_i, i_i = ramsey.ramsey_R2(graph)\n if c_i:\n cliques.append(c_i)\n if i_i:\n isets.append(i_i)\n # Determine the largest independent set as measured by cardinality.\n maxiset = max(isets, key=len)\n return maxiset, cliques\n\n\n@not_implemented_for(\"directed\")\n@not_implemented_for(\"multigraph\")\ndef large_clique_size(G):\n \"\"\"Find the size of a large clique in a graph.\n\n A *clique* is a subset of nodes in which each pair of nodes is\n adjacent. This function is a heuristic for finding the size of a\n large clique in the graph.\n\n Parameters\n ----------\n G : NetworkX graph\n\n Returns\n -------\n k: integer\n The size of a large clique in the graph.\n\n Examples\n --------\n >>> G = nx.path_graph(10)\n >>> nx.approximation.large_clique_size(G)\n 2\n\n Raises\n ------\n NetworkXNotImplemented\n If the graph is directed or is a multigraph.\n\n Notes\n -----\n This implementation is from [1]_. Its worst case time complexity is\n :math:`O(n d^2)`, where *n* is the number of nodes in the graph and\n *d* is the maximum degree.\n\n This function is a heuristic, which means it may work well in\n practice, but there is no rigorous mathematical guarantee on the\n ratio between the returned number and the actual largest clique size\n in the graph.\n\n References\n ----------\n .. [1] Pattabiraman, Bharath, et al.\n \"Fast Algorithms for the Maximum Clique Problem on Massive Graphs\n with Applications to Overlapping Community Detection.\"\n *Internet Mathematics* 11.4-5 (2015): 421--448.\n <https://doi.org/10.1080/15427951.2014.986778>\n\n See also\n --------\n\n :func:`networkx.algorithms.approximation.clique.max_clique`\n A function that returns an approximate maximum clique with a\n guarantee on the approximation ratio.\n\n :mod:`networkx.algorithms.clique`\n Functions for finding the exact maximum clique in a graph.\n\n \"\"\"\n degrees = G.degree\n\n def _clique_heuristic(G, U, size, best_size):\n if not U:\n return max(best_size, size)\n u = max(U, key=degrees)\n U.remove(u)\n N_prime = {v for v in G[u] if degrees[v] >= best_size}\n return _clique_heuristic(G, U & N_prime, size + 1, best_size)\n\n best_size = 0\n nodes = (u for u in G if degrees[u] >= best_size)\n for u in nodes:\n neighbors = {v for v in G[u] if degrees[v] >= best_size}\n best_size = _clique_heuristic(G, neighbors, 1, best_size)\n return best_size\n", "path": "networkx/algorithms/approximation/clique.py"}]}
| 3,671 | 163 |
gh_patches_debug_11666
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-1413
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tests: Logs are throwing duplicated output
You can review any log (this one, for example: https://travis-ci.org/iterative/dvc/jobs/457244685#L1544-L1571)
</issue>
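Duplicated lines like the ones in that log are the usual symptom of handlers being attached to the same logger more than once, since each `StreamHandler` emits its own copy of every record. A minimal sketch of an idempotent setup, using a hypothetical helper rather than DVC's actual `Logger` class:

```
import logging
import sys


def init_once(name="dvc", fmt="%(message)s"):
    # Hypothetical helper: attach handlers only if the named logger has none
    # yet, so calling it repeatedly (e.g. once per test) cannot double output.
    logger = logging.getLogger(name)
    if logger.handlers:
        return logger
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter(fmt))
    logger.addHandler(handler)
    return logger
```

With a guard like this, the second and later initializations become no-ops instead of adding another copy of each handler.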
<code>
[start of dvc/logger.py]
1 import sys
2 import logging
3 import colorama
4 import traceback
5 import re
6
7
8 colorama.init()
9
10
11 def visual_width(line):
12 """ Get the the number of columns required to display a string """
13
14 return len(re.sub(colorama.ansitowin32.AnsiToWin32.ANSI_CSI_RE, '', line))
15
16
17 def visual_center(line, width):
18 """ Center align string according to it's visual width """
19
20 spaces = max(width - visual_width(line), 0)
21 left_padding = int(spaces / 2)
22 right_padding = (spaces - left_padding)
23
24 return (left_padding * ' ') + line + (right_padding * ' ')
25
26
27 class Logger(object):
28 FMT = '%(message)s'
29 DEFAULT_LEVEL = logging.INFO
30
31 LEVEL_MAP = {
32 'debug': logging.DEBUG,
33 'info': logging.INFO,
34 'warn': logging.WARNING,
35 'error': logging.ERROR
36 }
37
38 COLOR_MAP = {
39 'green': colorama.Fore.GREEN,
40 'yellow': colorama.Fore.YELLOW,
41 'blue': colorama.Fore.BLUE,
42 'red': colorama.Fore.RED,
43 }
44
45 LEVEL_COLOR_MAP = {
46 'debug': 'blue',
47 'warn': 'yellow',
48 'error': 'red',
49 }
50
51 def __init__(self, loglevel=None):
52 if loglevel:
53 Logger.set_level(loglevel)
54
55 @staticmethod
56 def init():
57
58 class LogLevelFilter(logging.Filter):
59 def filter(self, record):
60 return record.levelno <= logging.WARNING
61
62 sh_out = logging.StreamHandler(sys.stdout)
63 sh_out.setFormatter(logging.Formatter(Logger.FMT))
64 sh_out.setLevel(logging.DEBUG)
65 sh_out.addFilter(LogLevelFilter())
66
67 sh_err = logging.StreamHandler(sys.stderr)
68 sh_err.setFormatter(logging.Formatter(Logger.FMT))
69 sh_err.setLevel(logging.ERROR)
70
71 Logger.logger().addHandler(sh_out)
72 Logger.logger().addHandler(sh_err)
73 Logger.set_level()
74
75 @staticmethod
76 def logger():
77 return logging.getLogger('dvc')
78
79 @staticmethod
80 def set_level(level=None):
81 if not level:
82 lvl = Logger.DEFAULT_LEVEL
83 else:
84 lvl = Logger.LEVEL_MAP.get(level.lower(), Logger.DEFAULT_LEVEL)
85 Logger.logger().setLevel(lvl)
86
87 @staticmethod
88 def be_quiet():
89 Logger.logger().setLevel(logging.CRITICAL)
90
91 @staticmethod
92 def be_verbose():
93 Logger.logger().setLevel(logging.DEBUG)
94
95 @staticmethod
96 def colorize(msg, color):
97 header = ''
98 footer = ''
99
100 if sys.stdout.isatty(): # pragma: no cover
101 header = Logger.COLOR_MAP.get(color.lower(), '')
102 footer = colorama.Style.RESET_ALL
103
104 return u'{}{}{}'.format(header, msg, footer)
105
106 @staticmethod
107 def parse_exc(exc, tb=None):
108 str_tb = tb if tb else None
109 str_exc = str(exc) if exc else ""
110 l_str_exc = []
111
112 if len(str_exc) != 0:
113 l_str_exc.append(str_exc)
114
115 if exc and hasattr(exc, 'cause') and exc.cause:
116 cause_tb = exc.cause_tb if hasattr(exc, 'cause_tb') else None
117 l_cause_str_exc, cause_str_tb = Logger.parse_exc(exc.cause,
118 cause_tb)
119
120 str_tb = cause_str_tb
121 l_str_exc += l_cause_str_exc
122
123 return (l_str_exc, str_tb)
124
125 @staticmethod
126 def _prefix(msg, typ):
127 color = Logger.LEVEL_COLOR_MAP.get(typ.lower(), '')
128 return Logger.colorize('{}'.format(msg), color)
129
130 @staticmethod
131 def error_prefix():
132 return Logger._prefix('Error', 'error')
133
134 @staticmethod
135 def warning_prefix():
136 return Logger._prefix('Warning', 'warn')
137
138 @staticmethod
139 def debug_prefix():
140 return Logger._prefix('Debug', 'debug')
141
142 @staticmethod
143 def _with_progress(func, msg):
144 from dvc.progress import progress
145 with progress:
146 func(msg)
147
148 @staticmethod
149 def _with_exc(func, prefix, msg, suffix="", exc=None):
150 l_str_exc, str_tb = Logger.parse_exc(exc)
151
152 if exc is not None and Logger.is_verbose():
153 str_tb = str_tb if str_tb else traceback.format_exc()
154 Logger._with_progress(Logger.logger().error, str_tb)
155
156 l_msg = [prefix]
157 if msg is not None and len(msg) != 0:
158 l_msg.append(msg)
159 l_msg += l_str_exc
160
161 Logger._with_progress(func, ': '.join(l_msg) + suffix)
162
163 @staticmethod
164 def error(msg, exc=None):
165 chat = "\n\nHaving any troubles? Hit us up at dvc.org/support, " \
166 "we are always happy to help!"
167 Logger._with_exc(Logger.logger().error,
168 Logger.error_prefix(),
169 msg,
170 suffix=chat,
171 exc=exc)
172
173 @classmethod
174 def warn(cls, msg, exc=None):
175 cls._with_exc(cls.logger().warning,
176 cls.warning_prefix(),
177 msg,
178 exc=exc)
179
180 @classmethod
181 def debug(cls, msg, exc=None):
182 cls._with_exc(cls.logger().debug,
183 cls.debug_prefix(),
184 msg,
185 exc=exc)
186
187 @staticmethod
188 def info(msg):
189 Logger._with_progress(Logger.logger().info, msg)
190
191 @staticmethod
192 def is_quiet():
193 return Logger.logger().level == logging.CRITICAL
194
195 @staticmethod
196 def is_verbose():
197 return Logger.logger().getEffectiveLevel() == logging.DEBUG
198
199 @classmethod
200 def box(cls, msg, border_color=''):
201 if cls.is_quiet():
202 return
203
204 lines = msg.split('\n')
205 max_width = max(visual_width(line) for line in lines)
206
207 padding_horizontal = 5
208 padding_vertical = 1
209
210 box_size_horizontal = (max_width + (padding_horizontal * 2))
211
212 chars = {
213 'corner': '+',
214 'horizontal': '-',
215 'vertical': '|',
216 'empty': ' ',
217 }
218
219 margin = "{corner}{line}{corner}\n".format(
220 corner=chars['corner'],
221 line=chars['horizontal'] * box_size_horizontal,
222 )
223
224 padding_lines = [
225 "{border}{space}{border}\n".format(
226 border=cls.colorize(chars['vertical'], border_color),
227 space=chars['empty'] * box_size_horizontal,
228 ) * padding_vertical
229 ]
230
231 content_lines = [
232 "{border}{space}{content}{space}{border}\n".format(
233 border=cls.colorize(chars['vertical'], border_color),
234 space=chars['empty'] * padding_horizontal,
235 content=visual_center(line, max_width),
236 ) for line in lines
237 ]
238
239 box = "{margin}{padding}{content}{padding}{margin}".format(
240 margin=cls.colorize(margin, border_color),
241 padding=''.join(padding_lines),
242 content=''.join(content_lines),
243 )
244
245 print(box)
246
[end of dvc/logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/logger.py b/dvc/logger.py
--- a/dvc/logger.py
+++ b/dvc/logger.py
@@ -59,6 +59,9 @@
def filter(self, record):
return record.levelno <= logging.WARNING
+ if Logger._already_initialized():
+ return
+
sh_out = logging.StreamHandler(sys.stdout)
sh_out.setFormatter(logging.Formatter(Logger.FMT))
sh_out.setLevel(logging.DEBUG)
@@ -243,3 +246,7 @@
)
print(box)
+
+ @staticmethod
+ def _already_initialized():
+ return bool(Logger.logger().handlers)
|
{"golden_diff": "diff --git a/dvc/logger.py b/dvc/logger.py\n--- a/dvc/logger.py\n+++ b/dvc/logger.py\n@@ -59,6 +59,9 @@\n def filter(self, record):\n return record.levelno <= logging.WARNING\n \n+ if Logger._already_initialized():\n+ return\n+\n sh_out = logging.StreamHandler(sys.stdout)\n sh_out.setFormatter(logging.Formatter(Logger.FMT))\n sh_out.setLevel(logging.DEBUG)\n@@ -243,3 +246,7 @@\n )\n \n print(box)\n+\n+ @staticmethod\n+ def _already_initialized():\n+ return bool(Logger.logger().handlers)\n", "issue": "tests: Logs are throwing duplicated output\nYou can review any log (this one, for example: https://travis-ci.org/iterative/dvc/jobs/457244685#L1544-L1571)\n", "before_files": [{"content": "import sys\nimport logging\nimport colorama\nimport traceback\nimport re\n\n\ncolorama.init()\n\n\ndef visual_width(line):\n \"\"\" Get the the number of columns required to display a string \"\"\"\n\n return len(re.sub(colorama.ansitowin32.AnsiToWin32.ANSI_CSI_RE, '', line))\n\n\ndef visual_center(line, width):\n \"\"\" Center align string according to it's visual width \"\"\"\n\n spaces = max(width - visual_width(line), 0)\n left_padding = int(spaces / 2)\n right_padding = (spaces - left_padding)\n\n return (left_padding * ' ') + line + (right_padding * ' ')\n\n\nclass Logger(object):\n FMT = '%(message)s'\n DEFAULT_LEVEL = logging.INFO\n\n LEVEL_MAP = {\n 'debug': logging.DEBUG,\n 'info': logging.INFO,\n 'warn': logging.WARNING,\n 'error': logging.ERROR\n }\n\n COLOR_MAP = {\n 'green': colorama.Fore.GREEN,\n 'yellow': colorama.Fore.YELLOW,\n 'blue': colorama.Fore.BLUE,\n 'red': colorama.Fore.RED,\n }\n\n LEVEL_COLOR_MAP = {\n 'debug': 'blue',\n 'warn': 'yellow',\n 'error': 'red',\n }\n\n def __init__(self, loglevel=None):\n if loglevel:\n Logger.set_level(loglevel)\n\n @staticmethod\n def init():\n\n class LogLevelFilter(logging.Filter):\n def filter(self, record):\n return record.levelno <= logging.WARNING\n\n sh_out = logging.StreamHandler(sys.stdout)\n sh_out.setFormatter(logging.Formatter(Logger.FMT))\n sh_out.setLevel(logging.DEBUG)\n sh_out.addFilter(LogLevelFilter())\n\n sh_err = logging.StreamHandler(sys.stderr)\n sh_err.setFormatter(logging.Formatter(Logger.FMT))\n sh_err.setLevel(logging.ERROR)\n\n Logger.logger().addHandler(sh_out)\n Logger.logger().addHandler(sh_err)\n Logger.set_level()\n\n @staticmethod\n def logger():\n return logging.getLogger('dvc')\n\n @staticmethod\n def set_level(level=None):\n if not level:\n lvl = Logger.DEFAULT_LEVEL\n else:\n lvl = Logger.LEVEL_MAP.get(level.lower(), Logger.DEFAULT_LEVEL)\n Logger.logger().setLevel(lvl)\n\n @staticmethod\n def be_quiet():\n Logger.logger().setLevel(logging.CRITICAL)\n\n @staticmethod\n def be_verbose():\n Logger.logger().setLevel(logging.DEBUG)\n\n @staticmethod\n def colorize(msg, color):\n header = ''\n footer = ''\n\n if sys.stdout.isatty(): # pragma: no cover\n header = Logger.COLOR_MAP.get(color.lower(), '')\n footer = colorama.Style.RESET_ALL\n\n return u'{}{}{}'.format(header, msg, footer)\n\n @staticmethod\n def parse_exc(exc, tb=None):\n str_tb = tb if tb else None\n str_exc = str(exc) if exc else \"\"\n l_str_exc = []\n\n if len(str_exc) != 0:\n l_str_exc.append(str_exc)\n\n if exc and hasattr(exc, 'cause') and exc.cause:\n cause_tb = exc.cause_tb if hasattr(exc, 'cause_tb') else None\n l_cause_str_exc, cause_str_tb = Logger.parse_exc(exc.cause,\n cause_tb)\n\n str_tb = cause_str_tb\n l_str_exc += l_cause_str_exc\n\n return (l_str_exc, str_tb)\n\n @staticmethod\n def _prefix(msg, typ):\n color = 
Logger.LEVEL_COLOR_MAP.get(typ.lower(), '')\n return Logger.colorize('{}'.format(msg), color)\n\n @staticmethod\n def error_prefix():\n return Logger._prefix('Error', 'error')\n\n @staticmethod\n def warning_prefix():\n return Logger._prefix('Warning', 'warn')\n\n @staticmethod\n def debug_prefix():\n return Logger._prefix('Debug', 'debug')\n\n @staticmethod\n def _with_progress(func, msg):\n from dvc.progress import progress\n with progress:\n func(msg)\n\n @staticmethod\n def _with_exc(func, prefix, msg, suffix=\"\", exc=None):\n l_str_exc, str_tb = Logger.parse_exc(exc)\n\n if exc is not None and Logger.is_verbose():\n str_tb = str_tb if str_tb else traceback.format_exc()\n Logger._with_progress(Logger.logger().error, str_tb)\n\n l_msg = [prefix]\n if msg is not None and len(msg) != 0:\n l_msg.append(msg)\n l_msg += l_str_exc\n\n Logger._with_progress(func, ': '.join(l_msg) + suffix)\n\n @staticmethod\n def error(msg, exc=None):\n chat = \"\\n\\nHaving any troubles? Hit us up at dvc.org/support, \" \\\n \"we are always happy to help!\"\n Logger._with_exc(Logger.logger().error,\n Logger.error_prefix(),\n msg,\n suffix=chat,\n exc=exc)\n\n @classmethod\n def warn(cls, msg, exc=None):\n cls._with_exc(cls.logger().warning,\n cls.warning_prefix(),\n msg,\n exc=exc)\n\n @classmethod\n def debug(cls, msg, exc=None):\n cls._with_exc(cls.logger().debug,\n cls.debug_prefix(),\n msg,\n exc=exc)\n\n @staticmethod\n def info(msg):\n Logger._with_progress(Logger.logger().info, msg)\n\n @staticmethod\n def is_quiet():\n return Logger.logger().level == logging.CRITICAL\n\n @staticmethod\n def is_verbose():\n return Logger.logger().getEffectiveLevel() == logging.DEBUG\n\n @classmethod\n def box(cls, msg, border_color=''):\n if cls.is_quiet():\n return\n\n lines = msg.split('\\n')\n max_width = max(visual_width(line) for line in lines)\n\n padding_horizontal = 5\n padding_vertical = 1\n\n box_size_horizontal = (max_width + (padding_horizontal * 2))\n\n chars = {\n 'corner': '+',\n 'horizontal': '-',\n 'vertical': '|',\n 'empty': ' ',\n }\n\n margin = \"{corner}{line}{corner}\\n\".format(\n corner=chars['corner'],\n line=chars['horizontal'] * box_size_horizontal,\n )\n\n padding_lines = [\n \"{border}{space}{border}\\n\".format(\n border=cls.colorize(chars['vertical'], border_color),\n space=chars['empty'] * box_size_horizontal,\n ) * padding_vertical\n ]\n\n content_lines = [\n \"{border}{space}{content}{space}{border}\\n\".format(\n border=cls.colorize(chars['vertical'], border_color),\n space=chars['empty'] * padding_horizontal,\n content=visual_center(line, max_width),\n ) for line in lines\n ]\n\n box = \"{margin}{padding}{content}{padding}{margin}\".format(\n margin=cls.colorize(margin, border_color),\n padding=''.join(padding_lines),\n content=''.join(content_lines),\n )\n\n print(box)\n", "path": "dvc/logger.py"}]}
| 2,763 | 140 |
gh_patches_debug_26504
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-253
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[0.7.0] `CeleryIntegration` captures retries
Greetings fellows!
We are having an issue with `CeleryIntegration` in Sentry SDK.
### Current versions
Python 3.6.7
Django 2.1.5
Celery 4.1.1
Sentry SDK 0.7.0-0.7.1
### Current behavior
In our code (internal and 3rd-party) we are using [Celery tasks retry functionality](http://docs.celeryproject.org/en/latest/userguide/tasks.html#retrying).
> The app.Task.retry() call will raise an exception so any code after the retry won’t be reached. This is the Retry exception, it isn’t handled as an error but rather as a semi-predicate to signify to the worker that the task is to be retried, so that it can store the correct state when a result backend is enabled.
We did switch recently from Raven to Sentry SDK 0.6.9, everything seemed working as before.
But today we updated it to [0.7.0 release](https://github.com/getsentry/sentry-python/blob/master/CHANGES.md#070) (and later to 0.7.1)
This caused **every [`celery.exceptions.Retry`](http://docs.celeryproject.org/en/latest/reference/celery.exceptions.html#celery.exceptions.Retry) to be sent to Sentry**, which quickly filled Sentry server with thousands of events.
Previously (in old SDK and Raven), those exceptions were ignored and not sent to Sentry server.
### Expected behaviour
`CeleryIntegration` is not flooding Sentry server with every retry exception. Basically, the same behavior as it was in Raven and Sentry SDK<0.7.0.
### Open questions
I am not sure if the old behavior was done intentionally or by mistake.
If that was intended, we should reimplement it in current integration.
If not, there should be a way to filter/ignore that kind of exceptions (I am not sure if we can filter all retries from internal and 3rd-party code in`before_send` in a clean way).
Could you help me to clarify this issue?
</issue>
<code>
[start of sentry_sdk/integrations/celery.py]
1 from __future__ import absolute_import
2
3 import sys
4
5 from celery.exceptions import SoftTimeLimitExceeded
6
7 from sentry_sdk.hub import Hub
8 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
9 from sentry_sdk._compat import reraise
10 from sentry_sdk.integrations import Integration
11 from sentry_sdk.integrations.logging import ignore_logger
12
13
14 class CeleryIntegration(Integration):
15 identifier = "celery"
16
17 @staticmethod
18 def setup_once():
19 import celery.app.trace as trace
20
21 old_build_tracer = trace.build_tracer
22
23 def sentry_build_tracer(name, task, *args, **kwargs):
24 # Need to patch both methods because older celery sometimes
25 # short-circuits to task.run if it thinks it's safe.
26 task.__call__ = _wrap_task_call(task.__call__)
27 task.run = _wrap_task_call(task.run)
28 return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))
29
30 trace.build_tracer = sentry_build_tracer
31
32 # This logger logs every status of every task that ran on the worker.
33 # Meaning that every task's breadcrumbs are full of stuff like "Task
34 # <foo> raised unexpected <bar>".
35 ignore_logger("celery.worker.job")
36
37
38 def _wrap_tracer(task, f):
39 # Need to wrap tracer for pushing the scope before prerun is sent, and
40 # popping it after postrun is sent.
41 #
42 # This is the reason we don't use signals for hooking in the first place.
43 # Also because in Celery 3, signal dispatch returns early if one handler
44 # crashes.
45 def _inner(*args, **kwargs):
46 hub = Hub.current
47 if hub.get_integration(CeleryIntegration) is None:
48 return f(*args, **kwargs)
49
50 with hub.push_scope() as scope:
51 scope._name = "celery"
52 scope.add_event_processor(_make_event_processor(task, *args, **kwargs))
53
54 return f(*args, **kwargs)
55
56 return _inner
57
58
59 def _wrap_task_call(f):
60 # Need to wrap task call because the exception is caught before we get to
61 # see it. Also celery's reported stacktrace is untrustworthy.
62 def _inner(*args, **kwargs):
63 try:
64 return f(*args, **kwargs)
65 except Exception:
66 reraise(*_capture_exception())
67
68 return _inner
69
70
71 def _make_event_processor(task, uuid, args, kwargs, request=None):
72 def event_processor(event, hint):
73 with capture_internal_exceptions():
74 event["transaction"] = task.name
75
76 with capture_internal_exceptions():
77 extra = event.setdefault("extra", {})
78 extra["celery-job"] = {
79 "task_name": task.name,
80 "args": args,
81 "kwargs": kwargs,
82 }
83
84 if "exc_info" in hint:
85 with capture_internal_exceptions():
86 if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
87 event["fingerprint"] = [
88 "celery",
89 "SoftTimeLimitExceeded",
90 getattr(task, "name", task),
91 ]
92
93 with capture_internal_exceptions():
94 if hasattr(task, "throws") and isinstance(
95 hint["exc_info"][1], task.throws
96 ):
97 return None
98
99 return event
100
101 return event_processor
102
103
104 def _capture_exception():
105 hub = Hub.current
106 exc_info = sys.exc_info()
107
108 if hub.get_integration(CeleryIntegration) is not None:
109 event, hint = event_from_exception(
110 exc_info,
111 client_options=hub.client.options,
112 mechanism={"type": "celery", "handled": False},
113 )
114 hub.capture_event(event, hint=hint)
115
116 return exc_info
117
[end of sentry_sdk/integrations/celery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py
--- a/sentry_sdk/integrations/celery.py
+++ b/sentry_sdk/integrations/celery.py
@@ -2,7 +2,7 @@
import sys
-from celery.exceptions import SoftTimeLimitExceeded
+from celery.exceptions import SoftTimeLimitExceeded, Retry
from sentry_sdk.hub import Hub
from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
@@ -82,6 +82,15 @@
}
if "exc_info" in hint:
+ with capture_internal_exceptions():
+ if isinstance(hint["exc_info"][1], Retry):
+ return None
+
+ if hasattr(task, "throws") and isinstance(
+ hint["exc_info"][1], task.throws
+ ):
+ return None
+
with capture_internal_exceptions():
if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
event["fingerprint"] = [
@@ -90,12 +99,6 @@
getattr(task, "name", task),
]
- with capture_internal_exceptions():
- if hasattr(task, "throws") and isinstance(
- hint["exc_info"][1], task.throws
- ):
- return None
-
return event
return event_processor
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py\n--- a/sentry_sdk/integrations/celery.py\n+++ b/sentry_sdk/integrations/celery.py\n@@ -2,7 +2,7 @@\n \n import sys\n \n-from celery.exceptions import SoftTimeLimitExceeded\n+from celery.exceptions import SoftTimeLimitExceeded, Retry\n \n from sentry_sdk.hub import Hub\n from sentry_sdk.utils import capture_internal_exceptions, event_from_exception\n@@ -82,6 +82,15 @@\n }\n \n if \"exc_info\" in hint:\n+ with capture_internal_exceptions():\n+ if isinstance(hint[\"exc_info\"][1], Retry):\n+ return None\n+\n+ if hasattr(task, \"throws\") and isinstance(\n+ hint[\"exc_info\"][1], task.throws\n+ ):\n+ return None\n+\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n@@ -90,12 +99,6 @@\n getattr(task, \"name\", task),\n ]\n \n- with capture_internal_exceptions():\n- if hasattr(task, \"throws\") and isinstance(\n- hint[\"exc_info\"][1], task.throws\n- ):\n- return None\n-\n return event\n \n return event_processor\n", "issue": "[0.7.0] `CeleryIntegration` captures retries\nGreetings fellows!\r\n\r\nWe are having an issue with `CeleryIntegration` in Sentry SDK.\r\n\r\n### Current versions\r\nPython 3.6.7\r\nDjango 2.1.5\r\nCelery 4.1.1\r\nSentry SDK 0.7.0-0.7.1\r\n\r\n### Current behavior\r\nIn our code (internal and 3rd-party) we are using [Celery tasks retry functionality](http://docs.celeryproject.org/en/latest/userguide/tasks.html#retrying).\r\n\r\n> The app.Task.retry() call will raise an exception so any code after the retry won\u2019t be reached. This is the Retry exception, it isn\u2019t handled as an error but rather as a semi-predicate to signify to the worker that the task is to be retried, so that it can store the correct state when a result backend is enabled.\r\n\r\nWe did switch recently from Raven to Sentry SDK 0.6.9, everything seemed working as before.\r\nBut today we updated it to [0.7.0 release](https://github.com/getsentry/sentry-python/blob/master/CHANGES.md#070) (and later to 0.7.1)\r\n\r\nThis caused **every [`celery.exceptions.Retry`](http://docs.celeryproject.org/en/latest/reference/celery.exceptions.html#celery.exceptions.Retry) to be sent to Sentry**, which quickly filled Sentry server with thousands of events.\r\nPreviously (in old SDK and Raven), those exceptions were ignored and not sent to Sentry server.\r\n\r\n### Expected behaviour\r\n`CeleryIntegration` is not flooding Sentry server with every retry exception. 
Basically, the same behavior as it was in Raven and Sentry SDK<0.7.0.\r\n\r\n### Open questions\r\nI am not sure if the old behavior was done intentionally or by mistake.\r\nIf that was intended, we should reimplement it in current integration.\r\nIf not, there should be a way to filter/ignore that kind of exceptions (I am not sure if we can filter all retries from internal and 3rd-party code in`before_send` in a clean way).\r\n\r\nCould you help me to clarify this issue?\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.exceptions import SoftTimeLimitExceeded\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n @staticmethod\n def setup_once():\n import celery.app.trace as trace\n\n old_build_tracer = trace.build_tracer\n\n def sentry_build_tracer(name, task, *args, **kwargs):\n # Need to patch both methods because older celery sometimes\n # short-circuits to task.run if it thinks it's safe.\n task.__call__ = _wrap_task_call(task.__call__)\n task.run = _wrap_task_call(task.run)\n return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))\n\n trace.build_tracer = sentry_build_tracer\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _wrap_tracer(task, f):\n # Need to wrap tracer for pushing the scope before prerun is sent, and\n # popping it after postrun is sent.\n #\n # This is the reason we don't use signals for hooking in the first place.\n # Also because in Celery 3, signal dispatch returns early if one handler\n # crashes.\n def _inner(*args, **kwargs):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is None:\n return f(*args, **kwargs)\n\n with hub.push_scope() as scope:\n scope._name = \"celery\"\n scope.add_event_processor(_make_event_processor(task, *args, **kwargs))\n\n return f(*args, **kwargs)\n\n return _inner\n\n\ndef _wrap_task_call(f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. 
Also celery's reported stacktrace is untrustworthy.\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception:\n reraise(*_capture_exception())\n\n return _inner\n\n\ndef _make_event_processor(task, uuid, args, kwargs, request=None):\n def event_processor(event, hint):\n with capture_internal_exceptions():\n event[\"transaction\"] = task.name\n\n with capture_internal_exceptions():\n extra = event.setdefault(\"extra\", {})\n extra[\"celery-job\"] = {\n \"task_name\": task.name,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n\n if \"exc_info\" in hint:\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(task, \"name\", task),\n ]\n\n with capture_internal_exceptions():\n if hasattr(task, \"throws\") and isinstance(\n hint[\"exc_info\"][1], task.throws\n ):\n return None\n\n return event\n\n return event_processor\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(CeleryIntegration) is not None:\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/celery.py"}]}
| 2,078 | 305 |
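For the open question in the record above — filtering retry noise client-side — one option the SDK already exposes is a `before_send` hook. A minimal sketch, assuming the retry surfaces as `celery.exceptions.Retry` in the event hint (which is not guaranteed for every code path):

```python
import sentry_sdk
from celery.exceptions import Retry


def drop_retries(event, hint):
    # If the event carries exception info and the exception is a Celery
    # retry, discard the event instead of reporting it.
    if "exc_info" in hint:
        _exc_type, exc_value, _tb = hint["exc_info"]
        if isinstance(exc_value, Retry):
            return None
    return event


# The DSN below is a placeholder.
sentry_sdk.init(dsn="https://examplekey@sentry.example.com/1",
                before_send=drop_retries)
```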
gh_patches_debug_30509
|
rasdani/github-patches
|
git_diff
|
pytorch__vision-5560
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve the accuracy of Classification models by using SOTA recipes and primitives
## 🚀 Feature
Update the weights of all pre-trained models to improve their accuracy.
## Motivation
<h4 id="new-recipe-with-fixres">New Recipe + FixRes mitigations</h4>
```
torchrun --nproc_per_node=8 train.py --model $MODEL_NAME --batch-size 128 --lr 0.5 \
--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \
--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002 \
--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \
--train-crop-size 176 --model-ema --val-resize-size 232
```
Using a recipe which includes Warmup, Cosine Annealing, Label Smoothing, Mixup, Cutmix, Random Erasing, TrivialAugment, No BN weight decay, EMA and long training cycles, and optional FixRes mitigations, we are able to improve the `resnet50` accuracy by over 4.5 points. For more information on the training recipe, check [here](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/):
```
Old ResNet50:
Acc@1 76.130 Acc@5 92.862
New ResNet50:
Acc@1 80.674 Acc@5 95.166
```
Running other models through the same recipe, achieves the following improved accuracies:
```
ResNet101:
Acc@1 81.728 Acc@5 95.670
ResNet152:
Acc@1 82.042 Acc@5 95.926
ResNeXt50_32x4d:
Acc@1 81.116 Acc@5 95.478
ResNeXt101_32x8d:
Acc@1 82.834 Acc@5 96.228
MobileNetV3 Large:
Acc@1 74.938 Acc@5 92.496
Wide ResNet50 2:
Acc@1 81.602 Acc@5 95.758 (@prabhat00155)
Wide ResNet101 2:
Acc@1 82.492 Acc@5 96.110 (@prabhat00155)
regnet_x_400mf:
Acc@1 74.864 Acc@5 92.322 (@kazhang)
regnet_x_800mf:
Acc@1 77.522 Acc@5 93.826 (@kazhang)
regnet_x_1_6gf:
Acc@1 79.668 Acc@5 94.922 (@kazhang)
```
<h4 id="new-recipe">New Recipe (without FixRes mitigations)</h4>
```
torchrun --nproc_per_node=8 train.py --model $MODEL_NAME --batch-size 128 --lr 0.5 \
--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \
--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002 \
--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \
--model-ema --val-resize-size 232
```
Removing the optional FixRes mitigations seems to yield better results for some deeper architectures and variants with larger receptive fields:
```
ResNet101:
Acc@1 81.886 Acc@5 95.780
ResNet152:
Acc@1 82.284 Acc@5 96.002
ResNeXt50_32x4d:
Acc@1 81.198 Acc@5 95.340
ResNeXt101_32x8d:
Acc@1 82.812 Acc@5 96.226
MobileNetV3 Large:
Acc@1 75.152 Acc@5 92.634
Wide ResNet50_2:
Acc@1 81.452 Acc@5 95.544 (@prabhat00155)
Wide ResNet101_2:
Acc@1 82.510 Acc@5 96.020 (@prabhat00155)
regnet_x_3_2gf:
Acc@1 81.196 Acc@5 95.430
regnet_x_8gf:
Acc@1 81.682 Acc@5 95.678
regnet_x_16g:
Acc@1 82.716 Acc@5 96.196
regnet_x_32gf:
Acc@1 83.014 Acc@5 96.288
regnet_y_400mf:
Acc@1 75.804 Acc@5 92.742
regnet_y_800mf:
Acc@1 78.828 Acc@5 94.502
regnet_y_1_6gf:
Acc@1 80.876 Acc@5 95.444
regnet_y_3_2gf:
Acc@1 81.982 Acc@5 95.972
regnet_y_8gf:
Acc@1 82.828 Acc@5 96.330
regnet_y_16gf:
Acc@1 82.886 Acc@5 96.328
regnet_y_32gf:
Acc@1 83.368 Acc@5 96.498
```
<h4 id="new-recipe-with-reg-tuning">New Recipe + Regularization tuning</h4>
```
torchrun --nproc_per_node=8 train.py --model $MODEL_NAME --batch-size 128 --lr 0.5 \
--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \
--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00001 \
--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \
--model-ema --val-resize-size 232
```
Slightly adjusting the regularization can help us improve the following:
```
MobileNetV3 Large:
Acc@1 75.274 Acc@5 92.566
```
In addition to the regularization adjustment, we can also apply the Repeated Augmentation trick `--ra-sampler --ra-reps 4`:
```
MobileNetV2:
Acc@1 72.154 Acc@5 90.822
```
<h4 id="ptq-models">Post-Training Quantized models</h4>
```
ResNet50:
Acc@1 80.282 Acc@5 94.976
ResNeXt101_32x8d:
Acc@1 82.574 Acc@5 96.132
```
<h4 id="new-recipe-with-lr-wd-crop-tuning">New Recipe (LR+weight_decay+train_crop_size tuning)</h4>
```
torchrun --ngpus 8 --nodes 1 --model $MODEL_NAME --batch-size 128 --lr 1 \
--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \
--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.000002 \
--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \
--train-crop-size 208 --model-ema --val-crop-size 240 --val-resize-size 255
```
```
EfficientNet-B1:
Acc@1 79.838 Acc@5 94.934
```
## Pitch
To be able to improve the pre-trained model accuracy, we need to complete the "Batteries Included" work described at #3911. Moreover, we will need to extend our existing model builders to support multiple weights as described at #4611. Then we will be able to:
- Update our reference scripts for classification to support the new primitives added by the "Batteries Included" initiative.
- Find a good training recipe for the most important pre-trained models and re-train them. Note that different training configurations might be required for different types of models (for example, mobile models are less likely to overfit compared to bigger models and thus make use of different recipes/primitives).
- Update the weights of the models in the library.
cc @datumbox @vfdev-5
</issue>
<code>
[start of torchvision/prototype/models/mobilenetv2.py]
1 from functools import partial
2 from typing import Any, Optional
3
4 from torchvision.prototype.transforms import ImageNetEval
5 from torchvision.transforms.functional import InterpolationMode
6
7 from ...models.mobilenetv2 import MobileNetV2
8 from ._api import WeightsEnum, Weights
9 from ._meta import _IMAGENET_CATEGORIES
10 from ._utils import handle_legacy_interface, _ovewrite_named_param
11
12
13 __all__ = ["MobileNetV2", "MobileNet_V2_Weights", "mobilenet_v2"]
14
15
16 class MobileNet_V2_Weights(WeightsEnum):
17 IMAGENET1K_V1 = Weights(
18 url="https://download.pytorch.org/models/mobilenet_v2-b0353104.pth",
19 transforms=partial(ImageNetEval, crop_size=224),
20 meta={
21 "task": "image_classification",
22 "architecture": "MobileNetV2",
23 "publication_year": 2018,
24 "num_params": 3504872,
25 "size": (224, 224),
26 "min_size": (1, 1),
27 "categories": _IMAGENET_CATEGORIES,
28 "interpolation": InterpolationMode.BILINEAR,
29 "recipe": "https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv2",
30 "acc@1": 71.878,
31 "acc@5": 90.286,
32 },
33 )
34 DEFAULT = IMAGENET1K_V1
35
36
37 @handle_legacy_interface(weights=("pretrained", MobileNet_V2_Weights.IMAGENET1K_V1))
38 def mobilenet_v2(
39 *, weights: Optional[MobileNet_V2_Weights] = None, progress: bool = True, **kwargs: Any
40 ) -> MobileNetV2:
41 weights = MobileNet_V2_Weights.verify(weights)
42
43 if weights is not None:
44 _ovewrite_named_param(kwargs, "num_classes", len(weights.meta["categories"]))
45
46 model = MobileNetV2(**kwargs)
47
48 if weights is not None:
49 model.load_state_dict(weights.get_state_dict(progress=progress))
50
51 return model
52
[end of torchvision/prototype/models/mobilenetv2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchvision/prototype/models/mobilenetv2.py b/torchvision/prototype/models/mobilenetv2.py
--- a/torchvision/prototype/models/mobilenetv2.py
+++ b/torchvision/prototype/models/mobilenetv2.py
@@ -13,25 +13,40 @@
__all__ = ["MobileNetV2", "MobileNet_V2_Weights", "mobilenet_v2"]
+_COMMON_META = {
+ "task": "image_classification",
+ "architecture": "MobileNetV2",
+ "publication_year": 2018,
+ "num_params": 3504872,
+ "size": (224, 224),
+ "min_size": (1, 1),
+ "categories": _IMAGENET_CATEGORIES,
+ "interpolation": InterpolationMode.BILINEAR,
+}
+
+
class MobileNet_V2_Weights(WeightsEnum):
IMAGENET1K_V1 = Weights(
url="https://download.pytorch.org/models/mobilenet_v2-b0353104.pth",
transforms=partial(ImageNetEval, crop_size=224),
meta={
- "task": "image_classification",
- "architecture": "MobileNetV2",
- "publication_year": 2018,
- "num_params": 3504872,
- "size": (224, 224),
- "min_size": (1, 1),
- "categories": _IMAGENET_CATEGORIES,
- "interpolation": InterpolationMode.BILINEAR,
+ **_COMMON_META,
"recipe": "https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv2",
"acc@1": 71.878,
"acc@5": 90.286,
},
)
- DEFAULT = IMAGENET1K_V1
+ IMAGENET1K_V2 = Weights(
+ url="https://download.pytorch.org/models/mobilenet_v2-7ebf99e0.pth",
+ transforms=partial(ImageNetEval, crop_size=224, resize_size=232),
+ meta={
+ **_COMMON_META,
+ "recipe": "https://github.com/pytorch/vision/issues/3995#new-recipe-with-reg-tuning",
+ "acc@1": 72.154,
+ "acc@5": 90.822,
+ },
+ )
+ DEFAULT = IMAGENET1K_V2
@handle_legacy_interface(weights=("pretrained", MobileNet_V2_Weights.IMAGENET1K_V1))
|
{"golden_diff": "diff --git a/torchvision/prototype/models/mobilenetv2.py b/torchvision/prototype/models/mobilenetv2.py\n--- a/torchvision/prototype/models/mobilenetv2.py\n+++ b/torchvision/prototype/models/mobilenetv2.py\n@@ -13,25 +13,40 @@\n __all__ = [\"MobileNetV2\", \"MobileNet_V2_Weights\", \"mobilenet_v2\"]\n \n \n+_COMMON_META = {\n+ \"task\": \"image_classification\",\n+ \"architecture\": \"MobileNetV2\",\n+ \"publication_year\": 2018,\n+ \"num_params\": 3504872,\n+ \"size\": (224, 224),\n+ \"min_size\": (1, 1),\n+ \"categories\": _IMAGENET_CATEGORIES,\n+ \"interpolation\": InterpolationMode.BILINEAR,\n+}\n+\n+\n class MobileNet_V2_Weights(WeightsEnum):\n IMAGENET1K_V1 = Weights(\n url=\"https://download.pytorch.org/models/mobilenet_v2-b0353104.pth\",\n transforms=partial(ImageNetEval, crop_size=224),\n meta={\n- \"task\": \"image_classification\",\n- \"architecture\": \"MobileNetV2\",\n- \"publication_year\": 2018,\n- \"num_params\": 3504872,\n- \"size\": (224, 224),\n- \"min_size\": (1, 1),\n- \"categories\": _IMAGENET_CATEGORIES,\n- \"interpolation\": InterpolationMode.BILINEAR,\n+ **_COMMON_META,\n \"recipe\": \"https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv2\",\n \"acc@1\": 71.878,\n \"acc@5\": 90.286,\n },\n )\n- DEFAULT = IMAGENET1K_V1\n+ IMAGENET1K_V2 = Weights(\n+ url=\"https://download.pytorch.org/models/mobilenet_v2-7ebf99e0.pth\",\n+ transforms=partial(ImageNetEval, crop_size=224, resize_size=232),\n+ meta={\n+ **_COMMON_META,\n+ \"recipe\": \"https://github.com/pytorch/vision/issues/3995#new-recipe-with-reg-tuning\",\n+ \"acc@1\": 72.154,\n+ \"acc@5\": 90.822,\n+ },\n+ )\n+ DEFAULT = IMAGENET1K_V2\n \n \n @handle_legacy_interface(weights=(\"pretrained\", MobileNet_V2_Weights.IMAGENET1K_V1))\n", "issue": "Improve the accuracy of Classification models by using SOTA recipes and primitives\n## \ud83d\ude80 Feature\r\nUpdate the weights of all pre-trained models to improve their accuracy.\r\n\r\n## Motivation\r\n\r\n<h4 id=\"new-recipe-with-fixres\">New Recipe + FixRes mitigations</h4>\r\n\r\n```\r\ntorchrun --nproc_per_node=8 train.py --model $MODEL_NAME --batch-size 128 --lr 0.5 \\\r\n--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \\\r\n--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002 \\\r\n--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \\\r\n--train-crop-size 176 --model-ema --val-resize-size 232\r\n```\r\n\r\nUsing a recipe which includes Warmup, Cosine Annealing, Label Smoothing, Mixup, Cutmix, Random Erasing, TrivialAugment, No BN weight decay, EMA and long training cycles and optional FixRes mitigations we are able to improve the `resnet50` accuracy by over 4.5 points. 
For more information on the training recipe, check [here](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/):\r\n```\r\nOld ResNet50:\r\nAcc@1 76.130 Acc@5 92.862\r\n\r\nNew ResNet50:\r\nAcc@1 80.674 Acc@5 95.166\r\n```\r\n\r\nRunning other models through the same recipe, achieves the following improved accuracies:\r\n```\r\nResNet101:\r\nAcc@1 81.728 Acc@5 95.670\r\n\r\nResNet152:\r\nAcc@1 82.042 Acc@5 95.926\r\n\r\nResNeXt50_32x4d:\r\nAcc@1 81.116 Acc@5 95.478\r\n\r\nResNeXt101_32x8d:\r\nAcc@1 82.834 Acc@5 96.228\r\n\r\nMobileNetV3 Large:\r\nAcc@1 74.938 Acc@5 92.496\r\n\r\nWide ResNet50 2:\r\nAcc@1 81.602 Acc@5 95.758 (@prabhat00155)\r\n\r\nWide ResNet101 2:\r\nAcc@1 82.492 Acc@5 96.110 (@prabhat00155)\r\n\r\nregnet_x_400mf:\r\nAcc@1 74.864 Acc@5 92.322 (@kazhang)\r\n\r\nregnet_x_800mf:\r\nAcc@1 77.522 Acc@5 93.826 (@kazhang)\r\n\r\nregnet_x_1_6gf:\r\nAcc@1 79.668 Acc@5 94.922 (@kazhang)\r\n```\r\n\r\n<h4 id=\"new-recipe\">New Recipe (without FixRes mitigations)</h4>\r\n\r\n```\r\ntorchrun --nproc_per_node=8 train.py --model $MODEL_NAME --batch-size 128 --lr 0.5 \\\r\n--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \\\r\n--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002 \\\r\n--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \\\r\n--model-ema --val-resize-size 232\r\n```\r\n\r\nRemoving the optional FixRes mitigations seems to yield better results for some deeper architectures and variants with larger receptive fields:\r\n```\r\nResNet101:\r\nAcc@1 81.886 Acc@5 95.780\r\n\r\nResNet152:\r\nAcc@1 82.284 Acc@5 96.002\r\n\r\nResNeXt50_32x4d:\r\nAcc@1 81.198 Acc@5 95.340\r\n\r\nResNeXt101_32x8d:\r\nAcc@1 82.812 Acc@5 96.226\r\n\r\nMobileNetV3 Large:\r\nAcc@1 75.152 Acc@5 92.634\r\n\r\nWide ResNet50_2:\r\nAcc@1 81.452 Acc@5 95.544 (@prabhat00155)\r\n\r\nWide ResNet101_2:\r\nAcc@1 82.510 Acc@5 96.020 (@prabhat00155)\r\n\r\nregnet_x_3_2gf:\r\nAcc@1 81.196 Acc@5 95.430\r\n\r\nregnet_x_8gf:\r\nAcc@1 81.682 Acc@5 95.678\r\n\r\nregnet_x_16g:\r\nAcc@1 82.716 Acc@5 96.196\r\n\r\nregnet_x_32gf:\r\nAcc@1 83.014 Acc@5 96.288\r\n\r\nregnet_y_400mf:\r\nAcc@1 75.804 Acc@5 92.742\r\n\r\nregnet_y_800mf:\r\nAcc@1 78.828 Acc@5 94.502\r\n\r\nregnet_y_1_6gf:\r\nAcc@1 80.876 Acc@5 95.444\r\n\r\nregnet_y_3_2gf:\r\nAcc@1 81.982 Acc@5 95.972\r\n\r\nregnet_y_8gf:\r\nAcc@1 82.828 Acc@5 96.330\r\n\r\nregnet_y_16gf:\r\nAcc@1 82.886 Acc@5 96.328\r\n\r\nregnet_y_32gf:\r\nAcc@1 83.368 Acc@5 96.498\r\n```\r\n\r\n<h4 id=\"new-recipe-with-reg-tuning\">New Recipe + Regularization tuning</h4>\r\n\r\n```\r\ntorchrun --nproc_per_node=8 train.py --model $MODEL_NAME --batch-size 128 --lr 0.5 \\\r\n--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \\\r\n--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00001 \\\r\n--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \\\r\n--model-ema --val-resize-size 232\r\n```\r\n\r\nAdjusting slightly the regularization can help us improve the following:\r\n```\r\nMobileNetV3 Large:\r\nAcc@1 75.274 Acc@5 92.566\r\n```\r\nIn addition to regularization adjustment we can also apply the Repeated Augmentation trick ` --ra-sampler --ra-reps 4`:\r\n\r\n```\r\nMobileNetV2:\r\nAcc@1 72.154 Acc@5 90.822\r\n```\r\n\r\n<h4 id=\"ptq-models\">Post-Training Quantized models</h4>\r\n\r\n```\r\nResNet50:\r\nAcc@1 80.282 Acc@5 94.976\r\n\r\nResNeXt101_32x8d:\r\nAcc@1 82.574 Acc@5 
96.132\r\n```\r\n\r\n<h4 id=\"new-recipe-with-lr-wd-crop-tuning\">New Recipe (LR+weight_decay+train_crop_size tuning)</h4>\r\n\r\n```\r\ntorchrun --ngpus 8 --nodes 1 --model $MODEL_NAME --batch-size 128 --lr 1 \\\r\n--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \\\r\n--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.000002 \\\r\n--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \\\r\n--train-crop-size 208 --model-ema --val-crop-size 240 --val-resize-size 255\r\n```\r\n\r\n```\r\nEfficientNet-B1:\r\nAcc@1 79.838 Acc@5 94.934\r\n```\r\n\r\n## Pitch\r\n\r\nTo be able to improve the pre-trained model accuracy, we need to complete the \"Batteries Included\" work as #3911. Moreover we will need to extend our existing model builders to support multiple weights as described at #4611. Then we will be able to:\r\n- Update our reference scripts for classification to support the new primitives added by the \"Batteries Included\" initiative.\r\n- Find a good training recipe for the most important pre-trained models and re-train them. Note that different training configuration might be required for different types of models (for example mobile models are less likely to overfit comparing to bigger models and thus make use of different recipes/primitives)\r\n- Update the weights of the models in the library.\r\n\r\ncc @datumbox @vfdev-5\n", "before_files": [{"content": "from functools import partial\nfrom typing import Any, Optional\n\nfrom torchvision.prototype.transforms import ImageNetEval\nfrom torchvision.transforms.functional import InterpolationMode\n\nfrom ...models.mobilenetv2 import MobileNetV2\nfrom ._api import WeightsEnum, Weights\nfrom ._meta import _IMAGENET_CATEGORIES\nfrom ._utils import handle_legacy_interface, _ovewrite_named_param\n\n\n__all__ = [\"MobileNetV2\", \"MobileNet_V2_Weights\", \"mobilenet_v2\"]\n\n\nclass MobileNet_V2_Weights(WeightsEnum):\n IMAGENET1K_V1 = Weights(\n url=\"https://download.pytorch.org/models/mobilenet_v2-b0353104.pth\",\n transforms=partial(ImageNetEval, crop_size=224),\n meta={\n \"task\": \"image_classification\",\n \"architecture\": \"MobileNetV2\",\n \"publication_year\": 2018,\n \"num_params\": 3504872,\n \"size\": (224, 224),\n \"min_size\": (1, 1),\n \"categories\": _IMAGENET_CATEGORIES,\n \"interpolation\": InterpolationMode.BILINEAR,\n \"recipe\": \"https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv2\",\n \"acc@1\": 71.878,\n \"acc@5\": 90.286,\n },\n )\n DEFAULT = IMAGENET1K_V1\n\n\n@handle_legacy_interface(weights=(\"pretrained\", MobileNet_V2_Weights.IMAGENET1K_V1))\ndef mobilenet_v2(\n *, weights: Optional[MobileNet_V2_Weights] = None, progress: bool = True, **kwargs: Any\n) -> MobileNetV2:\n weights = MobileNet_V2_Weights.verify(weights)\n\n if weights is not None:\n _ovewrite_named_param(kwargs, \"num_classes\", len(weights.meta[\"categories\"]))\n\n model = MobileNetV2(**kwargs)\n\n if weights is not None:\n model.load_state_dict(weights.get_state_dict(progress=progress))\n\n return model\n", "path": "torchvision/prototype/models/mobilenetv2.py"}]}
| 3,335 | 637 |
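Once the weights enum above gains `IMAGENET1K_V2`, selecting between the old and the retrained weights becomes explicit at the call site. A sketch against the prototype builder shown in this record (not an official usage snippet):

```python
from torchvision.prototype.models.mobilenetv2 import (
    MobileNet_V2_Weights,
    mobilenet_v2,
)

# Retrained weights added by the patch (acc@1 72.154 on ImageNet).
model_new = mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V2)

# The original weights remain available; the legacy `pretrained=True`
# path still resolves to IMAGENET1K_V1 via handle_legacy_interface.
model_old = mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V1)
```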
gh_patches_debug_997
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-841
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use correct test client parameters when sending json body payload
## Problem
Currently, when sending a request containing a JSON payload using the Django REST framework test client, the payload is being converted into a string using `json.dumps`, but the Django REST framework provides convenience parameters that do this automatically.
## Proposed solution
Use the `format` parameter of the DRF test client, or set the default payload format in the DRF settings, so that the test client can handle the conversion automatically.
</issue>
<code>
[start of config/settings.py]
1 """
2 Django settings for config project.
3
4 Generated by 'django-admin startproject' using Django 3.1.7.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.1/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.1/ref/settings/
11 """
12
13 import os
14 from pathlib import Path
15
16 from decouple import Csv, config as decouple_config
17 from dj_database_url import parse as db_url
18
19
20 # We use a 'tuple' with pipes as delimiters as decople naively splits the global
21 # variables on commas when casting to Csv()
22 def pipe_delim(pipe_string):
23 # Remove opening and closing brackets
24 pipe_string = pipe_string[1:-1]
25 # Split on pipe delim
26 return pipe_string.split("|")
27
28
29 # Build paths inside the project like this: BASE_DIR / 'subdir'.
30 BASE_DIR = Path(__file__).resolve().parent.parent
31
32 # Application definition
33
34 INSTALLED_APPS = [
35 "django.contrib.admin",
36 "django.contrib.auth",
37 "django.contrib.contenttypes",
38 "django.contrib.sessions",
39 "django.contrib.messages",
40 "django.contrib.staticfiles",
41 "rest_framework",
42 "django_filters",
43 "django_property_filter",
44 "mathesar",
45 ]
46
47 MIDDLEWARE = [
48 "django.middleware.security.SecurityMiddleware",
49 "django.contrib.sessions.middleware.SessionMiddleware",
50 "django.middleware.common.CommonMiddleware",
51 "django.middleware.csrf.CsrfViewMiddleware",
52 "django.contrib.auth.middleware.AuthenticationMiddleware",
53 "django.contrib.messages.middleware.MessageMiddleware",
54 "django.middleware.clickjacking.XFrameOptionsMiddleware",
55 ]
56
57 ROOT_URLCONF = "config.urls"
58
59 TEMPLATES = [
60 {
61 "BACKEND": "django.template.backends.django.DjangoTemplates",
62 "DIRS": [],
63 "APP_DIRS": True,
64 "OPTIONS": {
65 "context_processors": [
66 "config.context_processors.frontend_settings",
67 "django.template.context_processors.debug",
68 "django.template.context_processors.request",
69 "django.contrib.auth.context_processors.auth",
70 "django.contrib.messages.context_processors.messages",
71 ],
72 },
73 },
74 ]
75
76 WSGI_APPLICATION = "config.wsgi.application"
77
78 # Database
79 # https://docs.djangoproject.com/en/3.1/ref/settings/#databases
80
81 # TODO: Add to documentation that database keys should not be than 128 characters.
82
83 # MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'
84 # See pipe_delim above for why we use pipes as delimiters
85 DATABASES = {
86 db_key: db_url(url_string)
87 for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))
88 }
89 DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)
90
91 for db_key, db_dict in DATABASES.items():
92 # Engine can be '.postgresql' or '.postgresql_psycopg2'
93 if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):
94 raise ValueError(
95 f"{db_key} is not a PostgreSQL database. "
96 f"{db_dict['ENGINE']} found for {db_key}'s engine."
97 )
98
99
100 # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
101 # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
102 if decouple_config('TEST', default=False, cast=bool):
103 for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):
104 DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}
105
106
107 # Quick-start development settings - unsuitable for production
108 # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
109
110 # SECURITY WARNING: keep the secret key used in production secret!
111 SECRET_KEY = decouple_config('SECRET_KEY')
112
113 # SECURITY WARNING: don't run with debug turned on in production!
114 DEBUG = decouple_config('DEBUG', default=False, cast=bool)
115
116 ALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())
117
118 # Password validation
119 # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
120
121 AUTH_PASSWORD_VALIDATORS = [
122 {
123 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
124 },
125 {
126 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
127 },
128 {
129 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
130 },
131 {
132 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
133 },
134 ]
135
136
137 # Internationalization
138 # https://docs.djangoproject.com/en/3.1/topics/i18n/
139
140 LANGUAGE_CODE = "en-us"
141
142 TIME_ZONE = "UTC"
143
144 USE_I18N = True
145
146 USE_L10N = True
147
148 USE_TZ = True
149
150
151 # Static files (CSS, JavaScript, Images)
152 # https://docs.djangoproject.com/en/3.1/howto/static-files/
153 # https://docs.djangoproject.com/en/3.1/ref/contrib/staticfiles/
154
155 STATIC_URL = "/static/"
156
157 # When running with DEBUG=False, the webserver needs to serve files from this location
158 # python manage.py collectstatic has to be run to collect all static files into this location
159 # The files need to served in brotli or gzip compressed format
160 STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
161
162 # Media files (uploaded by the user)
163
164 MEDIA_ROOT = os.path.join(BASE_DIR, '.media/')
165
166 MEDIA_URL = "/media/"
167
168 # Update Authentication classes, removed BasicAuthentication
169 # Defaults: https://www.django-rest-framework.org/api-guide/settings/
170 REST_FRAMEWORK = {
171 'DEFAULT_AUTHENTICATION_CLASSES': [
172 'rest_framework.authentication.SessionAuthentication'
173 ],
174 'DEFAULT_FILTER_BACKENDS': (
175 'django_filters.rest_framework.DjangoFilterBackend',
176 'rest_framework.filters.OrderingFilter',
177 ),
178 }
179
180 # Mathesar settings
181 MATHESAR_MODE = decouple_config('MODE', default='PRODUCTION')
182 MATHESAR_UI_BUILD_LOCATION = os.path.join(BASE_DIR, 'mathesar/static/mathesar/')
183 MATHESAR_MANIFEST_LOCATION = os.path.join(MATHESAR_UI_BUILD_LOCATION, 'manifest.json')
184 MATHESAR_CLIENT_DEV_URL = 'http://localhost:3000'
185
186
187 STATICFILES_DIRS = [MATHESAR_UI_BUILD_LOCATION]
188
[end of config/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/config/settings.py b/config/settings.py
--- a/config/settings.py
+++ b/config/settings.py
@@ -175,6 +175,7 @@
'django_filters.rest_framework.DjangoFilterBackend',
'rest_framework.filters.OrderingFilter',
),
+ 'TEST_REQUEST_DEFAULT_FORMAT': 'json',
}
# Mathesar settings
|
{"golden_diff": "diff --git a/config/settings.py b/config/settings.py\n--- a/config/settings.py\n+++ b/config/settings.py\n@@ -175,6 +175,7 @@\n 'django_filters.rest_framework.DjangoFilterBackend',\n 'rest_framework.filters.OrderingFilter',\n ),\n+ 'TEST_REQUEST_DEFAULT_FORMAT': 'json',\n }\n \n # Mathesar settings\n", "issue": "Use correct test client parameters when sending json body payload\n## Problem\r\nCurrently, When sending a request containing a json payload using the Django rest framework test client, the payload is being converted into a string using `json.dumps` but the Django rest framework provides convenience parameters that does this automatically.\r\n\r\n## Proposed solution\r\nUse the `format` parameter of the DRF test client or set the default payload format in the DRF settings, in order for the test client to be able to handle the conversion automatically\n", "before_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n\n# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n# variables on commas when casting to Csv()\ndef pipe_delim(pipe_string):\n # Remove opening and closing brackets\n pipe_string = pipe_string[1:-1]\n # Split on pipe delim\n return pipe_string.split(\"|\")\n\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"config.context_processors.frontend_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\n\n# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n# See pipe_delim above for why we use pipes as delimiters\nDATABASES = {\n db_key: db_url(url_string)\n for db_key, url_string in decouple_config('MATHESAR_DATABASES', 
cast=Csv(pipe_delim))\n}\nDATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n\nfor db_key, db_dict in DATABASES.items():\n # Engine can be '.postgresql' or '.postgresql_psycopg2'\n if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):\n raise ValueError(\n f\"{db_key} is not a PostgreSQL database. \"\n f\"{db_dict['ENGINE']} found for {db_key}'s engine.\"\n )\n\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n# https://docs.djangoproject.com/en/3.1/ref/contrib/staticfiles/\n\nSTATIC_URL = \"/static/\"\n\n# When running with DEBUG=False, the webserver needs to serve files from this location\n# python manage.py collectstatic has to be run to collect all static files into this location\n# The files need to served in brotli or gzip compressed format\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static/')\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n\n# Update Authentication classes, removed BasicAuthentication\n# Defaults: https://www.django-rest-framework.org/api-guide/settings/\nREST_FRAMEWORK = {\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework.authentication.SessionAuthentication'\n ],\n 'DEFAULT_FILTER_BACKENDS': (\n 'django_filters.rest_framework.DjangoFilterBackend',\n 'rest_framework.filters.OrderingFilter',\n ),\n}\n\n# Mathesar settings\nMATHESAR_MODE = decouple_config('MODE', default='PRODUCTION')\nMATHESAR_UI_BUILD_LOCATION = os.path.join(BASE_DIR, 'mathesar/static/mathesar/')\nMATHESAR_MANIFEST_LOCATION = os.path.join(MATHESAR_UI_BUILD_LOCATION, 'manifest.json')\nMATHESAR_CLIENT_DEV_URL = 'http://localhost:3000'\n\n\nSTATICFILES_DIRS = [MATHESAR_UI_BUILD_LOCATION]\n", "path": "config/settings.py"}]}
| 2,495 | 78 |
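The single-line fix above (`TEST_REQUEST_DEFAULT_FORMAT: 'json'`) is what lets the tests drop manual `json.dumps` calls. A sketch of the before/after with DRF's test client — the URL and payload here are hypothetical:

```python
import json

from rest_framework.test import APIClient

client = APIClient()
payload = {"name": "patents", "schema": 1}  # hypothetical request body

# Before: serialize by hand and set the content type explicitly.
client.post("/api/v0/tables/", data=json.dumps(payload),
            content_type="application/json")

# After: ask the client to render JSON per call ...
client.post("/api/v0/tables/", data=payload, format="json")

# ... or rely on the new default format and pass the dict directly.
client.post("/api/v0/tables/", data=payload)
```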
gh_patches_debug_4708
|
rasdani/github-patches
|
git_diff
|
biolab__orange3-text-360
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TheGuardianCredentials returns wrong valid property
##### Text version
0.3.0
##### Orange version
3.14
##### Expected behavior
``` python
credentials = TheGuardianCredentials('<your-api-key>')
print(credentials.valid)
```
Should correctly return whether the key is valid.
##### Actual behavior
If the given key exceeds the API limit, `credentials.valid` still returns True.
</issue>
<code>
[start of orangecontrib/text/guardian.py]
1 """ This module fetches data from The Guardian API.
2
3 To use first create :class:`TheGuardianCredentials`:
4
5 >>> from orangecontrib.text.guardian import TheGuardianCredentials
6 >>> credentials = TheGuardianCredentials('<your-api-key>')
7
8 Then create :class:`TheGuardianAPI` object and use it for searching:
9
10 >>> from orangecontrib.text.guardian import TheGuardianAPI
11 >>> api = TheGuardianAPI(credentials)
12 >>> corpus = api.search('Slovenia', max_documents=10)
13 >>> len(corpus)
14 10
15
16 """
17
18 import requests
19 import math
20 import json
21 import os
22
23 from Orange import data
24
25 from orangecontrib.text.corpus import Corpus
26
27
28 BASE_URL = 'http://content.guardianapis.com/search'
29 ARTICLES_PER_PAGE = 10
30
31
32 class TheGuardianCredentials:
33 """ The Guardian API credentials. """
34 def __init__(self, key):
35 """
36 Args:
37 key (str): The Guardian API key. Use `test` for testing purposes.
38 """
39 self.key = key
40
41 @property
42 def valid(self):
43 """ Check if given API key is valid. """
44 response = requests.get(BASE_URL, {'api-key': self.key})
45 return response.status_code != 403 # 403 == Forbidden
46
47 def __eq__(self, other):
48 return self.key == other.key
49
50
51 class TheGuardianAPI:
52 attributes = []
53
54 class_vars = [
55 (data.DiscreteVariable('Section'), lambda doc: doc['sectionName']),
56 ]
57
58 tv = data.TimeVariable('Publication Date')
59 metas = [
60 (data.StringVariable('Headline'), lambda doc: doc['fields']['headline']),
61 (data.StringVariable('Content'), lambda doc: doc['fields']['bodyText']),
62 (data.StringVariable('Trail Text'), lambda doc: doc['fields']['trailText']),
63 (data.StringVariable('HTML'), lambda doc: doc['fields']['body']),
64 (tv, lambda doc: TheGuardianAPI.tv.parse(doc['webPublicationDate'])),
65 (data.DiscreteVariable('Type'), lambda doc: doc['type']),
66 (data.DiscreteVariable('Language'), lambda doc: doc['fields']['lang']),
67 (data.StringVariable('Tags'),
68 lambda doc: ', '.join(tag['webTitle'] for tag in doc['tags'])),
69 (data.StringVariable('URL'), lambda doc: doc['webUrl']),
70 (data.ContinuousVariable('Word Count', number_of_decimals=0),
71 lambda doc: doc['fields']['wordcount']),
72 ]
73
74 text_features = [metas[0][0], metas[1][0]] # Headline + Content
75 title_indices = [-1] # Headline
76
77 def __init__(self, credentials, on_progress=None, should_break=None):
78 """
79 Args:
80 credentials (:class:`TheGuardianCredentials`): The Guardian Creentials.
81 on_progress (callable): Function for progress reporting.
82 should_break (callable): Function for early stopping.
83 """
84 self.per_page = ARTICLES_PER_PAGE
85 self.pages = 0
86 self.credentials = credentials
87 self.on_progress = on_progress or (lambda x, y: None)
88 self.should_break = should_break or (lambda: False)
89
90 self.results = []
91
92 def _search(self, query, from_date, to_date, page=1):
93 data = self._build_query(query, from_date, to_date, page)
94
95 response = requests.get(BASE_URL, data)
96 parsed = json.loads(response.text)
97
98 if page == 1: # store number of pages
99 self.pages = parsed['response']['pages']
100
101 self.results.extend(parsed['response']['results'])
102
103 def _build_query(self, query, from_date=None, to_date=None, page=1):
104 data = {
105 'q': query,
106 'api-key': self.credentials.key,
107 'page': str(page),
108 'show-fields': 'headline,trailText,body,bodyText,lang,wordcount',
109 'show-tags': 'all',
110 }
111 if from_date is not None:
112 data['from-date'] = from_date
113 if to_date is not None:
114 data['to-date'] = to_date
115
116 return data
117
118 def search(self, query, from_date=None, to_date=None, max_documents=None,
119 accumulate=False):
120 """
121 Search The Guardian API for articles.
122
123 Args:
124 query (str): A query for searching the articles by
125 from_date (str): Search only articles newer than the date provided.
126 Date should be in ISO format; e.g. '2016-12-31'.
127 to_date (str): Search only articles older than the date provided.
128 Date should be in ISO format; e.g. '2016-12-31'.
129 max_documents (int): Maximum number of documents to retrieve.
130 When not given, retrieve all documents.
131 accumulate (bool): A flag indicating whether to accumulate results
132 of multiple consequent search calls.
133
134 Returns:
135 :ref:`Corpus`
136 """
137 if not accumulate:
138 self.results = []
139
140 self._search(query, from_date, to_date)
141
142 pages = math.ceil(max_documents/self.per_page) if max_documents else self.pages
143 self.on_progress(self.per_page, pages * self.per_page)
144
145 for p in range(2, pages+1): # to one based
146 if self.should_break():
147 break
148 self._search(query, from_date, to_date, p)
149 self.on_progress(p*self.per_page, pages * self.per_page)
150
151 c = Corpus.from_documents(
152 self.results, 'The Guardian', self.attributes, self.class_vars,
153 self.metas, title_indices=self.title_indices)
154 c.text_features = self.text_features
155 return c
156
157
158 if __name__ == '__main__':
159 key = os.getenv('THE_GUARDIAN_API_KEY', 'test')
160 credentials = TheGuardianCredentials(key)
161 print(credentials.valid)
162 api = TheGuardianAPI(credentials=credentials)
163 c = api.search('refugees', max_documents=10)
164 print(c)
165
[end of orangecontrib/text/guardian.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/orangecontrib/text/guardian.py b/orangecontrib/text/guardian.py
--- a/orangecontrib/text/guardian.py
+++ b/orangecontrib/text/guardian.py
@@ -42,7 +42,7 @@
def valid(self):
""" Check if given API key is valid. """
response = requests.get(BASE_URL, {'api-key': self.key})
- return response.status_code != 403 # 403 == Forbidden
+ return response.status_code == 200
def __eq__(self, other):
return self.key == other.key
|
{"golden_diff": "diff --git a/orangecontrib/text/guardian.py b/orangecontrib/text/guardian.py\n--- a/orangecontrib/text/guardian.py\n+++ b/orangecontrib/text/guardian.py\n@@ -42,7 +42,7 @@\n def valid(self):\n \"\"\" Check if given API key is valid. \"\"\"\n response = requests.get(BASE_URL, {'api-key': self.key})\n- return response.status_code != 403 # 403 == Forbidden\n+ return response.status_code == 200\n \n def __eq__(self, other):\n return self.key == other.key\n", "issue": "TheGuardianCredentials returns wrong valid property\n##### Text version\r\n0.3.0\r\n##### Orange version\r\n3.14\r\n##### Expected behavior\r\n``` python\r\ncredentials = TheGuardianCredentials('<your-api-key>')\r\nprint(credentials.valid)\r\n```\r\nShould correctly return if key is valid\r\n\r\n##### Actual behavior\r\nIf the given key exceeds the API limit `credentials.valid` still returns True\r\n\r\n\n", "before_files": [{"content": "\"\"\" This module fetches data from The Guardian API.\n\nTo use first create :class:`TheGuardianCredentials`:\n\n >>> from orangecontrib.text.guardian import TheGuardianCredentials\n >>> credentials = TheGuardianCredentials('<your-api-key>')\n\nThen create :class:`TheGuardianAPI` object and use it for searching:\n\n >>> from orangecontrib.text.guardian import TheGuardianAPI\n >>> api = TheGuardianAPI(credentials)\n >>> corpus = api.search('Slovenia', max_documents=10)\n >>> len(corpus)\n 10\n\n\"\"\"\n\nimport requests\nimport math\nimport json\nimport os\n\nfrom Orange import data\n\nfrom orangecontrib.text.corpus import Corpus\n\n\nBASE_URL = 'http://content.guardianapis.com/search'\nARTICLES_PER_PAGE = 10\n\n\nclass TheGuardianCredentials:\n \"\"\" The Guardian API credentials. \"\"\"\n def __init__(self, key):\n \"\"\"\n Args:\n key (str): The Guardian API key. Use `test` for testing purposes.\n \"\"\"\n self.key = key\n\n @property\n def valid(self):\n \"\"\" Check if given API key is valid. 
\"\"\"\n response = requests.get(BASE_URL, {'api-key': self.key})\n return response.status_code != 403 # 403 == Forbidden\n\n def __eq__(self, other):\n return self.key == other.key\n\n\nclass TheGuardianAPI:\n attributes = []\n\n class_vars = [\n (data.DiscreteVariable('Section'), lambda doc: doc['sectionName']),\n ]\n\n tv = data.TimeVariable('Publication Date')\n metas = [\n (data.StringVariable('Headline'), lambda doc: doc['fields']['headline']),\n (data.StringVariable('Content'), lambda doc: doc['fields']['bodyText']),\n (data.StringVariable('Trail Text'), lambda doc: doc['fields']['trailText']),\n (data.StringVariable('HTML'), lambda doc: doc['fields']['body']),\n (tv, lambda doc: TheGuardianAPI.tv.parse(doc['webPublicationDate'])),\n (data.DiscreteVariable('Type'), lambda doc: doc['type']),\n (data.DiscreteVariable('Language'), lambda doc: doc['fields']['lang']),\n (data.StringVariable('Tags'),\n lambda doc: ', '.join(tag['webTitle'] for tag in doc['tags'])),\n (data.StringVariable('URL'), lambda doc: doc['webUrl']),\n (data.ContinuousVariable('Word Count', number_of_decimals=0),\n lambda doc: doc['fields']['wordcount']),\n ]\n\n text_features = [metas[0][0], metas[1][0]] # Headline + Content\n title_indices = [-1] # Headline\n\n def __init__(self, credentials, on_progress=None, should_break=None):\n \"\"\"\n Args:\n credentials (:class:`TheGuardianCredentials`): The Guardian Creentials.\n on_progress (callable): Function for progress reporting.\n should_break (callable): Function for early stopping.\n \"\"\"\n self.per_page = ARTICLES_PER_PAGE\n self.pages = 0\n self.credentials = credentials\n self.on_progress = on_progress or (lambda x, y: None)\n self.should_break = should_break or (lambda: False)\n\n self.results = []\n\n def _search(self, query, from_date, to_date, page=1):\n data = self._build_query(query, from_date, to_date, page)\n\n response = requests.get(BASE_URL, data)\n parsed = json.loads(response.text)\n\n if page == 1: # store number of pages\n self.pages = parsed['response']['pages']\n\n self.results.extend(parsed['response']['results'])\n\n def _build_query(self, query, from_date=None, to_date=None, page=1):\n data = {\n 'q': query,\n 'api-key': self.credentials.key,\n 'page': str(page),\n 'show-fields': 'headline,trailText,body,bodyText,lang,wordcount',\n 'show-tags': 'all',\n }\n if from_date is not None:\n data['from-date'] = from_date\n if to_date is not None:\n data['to-date'] = to_date\n\n return data\n\n def search(self, query, from_date=None, to_date=None, max_documents=None,\n accumulate=False):\n \"\"\"\n Search The Guardian API for articles.\n\n Args:\n query (str): A query for searching the articles by\n from_date (str): Search only articles newer than the date provided.\n Date should be in ISO format; e.g. '2016-12-31'.\n to_date (str): Search only articles older than the date provided.\n Date should be in ISO format; e.g. 
'2016-12-31'.\n max_documents (int): Maximum number of documents to retrieve.\n When not given, retrieve all documents.\n accumulate (bool): A flag indicating whether to accumulate results\n of multiple consequent search calls.\n\n Returns:\n :ref:`Corpus`\n \"\"\"\n if not accumulate:\n self.results = []\n\n self._search(query, from_date, to_date)\n\n pages = math.ceil(max_documents/self.per_page) if max_documents else self.pages\n self.on_progress(self.per_page, pages * self.per_page)\n\n for p in range(2, pages+1): # to one based\n if self.should_break():\n break\n self._search(query, from_date, to_date, p)\n self.on_progress(p*self.per_page, pages * self.per_page)\n\n c = Corpus.from_documents(\n self.results, 'The Guardian', self.attributes, self.class_vars,\n self.metas, title_indices=self.title_indices)\n c.text_features = self.text_features\n return c\n\n\nif __name__ == '__main__':\n key = os.getenv('THE_GUARDIAN_API_KEY', 'test')\n credentials = TheGuardianCredentials(key)\n print(credentials.valid)\n api = TheGuardianAPI(credentials=credentials)\n c = api.search('refugees', max_documents=10)\n print(c)\n", "path": "orangecontrib/text/guardian.py"}]}
| 2,355 | 136 |
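The behavioural difference in the patch above is easiest to see in terms of status codes: a key that has exceeded its quota typically gets a 429 response, which the old `!= 403` check still counted as valid. A small sketch (the key is a placeholder):

```python
import requests

BASE_URL = 'http://content.guardianapis.com/search'

response = requests.get(BASE_URL, {'api-key': '<your-api-key>'})

old_valid = response.status_code != 403  # 429 (rate limited) slips through as "valid"
new_valid = response.status_code == 200  # only a key that actually works passes
```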
gh_patches_debug_7540
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-3544
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Request Feature: Request for cloud Trail events for Lambda
Hello ,
The following policy is giving me the error :2018-12-18 14:24:39,580: custodian.commands:ERROR Policy: lambda-tag-compliance is invalid: event shortcut not defined: CreateFunction
policy ---
```
- name: lambda-tag-compliance
resource: lambda
mode:
type: cloudtrail #### cloud trail not possible
role: arn:aws:iam::acctnumber:role/acctname
events:
- CreateFunction
filters:
- "tag:custodian": absent
```
I changed the policy to the following based on the cc docs and then got the following error:
```
- name: lambda-tag-compliance
resource: lambda
mode:
type: cloudtrail #### cloud trail not possible
role: arn:aws:iam::acctnum:role/acctname
event: CreateFunction
ids: "requestParameters.functionName"
filters:
- "tag:custodian": absent
```
error ---- 2018-12-18 14:33:41,697: custodian.commands:ERROR Configuration invalid: Policy.yml
2018-12-18 14:33:41,704: custodian.commands:ERROR {'type': 'cloudtrail', 'role': 'arn:aws:iam::acctnum:role/acctname', 'event': 'CreateFunction', 'ids': 'requestParameters.functionName'} is not valid under any of the given schemas
Can you add the shortcut to cloud-custodian/c7n/cwe.py ... or is there anything else that we can do to resolve this?
Thank you
</issue>
<code>
[start of c7n/cwe.py]
1 # Copyright 2016-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import jmespath
17 import six
18
19
20 class CloudWatchEvents(object):
21 """A mapping of events to resource types."""
22
23 # **These are just shortcuts**, you can use the policy definition to
24 # subscribe to any arbitrary cloud trail event that corresponds to
25 # a custodian resource.
26
27 # For common events that we want to match, just keep a short mapping.
28 # Users can specify arbitrary cloud watch events by specifying these
29 # values in their config, but keep the common case simple.
30
31 trail_events = {
32 # event source, resource type as keys, mapping to api call and
33 # jmespath expression
34 'ConsoleLogin': {
35 'ids': 'userIdentity.arn',
36 'source': 'signin.amazonaws.com'},
37
38 'CreateAutoScalingGroup': {
39 'ids': 'requestParameters.autoScalingGroupName',
40 'source': 'autoscaling.amazonaws.com'},
41
42 'UpdateAutoScalingGroup': {
43 'ids': 'requestParameters.autoScalingGroupName',
44 'source': 'autoscaling.amazonaws.com'},
45
46 'CreateBucket': {
47 'ids': 'requestParameters.bucketName',
48 'source': 's3.amazonaws.com'},
49
50 'CreateCluster': {
51 'ids': 'requestParameters.clusterIdentifier',
52 'source': 'redshift.amazonaws.com'},
53
54 'CreateLoadBalancer': {
55 'ids': 'requestParameters.loadBalancerName',
56 'source': 'elasticloadbalancing.amazonaws.com'},
57
58 'CreateLoadBalancerPolicy': {
59 'ids': 'requestParameters.loadBalancerName',
60 'source': 'elasticloadbalancing.amazonaws.com'},
61
62 'CreateDBInstance': {
63 'ids': 'requestParameters.dBInstanceIdentifier',
64 'source': 'rds.amazonaws.com'},
65
66 'CreateVolume': {
67 'ids': 'responseElements.volumeId',
68 'source': 'ec2.amazonaws.com'},
69
70 'SetLoadBalancerPoliciesOfListener': {
71 'ids': 'requestParameters.loadBalancerName',
72 'source': 'elasticloadbalancing.amazonaws.com'},
73
74 'CreateElasticsearchDomain': {
75 'ids': 'requestParameters.domainName',
76 'source': 'es.amazonaws.com'},
77
78 'CreateTable': {
79 'ids': 'requestParameters.tableName',
80 'source': 'dynamodb.amazonaws.com'},
81
82 'RunInstances': {
83 'ids': 'responseElements.instancesSet.items[].instanceId',
84 'source': 'ec2.amazonaws.com'}}
85
86 @classmethod
87 def get(cls, event_name):
88 return cls.trail_events.get(event_name)
89
90 @classmethod
91 def match(cls, event):
92 """Match a given cwe event as cloudtrail with an api call
93
94 That has its information filled out.
95 """
96 if 'detail' not in event:
97 return False
98 if 'eventName' not in event['detail']:
99 return False
100 k = event['detail']['eventName']
101
102 # We want callers to use a compiled expression, but want to avoid
103 # initialization cost of doing it without cause. Not thread safe,
104 # but usage context is lambda entry.
105 if k in cls.trail_events:
106 v = dict(cls.trail_events[k])
107 if isinstance(v['ids'], six.string_types):
108 v['ids'] = e = jmespath.compile('detail.%s' % v['ids'])
109 cls.trail_events[k]['ids'] = e
110 return v
111
112 return False
113
114 @classmethod
115 def get_trail_ids(cls, event, mode):
116 """extract resources ids from a cloud trail event."""
117 resource_ids = ()
118 event_name = event['detail']['eventName']
119 event_source = event['detail']['eventSource']
120 for e in mode.get('events', []):
121 if not isinstance(e, dict):
122 # Check if we have a short cut / alias
123 info = CloudWatchEvents.match(event)
124 if info:
125 return info['ids'].search(event)
126 continue
127 if event_name != e.get('event'):
128 continue
129 if event_source != e.get('source'):
130 continue
131
132 id_query = e.get('ids')
133 if not id_query:
134 raise ValueError("No id query configured")
135 evt = event
136 # be forgiving for users specifying with details or without
137 if not id_query.startswith('detail.'):
138 evt = event.get('detail', {})
139 resource_ids = jmespath.search(id_query, evt)
140 if resource_ids:
141 break
142 return resource_ids
143
144 @classmethod
145 def get_ids(cls, event, mode):
146 mode_type = mode.get('type')
147 if mode_type == 'ec2-instance-state':
148 resource_ids = [event.get('detail', {}).get('instance-id')]
149 elif mode_type == 'asg-instance-state':
150 resource_ids = [event.get('detail', {}).get('AutoScalingGroupName')]
151 elif mode_type != 'cloudtrail':
152 return None
153 else:
154 resource_ids = cls.get_trail_ids(event, mode)
155
156 if not isinstance(resource_ids, (tuple, list)):
157 resource_ids = [resource_ids]
158
159 return list(filter(None, resource_ids))
160
[end of c7n/cwe.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/c7n/cwe.py b/c7n/cwe.py
--- a/c7n/cwe.py
+++ b/c7n/cwe.py
@@ -79,6 +79,11 @@
'ids': 'requestParameters.tableName',
'source': 'dynamodb.amazonaws.com'},
+ 'CreateFunction': {
+ 'event': 'CreateFunction20150331',
+ 'source': 'lambda.amazonaws.com',
+ 'ids': 'requestParameters.functionName'},
+
'RunInstances': {
'ids': 'responseElements.instancesSet.items[].instanceId',
'source': 'ec2.amazonaws.com'}}
|
{"golden_diff": "diff --git a/c7n/cwe.py b/c7n/cwe.py\n--- a/c7n/cwe.py\n+++ b/c7n/cwe.py\n@@ -79,6 +79,11 @@\n 'ids': 'requestParameters.tableName',\n 'source': 'dynamodb.amazonaws.com'},\n \n+ 'CreateFunction': {\n+ 'event': 'CreateFunction20150331',\n+ 'source': 'lambda.amazonaws.com',\n+ 'ids': 'requestParameters.functionName'},\n+\n 'RunInstances': {\n 'ids': 'responseElements.instancesSet.items[].instanceId',\n 'source': 'ec2.amazonaws.com'}}\n", "issue": "Request Feature: Request for cloud Trail events for Lambda\nHello , \r\nThe following policy is giving me the error :2018-12-18 14:24:39,580: custodian.commands:ERROR Policy: lambda-tag-compliance is invalid: event shortcut not defined: CreateFunction\r\n policy ---\r\n```\r\n- name: lambda-tag-compliance\r\n resource: lambda\r\n mode:\r\n type: cloudtrail #### cloud trail not possible\r\n role: arn:aws:iam::acctnumber:role/acctname\r\n events: \r\n - CreateFunction\r\n filters:\r\n - \"tag:custodian\": absent\r\n```\r\ni changed the policy to the following based on the cc docs and the got the following error \r\n```\r\n- name: lambda-tag-compliance\r\n resource: lambda\r\n mode:\r\n type: cloudtrail #### cloud trail not possible\r\n role: arn:aws:iam::acctnum:role/acctname\r\n event: CreateFunction\r\n ids: \"requestParameters.functionName\"\r\n filters:\r\n - \"tag:custodian\": absent\r\n```\r\nerror ---- 2018-12-18 14:33:41,697: custodian.commands:ERROR Configuration invalid: Policy.yml\r\n2018-12-18 14:33:41,704: custodian.commands:ERROR {'type': 'cloudtrail', 'role': 'arn:aws:iam::acctnum:role/acctname', 'event': 'CreateFunction', 'ids': 'requestParameters.functionName'} is not valid under any of the given schemas\r\n\r\ncan you add the shortcut to cloud-custodian/c7n/cwe.py ... or is there anything else that we can do resolve this ?\r\n\r\nThank you \n", "before_files": [{"content": "# Copyright 2016-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport jmespath\nimport six\n\n\nclass CloudWatchEvents(object):\n \"\"\"A mapping of events to resource types.\"\"\"\n\n # **These are just shortcuts**, you can use the policy definition to\n # subscribe to any arbitrary cloud trail event that corresponds to\n # a custodian resource.\n\n # For common events that we want to match, just keep a short mapping.\n # Users can specify arbitrary cloud watch events by specifying these\n # values in their config, but keep the common case simple.\n\n trail_events = {\n # event source, resource type as keys, mapping to api call and\n # jmespath expression\n 'ConsoleLogin': {\n 'ids': 'userIdentity.arn',\n 'source': 'signin.amazonaws.com'},\n\n 'CreateAutoScalingGroup': {\n 'ids': 'requestParameters.autoScalingGroupName',\n 'source': 'autoscaling.amazonaws.com'},\n\n 'UpdateAutoScalingGroup': {\n 'ids': 'requestParameters.autoScalingGroupName',\n 'source': 'autoscaling.amazonaws.com'},\n\n 'CreateBucket': {\n 'ids': 
'requestParameters.bucketName',\n 'source': 's3.amazonaws.com'},\n\n 'CreateCluster': {\n 'ids': 'requestParameters.clusterIdentifier',\n 'source': 'redshift.amazonaws.com'},\n\n 'CreateLoadBalancer': {\n 'ids': 'requestParameters.loadBalancerName',\n 'source': 'elasticloadbalancing.amazonaws.com'},\n\n 'CreateLoadBalancerPolicy': {\n 'ids': 'requestParameters.loadBalancerName',\n 'source': 'elasticloadbalancing.amazonaws.com'},\n\n 'CreateDBInstance': {\n 'ids': 'requestParameters.dBInstanceIdentifier',\n 'source': 'rds.amazonaws.com'},\n\n 'CreateVolume': {\n 'ids': 'responseElements.volumeId',\n 'source': 'ec2.amazonaws.com'},\n\n 'SetLoadBalancerPoliciesOfListener': {\n 'ids': 'requestParameters.loadBalancerName',\n 'source': 'elasticloadbalancing.amazonaws.com'},\n\n 'CreateElasticsearchDomain': {\n 'ids': 'requestParameters.domainName',\n 'source': 'es.amazonaws.com'},\n\n 'CreateTable': {\n 'ids': 'requestParameters.tableName',\n 'source': 'dynamodb.amazonaws.com'},\n\n 'RunInstances': {\n 'ids': 'responseElements.instancesSet.items[].instanceId',\n 'source': 'ec2.amazonaws.com'}}\n\n @classmethod\n def get(cls, event_name):\n return cls.trail_events.get(event_name)\n\n @classmethod\n def match(cls, event):\n \"\"\"Match a given cwe event as cloudtrail with an api call\n\n That has its information filled out.\n \"\"\"\n if 'detail' not in event:\n return False\n if 'eventName' not in event['detail']:\n return False\n k = event['detail']['eventName']\n\n # We want callers to use a compiled expression, but want to avoid\n # initialization cost of doing it without cause. Not thread safe,\n # but usage context is lambda entry.\n if k in cls.trail_events:\n v = dict(cls.trail_events[k])\n if isinstance(v['ids'], six.string_types):\n v['ids'] = e = jmespath.compile('detail.%s' % v['ids'])\n cls.trail_events[k]['ids'] = e\n return v\n\n return False\n\n @classmethod\n def get_trail_ids(cls, event, mode):\n \"\"\"extract resources ids from a cloud trail event.\"\"\"\n resource_ids = ()\n event_name = event['detail']['eventName']\n event_source = event['detail']['eventSource']\n for e in mode.get('events', []):\n if not isinstance(e, dict):\n # Check if we have a short cut / alias\n info = CloudWatchEvents.match(event)\n if info:\n return info['ids'].search(event)\n continue\n if event_name != e.get('event'):\n continue\n if event_source != e.get('source'):\n continue\n\n id_query = e.get('ids')\n if not id_query:\n raise ValueError(\"No id query configured\")\n evt = event\n # be forgiving for users specifying with details or without\n if not id_query.startswith('detail.'):\n evt = event.get('detail', {})\n resource_ids = jmespath.search(id_query, evt)\n if resource_ids:\n break\n return resource_ids\n\n @classmethod\n def get_ids(cls, event, mode):\n mode_type = mode.get('type')\n if mode_type == 'ec2-instance-state':\n resource_ids = [event.get('detail', {}).get('instance-id')]\n elif mode_type == 'asg-instance-state':\n resource_ids = [event.get('detail', {}).get('AutoScalingGroupName')]\n elif mode_type != 'cloudtrail':\n return None\n else:\n resource_ids = cls.get_trail_ids(event, mode)\n\n if not isinstance(resource_ids, (tuple, list)):\n resource_ids = [resource_ids]\n\n return list(filter(None, resource_ids))\n", "path": "c7n/cwe.py"}]}
| 2,547 | 146 |
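For quick reference, the fix recorded above boils down to one more entry in the CloudTrail shortcut table. A minimal sketch follows; the event name, source, and ids expression are copied from the diff shown above, so treat them as that record's data rather than an independently verified API detail.

```python
# Hypothetical excerpt of the shortcut table: a friendly key maps to the
# CloudTrail eventName, eventSource, and a JMESPath expression for the ids.
trail_events = {
    'CreateFunction': {
        'event': 'CreateFunction20150331',        # CloudTrail eventName
        'source': 'lambda.amazonaws.com',         # CloudTrail eventSource
        'ids': 'requestParameters.functionName',  # where the resource id lives
    },
}

# With the shortcut in place, a policy can subscribe to the Lambda
# CreateFunction call by name instead of spelling out the triple by hand.
```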
gh_patches_debug_51565
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-1413
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Worker dies when passed pandas DataFrame.
### System information
- **Ray version**: 0.3.0
- **Python version**: 3.6.0
- **Exact command to reproduce**:
```python
import pandas as pd
import ray
pd.__version__ # '0.19.2'
ray.init()
df = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})
@ray.remote
def f(x):
pass
f.remote(df)
```
The last line causes the following error to be printed in the background.
```
A worker died or was killed while executing a task.
```
cc @devin-petersohn
</issue>
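Before the listing, a hedged sketch of one way to avoid the crash: the assumption here is that the import-time serializer registration (visible near the bottom of the listing below) runs before `ray.init()` in worker processes, so registering after initialization is shown purely as an illustration, not as the fix this record ultimately applies.

```python
import pandas as pd
import ray

ray.init()

# Illustration only: register the pandas serializers once Ray is up,
# instead of as an import-time side effect of the dataframe module.
ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)

@ray.remote
def f(x):
    return x.shape

df = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})
print(ray.get(f.remote(df)))  # expected: (4, 2)
```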
<code>
[start of python/ray/dataframe/__init__.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 from .dataframe import DataFrame
6 from .dataframe import from_pandas
7 from .dataframe import to_pandas
8 from .series import Series
9 import ray
10 import pandas as pd
11
12 __all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
13
14 ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
15 ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
16
[end of python/ray/dataframe/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py
--- a/python/ray/dataframe/__init__.py
+++ b/python/ray/dataframe/__init__.py
@@ -6,10 +6,5 @@
from .dataframe import from_pandas
from .dataframe import to_pandas
from .series import Series
-import ray
-import pandas as pd
__all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
-
-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
|
{"golden_diff": "diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py\n--- a/python/ray/dataframe/__init__.py\n+++ b/python/ray/dataframe/__init__.py\n@@ -6,10 +6,5 @@\n from .dataframe import from_pandas\n from .dataframe import to_pandas\n from .series import Series\n-import ray\n-import pandas as pd\n \n __all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n-\n-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)\n-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "issue": "Worker dies when passed pandas DataFrame.\n### System information\r\n- **Ray version**: 0.3.0\r\n- **Python version**: 3.6.0\r\n- **Exact command to reproduce**:\r\n\r\n```python\r\nimport pandas as pd\r\nimport ray\r\n\r\npd.__version__ # '0.19.2'\r\n\r\nray.init()\r\n\r\ndf = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})\r\n\r\[email protected]\r\ndef f(x):\r\n pass\r\n\r\nf.remote(df)\r\n```\r\n\r\nThe last line causes the following error to be printed in the background.\r\n\r\n```\r\nA worker died or was killed while executing a task.\r\n```\r\n\r\ncc @devin-petersohn\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom .dataframe import DataFrame\nfrom .dataframe import from_pandas\nfrom .dataframe import to_pandas\nfrom .series import Series\nimport ray\nimport pandas as pd\n\n__all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n\nray.register_custom_serializer(pd.DataFrame, use_pickle=True)\nray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "path": "python/ray/dataframe/__init__.py"}]}
| 840 | 138 |
gh_patches_debug_549
|
rasdani/github-patches
|
git_diff
|
mabel-dev__opteryx-1412
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🪲 ARM test fails
~~~
ValueError: 'orso/bitarray/cbitarray.pyx' doesn't match any files
~~~
https://github.com/mabel-dev/opteryx/actions/runs/7535073365/job/20510453555
</issue>
<code>
[start of opteryx/__version__.py]
1 __build__ = 244
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 12
31 _revision = 5
32 _status = VersionStatus.BETA
33
34 __version__ = f"{_major}.{_minor}.{_revision}" + (
35 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
36 )
37
[end of opteryx/__version__.py]
</code>
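As a quick sanity check of the listing above, here is how the version string comes together with the values shown (major 0, minor 12, revision 5, beta status, build 244); this is the same f-string logic replayed standalone.

```python
from enum import Enum

class VersionStatus(Enum):
    ALPHA = "alpha"
    BETA = "beta"
    RELEASE = "release"

__build__ = 244
_major, _minor, _revision = 0, 12, 5
_status = VersionStatus.BETA

__version__ = f"{_major}.{_minor}.{_revision}" + (
    f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
)
print(__version__)  # prints 0.12.5-beta.244
```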
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 244
+__build__ = 248
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
|
{"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 244\n+__build__ = 248\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n", "issue": "\ud83e\udeb2 ARM test fails \n\r\n~~~\r\nValueError: 'orso/bitarray/cbitarray.pyx' doesn't match any files\r\n~~~\r\n\r\nhttps://github.com/mabel-dev/opteryx/actions/runs/7535073365/job/20510453555\n", "before_files": [{"content": "__build__ = 244\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 12\n_revision = 5\n_status = VersionStatus.BETA\n\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]}
| 952 | 101 |
gh_patches_debug_17327
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3950
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Primrose Schools
Is generating 1,221 errors. Adding an if statement for `content` should fix it. Could also be turned into a sitemap spider.
</issue>
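The one-line issue above calls for a guard on `content`; a minimal sketch of that guard, assuming the same `parse_search` shape as the listing below, looks like this.

```python
import json

# Method excerpt (would live on the spider class in the listing below).
def parse_search(self, response):
    content = response.xpath('//script[@type="application/json"]/text()').get()
    if content is None:
        # No embedded JSON on this page; returning avoids json.loads(None) errors.
        return
    schools = json.loads(content)
    # ... continue building items exactly as in the listing below.
```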
<code>
[start of locations/spiders/primrose_schools.py]
1 import json
2
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6
7
8 class PrimroseSchoolsSpider(scrapy.Spider):
9 name = "primrose_schools"
10 item_attributes = {"brand": "Primrose Schools", "brand_wikidata": "Q7243677"}
11 allowed_domains = ["primroseschools.com"]
12
13 start_urls = ["https://www.primroseschools.com/find-a-school/"]
14
15 def parse(self, response):
16 with open(
17 "./locations/searchable_points/us_centroids_50mile_radius.csv"
18 ) as points:
19 next(points)
20 for point in points:
21 row = point.replace("\n", "").split(",")
22 lati = row[1]
23 long = row[2]
24 searchurl = "https://www.primroseschools.com/find-a-school/?search_string=USA&latitude={la}&longitude={lo}".format(
25 la=lati, lo=long
26 )
27 yield scrapy.Request(
28 response.urljoin(searchurl), callback=self.parse_search
29 )
30
31 def parse_search(self, response):
32 content = response.xpath('//script[@type="application/json"]/text()').get()
33 schools = json.loads(content)
34 for i in schools:
35 if i["address_1"]:
36 properties = {
37 "name": i["name"],
38 "addr_full": i["address_1"] + " " + i["address_2"],
39 "city": i["city"],
40 "state": i["state"],
41 "postcode": i["zip_code"],
42 "phone": i["phone"],
43 "ref": i["id"],
44 "website": "https://www.primroseschools.com" + i["url"],
45 "lat": float(i["latitude"]),
46 "lon": float(i["longitude"]),
47 }
48 yield GeojsonPointItem(**properties)
49
[end of locations/spiders/primrose_schools.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/primrose_schools.py b/locations/spiders/primrose_schools.py
--- a/locations/spiders/primrose_schools.py
+++ b/locations/spiders/primrose_schools.py
@@ -30,12 +30,17 @@
def parse_search(self, response):
content = response.xpath('//script[@type="application/json"]/text()').get()
+ if content is None:
+ return
+
schools = json.loads(content)
for i in schools:
if i["address_1"]:
properties = {
"name": i["name"],
- "addr_full": i["address_1"] + " " + i["address_2"],
+ "street_address": ", ".join(
+ filter(None, [i["address_1"], i["address_2"]])
+ ),
"city": i["city"],
"state": i["state"],
"postcode": i["zip_code"],
|
{"golden_diff": "diff --git a/locations/spiders/primrose_schools.py b/locations/spiders/primrose_schools.py\n--- a/locations/spiders/primrose_schools.py\n+++ b/locations/spiders/primrose_schools.py\n@@ -30,12 +30,17 @@\n \n def parse_search(self, response):\n content = response.xpath('//script[@type=\"application/json\"]/text()').get()\n+ if content is None:\n+ return\n+\n schools = json.loads(content)\n for i in schools:\n if i[\"address_1\"]:\n properties = {\n \"name\": i[\"name\"],\n- \"addr_full\": i[\"address_1\"] + \" \" + i[\"address_2\"],\n+ \"street_address\": \", \".join(\n+ filter(None, [i[\"address_1\"], i[\"address_2\"]])\n+ ),\n \"city\": i[\"city\"],\n \"state\": i[\"state\"],\n \"postcode\": i[\"zip_code\"],\n", "issue": "Primrose Schools\nIs generating 1,221 errors. Adding a if statement for `content` should fix it. Could also be turned into a sitemap spider.\n", "before_files": [{"content": "import json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PrimroseSchoolsSpider(scrapy.Spider):\n name = \"primrose_schools\"\n item_attributes = {\"brand\": \"Primrose Schools\", \"brand_wikidata\": \"Q7243677\"}\n allowed_domains = [\"primroseschools.com\"]\n\n start_urls = [\"https://www.primroseschools.com/find-a-school/\"]\n\n def parse(self, response):\n with open(\n \"./locations/searchable_points/us_centroids_50mile_radius.csv\"\n ) as points:\n next(points)\n for point in points:\n row = point.replace(\"\\n\", \"\").split(\",\")\n lati = row[1]\n long = row[2]\n searchurl = \"https://www.primroseschools.com/find-a-school/?search_string=USA&latitude={la}&longitude={lo}\".format(\n la=lati, lo=long\n )\n yield scrapy.Request(\n response.urljoin(searchurl), callback=self.parse_search\n )\n\n def parse_search(self, response):\n content = response.xpath('//script[@type=\"application/json\"]/text()').get()\n schools = json.loads(content)\n for i in schools:\n if i[\"address_1\"]:\n properties = {\n \"name\": i[\"name\"],\n \"addr_full\": i[\"address_1\"] + \" \" + i[\"address_2\"],\n \"city\": i[\"city\"],\n \"state\": i[\"state\"],\n \"postcode\": i[\"zip_code\"],\n \"phone\": i[\"phone\"],\n \"ref\": i[\"id\"],\n \"website\": \"https://www.primroseschools.com\" + i[\"url\"],\n \"lat\": float(i[\"latitude\"]),\n \"lon\": float(i[\"longitude\"]),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/primrose_schools.py"}]}
| 1,074 | 217 |
gh_patches_debug_9477
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1102
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bot.db.unalias_nick() behavior does not match docs
> ```
> Raises ValueError if there is not at least one other nick in the group.
> ```
The ValueError is never raised:
```
<dgw> .nickunmerge ThisIsATestNick
<Sopel> Removed ThisIsATestNick from nick group 1497.
<dgw> that's not supposed to work
<ThisIsATestNick> blerg
<dgw> .nickunmerge ThisIsATestNick
<Sopel> Removed ThisIsATestNick from nick group 1498.
```
I have traced this issue to a logic error, and will submit a proposed fix when I'm back in front of my laptop later tonight.
If it's more desirable to change the documentation than the behavior, I'd be happy to rewrite that instead—but I believe that the described behavior is correct, and the observed behavior is a bug.
</issue>
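A minimal sketch of the logic error described above, reduced to the group-size check: after a successful lookup the count is at least 1, so a `== 0` test can never fire, and requiring at least one other nick means the threshold is 1. The helper name here is made up for illustration.

```python
def check_can_unalias(count: int) -> None:
    """Raise if the alias is the only entry in its nick group."""
    # `count` is the number of rows in `nicknames` sharing the alias's nick_id.
    if count <= 1:  # a `count == 0` test can never trigger after a successful lookup
        raise ValueError('Given alias is the only entry in its group.')

check_can_unalias(2)  # ok: another nick remains in the group
# check_can_unalias(1) would raise ValueError, matching the documented behaviour
```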
<code>
[start of sopel/db.py]
1 # coding=utf-8
2 from __future__ import unicode_literals, absolute_import, print_function, division
3
4 import json
5 import os.path
6 import sys
7 import sqlite3
8
9 from sopel.tools import Identifier
10
11 if sys.version_info.major >= 3:
12 unicode = str
13 basestring = str
14
15
16 def _deserialize(value):
17 if value is None:
18 return None
19 # sqlite likes to return ints for strings that look like ints, even though
20 # the column type is string. That's how you do dynamic typing wrong.
21 value = unicode(value)
22 # Just in case someone's mucking with the DB in a way we can't account for,
23 # ignore json parsing errors
24 try:
25 value = json.loads(value)
26 except:
27 pass
28 return value
29
30
31 class SopelDB(object):
32 """*Availability: 5.0+*
33
34 This defines an interface for basic, common operations on a sqlite
35 database. It simplifies those common operations, and allows direct access
36 to the database, wherever the user has configured it to be.
37
38 When configured with a relative filename, it is assumed to be in the same
39 directory as the config."""
40
41 def __init__(self, config):
42 path = config.core.db_filename
43 config_dir, config_file = os.path.split(config.filename)
44 config_name, _ = os.path.splitext(config_file)
45 if path is None:
46 path = os.path.join(config_dir, config_name + '.db')
47 path = os.path.expanduser(path)
48 if not os.path.isabs(path):
49 path = os.path.normpath(os.path.join(config_dir, path))
50 self.filename = path
51 self._create()
52
53 def connect(self):
54 """Return a raw database connection object."""
55 return sqlite3.connect(self.filename)
56
57 def execute(self, *args, **kwargs):
58 """Execute an arbitrary SQL query against the database.
59
60 Returns a cursor object, on which things like `.fetchall()` can be
61 called per PEP 249."""
62 with self.connect() as conn:
63 cur = conn.cursor()
64 return cur.execute(*args, **kwargs)
65
66 def _create(self):
67 """Create the basic database structure."""
68 # Do nothing if the db already exists.
69 try:
70 self.execute('SELECT * FROM nick_ids;')
71 self.execute('SELECT * FROM nicknames;')
72 self.execute('SELECT * FROM nick_values;')
73 self.execute('SELECT * FROM channel_values;')
74 except:
75 pass
76 else:
77 return
78
79 self.execute(
80 'CREATE TABLE nick_ids (nick_id INTEGER PRIMARY KEY AUTOINCREMENT)'
81 )
82 self.execute(
83 'CREATE TABLE nicknames '
84 '(nick_id INTEGER REFERENCES nick_ids, '
85 'slug STRING PRIMARY KEY, canonical string)'
86 )
87 self.execute(
88 'CREATE TABLE nick_values '
89 '(nick_id INTEGER REFERENCES nick_ids(nick_id), '
90 'key STRING, value STRING, '
91 'PRIMARY KEY (nick_id, key))'
92 )
93 self.execute(
94 'CREATE TABLE channel_values '
95 '(channel STRING, key STRING, value STRING, '
96 'PRIMARY KEY (channel, key))'
97 )
98
99 def get_uri(self):
100 """Returns a URL for the database, usable to connect with SQLAlchemy.
101 """
102 return 'sqlite://{}'.format(self.filename)
103
104 # NICK FUNCTIONS
105
106 def get_nick_id(self, nick, create=True):
107 """Return the internal identifier for a given nick.
108
109 This identifier is unique to a user, and shared across all of that
110 user's aliases. If create is True, a new ID will be created if one does
111 not already exist"""
112 slug = nick.lower()
113 nick_id = self.execute('SELECT nick_id from nicknames where slug = ?',
114 [slug]).fetchone()
115 if nick_id is None:
116 if not create:
117 raise ValueError('No ID exists for the given nick')
118 with self.connect() as conn:
119 cur = conn.cursor()
120 cur.execute('INSERT INTO nick_ids VALUES (NULL)')
121 nick_id = cur.execute('SELECT last_insert_rowid()').fetchone()[0]
122 cur.execute(
123 'INSERT INTO nicknames (nick_id, slug, canonical) VALUES '
124 '(?, ?, ?)',
125 [nick_id, slug, nick]
126 )
127 nick_id = self.execute('SELECT nick_id from nicknames where slug = ?',
128 [slug]).fetchone()
129 return nick_id[0]
130
131 def alias_nick(self, nick, alias):
132 """Create an alias for a nick.
133
134 Raises ValueError if the alias already exists. If nick does not already
135 exist, it will be added along with the alias."""
136 nick = Identifier(nick)
137 alias = Identifier(alias)
138 nick_id = self.get_nick_id(nick)
139 sql = 'INSERT INTO nicknames (nick_id, slug, canonical) VALUES (?, ?, ?)'
140 values = [nick_id, alias.lower(), alias]
141 try:
142 self.execute(sql, values)
143 except sqlite3.IntegrityError:
144 raise ValueError('Alias already exists.')
145
146 def set_nick_value(self, nick, key, value):
147 """Sets the value for a given key to be associated with the nick."""
148 nick = Identifier(nick)
149 value = json.dumps(value, ensure_ascii=False)
150 nick_id = self.get_nick_id(nick)
151 self.execute('INSERT OR REPLACE INTO nick_values VALUES (?, ?, ?)',
152 [nick_id, key, value])
153
154 def get_nick_value(self, nick, key):
155 """Retrieves the value for a given key associated with a nick."""
156 nick = Identifier(nick)
157 result = self.execute(
158 'SELECT value FROM nicknames JOIN nick_values '
159 'ON nicknames.nick_id = nick_values.nick_id '
160 'WHERE slug = ? AND key = ?',
161 [nick.lower(), key]
162 ).fetchone()
163 if result is not None:
164 result = result[0]
165 return _deserialize(result)
166
167 def unalias_nick(self, alias):
168 """Removes an alias.
169
170 Raises ValueError if there is not at least one other nick in the group.
171 To delete an entire group, use `delete_group`.
172 """
173 alias = Identifier(alias)
174 nick_id = self.get_nick_id(alias, False)
175 count = self.execute('SELECT COUNT(*) FROM nicknames WHERE nick_id = ?',
176 [nick_id]).fetchone()[0]
177 if count == 0:
178 raise ValueError('Given alias is the only entry in its group.')
179 self.execute('DELETE FROM nicknames WHERE slug = ?', [alias.lower()])
180
181 def delete_nick_group(self, nick):
182 """Removes a nickname, and all associated aliases and settings.
183 """
184 nick = Identifier(nick)
185 nick_id = self.get_nick_id(nick, False)
186 self.execute('DELETE FROM nicknames WHERE nick_id = ?', [nick_id])
187 self.execute('DELETE FROM nick_values WHERE nick_id = ?', [nick_id])
188
189 def merge_nick_groups(self, first_nick, second_nick):
190 """Merges the nick groups for the specified nicks.
191
192 Takes two nicks, which may or may not be registered. Unregistered
193 nicks will be registered. Keys which are set for only one of the given
194 nicks will be preserved. Where multiple nicks have values for a given
195 key, the value set for the first nick will be used.
196
197 Note that merging of data only applies to the native key-value store.
198 If modules define their own tables which rely on the nick table, they
199 will need to have their merging done separately."""
200 first_id = self.get_nick_id(Identifier(first_nick))
201 second_id = self.get_nick_id(Identifier(second_nick))
202 self.execute(
203 'UPDATE OR IGNORE nick_values SET nick_id = ? WHERE nick_id = ?',
204 [first_id, second_id])
205 self.execute('DELETE FROM nick_values WHERE nick_id = ?', [second_id])
206 self.execute('UPDATE nicknames SET nick_id = ? WHERE nick_id = ?',
207 [first_id, second_id])
208
209 # CHANNEL FUNCTIONS
210
211 def set_channel_value(self, channel, key, value):
212 channel = Identifier(channel).lower()
213 value = json.dumps(value, ensure_ascii=False)
214 self.execute('INSERT OR REPLACE INTO channel_values VALUES (?, ?, ?)',
215 [channel, key, value])
216
217 def get_channel_value(self, channel, key):
218 """Retrieves the value for a given key associated with a channel."""
219 channel = Identifier(channel).lower()
220 result = self.execute(
221 'SELECT value FROM channel_values WHERE channel = ? AND key = ?',
222 [channel, key]
223 ).fetchone()
224 if result is not None:
225 result = result[0]
226 return _deserialize(result)
227
228 # NICK AND CHANNEL FUNCTIONS
229
230 def get_nick_or_channel_value(self, name, key):
231 """Gets the value `key` associated to the nick or channel `name`.
232 """
233 name = Identifier(name)
234 if name.is_nick():
235 return self.get_nick_value(name, key)
236 else:
237 return self.get_channel_value(name, key)
238
239 def get_preferred_value(self, names, key):
240 """Gets the value for the first name which has it set.
241
242 `names` is a list of channel and/or user names. Returns None if none of
243 the names have the key set."""
244 for name in names:
245 value = self.get_nick_or_channel_value(name, key)
246 if value is not None:
247 return value
248
[end of sopel/db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sopel/db.py b/sopel/db.py
--- a/sopel/db.py
+++ b/sopel/db.py
@@ -174,7 +174,7 @@
nick_id = self.get_nick_id(alias, False)
count = self.execute('SELECT COUNT(*) FROM nicknames WHERE nick_id = ?',
[nick_id]).fetchone()[0]
- if count == 0:
+ if count <= 1:
raise ValueError('Given alias is the only entry in its group.')
self.execute('DELETE FROM nicknames WHERE slug = ?', [alias.lower()])
|
{"golden_diff": "diff --git a/sopel/db.py b/sopel/db.py\n--- a/sopel/db.py\n+++ b/sopel/db.py\n@@ -174,7 +174,7 @@\n nick_id = self.get_nick_id(alias, False)\n count = self.execute('SELECT COUNT(*) FROM nicknames WHERE nick_id = ?',\n [nick_id]).fetchone()[0]\n- if count == 0:\n+ if count <= 1:\n raise ValueError('Given alias is the only entry in its group.')\n self.execute('DELETE FROM nicknames WHERE slug = ?', [alias.lower()])\n", "issue": "bot.db.unalias_nick() behavior does not match docs\n> ```\n> Raises ValueError if there is not at least one other nick in the group.\n> ```\n\nThe ValueError is never raised:\n\n```\n<dgw> .nickunmerge ThisIsATestNick\n<Sopel> Removed ThisIsATestNick from nick group 1497.\n<dgw> that's not supposed to work\n<ThisIsATestNick> blerg\n<dgw> .nickunmerge ThisIsATestNick\n<Sopel> Removed ThisIsATestNick from nick group 1498.\n```\n\nI have traced this issue to a logic error, and will submit a proposed fix when I'm back in front of my laptop later tonight.\n\nIf it's more desirable to change the documentation than the behavior, I'd be happy to rewrite that instead\u2014but I believe that the described behavior is correct, and the observed behavior is a bug.\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport json\nimport os.path\nimport sys\nimport sqlite3\n\nfrom sopel.tools import Identifier\n\nif sys.version_info.major >= 3:\n unicode = str\n basestring = str\n\n\ndef _deserialize(value):\n if value is None:\n return None\n # sqlite likes to return ints for strings that look like ints, even though\n # the column type is string. That's how you do dynamic typing wrong.\n value = unicode(value)\n # Just in case someone's mucking with the DB in a way we can't account for,\n # ignore json parsing errors\n try:\n value = json.loads(value)\n except:\n pass\n return value\n\n\nclass SopelDB(object):\n \"\"\"*Availability: 5.0+*\n\n This defines an interface for basic, common operations on a sqlite\n database. 
It simplifies those common operations, and allows direct access\n to the database, wherever the user has configured it to be.\n\n When configured with a relative filename, it is assumed to be in the same\n directory as the config.\"\"\"\n\n def __init__(self, config):\n path = config.core.db_filename\n config_dir, config_file = os.path.split(config.filename)\n config_name, _ = os.path.splitext(config_file)\n if path is None:\n path = os.path.join(config_dir, config_name + '.db')\n path = os.path.expanduser(path)\n if not os.path.isabs(path):\n path = os.path.normpath(os.path.join(config_dir, path))\n self.filename = path\n self._create()\n\n def connect(self):\n \"\"\"Return a raw database connection object.\"\"\"\n return sqlite3.connect(self.filename)\n\n def execute(self, *args, **kwargs):\n \"\"\"Execute an arbitrary SQL query against the database.\n\n Returns a cursor object, on which things like `.fetchall()` can be\n called per PEP 249.\"\"\"\n with self.connect() as conn:\n cur = conn.cursor()\n return cur.execute(*args, **kwargs)\n\n def _create(self):\n \"\"\"Create the basic database structure.\"\"\"\n # Do nothing if the db already exists.\n try:\n self.execute('SELECT * FROM nick_ids;')\n self.execute('SELECT * FROM nicknames;')\n self.execute('SELECT * FROM nick_values;')\n self.execute('SELECT * FROM channel_values;')\n except:\n pass\n else:\n return\n\n self.execute(\n 'CREATE TABLE nick_ids (nick_id INTEGER PRIMARY KEY AUTOINCREMENT)'\n )\n self.execute(\n 'CREATE TABLE nicknames '\n '(nick_id INTEGER REFERENCES nick_ids, '\n 'slug STRING PRIMARY KEY, canonical string)'\n )\n self.execute(\n 'CREATE TABLE nick_values '\n '(nick_id INTEGER REFERENCES nick_ids(nick_id), '\n 'key STRING, value STRING, '\n 'PRIMARY KEY (nick_id, key))'\n )\n self.execute(\n 'CREATE TABLE channel_values '\n '(channel STRING, key STRING, value STRING, '\n 'PRIMARY KEY (channel, key))'\n )\n\n def get_uri(self):\n \"\"\"Returns a URL for the database, usable to connect with SQLAlchemy.\n \"\"\"\n return 'sqlite://{}'.format(self.filename)\n\n # NICK FUNCTIONS\n\n def get_nick_id(self, nick, create=True):\n \"\"\"Return the internal identifier for a given nick.\n\n This identifier is unique to a user, and shared across all of that\n user's aliases. If create is True, a new ID will be created if one does\n not already exist\"\"\"\n slug = nick.lower()\n nick_id = self.execute('SELECT nick_id from nicknames where slug = ?',\n [slug]).fetchone()\n if nick_id is None:\n if not create:\n raise ValueError('No ID exists for the given nick')\n with self.connect() as conn:\n cur = conn.cursor()\n cur.execute('INSERT INTO nick_ids VALUES (NULL)')\n nick_id = cur.execute('SELECT last_insert_rowid()').fetchone()[0]\n cur.execute(\n 'INSERT INTO nicknames (nick_id, slug, canonical) VALUES '\n '(?, ?, ?)',\n [nick_id, slug, nick]\n )\n nick_id = self.execute('SELECT nick_id from nicknames where slug = ?',\n [slug]).fetchone()\n return nick_id[0]\n\n def alias_nick(self, nick, alias):\n \"\"\"Create an alias for a nick.\n\n Raises ValueError if the alias already exists. 
If nick does not already\n exist, it will be added along with the alias.\"\"\"\n nick = Identifier(nick)\n alias = Identifier(alias)\n nick_id = self.get_nick_id(nick)\n sql = 'INSERT INTO nicknames (nick_id, slug, canonical) VALUES (?, ?, ?)'\n values = [nick_id, alias.lower(), alias]\n try:\n self.execute(sql, values)\n except sqlite3.IntegrityError:\n raise ValueError('Alias already exists.')\n\n def set_nick_value(self, nick, key, value):\n \"\"\"Sets the value for a given key to be associated with the nick.\"\"\"\n nick = Identifier(nick)\n value = json.dumps(value, ensure_ascii=False)\n nick_id = self.get_nick_id(nick)\n self.execute('INSERT OR REPLACE INTO nick_values VALUES (?, ?, ?)',\n [nick_id, key, value])\n\n def get_nick_value(self, nick, key):\n \"\"\"Retrieves the value for a given key associated with a nick.\"\"\"\n nick = Identifier(nick)\n result = self.execute(\n 'SELECT value FROM nicknames JOIN nick_values '\n 'ON nicknames.nick_id = nick_values.nick_id '\n 'WHERE slug = ? AND key = ?',\n [nick.lower(), key]\n ).fetchone()\n if result is not None:\n result = result[0]\n return _deserialize(result)\n\n def unalias_nick(self, alias):\n \"\"\"Removes an alias.\n\n Raises ValueError if there is not at least one other nick in the group.\n To delete an entire group, use `delete_group`.\n \"\"\"\n alias = Identifier(alias)\n nick_id = self.get_nick_id(alias, False)\n count = self.execute('SELECT COUNT(*) FROM nicknames WHERE nick_id = ?',\n [nick_id]).fetchone()[0]\n if count == 0:\n raise ValueError('Given alias is the only entry in its group.')\n self.execute('DELETE FROM nicknames WHERE slug = ?', [alias.lower()])\n\n def delete_nick_group(self, nick):\n \"\"\"Removes a nickname, and all associated aliases and settings.\n \"\"\"\n nick = Identifier(nick)\n nick_id = self.get_nick_id(nick, False)\n self.execute('DELETE FROM nicknames WHERE nick_id = ?', [nick_id])\n self.execute('DELETE FROM nick_values WHERE nick_id = ?', [nick_id])\n\n def merge_nick_groups(self, first_nick, second_nick):\n \"\"\"Merges the nick groups for the specified nicks.\n\n Takes two nicks, which may or may not be registered. Unregistered\n nicks will be registered. Keys which are set for only one of the given\n nicks will be preserved. Where multiple nicks have values for a given\n key, the value set for the first nick will be used.\n\n Note that merging of data only applies to the native key-value store.\n If modules define their own tables which rely on the nick table, they\n will need to have their merging done separately.\"\"\"\n first_id = self.get_nick_id(Identifier(first_nick))\n second_id = self.get_nick_id(Identifier(second_nick))\n self.execute(\n 'UPDATE OR IGNORE nick_values SET nick_id = ? WHERE nick_id = ?',\n [first_id, second_id])\n self.execute('DELETE FROM nick_values WHERE nick_id = ?', [second_id])\n self.execute('UPDATE nicknames SET nick_id = ? WHERE nick_id = ?',\n [first_id, second_id])\n\n # CHANNEL FUNCTIONS\n\n def set_channel_value(self, channel, key, value):\n channel = Identifier(channel).lower()\n value = json.dumps(value, ensure_ascii=False)\n self.execute('INSERT OR REPLACE INTO channel_values VALUES (?, ?, ?)',\n [channel, key, value])\n\n def get_channel_value(self, channel, key):\n \"\"\"Retrieves the value for a given key associated with a channel.\"\"\"\n channel = Identifier(channel).lower()\n result = self.execute(\n 'SELECT value FROM channel_values WHERE channel = ? 
AND key = ?',\n [channel, key]\n ).fetchone()\n if result is not None:\n result = result[0]\n return _deserialize(result)\n\n # NICK AND CHANNEL FUNCTIONS\n\n def get_nick_or_channel_value(self, name, key):\n \"\"\"Gets the value `key` associated to the nick or channel `name`.\n \"\"\"\n name = Identifier(name)\n if name.is_nick():\n return self.get_nick_value(name, key)\n else:\n return self.get_channel_value(name, key)\n\n def get_preferred_value(self, names, key):\n \"\"\"Gets the value for the first name which has it set.\n\n `names` is a list of channel and/or user names. Returns None if none of\n the names have the key set.\"\"\"\n for name in names:\n value = self.get_nick_or_channel_value(name, key)\n if value is not None:\n return value\n", "path": "sopel/db.py"}]}
| 3,412 | 132 |
gh_patches_debug_16325
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-670
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rio stack output empty
`rio stack`ing one or more rasters without an explicit band index results in a raster with all nulls
```
$ rio info --tell-me-more tests/data/RGB.byte.tif | jq .stats[0].max
255
$ rio stack tests/data/RGB.byte.tif /tmp/test.tif && \
rio info --tell-me-more /tmp/test.tif | jq .stats[0].max
null
```
</issue>
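A hedged guess at the mechanism, consistent with the listing below: when no `--bidx` is given, the per-input band list is `src.indexes`, which is a tuple, and a type check written only against `list` silently skips writing those bands. Testing for any iterable closes that gap; the snippet uses the modern `collections.abc` spelling.

```python
from collections.abc import Iterable

index = (1, 2, 3)  # src.indexes is a tuple when no --bidx is supplied

if isinstance(index, int):
    print("write a single band")
elif isinstance(index, Iterable):  # a list-only check would skip tuples entirely
    print("write bands", list(index))
```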
<code>
[start of rasterio/rio/stack.py]
1 """Commands for operating on bands of datasets."""
2 import logging
3
4 import click
5 from cligj import files_inout_arg, format_opt
6
7 from .helpers import resolve_inout
8 from . import options
9 import rasterio
10 from rasterio.five import zip_longest
11
12
13 # Stack command.
14 @click.command(short_help="Stack a number of bands into a multiband dataset.")
15 @files_inout_arg
16 @options.output_opt
17 @format_opt
18 @options.bidx_mult_opt
19 @options.rgb_opt
20 @options.force_overwrite_opt
21 @options.creation_options
22 @click.pass_context
23 def stack(ctx, files, output, driver, bidx, photometric, force_overwrite,
24 creation_options):
25 """Stack a number of bands from one or more input files into a
26 multiband dataset.
27
28 Input datasets must be of a kind: same data type, dimensions, etc. The
29 output is cloned from the first input.
30
31 By default, rio-stack will take all bands from each input and write them
32 in same order to the output. Optionally, bands for each input may be
33 specified using a simple syntax:
34
35 --bidx N takes the Nth band from the input (first band is 1).
36
37 --bidx M,N,0 takes bands M, N, and O.
38
39 --bidx M..O takes bands M-O, inclusive.
40
41 --bidx ..N takes all bands up to and including N.
42
43 --bidx N.. takes all bands from N to the end.
44
45 Examples, using the Rasterio testing dataset, which produce a copy.
46
47 rio stack RGB.byte.tif -o stacked.tif
48
49 rio stack RGB.byte.tif --bidx 1,2,3 -o stacked.tif
50
51 rio stack RGB.byte.tif --bidx 1..3 -o stacked.tif
52
53 rio stack RGB.byte.tif --bidx ..2 RGB.byte.tif --bidx 3.. -o stacked.tif
54
55 """
56
57 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 2
58 logger = logging.getLogger('rio')
59 try:
60 with rasterio.drivers(CPL_DEBUG=verbosity>2):
61 output, files = resolve_inout(files=files, output=output,
62 force_overwrite=force_overwrite)
63 output_count = 0
64 indexes = []
65 for path, item in zip_longest(files, bidx, fillvalue=None):
66 with rasterio.open(path) as src:
67 src_indexes = src.indexes
68 if item is None:
69 indexes.append(src_indexes)
70 output_count += len(src_indexes)
71 elif '..' in item:
72 start, stop = map(
73 lambda x: int(x) if x else None, item.split('..'))
74 if start is None:
75 start = 1
76 indexes.append(src_indexes[slice(start-1, stop)])
77 output_count += len(src_indexes[slice(start-1, stop)])
78 else:
79 parts = list(map(int, item.split(',')))
80 if len(parts) == 1:
81 indexes.append(parts[0])
82 output_count += 1
83 else:
84 parts = list(parts)
85 indexes.append(parts)
86 output_count += len(parts)
87
88 with rasterio.open(files[0]) as first:
89 kwargs = first.meta
90 kwargs.update(**creation_options)
91 kwargs['transform'] = kwargs.pop('affine')
92
93 kwargs.update(
94 driver=driver,
95 count=output_count)
96
97 if photometric:
98 kwargs['photometric'] = photometric
99
100 with rasterio.open(output, 'w', **kwargs) as dst:
101 dst_idx = 1
102 for path, index in zip(files, indexes):
103 with rasterio.open(path) as src:
104 if isinstance(index, int):
105 data = src.read(index)
106 dst.write(data, dst_idx)
107 dst_idx += 1
108 elif isinstance(index, list):
109 data = src.read(index)
110 dst.write(data, range(dst_idx, dst_idx+len(index)))
111 dst_idx += len(index)
112
113 except Exception:
114 logger.exception("Exception caught during processing")
115 raise click.Abort()
116
[end of rasterio/rio/stack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rasterio/rio/stack.py b/rasterio/rio/stack.py
--- a/rasterio/rio/stack.py
+++ b/rasterio/rio/stack.py
@@ -1,4 +1,5 @@
"""Commands for operating on bands of datasets."""
+import collections
import logging
import click
@@ -105,7 +106,7 @@
data = src.read(index)
dst.write(data, dst_idx)
dst_idx += 1
- elif isinstance(index, list):
+ elif isinstance(index, collections.Iterable):
data = src.read(index)
dst.write(data, range(dst_idx, dst_idx+len(index)))
dst_idx += len(index)
|
{"golden_diff": "diff --git a/rasterio/rio/stack.py b/rasterio/rio/stack.py\n--- a/rasterio/rio/stack.py\n+++ b/rasterio/rio/stack.py\n@@ -1,4 +1,5 @@\n \"\"\"Commands for operating on bands of datasets.\"\"\"\n+import collections\n import logging\n \n import click\n@@ -105,7 +106,7 @@\n data = src.read(index)\n dst.write(data, dst_idx)\n dst_idx += 1\n- elif isinstance(index, list):\n+ elif isinstance(index, collections.Iterable):\n data = src.read(index)\n dst.write(data, range(dst_idx, dst_idx+len(index)))\n dst_idx += len(index)\n", "issue": "rio stack output empty\n`rio stack`ing one or more rasters without an explicit band index results in a raster with all nulls\n\n```\n$ rio info --tell-me-more tests/data/RGB.byte.tif | jq .stats[0].max\n255\n$ rio stack tests/data/RGB.byte.tif /tmp/test.tif && \\\n rio info --tell-me-more /tmp/test.tif | jq .stats[0].max\nnull\n```\n\n", "before_files": [{"content": "\"\"\"Commands for operating on bands of datasets.\"\"\"\nimport logging\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.five import zip_longest\n\n\n# Stack command.\[email protected](short_help=\"Stack a number of bands into a multiband dataset.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_mult_opt\[email protected]_opt\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef stack(ctx, files, output, driver, bidx, photometric, force_overwrite,\n creation_options):\n \"\"\"Stack a number of bands from one or more input files into a\n multiband dataset.\n\n Input datasets must be of a kind: same data type, dimensions, etc. The\n output is cloned from the first input.\n\n By default, rio-stack will take all bands from each input and write them\n in same order to the output. Optionally, bands for each input may be\n specified using a simple syntax:\n\n --bidx N takes the Nth band from the input (first band is 1).\n\n --bidx M,N,0 takes bands M, N, and O.\n\n --bidx M..O takes bands M-O, inclusive.\n\n --bidx ..N takes all bands up to and including N.\n\n --bidx N.. takes all bands from N to the end.\n\n Examples, using the Rasterio testing dataset, which produce a copy.\n\n rio stack RGB.byte.tif -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1,2,3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1..3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx ..2 RGB.byte.tif --bidx 3.. -o stacked.tif\n\n \"\"\"\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 2\n logger = logging.getLogger('rio')\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output, files = resolve_inout(files=files, output=output,\n force_overwrite=force_overwrite)\n output_count = 0\n indexes = []\n for path, item in zip_longest(files, bidx, fillvalue=None):\n with rasterio.open(path) as src:\n src_indexes = src.indexes\n if item is None:\n indexes.append(src_indexes)\n output_count += len(src_indexes)\n elif '..' 
in item:\n start, stop = map(\n lambda x: int(x) if x else None, item.split('..'))\n if start is None:\n start = 1\n indexes.append(src_indexes[slice(start-1, stop)])\n output_count += len(src_indexes[slice(start-1, stop)])\n else:\n parts = list(map(int, item.split(',')))\n if len(parts) == 1:\n indexes.append(parts[0])\n output_count += 1\n else:\n parts = list(parts)\n indexes.append(parts)\n output_count += len(parts)\n\n with rasterio.open(files[0]) as first:\n kwargs = first.meta\n kwargs.update(**creation_options)\n kwargs['transform'] = kwargs.pop('affine')\n\n kwargs.update(\n driver=driver,\n count=output_count)\n\n if photometric:\n kwargs['photometric'] = photometric\n\n with rasterio.open(output, 'w', **kwargs) as dst:\n dst_idx = 1\n for path, index in zip(files, indexes):\n with rasterio.open(path) as src:\n if isinstance(index, int):\n data = src.read(index)\n dst.write(data, dst_idx)\n dst_idx += 1\n elif isinstance(index, list):\n data = src.read(index)\n dst.write(data, range(dst_idx, dst_idx+len(index)))\n dst_idx += len(index)\n\n except Exception:\n logger.exception(\"Exception caught during processing\")\n raise click.Abort()\n", "path": "rasterio/rio/stack.py"}]}
| 1,760 | 155 |
gh_patches_debug_25922
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-5834
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restoring tarball to Focal shows error for v2
## Description
When restoring a v2-only Xenial backup tarball to a v3-only Focal instance, the restore action fails. It fails even if the admin explicitly requests that the tor config be preserved as-is.
## Steps to Reproduce
I used libvirt-based VMs for testing, and performed all admin actions from a virtualized Tails v4.16 VM.
1. Create a v2-only backup tarball from a Xenial host.
2. Perform a clean install of Focal, with v3-only vars.
3. Attempt to restore the backup: `./securedrop-admin --force restore --preserve-tor-config ~/Persistent/backups/xenial-v2-only/sd-backup-2021-02-26--15-57-06.tar.gz`
## Expected Behavior
Restore action completes, old URLs are restored, and I can proceed with regenerating new v3 URL and finalizing the Xenial -> Focal migration.
## Actual Behavior
Restore action fails. Even when I include the `--preserve-tor-config` flag, it still fails.
## Comments
On one hand, the failure is expected, since Focal is v3-only, but in the context of a migration from Xenial, it's likely we're going to have admins migrating to Focal from a recently created backup, so I recommend we defer the fail-closed behavior to a subsequent release. That'd have bearing on WIP docs changes in e.g. https://github.com/freedomofpress/securedrop-docs/pull/133
The above is a policy question, but this ticket is also pointing out some bugs that should be fixed. For one, `--preserve-tor-config` is not honored, and it should be.
</issue>
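For reference, a small standalone illustration of the service-version detection that the comparison script below relies on; the sample torrc lines are invented, but the regex and the v2/v3 classification mirror the listing.

```python
import re

service_re = re.compile(r"HiddenServiceDir\s+(?:.*)/(.*)")

sample_torrc = [
    "HiddenServiceDir /var/lib/tor/services/ssh",       # no 'v3' in the name -> v2
    "HiddenServiceDir /var/lib/tor/services/sourcev3",  # 'v3' in the name -> v3
]

versions = set()
for line in sample_torrc:
    m = service_re.match(line)
    if m:
        versions.add(3 if "v3" in m.group(1) else 2)

print(versions)  # {2, 3}: this torrc offers both v2 and v3 services
```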
<code>
[start of install_files/ansible-base/roles/restore/files/compare_torrc.py]
1 #!/usr/bin/env python
2
3 #
4 # Compares Tor configurations on the app server and from a backup. If
5 # restoring the backup would alter the server's Tor configuration,
6 # print a warning and exit.
7 #
8
9 from __future__ import print_function
10
11 import os
12 import re
13 import sys
14
15
16 def get_tor_versions(path):
17 """
18 Determine which service versions are offered in the given torrc.
19 """
20 service_re = re.compile(r"HiddenServiceDir\s+(?:.*)/(.*)")
21 versions = set([])
22 with open(path) as f:
23 for line in f:
24 m = service_re.match(line)
25 if m:
26 service = m.group(1)
27 if "v3" in service:
28 versions.add(3)
29 else:
30 versions.add(2)
31
32 return versions
33
34
35 def strset(s):
36 """
37 Sort the given set and join members with "and".
38 """
39 return " and ".join(str(v) for v in sorted(s))
40
41
42 if __name__ == "__main__":
43 tempdir = sys.argv[1]
44
45 server_versions = get_tor_versions(os.path.join(tempdir, "app/etc/tor/torrc"))
46 backup_versions = get_tor_versions(os.path.join(tempdir, "backup/etc/tor/torrc"))
47
48 if server_versions == backup_versions:
49 print("The Tor configuration in the backup matches the server.")
50 sys.exit(0)
51
52 if (3 in server_versions) and (3 in backup_versions):
53 print("V3 services detected in backup and server - proceeding with v3-only restore")
54 sys.exit(0)
55
56 print(
57 "The Tor configuration on the app server offers version {} services.".format(
58 strset(server_versions)
59 )
60 )
61
62 print(
63 "The Tor configuration in this backup offers version {} services.".format(
64 strset(backup_versions)
65 )
66 )
67
68 print("\nRestoring a backup with a different Tor configuration than the server ")
69 print("is currently unsupported. If you require technical assistance, please ")
70 print("contact the SecureDrop team via the support portal or at ")
71 print("[email protected].")
72
73 sys.exit(1)
74
[end of install_files/ansible-base/roles/restore/files/compare_torrc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/install_files/ansible-base/roles/restore/files/compare_torrc.py b/install_files/ansible-base/roles/restore/files/compare_torrc.py
--- a/install_files/ansible-base/roles/restore/files/compare_torrc.py
+++ b/install_files/ansible-base/roles/restore/files/compare_torrc.py
@@ -46,11 +46,11 @@
backup_versions = get_tor_versions(os.path.join(tempdir, "backup/etc/tor/torrc"))
if server_versions == backup_versions:
- print("The Tor configuration in the backup matches the server.")
+ print("Valid configuration: the Tor configuration in the backup matches the server.")
sys.exit(0)
if (3 in server_versions) and (3 in backup_versions):
- print("V3 services detected in backup and server - proceeding with v3-only restore")
+ print("Valid configuration: V3 services only`")
sys.exit(0)
print(
@@ -65,9 +65,11 @@
)
)
- print("\nRestoring a backup with a different Tor configuration than the server ")
- print("is currently unsupported. If you require technical assistance, please ")
- print("contact the SecureDrop team via the support portal or at ")
+ print("\nIncompatible configuration: Restoring a backup including a different ")
+ print("Tor configuration than the server Tor configuration is unsupported. ")
+ print("Optionally, use --preserve-tor-config to apply a data-only backup.")
+ print("If you require technical assistance, please contact the ")
+ print("SecureDrop team via the support portal or at ")
print("[email protected].")
sys.exit(1)
|
{"golden_diff": "diff --git a/install_files/ansible-base/roles/restore/files/compare_torrc.py b/install_files/ansible-base/roles/restore/files/compare_torrc.py\n--- a/install_files/ansible-base/roles/restore/files/compare_torrc.py\n+++ b/install_files/ansible-base/roles/restore/files/compare_torrc.py\n@@ -46,11 +46,11 @@\n backup_versions = get_tor_versions(os.path.join(tempdir, \"backup/etc/tor/torrc\"))\n \n if server_versions == backup_versions:\n- print(\"The Tor configuration in the backup matches the server.\")\n+ print(\"Valid configuration: the Tor configuration in the backup matches the server.\")\n sys.exit(0)\n \n if (3 in server_versions) and (3 in backup_versions):\n- print(\"V3 services detected in backup and server - proceeding with v3-only restore\")\n+ print(\"Valid configuration: V3 services only`\")\n sys.exit(0)\n \n print(\n@@ -65,9 +65,11 @@\n )\n )\n \n- print(\"\\nRestoring a backup with a different Tor configuration than the server \")\n- print(\"is currently unsupported. If you require technical assistance, please \")\n- print(\"contact the SecureDrop team via the support portal or at \")\n+ print(\"\\nIncompatible configuration: Restoring a backup including a different \")\n+ print(\"Tor configuration than the server Tor configuration is unsupported. \")\n+ print(\"Optionally, use --preserve-tor-config to apply a data-only backup.\")\n+ print(\"If you require technical assistance, please contact the \")\n+ print(\"SecureDrop team via the support portal or at \")\n print(\"[email protected].\")\n \n sys.exit(1)\n", "issue": "Restoring tarball to Focal shows error for v2\n## Description\r\n\r\nWhen restoring a v2-only Xenial backup tarball to a v3-only Focal instance, the restore action fails. It fails even if the admin explicitly requests that the tor config be preserved as-is. \r\n\r\n## Steps to Reproduce\r\nI used libvirt-based VMs for testing, and performed all admin actions from a virtualized Tails v4.16 VM.\r\n\r\n1. Create a v2-only backup tarball from a Xenial host.\r\n2. Perform a clean install of Focal, with v3-only vars.\r\n3. Attempt to restore the backup: `./securedrop-admin --force restore --preserve-tor-config ~/Persistent/backups/xenial-v2-only/sd-backup-2021-02-26--15-57-06.tar.gz`\r\n\r\n## Expected Behavior\r\n\r\nRestore action completes, old URLs are restored, and I can proceed with regenerating new v3 URL and finalizing the Xenial -> Focal migration. \r\n\r\n\r\n## Actual Behavior\r\n\r\nRestore action fails. Even when I include the `--preserve-tor-config` flag, it still fails. \r\n\r\n## Comments\r\nOn one hand, the failure is expected, since Focal is v3-only, but in the context of a migration from Xenial, it's likely we're going to have admins migrating to Focal from a recently created backup, so I recommend we defer the fail-closed behavior to a subsequent release. That'd have bearing on WIP docs changes in e..g. https://github.com/freedomofpress/securedrop-docs/pull/133\r\n\r\nThe above is a policy question, but this ticket is also pointing out some bugs that should be fixed. For one, `--preserve-tor-config` is not honored, and it should be.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n#\n# Compares Tor configurations on the app server and from a backup. 
If\n# restoring the backup would alter the server's Tor configuration,\n# print a warning and exit.\n#\n\nfrom __future__ import print_function\n\nimport os\nimport re\nimport sys\n\n\ndef get_tor_versions(path):\n \"\"\"\n Determine which service versions are offered in the given torrc.\n \"\"\"\n service_re = re.compile(r\"HiddenServiceDir\\s+(?:.*)/(.*)\")\n versions = set([])\n with open(path) as f:\n for line in f:\n m = service_re.match(line)\n if m:\n service = m.group(1)\n if \"v3\" in service:\n versions.add(3)\n else:\n versions.add(2)\n\n return versions\n\n\ndef strset(s):\n \"\"\"\n Sort the given set and join members with \"and\".\n \"\"\"\n return \" and \".join(str(v) for v in sorted(s))\n\n\nif __name__ == \"__main__\":\n tempdir = sys.argv[1]\n\n server_versions = get_tor_versions(os.path.join(tempdir, \"app/etc/tor/torrc\"))\n backup_versions = get_tor_versions(os.path.join(tempdir, \"backup/etc/tor/torrc\"))\n\n if server_versions == backup_versions:\n print(\"The Tor configuration in the backup matches the server.\")\n sys.exit(0)\n\n if (3 in server_versions) and (3 in backup_versions):\n print(\"V3 services detected in backup and server - proceeding with v3-only restore\")\n sys.exit(0)\n\n print(\n \"The Tor configuration on the app server offers version {} services.\".format(\n strset(server_versions)\n )\n )\n\n print(\n \"The Tor configuration in this backup offers version {} services.\".format(\n strset(backup_versions)\n )\n )\n\n print(\"\\nRestoring a backup with a different Tor configuration than the server \")\n print(\"is currently unsupported. If you require technical assistance, please \")\n print(\"contact the SecureDrop team via the support portal or at \")\n print(\"[email protected].\")\n\n sys.exit(1)\n", "path": "install_files/ansible-base/roles/restore/files/compare_torrc.py"}]}
| 1,560 | 383 |
gh_patches_debug_20393
|
rasdani/github-patches
|
git_diff
|
PlasmaPy__PlasmaPy-405
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add doc page on authors and credits
It would be really helpful to have a page in our `docs` directory that lists the Coordinating Committee members and a full list of authors of the code, along with other credits. Some examples are Astropy's [Authors and Credits page](http://docs.astropy.org/en/stable/credits.html), and SunPy's [The Project](http://sunpy.org/team.html). The list of code contributors can already be accessed from our GitHub repository and the commit log; however, this often does not include full names. We might be able to find a way to automate this, though that's low priority. We should do this prior to our 0.1 release.
To help with the organization, we should probably create an `about` subdirectory that will include pages about the PlasmaPy project as a whole, including this one. The `docs/stability.rst` page could go in this directory too.
</issue>
<code>
[start of plasmapy/constants/__init__.py]
1 """Physical and mathematical constants."""
2
3 from numpy import pi
4
5 from astropy.constants.si import (
6 e,
7 mu0,
8 eps0,
9 k_B,
10 c,
11 G,
12 h,
13 hbar,
14 m_p,
15 m_n,
16 m_e,
17 u,
18 sigma_sb,
19 N_A,
20 R,
21 Ryd,
22 a0,
23 muB,
24 sigma_T,
25 au,
26 pc,
27 kpc,
28 g0,
29 L_sun,
30 M_sun,
31 R_sun,
32 M_earth,
33 R_earth,
34 )
35
36 from astropy.constants import atm
37
[end of plasmapy/constants/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plasmapy/constants/__init__.py b/plasmapy/constants/__init__.py
--- a/plasmapy/constants/__init__.py
+++ b/plasmapy/constants/__init__.py
@@ -1,4 +1,8 @@
-"""Physical and mathematical constants."""
+"""
+Contains physical and mathematical constants commonly used in plasma
+physics.
+
+"""
from numpy import pi
@@ -34,3 +38,26 @@
)
from astropy.constants import atm
+
+# The following code is modified from astropy.constants to produce a
+# table containing information on the constants contained with PlasmaPy.
+# Mathematical constants can be just entered.
+
+_lines = [
+ 'The following constants are available:\n',
+ '========== ================= ================ ============================================',
+ 'Name Value Units Description',
+ '========== ================= ================ ============================================',
+ " pi 3.141592653589793 Ratio of circumference to diameter of circle",
+]
+
+_constants = [eval(item) for item in dir() if item[0] != '_' and item != 'pi']
+for _const in _constants:
+ _lines.append('{0:^10} {1:^17.12g} {2:^16} {3}'
+ .format(_const.abbrev, _const.value, _const._unit_string, _const.name))
+
+_lines.append(_lines[1])
+
+__doc__ += '\n'.join(_lines)
+
+del _lines, _const, _constants
|
{"golden_diff": "diff --git a/plasmapy/constants/__init__.py b/plasmapy/constants/__init__.py\n--- a/plasmapy/constants/__init__.py\n+++ b/plasmapy/constants/__init__.py\n@@ -1,4 +1,8 @@\n-\"\"\"Physical and mathematical constants.\"\"\"\n+\"\"\"\n+Contains physical and mathematical constants commonly used in plasma\n+physics.\n+\n+\"\"\"\n \n from numpy import pi\n \n@@ -34,3 +38,26 @@\n )\n \n from astropy.constants import atm\n+\n+# The following code is modified from astropy.constants to produce a\n+# table containing information on the constants contained with PlasmaPy.\n+# Mathematical constants can be just entered.\n+\n+_lines = [\n+ 'The following constants are available:\\n',\n+ '========== ================= ================ ============================================',\n+ 'Name Value Units Description',\n+ '========== ================= ================ ============================================',\n+ \" pi 3.141592653589793 Ratio of circumference to diameter of circle\",\n+]\n+\n+_constants = [eval(item) for item in dir() if item[0] != '_' and item != 'pi']\n+for _const in _constants:\n+ _lines.append('{0:^10} {1:^17.12g} {2:^16} {3}'\n+ .format(_const.abbrev, _const.value, _const._unit_string, _const.name))\n+\n+_lines.append(_lines[1])\n+\n+__doc__ += '\\n'.join(_lines)\n+\n+del _lines, _const, _constants\n", "issue": "Add doc page on authors and credits\nIt would be really helpful to have a page in our `docs` directory that lists the Coordinating Committee members and a full list of authors of the code, along with other credits. Some examples are Astropy's [Authors and Credits page](http://docs.astropy.org/en/stable/credits.html), and SunPy's [The Project](http://sunpy.org/team.html). The list of code contributors can already be accessed from our GitHub repository and the commit log; however, this often does not include full names. We might be able to find a way to automate this, though that's low priority. We should do this prior to our 0.1 release.\r\n\r\nTo help with the organization, we should probably create an `about` subdirectory that will include pages about the PlasmaPy project as a whole, including this one. The `docs/stability.rst` page could go in this directory too.\n", "before_files": [{"content": "\"\"\"Physical and mathematical constants.\"\"\"\n\nfrom numpy import pi\n\nfrom astropy.constants.si import (\n e,\n mu0,\n eps0,\n k_B,\n c,\n G,\n h,\n hbar,\n m_p,\n m_n,\n m_e,\n u,\n sigma_sb,\n N_A,\n R,\n Ryd,\n a0,\n muB,\n sigma_T,\n au,\n pc,\n kpc,\n g0,\n L_sun,\n M_sun,\n R_sun,\n M_earth,\n R_earth,\n)\n\nfrom astropy.constants import atm\n", "path": "plasmapy/constants/__init__.py"}]}
| 943 | 350 |
gh_patches_debug_20348
|
rasdani/github-patches
|
git_diff
|
google__personfinder-397
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Internal server error on multiview.py with invalid record ID
multiview.py returns Internal server error when one of the specified IDs is invalid. It should return 404 or something instead.
```
AttributeError: 'NoneType' object has no attribute 'person_record_id'
at get (multiview.py:47)
at serve (main.py:622)
at get (main.py:647)
```
</issue>
<code>
[start of app/multiview.py]
1 #!/usr/bin/python2.7
2 # Copyright 2010 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from model import *
17 from utils import *
18 import pfif
19 import reveal
20 import subscribe
21 import view
22
23 from django.utils.translation import ugettext as _
24
25 # Fields to show for side-by-side comparison.
26 COMPARE_FIELDS = pfif.PFIF_1_4.fields['person'] + ['primary_full_name']
27
28
29 class Handler(BaseHandler):
30 def get(self):
31 # To handle multiple persons, we create a single object where
32 # each property is a list of values, one for each person.
33 # This makes page rendering easier.
34 person = dict([(prop, []) for prop in COMPARE_FIELDS])
35 any_person = dict([(prop, None) for prop in COMPARE_FIELDS])
36
37 # Get all persons from db.
38 # TODO: Can later optimize to use fewer DB calls.
39 for i in [1, 2, 3]:
40 id = self.request.get('id%d' % i)
41 if not id:
42 break
43 p = Person.get(self.repo, id)
44 sanitize_urls(p)
45
46 for prop in COMPARE_FIELDS:
47 val = getattr(p, prop)
48 if prop == 'sex': # convert enum value to localized text
49 val = get_person_sex_text(p)
50 person[prop].append(val)
51 any_person[prop] = any_person[prop] or val
52
53 # Compute the local times for the date fields on the person and format.
54 person['source_datetime_local_string'] = map(
55 self.to_formatted_local_datetime, person['source_date'])
56
57 # Check if private info should be revealed.
58 content_id = 'multiview:' + ','.join(person['person_record_id'])
59 reveal_url = reveal.make_reveal_url(self, content_id)
60 show_private_info = reveal.verify(content_id, self.params.signature)
61
62 standalone = self.request.get('standalone')
63
64 # TODO: Handle no persons found.
65
66 person['profile_pages'] = [view.get_profile_pages(profile_urls, self)
67 for profile_urls in person['profile_urls']]
68 any_person['profile_pages'] = any(person['profile_pages'])
69
70 # Note: we're not showing notes and linked persons information
71 # here at the moment.
72 self.render('multiview.html',
73 person=person, any=any_person, standalone=standalone,
74 cols=len(person['full_name']) + 1,
75 onload_function='view_page_loaded()', markdup=True,
76 show_private_info=show_private_info, reveal_url=reveal_url)
77
78 def post(self):
79 if not self.params.text:
80 return self.error(
81 200, _('Message is required. Please go back and try again.'))
82
83 if not self.params.author_name:
84 return self.error(
85 200, _('Your name is required in the "About you" section. Please go back and try again.'))
86
87 # TODO: To reduce possible abuse, we currently limit to 3 person
88 # match. We could guard using e.g. an XSRF token, which I don't know how
89 # to build in GAE.
90
91 ids = set()
92 for i in [1, 2, 3]:
93 id = getattr(self.params, 'id%d' % i)
94 if not id:
95 break
96 ids.add(id)
97
98 if len(ids) > 1:
99 notes = []
100 for person_id in ids:
101 person = Person.get(self.repo, person_id)
102 person_notes = []
103 for other_id in ids - set([person_id]):
104 note = Note.create_original(
105 self.repo,
106 entry_date=get_utcnow(),
107 person_record_id=person_id,
108 linked_person_record_id=other_id,
109 text=self.params.text,
110 author_name=self.params.author_name,
111 author_phone=self.params.author_phone,
112 author_email=self.params.author_email,
113 source_date=get_utcnow())
114 person_notes.append(note)
115 # Notify person's subscribers of all new duplicates. We do not
116 # follow links since each Person record in the ids list gets its
117 # own note. However, 1) when > 2 records are marked as
118 # duplicates, subscribers will still receive multiple
119 # notifications, and 2) subscribers to already-linked Persons
120 # will not be notified of the new link.
121 subscribe.send_notifications(self, person, person_notes, False)
122 notes += person_notes
123 # Write all notes to store
124 db.put(notes)
125 self.redirect('/view', id=self.params.id1)
126
[end of app/multiview.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/multiview.py b/app/multiview.py
--- a/app/multiview.py
+++ b/app/multiview.py
@@ -41,6 +41,11 @@
if not id:
break
p = Person.get(self.repo, id)
+ if not p:
+ return self.error(
+ 404,
+ _("This person's entry does not exist or has been "
+ "deleted."))
sanitize_urls(p)
for prop in COMPARE_FIELDS:
@@ -103,7 +108,7 @@
for other_id in ids - set([person_id]):
note = Note.create_original(
self.repo,
- entry_date=get_utcnow(),
+ entry_date=get_utcnow(),
person_record_id=person_id,
linked_person_record_id=other_id,
text=self.params.text,
|
{"golden_diff": "diff --git a/app/multiview.py b/app/multiview.py\n--- a/app/multiview.py\n+++ b/app/multiview.py\n@@ -41,6 +41,11 @@\n if not id:\n break\n p = Person.get(self.repo, id)\n+ if not p:\n+ return self.error(\n+ 404,\n+ _(\"This person's entry does not exist or has been \"\n+ \"deleted.\"))\n sanitize_urls(p)\n \n for prop in COMPARE_FIELDS:\n@@ -103,7 +108,7 @@\n for other_id in ids - set([person_id]):\n note = Note.create_original(\n self.repo,\n- entry_date=get_utcnow(), \n+ entry_date=get_utcnow(),\n person_record_id=person_id,\n linked_person_record_id=other_id,\n text=self.params.text,\n", "issue": "Internal server error on multiview.py with invalid record ID\nmultiview.py returns Internal server error when one of the specified IDs is invalid. It should return 404 or something instead.\r\n\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'person_record_id'\r\nat get (multiview.py:47)\r\nat serve (main.py:622)\r\nat get (main.py:647)\r\n```\n", "before_files": [{"content": "#!/usr/bin/python2.7\n# Copyright 2010 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom model import *\nfrom utils import *\nimport pfif\nimport reveal\nimport subscribe\nimport view\n\nfrom django.utils.translation import ugettext as _\n\n# Fields to show for side-by-side comparison.\nCOMPARE_FIELDS = pfif.PFIF_1_4.fields['person'] + ['primary_full_name']\n\n\nclass Handler(BaseHandler):\n def get(self):\n # To handle multiple persons, we create a single object where\n # each property is a list of values, one for each person.\n # This makes page rendering easier.\n person = dict([(prop, []) for prop in COMPARE_FIELDS])\n any_person = dict([(prop, None) for prop in COMPARE_FIELDS])\n\n # Get all persons from db.\n # TODO: Can later optimize to use fewer DB calls.\n for i in [1, 2, 3]:\n id = self.request.get('id%d' % i)\n if not id:\n break\n p = Person.get(self.repo, id)\n sanitize_urls(p)\n\n for prop in COMPARE_FIELDS:\n val = getattr(p, prop)\n if prop == 'sex': # convert enum value to localized text\n val = get_person_sex_text(p)\n person[prop].append(val)\n any_person[prop] = any_person[prop] or val\n\n # Compute the local times for the date fields on the person and format.\n person['source_datetime_local_string'] = map(\n self.to_formatted_local_datetime, person['source_date'])\n\n # Check if private info should be revealed.\n content_id = 'multiview:' + ','.join(person['person_record_id'])\n reveal_url = reveal.make_reveal_url(self, content_id)\n show_private_info = reveal.verify(content_id, self.params.signature)\n\n standalone = self.request.get('standalone')\n\n # TODO: Handle no persons found.\n\n person['profile_pages'] = [view.get_profile_pages(profile_urls, self)\n for profile_urls in person['profile_urls']]\n any_person['profile_pages'] = any(person['profile_pages'])\n\n # Note: we're not showing notes and linked persons information\n # here at the moment.\n self.render('multiview.html',\n person=person, any=any_person, standalone=standalone,\n 
cols=len(person['full_name']) + 1,\n onload_function='view_page_loaded()', markdup=True,\n show_private_info=show_private_info, reveal_url=reveal_url)\n\n def post(self):\n if not self.params.text:\n return self.error(\n 200, _('Message is required. Please go back and try again.'))\n\n if not self.params.author_name:\n return self.error(\n 200, _('Your name is required in the \"About you\" section. Please go back and try again.'))\n\n # TODO: To reduce possible abuse, we currently limit to 3 person\n # match. We could guard using e.g. an XSRF token, which I don't know how\n # to build in GAE.\n\n ids = set()\n for i in [1, 2, 3]:\n id = getattr(self.params, 'id%d' % i)\n if not id:\n break\n ids.add(id)\n\n if len(ids) > 1:\n notes = []\n for person_id in ids:\n person = Person.get(self.repo, person_id)\n person_notes = []\n for other_id in ids - set([person_id]):\n note = Note.create_original(\n self.repo,\n entry_date=get_utcnow(), \n person_record_id=person_id,\n linked_person_record_id=other_id,\n text=self.params.text,\n author_name=self.params.author_name,\n author_phone=self.params.author_phone,\n author_email=self.params.author_email,\n source_date=get_utcnow())\n person_notes.append(note)\n # Notify person's subscribers of all new duplicates. We do not\n # follow links since each Person record in the ids list gets its\n # own note. However, 1) when > 2 records are marked as\n # duplicates, subscribers will still receive multiple\n # notifications, and 2) subscribers to already-linked Persons\n # will not be notified of the new link.\n subscribe.send_notifications(self, person, person_notes, False)\n notes += person_notes\n # Write all notes to store\n db.put(notes)\n self.redirect('/view', id=self.params.id1)\n", "path": "app/multiview.py"}]}
| 1,995 | 197 |
gh_patches_debug_22455
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-5250
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Erreurs 500 pour "Ajouter en Une"
Sur certains contenus, "Ajouter en Une" renvoie une erreur 500
Sujet : https://zestedesavoir.com/forums/sujet/11676/ajouter-en-une/#p195045
*Envoyé depuis Zeste de Savoir*
</issue>
<code>
[start of zds/featured/views.py]
1 from datetime import datetime
2
3 from django.contrib import messages
4 from django.contrib.auth.decorators import login_required, permission_required
5 from django.urls import reverse
6 from django.db import transaction
7 from django.shortcuts import redirect
8 from django.utils.decorators import method_decorator
9 from django.utils.translation import ugettext as _
10 from django.views.generic import CreateView, RedirectView, UpdateView, FormView, DeleteView
11 from django.views.generic.list import MultipleObjectMixin
12
13 from django.conf import settings
14 from zds.featured.forms import FeaturedResourceForm, FeaturedMessageForm
15 from zds.featured.models import FeaturedResource, FeaturedMessage
16 from zds.forum.models import Topic
17 from zds.tutorialv2.models.database import PublishedContent
18 from zds.utils.paginator import ZdSPagingListView
19
20
21 class FeaturedResourceList(ZdSPagingListView):
22 """
23 Displays the list of featured resources.
24 """
25
26 context_object_name = 'featured_resource_list'
27 paginate_by = settings.ZDS_APP['featured_resource']['featured_per_page']
28 queryset = FeaturedResource.objects.all().order_by('-pubdate')
29 template_name = 'featured/index.html'
30
31 @method_decorator(login_required)
32 @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))
33 def dispatch(self, request, *args, **kwargs):
34 return super(FeaturedResourceList, self).dispatch(request, *args, **kwargs)
35
36
37 class FeaturedResourceCreate(CreateView):
38 """
39 Creates a new featured resource.
40 """
41
42 form_class = FeaturedResourceForm
43 template_name = 'featured/resource/create.html'
44 context_object_name = 'featured_resource'
45 initial_error_message = _('Le contenu est introuvable')
46 displayed_content_type = {'TUTORIAL': _('Un tutoriel'),
47 'ARTICLE': _('Un article'),
48 'OPINION': _('Un billet'),
49 'TOPIC': _('Un sujet')}
50
51 @method_decorator(login_required)
52 @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))
53 def dispatch(self, request, *args, **kwargs):
54 return super(FeaturedResourceCreate, self).dispatch(request, *args, **kwargs)
55
56 def get_initial_topic_data(self, topic_id):
57 try:
58 content = Topic.objects.get(pk=int(topic_id))
59 except (Topic.DoesNotExist, ValueError):
60 messages.error(self.request, self.initial_error_message)
61 return {}
62 return {'title': content.title,
63 'type': self.displayed_content_type['TOPIC'],
64 'authors': str(content.author),
65 'url': self.request.build_absolute_uri(content.get_absolute_url())}
66
67 def get_initial_content_data(self, content_id):
68 try:
69 content = PublishedContent.objects.get(content__pk=int(content_id))
70 except (PublishedContent.DoesNotExist, ValueError):
71 messages.error(self.request, self.initial_error_message)
72 return {}
73 displayed_authors = ', '.join([str(x) for x in content.authors.all()])
74 if content.content.image:
75 image_url = self.request.build_absolute_uri(content.content.image.physical['featured'].url)
76 else:
77 image_url = None
78 return {'title': content.title(),
79 'type': self.displayed_content_type[content.content_type],
80 'authors': displayed_authors,
81 'url': self.request.build_absolute_uri(content.content.get_absolute_url_online()),
82 'image_url': image_url}
83
84 def get_initial(self):
85 initial = super(FeaturedResourceCreate, self).get_initial()
86 content_type = self.request.GET.get('content_type', None)
87 content_id = self.request.GET.get('content_id', None)
88 if content_type == 'topic' and content_id:
89 initial.update(**self.get_initial_topic_data(content_id))
90 elif content_type == 'published_content' and content_id:
91 initial.update(**self.get_initial_content_data(content_id))
92 return initial
93
94 def get_form_kwargs(self):
95 kw = super(FeaturedResourceCreate, self).get_form_kwargs()
96 kw['hide_major_update_field'] = True
97 return kw
98
99 def form_valid(self, form):
100 featured_resource = FeaturedResource()
101 featured_resource.title = form.cleaned_data.get('title')
102 featured_resource.type = form.cleaned_data.get('type')
103 featured_resource.authors = form.cleaned_data.get('authors')
104 featured_resource.image_url = form.cleaned_data.get('image_url')
105 featured_resource.url = form.cleaned_data.get('url')
106
107 if form.cleaned_data.get('major_update', False):
108 featured_resource.pubdate = datetime.now()
109 else:
110 featured_resource.pubdate = form.cleaned_data.get('pubdate')
111
112 featured_resource.save()
113
114 messages.success(self.request, _('La une a été créée.'))
115 return redirect(reverse('featured-resource-list'))
116
117
118 class FeaturedResourceUpdate(UpdateView):
119 """
120 Updates a featured resource.
121 """
122
123 form_class = FeaturedResourceForm
124 template_name = 'featured/resource/update.html'
125 queryset = FeaturedResource.objects.all()
126 context_object_name = 'featured_resource'
127
128 @method_decorator(login_required)
129 @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))
130 def dispatch(self, request, *args, **kwargs):
131 return super(FeaturedResourceUpdate, self).dispatch(request, *args, **kwargs)
132
133 def get_initial(self):
134 initial = super(FeaturedResourceUpdate, self).get_initial()
135 initial.update({
136 'title': self.object.title,
137 'type': self.object.type,
138 'authors': self.object.authors,
139 'image_url': self.object.image_url,
140 'url': self.object.url,
141 'pubdate': self.object.pubdate,
142 })
143
144 return initial
145
146 def form_valid(self, form):
147
148 self.object.title = form.cleaned_data.get('title')
149 self.object.type = form.cleaned_data.get('type')
150 self.object.authors = form.cleaned_data.get('authors')
151 self.object.image_url = form.cleaned_data.get('image_url')
152 self.object.url = form.cleaned_data.get('url')
153 if form.cleaned_data.get('major_update', False):
154 self.object.pubdate = datetime.now()
155 else:
156 self.object.pubdate = form.cleaned_data.get('pubdate')
157
158 messages.success(self.request, _('La une a été mise à jour.'))
159 self.success_url = reverse('featured-resource-list')
160 return super(FeaturedResourceUpdate, self).form_valid(form)
161
162 def get_form(self, form_class=None):
163 form = super(FeaturedResourceUpdate, self).get_form(form_class)
164 form.helper.form_action = reverse('featured-resource-update', args=[self.object.pk])
165 return form
166
167
168 class FeaturedResourceDeleteDetail(DeleteView):
169 """
170 Deletes a featured resource.
171 """
172
173 model = FeaturedResource
174
175 @method_decorator(login_required)
176 @method_decorator(transaction.atomic)
177 @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))
178 def dispatch(self, request, *args, **kwargs):
179 self.success_url = reverse('featured-resource-list')
180 return super(FeaturedResourceDeleteDetail, self).dispatch(request, *args, **kwargs)
181
182 def post(self, request, *args, **kwargs):
183 r = super(FeaturedResourceDeleteDetail, self).post(request, *args, **kwargs)
184 messages.success(request, _('La une a été supprimée avec succès.'))
185 return r
186
187
188 class FeaturedResourceDeleteList(MultipleObjectMixin, RedirectView):
189 """
190 Deletes a list of featured resources.
191 """
192 permanent = False
193
194 @method_decorator(login_required)
195 @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))
196 def dispatch(self, request, *args, **kwargs):
197 return super(FeaturedResourceDeleteList, self).dispatch(request, *args, **kwargs)
198
199 def get_queryset(self):
200 items_list = self.request.POST.getlist('items')
201 return FeaturedResource.objects.filter(pk__in=items_list)
202
203 def post(self, request, *args, **kwargs):
204 for featured_resource in self.get_queryset():
205 featured_resource.delete()
206
207 messages.success(request, _('Les unes ont été supprimées avec succès.'))
208
209 return redirect(reverse('featured-resource-list'))
210
211
212 class FeaturedMessageCreateUpdate(FormView):
213 """
214 Creates or updates the featured message.
215 """
216
217 form_class = FeaturedMessageForm
218 template_name = 'featured/message/create.html'
219 last_message = None
220
221 @method_decorator(login_required)
222 @method_decorator(permission_required('featured.change_featuredmessage', raise_exception=True))
223 def dispatch(self, request, *args, **kwargs):
224 self.last_message = FeaturedMessage.objects.get_last_message()
225 return super(FeaturedMessageCreateUpdate, self).dispatch(request, *args, **kwargs)
226
227 def get_initial(self):
228 init = super(FeaturedMessageCreateUpdate, self).get_initial()
229
230 if self.last_message is not None:
231 init.update({
232 'hook': self.last_message.hook,
233 'message': self.last_message.message,
234 'url': self.last_message.url,
235 })
236
237 return init
238
239 def form_valid(self, form):
240 if self.last_message:
241 self.last_message.delete()
242
243 featured_message = FeaturedMessage()
244 featured_message.hook = form.data.get('hook')
245 featured_message.message = form.data.get('message')
246 featured_message.url = form.data.get('url')
247 featured_message.save()
248
249 messages.success(self.request, _('Le message a été changé'))
250 return redirect(reverse('featured-resource-list'))
251
[end of zds/featured/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zds/featured/views.py b/zds/featured/views.py
--- a/zds/featured/views.py
+++ b/zds/featured/views.py
@@ -66,7 +66,7 @@
def get_initial_content_data(self, content_id):
try:
- content = PublishedContent.objects.get(content__pk=int(content_id))
+ content = PublishedContent.objects.filter(must_redirect=False, content__pk=int(content_id)).first()
except (PublishedContent.DoesNotExist, ValueError):
messages.error(self.request, self.initial_error_message)
return {}
@@ -79,7 +79,7 @@
'type': self.displayed_content_type[content.content_type],
'authors': displayed_authors,
'url': self.request.build_absolute_uri(content.content.get_absolute_url_online()),
- 'image_url': image_url}
+ 'image_url': self.request.build_absolute_uri(image_url)}
def get_initial(self):
initial = super(FeaturedResourceCreate, self).get_initial()
|
{"golden_diff": "diff --git a/zds/featured/views.py b/zds/featured/views.py\n--- a/zds/featured/views.py\n+++ b/zds/featured/views.py\n@@ -66,7 +66,7 @@\n \n def get_initial_content_data(self, content_id):\n try:\n- content = PublishedContent.objects.get(content__pk=int(content_id))\n+ content = PublishedContent.objects.filter(must_redirect=False, content__pk=int(content_id)).first()\n except (PublishedContent.DoesNotExist, ValueError):\n messages.error(self.request, self.initial_error_message)\n return {}\n@@ -79,7 +79,7 @@\n 'type': self.displayed_content_type[content.content_type],\n 'authors': displayed_authors,\n 'url': self.request.build_absolute_uri(content.content.get_absolute_url_online()),\n- 'image_url': image_url}\n+ 'image_url': self.request.build_absolute_uri(image_url)}\n \n def get_initial(self):\n initial = super(FeaturedResourceCreate, self).get_initial()\n", "issue": "Erreurs 500 pour \"Ajouter en Une\"\nSur certains contenus, \"Ajouter en Une\" renvoie une erreur 500\r\n\r\nSujet : https://zestedesavoir.com/forums/sujet/11676/ajouter-en-une/#p195045\r\n*Envoy\u00e9 depuis Zeste de Savoir*\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required, permission_required\nfrom django.urls import reverse\nfrom django.db import transaction\nfrom django.shortcuts import redirect\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import CreateView, RedirectView, UpdateView, FormView, DeleteView\nfrom django.views.generic.list import MultipleObjectMixin\n\nfrom django.conf import settings\nfrom zds.featured.forms import FeaturedResourceForm, FeaturedMessageForm\nfrom zds.featured.models import FeaturedResource, FeaturedMessage\nfrom zds.forum.models import Topic\nfrom zds.tutorialv2.models.database import PublishedContent\nfrom zds.utils.paginator import ZdSPagingListView\n\n\nclass FeaturedResourceList(ZdSPagingListView):\n \"\"\"\n Displays the list of featured resources.\n \"\"\"\n\n context_object_name = 'featured_resource_list'\n paginate_by = settings.ZDS_APP['featured_resource']['featured_per_page']\n queryset = FeaturedResource.objects.all().order_by('-pubdate')\n template_name = 'featured/index.html'\n\n @method_decorator(login_required)\n @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))\n def dispatch(self, request, *args, **kwargs):\n return super(FeaturedResourceList, self).dispatch(request, *args, **kwargs)\n\n\nclass FeaturedResourceCreate(CreateView):\n \"\"\"\n Creates a new featured resource.\n \"\"\"\n\n form_class = FeaturedResourceForm\n template_name = 'featured/resource/create.html'\n context_object_name = 'featured_resource'\n initial_error_message = _('Le contenu est introuvable')\n displayed_content_type = {'TUTORIAL': _('Un tutoriel'),\n 'ARTICLE': _('Un article'),\n 'OPINION': _('Un billet'),\n 'TOPIC': _('Un sujet')}\n\n @method_decorator(login_required)\n @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))\n def dispatch(self, request, *args, **kwargs):\n return super(FeaturedResourceCreate, self).dispatch(request, *args, **kwargs)\n\n def get_initial_topic_data(self, topic_id):\n try:\n content = Topic.objects.get(pk=int(topic_id))\n except (Topic.DoesNotExist, ValueError):\n messages.error(self.request, self.initial_error_message)\n return {}\n return {'title': 
content.title,\n 'type': self.displayed_content_type['TOPIC'],\n 'authors': str(content.author),\n 'url': self.request.build_absolute_uri(content.get_absolute_url())}\n\n def get_initial_content_data(self, content_id):\n try:\n content = PublishedContent.objects.get(content__pk=int(content_id))\n except (PublishedContent.DoesNotExist, ValueError):\n messages.error(self.request, self.initial_error_message)\n return {}\n displayed_authors = ', '.join([str(x) for x in content.authors.all()])\n if content.content.image:\n image_url = self.request.build_absolute_uri(content.content.image.physical['featured'].url)\n else:\n image_url = None\n return {'title': content.title(),\n 'type': self.displayed_content_type[content.content_type],\n 'authors': displayed_authors,\n 'url': self.request.build_absolute_uri(content.content.get_absolute_url_online()),\n 'image_url': image_url}\n\n def get_initial(self):\n initial = super(FeaturedResourceCreate, self).get_initial()\n content_type = self.request.GET.get('content_type', None)\n content_id = self.request.GET.get('content_id', None)\n if content_type == 'topic' and content_id:\n initial.update(**self.get_initial_topic_data(content_id))\n elif content_type == 'published_content' and content_id:\n initial.update(**self.get_initial_content_data(content_id))\n return initial\n\n def get_form_kwargs(self):\n kw = super(FeaturedResourceCreate, self).get_form_kwargs()\n kw['hide_major_update_field'] = True\n return kw\n\n def form_valid(self, form):\n featured_resource = FeaturedResource()\n featured_resource.title = form.cleaned_data.get('title')\n featured_resource.type = form.cleaned_data.get('type')\n featured_resource.authors = form.cleaned_data.get('authors')\n featured_resource.image_url = form.cleaned_data.get('image_url')\n featured_resource.url = form.cleaned_data.get('url')\n\n if form.cleaned_data.get('major_update', False):\n featured_resource.pubdate = datetime.now()\n else:\n featured_resource.pubdate = form.cleaned_data.get('pubdate')\n\n featured_resource.save()\n\n messages.success(self.request, _('La une a \u00e9t\u00e9 cr\u00e9\u00e9e.'))\n return redirect(reverse('featured-resource-list'))\n\n\nclass FeaturedResourceUpdate(UpdateView):\n \"\"\"\n Updates a featured resource.\n \"\"\"\n\n form_class = FeaturedResourceForm\n template_name = 'featured/resource/update.html'\n queryset = FeaturedResource.objects.all()\n context_object_name = 'featured_resource'\n\n @method_decorator(login_required)\n @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))\n def dispatch(self, request, *args, **kwargs):\n return super(FeaturedResourceUpdate, self).dispatch(request, *args, **kwargs)\n\n def get_initial(self):\n initial = super(FeaturedResourceUpdate, self).get_initial()\n initial.update({\n 'title': self.object.title,\n 'type': self.object.type,\n 'authors': self.object.authors,\n 'image_url': self.object.image_url,\n 'url': self.object.url,\n 'pubdate': self.object.pubdate,\n })\n\n return initial\n\n def form_valid(self, form):\n\n self.object.title = form.cleaned_data.get('title')\n self.object.type = form.cleaned_data.get('type')\n self.object.authors = form.cleaned_data.get('authors')\n self.object.image_url = form.cleaned_data.get('image_url')\n self.object.url = form.cleaned_data.get('url')\n if form.cleaned_data.get('major_update', False):\n self.object.pubdate = datetime.now()\n else:\n self.object.pubdate = form.cleaned_data.get('pubdate')\n\n messages.success(self.request, _('La une a 
\u00e9t\u00e9 mise \u00e0 jour.'))\n self.success_url = reverse('featured-resource-list')\n return super(FeaturedResourceUpdate, self).form_valid(form)\n\n def get_form(self, form_class=None):\n form = super(FeaturedResourceUpdate, self).get_form(form_class)\n form.helper.form_action = reverse('featured-resource-update', args=[self.object.pk])\n return form\n\n\nclass FeaturedResourceDeleteDetail(DeleteView):\n \"\"\"\n Deletes a featured resource.\n \"\"\"\n\n model = FeaturedResource\n\n @method_decorator(login_required)\n @method_decorator(transaction.atomic)\n @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))\n def dispatch(self, request, *args, **kwargs):\n self.success_url = reverse('featured-resource-list')\n return super(FeaturedResourceDeleteDetail, self).dispatch(request, *args, **kwargs)\n\n def post(self, request, *args, **kwargs):\n r = super(FeaturedResourceDeleteDetail, self).post(request, *args, **kwargs)\n messages.success(request, _('La une a \u00e9t\u00e9 supprim\u00e9e avec succ\u00e8s.'))\n return r\n\n\nclass FeaturedResourceDeleteList(MultipleObjectMixin, RedirectView):\n \"\"\"\n Deletes a list of featured resources.\n \"\"\"\n permanent = False\n\n @method_decorator(login_required)\n @method_decorator(permission_required('featured.change_featuredresource', raise_exception=True))\n def dispatch(self, request, *args, **kwargs):\n return super(FeaturedResourceDeleteList, self).dispatch(request, *args, **kwargs)\n\n def get_queryset(self):\n items_list = self.request.POST.getlist('items')\n return FeaturedResource.objects.filter(pk__in=items_list)\n\n def post(self, request, *args, **kwargs):\n for featured_resource in self.get_queryset():\n featured_resource.delete()\n\n messages.success(request, _('Les unes ont \u00e9t\u00e9 supprim\u00e9es avec succ\u00e8s.'))\n\n return redirect(reverse('featured-resource-list'))\n\n\nclass FeaturedMessageCreateUpdate(FormView):\n \"\"\"\n Creates or updates the featured message.\n \"\"\"\n\n form_class = FeaturedMessageForm\n template_name = 'featured/message/create.html'\n last_message = None\n\n @method_decorator(login_required)\n @method_decorator(permission_required('featured.change_featuredmessage', raise_exception=True))\n def dispatch(self, request, *args, **kwargs):\n self.last_message = FeaturedMessage.objects.get_last_message()\n return super(FeaturedMessageCreateUpdate, self).dispatch(request, *args, **kwargs)\n\n def get_initial(self):\n init = super(FeaturedMessageCreateUpdate, self).get_initial()\n\n if self.last_message is not None:\n init.update({\n 'hook': self.last_message.hook,\n 'message': self.last_message.message,\n 'url': self.last_message.url,\n })\n\n return init\n\n def form_valid(self, form):\n if self.last_message:\n self.last_message.delete()\n\n featured_message = FeaturedMessage()\n featured_message.hook = form.data.get('hook')\n featured_message.message = form.data.get('message')\n featured_message.url = form.data.get('url')\n featured_message.save()\n\n messages.success(self.request, _('Le message a \u00e9t\u00e9 chang\u00e9'))\n return redirect(reverse('featured-resource-list'))\n", "path": "zds/featured/views.py"}]}
| 3,268 | 217 |
gh_patches_debug_10947
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-2483
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PlotSpeedTest command line arguments ignored
### Short description
<!-- This should summarize the issue. -->
Since 108365ba45c1a1302df110dad5f9d960d4d903a9, PlotSpeedTest command arguments are no longer honored.
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```
PlotSpeedTest.py --nsamples=10000
```
### Expected behavior
<!-- What should happen? -->
The number of samples used should be 10000.
### Real behavior
<!-- What happens? -->
The number of samples used remains at the default of 5000.
### Tested environment(s)
* PyQtGraph version: master
* Qt Python binding: PyQt5 5.15.7, PySide6 6.3.2
* Python version: Python 3.8.10
* Operating system: Windows 10
* Installation method: pip install -e . <!-- e.g. pip, conda, system packages, ... -->
</issue>
<code>
[start of pyqtgraph/examples/PlotSpeedTest.py]
1 #!/usr/bin/python
2 """
3 Update a simple plot as rapidly as possible to measure speed.
4 """
5
6 import argparse
7 from collections import deque
8 from time import perf_counter
9
10 import numpy as np
11
12 import pyqtgraph as pg
13 import pyqtgraph.functions as fn
14 import pyqtgraph.parametertree as ptree
15 from pyqtgraph.Qt import QtCore, QtGui, QtWidgets
16
17 # defaults here result in the same configuration as the original PlotSpeedTest
18 parser = argparse.ArgumentParser()
19 parser.add_argument('--noise', dest='noise', action='store_true')
20 parser.add_argument('--no-noise', dest='noise', action='store_false')
21 parser.set_defaults(noise=True)
22 parser.add_argument('--nsamples', default=5000, type=int)
23 parser.add_argument('--frames', default=50, type=int)
24 parser.add_argument('--fsample', default=1000, type=float)
25 parser.add_argument('--frequency', default=0, type=float)
26 parser.add_argument('--amplitude', default=5, type=float)
27 parser.add_argument('--opengl', dest='use_opengl', action='store_true')
28 parser.add_argument('--no-opengl', dest='use_opengl', action='store_false')
29 parser.set_defaults(use_opengl=None)
30 parser.add_argument('--allow-opengl-toggle', action='store_true',
31 help="""Allow on-the-fly change of OpenGL setting. This may cause unwanted side effects.
32 """)
33 args = parser.parse_args()
34
35 if args.use_opengl is not None:
36 pg.setConfigOption('useOpenGL', args.use_opengl)
37 pg.setConfigOption('enableExperimental', args.use_opengl)
38
39 # don't limit frame rate to vsync
40 sfmt = QtGui.QSurfaceFormat()
41 sfmt.setSwapInterval(0)
42 QtGui.QSurfaceFormat.setDefaultFormat(sfmt)
43
44
45 class MonkeyCurveItem(pg.PlotCurveItem):
46 def __init__(self, *args, **kwds):
47 super().__init__(*args, **kwds)
48 self.monkey_mode = ''
49
50 def setMethod(self, value):
51 self.monkey_mode = value
52
53 def paint(self, painter, opt, widget):
54 if self.monkey_mode not in ['drawPolyline']:
55 return super().paint(painter, opt, widget)
56
57 painter.setRenderHint(painter.RenderHint.Antialiasing, self.opts['antialias'])
58 painter.setPen(pg.mkPen(self.opts['pen']))
59
60 if self.monkey_mode == 'drawPolyline':
61 painter.drawPolyline(fn.arrayToQPolygonF(self.xData, self.yData))
62
63 app = pg.mkQApp("Plot Speed Test")
64
65 default_pen = pg.mkPen()
66
67 params = ptree.Parameter.create(name='Parameters', type='group')
68 pt = ptree.ParameterTree(showHeader=False)
69 pt.setParameters(params)
70 pw = pg.PlotWidget()
71 splitter = QtWidgets.QSplitter()
72 splitter.addWidget(pt)
73 splitter.addWidget(pw)
74 splitter.show()
75
76 interactor = ptree.Interactor(
77 parent=params, nest=False, runOptions=ptree.RunOptions.ON_CHANGED
78 )
79
80 pw.setWindowTitle('pyqtgraph example: PlotSpeedTest')
81 pw.setLabel('bottom', 'Index', units='B')
82 curve = MonkeyCurveItem(pen=default_pen, brush='b')
83 pw.addItem(curve)
84
85 rollingAverageSize = 1000
86 elapsed = deque(maxlen=rollingAverageSize)
87
88 def resetTimings(*args):
89 elapsed.clear()
90
91 @interactor.decorate(
92 nest=True,
93 nsamples={'limits': [0, None]},
94 frames={'limits': [1, None]},
95 fsample={'units': 'Hz'},
96 frequency={'units': 'Hz'}
97 )
98 def makeData(noise=True, nsamples=5000, frames=50, fsample=1000.0, frequency=0.0, amplitude=5.0):
99 global data, connect_array, ptr
100 ttt = np.arange(frames * nsamples, dtype=np.float64) / fsample
101 data = amplitude*np.sin(2*np.pi*frequency*ttt).reshape((frames, nsamples))
102 if noise:
103 data += np.random.normal(size=data.shape)
104 connect_array = np.ones(data.shape[-1], dtype=bool)
105 ptr = 0
106 pw.setRange(QtCore.QRectF(0, -10, nsamples, 20))
107
108 params.child('makeData').setOpts(title='Plot Options')
109
110 @interactor.decorate(
111 connect={'type': 'list', 'limits': ['all', 'pairs', 'finite', 'array']}
112 )
113 def update(
114 antialias=pg.getConfigOption('antialias'),
115 connect='all',
116 skipFiniteCheck=False
117 ):
118 global curve, data, ptr, elapsed, fpsLastUpdate
119
120 if connect == 'array':
121 connect = connect_array
122
123 # Measure
124 t_start = perf_counter()
125 curve.setData(data[ptr], antialias=antialias, connect=connect, skipFiniteCheck=skipFiniteCheck)
126 app.processEvents(QtCore.QEventLoop.ProcessEventsFlag.AllEvents)
127 t_end = perf_counter()
128 elapsed.append(t_end - t_start)
129 ptr = (ptr + 1) % data.shape[0]
130
131 # update fps at most once every 0.2 secs
132 if t_end - fpsLastUpdate > 0.2:
133 fpsLastUpdate = t_end
134 average = np.mean(elapsed)
135 fps = 1 / average
136 pw.setTitle('%0.2f fps - %0.1f ms avg' % (fps, average * 1_000))
137
138 @interactor.decorate(
139 useOpenGL={'readonly': not args.allow_opengl_toggle},
140 plotMethod={'limits': ['pyqtgraph', 'drawPolyline'], 'type': 'list'},
141 curvePen={'type': 'pen'}
142 )
143 def updateOptions(
144 curvePen=pg.mkPen(),
145 plotMethod='pyqtgraph',
146 fillLevel=False,
147 enableExperimental=False,
148 useOpenGL=False,
149 ):
150 pg.setConfigOption('enableExperimental', enableExperimental)
151 pg.setConfigOption('useOpenGL', useOpenGL)
152 curve.setPen(curvePen)
153 curve.setFillLevel(0.0 if fillLevel else None)
154 curve.setMethod(plotMethod)
155
156 params.sigTreeStateChanged.connect(resetTimings)
157
158 makeData()
159
160 fpsLastUpdate = perf_counter()
161
162 timer = QtCore.QTimer()
163 timer.timeout.connect(update)
164 timer.start(0)
165
166 if __name__ == '__main__':
167 # Splitter by default gives too small of a width to the parameter tree,
168 # so fix that right before the event loop
169 pt.setMinimumSize(225,0)
170 pg.exec()
171
[end of pyqtgraph/examples/PlotSpeedTest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/examples/PlotSpeedTest.py b/pyqtgraph/examples/PlotSpeedTest.py
--- a/pyqtgraph/examples/PlotSpeedTest.py
+++ b/pyqtgraph/examples/PlotSpeedTest.py
@@ -95,7 +95,14 @@
fsample={'units': 'Hz'},
frequency={'units': 'Hz'}
)
-def makeData(noise=True, nsamples=5000, frames=50, fsample=1000.0, frequency=0.0, amplitude=5.0):
+def makeData(
+ noise=args.noise,
+ nsamples=args.nsamples,
+ frames=args.frames,
+ fsample=args.fsample,
+ frequency=args.frequency,
+ amplitude=args.amplitude,
+):
global data, connect_array, ptr
ttt = np.arange(frames * nsamples, dtype=np.float64) / fsample
data = amplitude*np.sin(2*np.pi*frequency*ttt).reshape((frames, nsamples))
|
{"golden_diff": "diff --git a/pyqtgraph/examples/PlotSpeedTest.py b/pyqtgraph/examples/PlotSpeedTest.py\n--- a/pyqtgraph/examples/PlotSpeedTest.py\n+++ b/pyqtgraph/examples/PlotSpeedTest.py\n@@ -95,7 +95,14 @@\n fsample={'units': 'Hz'},\n frequency={'units': 'Hz'}\n )\n-def makeData(noise=True, nsamples=5000, frames=50, fsample=1000.0, frequency=0.0, amplitude=5.0):\n+def makeData(\n+ noise=args.noise,\n+ nsamples=args.nsamples,\n+ frames=args.frames,\n+ fsample=args.fsample,\n+ frequency=args.frequency,\n+ amplitude=args.amplitude,\n+):\n global data, connect_array, ptr\n ttt = np.arange(frames * nsamples, dtype=np.float64) / fsample\n data = amplitude*np.sin(2*np.pi*frequency*ttt).reshape((frames, nsamples))\n", "issue": "PlotSpeedTest command line arguments ignored\n### Short description\r\n<!-- This should summarize the issue. -->\r\n\r\nSince 108365ba45c1a1302df110dad5f9d960d4d903a9, PlotSpeedTest command arguments are no longer honored.\r\n\r\n### Code to reproduce\r\n<!-- Please provide a minimal working example that reproduces the issue in the code block below.\r\n Ideally, this should be a full example someone else could run without additional setup. -->\r\n\r\n```\r\nPlotSpeedTest.py --nsamples=10000\r\n```\r\n\r\n### Expected behavior\r\n<!-- What should happen? -->\r\n\r\nThe number of samples used should be 10000.\r\n\r\n### Real behavior\r\n<!-- What happens? -->\r\n\r\nThe number of samples used remains at the default of 5000.\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: master\r\n * Qt Python binding: PyQt5 5.15.7, PySide6 6.3.2\r\n * Python version: Python 3.8.10 \r\n * Operating system: Windows 10\r\n * Installation method: pip install -e . <!-- e.g. pip, conda, system packages, ... -->\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n\"\"\"\nUpdate a simple plot as rapidly as possible to measure speed.\n\"\"\"\n\nimport argparse\nfrom collections import deque\nfrom time import perf_counter\n\nimport numpy as np\n\nimport pyqtgraph as pg\nimport pyqtgraph.functions as fn\nimport pyqtgraph.parametertree as ptree\nfrom pyqtgraph.Qt import QtCore, QtGui, QtWidgets\n\n# defaults here result in the same configuration as the original PlotSpeedTest\nparser = argparse.ArgumentParser()\nparser.add_argument('--noise', dest='noise', action='store_true')\nparser.add_argument('--no-noise', dest='noise', action='store_false')\nparser.set_defaults(noise=True)\nparser.add_argument('--nsamples', default=5000, type=int)\nparser.add_argument('--frames', default=50, type=int)\nparser.add_argument('--fsample', default=1000, type=float)\nparser.add_argument('--frequency', default=0, type=float)\nparser.add_argument('--amplitude', default=5, type=float)\nparser.add_argument('--opengl', dest='use_opengl', action='store_true')\nparser.add_argument('--no-opengl', dest='use_opengl', action='store_false')\nparser.set_defaults(use_opengl=None)\nparser.add_argument('--allow-opengl-toggle', action='store_true',\n help=\"\"\"Allow on-the-fly change of OpenGL setting. 
This may cause unwanted side effects.\n \"\"\")\nargs = parser.parse_args()\n\nif args.use_opengl is not None:\n pg.setConfigOption('useOpenGL', args.use_opengl)\n pg.setConfigOption('enableExperimental', args.use_opengl)\n\n# don't limit frame rate to vsync\nsfmt = QtGui.QSurfaceFormat()\nsfmt.setSwapInterval(0)\nQtGui.QSurfaceFormat.setDefaultFormat(sfmt)\n\n\nclass MonkeyCurveItem(pg.PlotCurveItem):\n def __init__(self, *args, **kwds):\n super().__init__(*args, **kwds)\n self.monkey_mode = ''\n\n def setMethod(self, value):\n self.monkey_mode = value\n\n def paint(self, painter, opt, widget):\n if self.monkey_mode not in ['drawPolyline']:\n return super().paint(painter, opt, widget)\n\n painter.setRenderHint(painter.RenderHint.Antialiasing, self.opts['antialias'])\n painter.setPen(pg.mkPen(self.opts['pen']))\n\n if self.monkey_mode == 'drawPolyline':\n painter.drawPolyline(fn.arrayToQPolygonF(self.xData, self.yData))\n\napp = pg.mkQApp(\"Plot Speed Test\")\n\ndefault_pen = pg.mkPen()\n\nparams = ptree.Parameter.create(name='Parameters', type='group')\npt = ptree.ParameterTree(showHeader=False)\npt.setParameters(params)\npw = pg.PlotWidget()\nsplitter = QtWidgets.QSplitter()\nsplitter.addWidget(pt)\nsplitter.addWidget(pw)\nsplitter.show()\n\ninteractor = ptree.Interactor(\n parent=params, nest=False, runOptions=ptree.RunOptions.ON_CHANGED\n)\n\npw.setWindowTitle('pyqtgraph example: PlotSpeedTest')\npw.setLabel('bottom', 'Index', units='B')\ncurve = MonkeyCurveItem(pen=default_pen, brush='b')\npw.addItem(curve)\n\nrollingAverageSize = 1000\nelapsed = deque(maxlen=rollingAverageSize)\n\ndef resetTimings(*args):\n elapsed.clear()\n\[email protected](\n nest=True,\n nsamples={'limits': [0, None]},\n frames={'limits': [1, None]},\n fsample={'units': 'Hz'},\n frequency={'units': 'Hz'}\n)\ndef makeData(noise=True, nsamples=5000, frames=50, fsample=1000.0, frequency=0.0, amplitude=5.0):\n global data, connect_array, ptr\n ttt = np.arange(frames * nsamples, dtype=np.float64) / fsample\n data = amplitude*np.sin(2*np.pi*frequency*ttt).reshape((frames, nsamples))\n if noise:\n data += np.random.normal(size=data.shape)\n connect_array = np.ones(data.shape[-1], dtype=bool)\n ptr = 0\n pw.setRange(QtCore.QRectF(0, -10, nsamples, 20))\n\nparams.child('makeData').setOpts(title='Plot Options')\n\[email protected](\n connect={'type': 'list', 'limits': ['all', 'pairs', 'finite', 'array']}\n)\ndef update(\n antialias=pg.getConfigOption('antialias'),\n connect='all',\n skipFiniteCheck=False\n):\n global curve, data, ptr, elapsed, fpsLastUpdate\n\n if connect == 'array':\n connect = connect_array\n\n # Measure\n t_start = perf_counter()\n curve.setData(data[ptr], antialias=antialias, connect=connect, skipFiniteCheck=skipFiniteCheck)\n app.processEvents(QtCore.QEventLoop.ProcessEventsFlag.AllEvents)\n t_end = perf_counter()\n elapsed.append(t_end - t_start)\n ptr = (ptr + 1) % data.shape[0]\n\n # update fps at most once every 0.2 secs\n if t_end - fpsLastUpdate > 0.2:\n fpsLastUpdate = t_end\n average = np.mean(elapsed)\n fps = 1 / average\n pw.setTitle('%0.2f fps - %0.1f ms avg' % (fps, average * 1_000))\n\[email protected](\n useOpenGL={'readonly': not args.allow_opengl_toggle},\n plotMethod={'limits': ['pyqtgraph', 'drawPolyline'], 'type': 'list'},\n curvePen={'type': 'pen'}\n)\ndef updateOptions(\n curvePen=pg.mkPen(),\n plotMethod='pyqtgraph',\n fillLevel=False,\n enableExperimental=False,\n useOpenGL=False,\n):\n pg.setConfigOption('enableExperimental', enableExperimental)\n 
pg.setConfigOption('useOpenGL', useOpenGL)\n curve.setPen(curvePen)\n curve.setFillLevel(0.0 if fillLevel else None)\n curve.setMethod(plotMethod)\n\nparams.sigTreeStateChanged.connect(resetTimings)\n\nmakeData()\n\nfpsLastUpdate = perf_counter()\n\ntimer = QtCore.QTimer()\ntimer.timeout.connect(update)\ntimer.start(0)\n\nif __name__ == '__main__':\n # Splitter by default gives too small of a width to the parameter tree,\n # so fix that right before the event loop\n pt.setMinimumSize(225,0)\n pg.exec()\n", "path": "pyqtgraph/examples/PlotSpeedTest.py"}]}
| 2,625 | 223 |
gh_patches_debug_59251
|
rasdani/github-patches
|
git_diff
|
ephios-dev__ephios-639
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PWA does not respect orientation lock on Android
</issue>
<code>
[start of ephios/core/views/pwa.py]
1 import functools
2
3 from django.conf import settings
4 from django.contrib.staticfiles import finders
5 from django.http import HttpResponse, JsonResponse
6 from django.shortcuts import render
7 from django.utils.translation import get_language
8
9
10 def manifest(request):
11 manifest_json = {
12 "name": "ephios",
13 "short_name": "ephios",
14 "description": "ephios manages events for medical services",
15 "start_url": "/",
16 "display": "standalone",
17 "scope": "/",
18 "orientation": "any",
19 "background_color": "#fff",
20 "theme_color": "#000",
21 "status_bar": "default",
22 "dir": "auto",
23 "icons": settings.PWA_APP_ICONS,
24 "lang": get_language(),
25 }
26 response = JsonResponse(manifest_json)
27 response["Service-Worker-Allowed"] = "/"
28 return response
29
30
31 @functools.lru_cache
32 def serviceworker_js():
33 with open(finders.find("ephios/js/serviceworker.js"), "rb") as sw_js:
34 return sw_js.read()
35
36
37 def serviceworker(request):
38 return HttpResponse(
39 serviceworker_js(),
40 content_type="application/javascript",
41 )
42
43
44 def offline(request):
45 return render(request, "offline.html")
46
[end of ephios/core/views/pwa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ephios/core/views/pwa.py b/ephios/core/views/pwa.py
--- a/ephios/core/views/pwa.py
+++ b/ephios/core/views/pwa.py
@@ -15,7 +15,6 @@
"start_url": "/",
"display": "standalone",
"scope": "/",
- "orientation": "any",
"background_color": "#fff",
"theme_color": "#000",
"status_bar": "default",
|
{"golden_diff": "diff --git a/ephios/core/views/pwa.py b/ephios/core/views/pwa.py\n--- a/ephios/core/views/pwa.py\n+++ b/ephios/core/views/pwa.py\n@@ -15,7 +15,6 @@\n \"start_url\": \"/\",\n \"display\": \"standalone\",\n \"scope\": \"/\",\n- \"orientation\": \"any\",\n \"background_color\": \"#fff\",\n \"theme_color\": \"#000\",\n \"status_bar\": \"default\",\n", "issue": "PWA does not respect orientation lock on Android\n\n", "before_files": [{"content": "import functools\n\nfrom django.conf import settings\nfrom django.contrib.staticfiles import finders\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import render\nfrom django.utils.translation import get_language\n\n\ndef manifest(request):\n manifest_json = {\n \"name\": \"ephios\",\n \"short_name\": \"ephios\",\n \"description\": \"ephios manages events for medical services\",\n \"start_url\": \"/\",\n \"display\": \"standalone\",\n \"scope\": \"/\",\n \"orientation\": \"any\",\n \"background_color\": \"#fff\",\n \"theme_color\": \"#000\",\n \"status_bar\": \"default\",\n \"dir\": \"auto\",\n \"icons\": settings.PWA_APP_ICONS,\n \"lang\": get_language(),\n }\n response = JsonResponse(manifest_json)\n response[\"Service-Worker-Allowed\"] = \"/\"\n return response\n\n\[email protected]_cache\ndef serviceworker_js():\n with open(finders.find(\"ephios/js/serviceworker.js\"), \"rb\") as sw_js:\n return sw_js.read()\n\n\ndef serviceworker(request):\n return HttpResponse(\n serviceworker_js(),\n content_type=\"application/javascript\",\n )\n\n\ndef offline(request):\n return render(request, \"offline.html\")\n", "path": "ephios/core/views/pwa.py"}]}
| 908 | 111 |
gh_patches_debug_31130
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-1789
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add parameters to `%reload_kedro` line magic
## Description
Currently you cannot pass things like `env` or `extra_params` via the line magic, but you can by importing the function.
https://github.com/kedro-org/kedro/blob/5ae97cfb70e5b0d4490132847977d482f13c840f/kedro/extras/extensions/ipython.py#L38
Why don't we introduce feature parity here?
</issue>
<code>
[start of kedro/extras/extensions/ipython.py]
1 # pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals
2 """
3 This script creates an IPython extension to load Kedro-related variables in
4 local scope.
5 """
6 import logging
7 import sys
8 from pathlib import Path
9 from typing import Any, Dict
10
11 logger = logging.getLogger(__name__)
12 default_project_path = Path.cwd()
13
14
15 def _remove_cached_modules(package_name):
16 to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]
17 # `del` is used instead of `reload()` because: If the new version of a module does not
18 # define a name that was defined by the old version, the old definition remains.
19 for module in to_remove:
20 del sys.modules[module] # pragma: no cover
21
22
23 def _find_kedro_project(current_dir: Path): # pragma: no cover
24 from kedro.framework.startup import _is_project
25
26 while current_dir != current_dir.parent:
27 if _is_project(current_dir):
28 return current_dir
29 current_dir = current_dir.parent
30
31 return None
32
33
34 def reload_kedro(
35 path: str = None, env: str = None, extra_params: Dict[str, Any] = None
36 ):
37 """Line magic which reloads all Kedro default variables.
38 Setting the path will also make it default for subsequent calls.
39 """
40 from IPython import get_ipython
41 from IPython.core.magic import needs_local_scope, register_line_magic
42
43 from kedro.framework.cli import load_entry_points
44 from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import
45 from kedro.framework.project import configure_project, pipelines
46 from kedro.framework.session import KedroSession
47 from kedro.framework.startup import bootstrap_project
48
49 # If a path is provided, set it as default for subsequent calls
50 global default_project_path
51 if path:
52 default_project_path = Path(path).expanduser().resolve()
53 logger.info("Updated path to Kedro project: %s", default_project_path)
54 else:
55 logger.info("No path argument was provided. Using: %s", default_project_path)
56
57 metadata = bootstrap_project(default_project_path)
58 _remove_cached_modules(metadata.package_name)
59 configure_project(metadata.package_name)
60
61 session = KedroSession.create(
62 metadata.package_name, default_project_path, env=env, extra_params=extra_params
63 )
64 context = session.load_context()
65 catalog = context.catalog
66
67 get_ipython().push(
68 variables={
69 "context": context,
70 "catalog": catalog,
71 "session": session,
72 "pipelines": pipelines,
73 }
74 )
75
76 logger.info("Kedro project %s", str(metadata.project_name))
77 logger.info(
78 "Defined global variable 'context', 'session', 'catalog' and 'pipelines'"
79 )
80
81 for line_magic in load_entry_points("line_magic"):
82 register_line_magic(needs_local_scope(line_magic))
83 logger.info("Registered line magic '%s'", line_magic.__name__) # type: ignore
84
85
86 def load_ipython_extension(ipython):
87 """Main entry point when %load_ext is executed"""
88
89 global default_project_path
90
91 ipython.register_magic_function(reload_kedro, "line", "reload_kedro")
92
93 default_project_path = _find_kedro_project(Path.cwd())
94
95 if default_project_path is None:
96 logger.warning(
97 "Kedro extension was registered but couldn't find a Kedro project. "
98 "Make sure you run '%reload_kedro <project_root>'."
99 )
100 return
101
102 reload_kedro(default_project_path)
103
[end of kedro/extras/extensions/ipython.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py
--- a/kedro/extras/extensions/ipython.py
+++ b/kedro/extras/extensions/ipython.py
@@ -8,6 +8,9 @@
from pathlib import Path
from typing import Any, Dict
+from kedro.framework.cli.project import PARAMS_ARG_HELP
+from kedro.framework.cli.utils import ENV_HELP, _split_params
+
logger = logging.getLogger(__name__)
default_project_path = Path.cwd()
@@ -84,12 +87,46 @@
def load_ipython_extension(ipython):
- """Main entry point when %load_ext is executed"""
+ """
+ Main entry point when %load_ext is executed.
+ IPython will look for this function specifically.
+ See https://ipython.readthedocs.io/en/stable/config/extensions/index.html
- global default_project_path
+ This function is called when users do `%load_ext kedro.extras.extensions.ipython`.
+ When user use `kedro jupyter notebook` or `jupyter ipython`, this extension is
+ loaded automatically.
+ """
+ from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
+
+ @magic_arguments()
+ @argument(
+ "path",
+ type=str,
+ help=(
+ "Path to the project root directory. If not given, use the previously set"
+ "project root."
+ ),
+ nargs="?",
+ default=None,
+ )
+ @argument("-e", "--env", type=str, default=None, help=ENV_HELP)
+ @argument(
+ "--params",
+ type=lambda value: _split_params(None, None, value),
+ default=None,
+ help=PARAMS_ARG_HELP,
+ )
+ def magic_reload_kedro(line: str):
+ """
+ The `%reload_kedro` IPython line magic. See
+ https://kedro.readthedocs.io/en/stable/tools_integration/ipython.html for more.
+ """
+ args = parse_argstring(magic_reload_kedro, line)
+ reload_kedro(args.path, args.env, args.params)
- ipython.register_magic_function(reload_kedro, "line", "reload_kedro")
+ global default_project_path
+ ipython.register_magic_function(magic_reload_kedro, magic_name="reload_kedro")
default_project_path = _find_kedro_project(Path.cwd())
if default_project_path is None:
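A usage sketch for the patched magic, based on the `@argument` declarations in the diff above; the project path, environment name and parameter string are made-up examples, and the `key:value` form for `--params` assumes the usual `_split_params` convention.
```python
# In an IPython session (illustrative values only):
#
#   %load_ext kedro.extras.extensions.ipython
#   %reload_kedro /path/to/my-project -e local --params run_date:2022-01-01
#
# The same thing through the plain function, whose signature is shown in the
# listing above:
from kedro.extras.extensions.ipython import reload_kedro

reload_kedro(
    "/path/to/my-project",
    env="local",
    extra_params={"run_date": "2022-01-01"},
)
```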
|
{"golden_diff": "diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py\n--- a/kedro/extras/extensions/ipython.py\n+++ b/kedro/extras/extensions/ipython.py\n@@ -8,6 +8,9 @@\n from pathlib import Path\n from typing import Any, Dict\n \n+from kedro.framework.cli.project import PARAMS_ARG_HELP\n+from kedro.framework.cli.utils import ENV_HELP, _split_params\n+\n logger = logging.getLogger(__name__)\n default_project_path = Path.cwd()\n \n@@ -84,12 +87,46 @@\n \n \n def load_ipython_extension(ipython):\n- \"\"\"Main entry point when %load_ext is executed\"\"\"\n+ \"\"\"\n+ Main entry point when %load_ext is executed.\n+ IPython will look for this function specifically.\n+ See https://ipython.readthedocs.io/en/stable/config/extensions/index.html\n \n- global default_project_path\n+ This function is called when users do `%load_ext kedro.extras.extensions.ipython`.\n+ When user use `kedro jupyter notebook` or `jupyter ipython`, this extension is\n+ loaded automatically.\n+ \"\"\"\n+ from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring\n+\n+ @magic_arguments()\n+ @argument(\n+ \"path\",\n+ type=str,\n+ help=(\n+ \"Path to the project root directory. If not given, use the previously set\"\n+ \"project root.\"\n+ ),\n+ nargs=\"?\",\n+ default=None,\n+ )\n+ @argument(\"-e\", \"--env\", type=str, default=None, help=ENV_HELP)\n+ @argument(\n+ \"--params\",\n+ type=lambda value: _split_params(None, None, value),\n+ default=None,\n+ help=PARAMS_ARG_HELP,\n+ )\n+ def magic_reload_kedro(line: str):\n+ \"\"\"\n+ The `%reload_kedro` IPython line magic. See\n+ https://kedro.readthedocs.io/en/stable/tools_integration/ipython.html for more.\n+ \"\"\"\n+ args = parse_argstring(magic_reload_kedro, line)\n+ reload_kedro(args.path, args.env, args.params)\n \n- ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n+ global default_project_path\n \n+ ipython.register_magic_function(magic_reload_kedro, magic_name=\"reload_kedro\")\n default_project_path = _find_kedro_project(Path.cwd())\n \n if default_project_path is None:\n", "issue": "Add parameters to `%reload_kedro` line magic \n## Description\r\n\r\nCurrently you cannot pass things like `env` or `extra_params` via the line magic, but you can by importing the function.\r\n\r\nhttps://github.com/kedro-org/kedro/blob/5ae97cfb70e5b0d4490132847977d482f13c840f/kedro/extras/extensions/ipython.py#L38\r\n\r\nWhy don't we introduce feature parity here? 
\n", "before_files": [{"content": "# pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals\n\"\"\"\nThis script creates an IPython extension to load Kedro-related variables in\nlocal scope.\n\"\"\"\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict\n\nlogger = logging.getLogger(__name__)\ndefault_project_path = Path.cwd()\n\n\ndef _remove_cached_modules(package_name):\n to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]\n # `del` is used instead of `reload()` because: If the new version of a module does not\n # define a name that was defined by the old version, the old definition remains.\n for module in to_remove:\n del sys.modules[module] # pragma: no cover\n\n\ndef _find_kedro_project(current_dir: Path): # pragma: no cover\n from kedro.framework.startup import _is_project\n\n while current_dir != current_dir.parent:\n if _is_project(current_dir):\n return current_dir\n current_dir = current_dir.parent\n\n return None\n\n\ndef reload_kedro(\n path: str = None, env: str = None, extra_params: Dict[str, Any] = None\n):\n \"\"\"Line magic which reloads all Kedro default variables.\n Setting the path will also make it default for subsequent calls.\n \"\"\"\n from IPython import get_ipython\n from IPython.core.magic import needs_local_scope, register_line_magic\n\n from kedro.framework.cli import load_entry_points\n from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import\n from kedro.framework.project import configure_project, pipelines\n from kedro.framework.session import KedroSession\n from kedro.framework.startup import bootstrap_project\n\n # If a path is provided, set it as default for subsequent calls\n global default_project_path\n if path:\n default_project_path = Path(path).expanduser().resolve()\n logger.info(\"Updated path to Kedro project: %s\", default_project_path)\n else:\n logger.info(\"No path argument was provided. Using: %s\", default_project_path)\n\n metadata = bootstrap_project(default_project_path)\n _remove_cached_modules(metadata.package_name)\n configure_project(metadata.package_name)\n\n session = KedroSession.create(\n metadata.package_name, default_project_path, env=env, extra_params=extra_params\n )\n context = session.load_context()\n catalog = context.catalog\n\n get_ipython().push(\n variables={\n \"context\": context,\n \"catalog\": catalog,\n \"session\": session,\n \"pipelines\": pipelines,\n }\n )\n\n logger.info(\"Kedro project %s\", str(metadata.project_name))\n logger.info(\n \"Defined global variable 'context', 'session', 'catalog' and 'pipelines'\"\n )\n\n for line_magic in load_entry_points(\"line_magic\"):\n register_line_magic(needs_local_scope(line_magic))\n logger.info(\"Registered line magic '%s'\", line_magic.__name__) # type: ignore\n\n\ndef load_ipython_extension(ipython):\n \"\"\"Main entry point when %load_ext is executed\"\"\"\n\n global default_project_path\n\n ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n\n default_project_path = _find_kedro_project(Path.cwd())\n\n if default_project_path is None:\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. \"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n return\n\n reload_kedro(default_project_path)\n", "path": "kedro/extras/extensions/ipython.py"}]}
| 1,658 | 569 |
gh_patches_debug_19036
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-2998
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
After the Lutris update, no games start.
On the previous version, 0.5.6, all games launched fine.
After updating Lutris to version 0.5.7-1, **not a single game starts if "Disable desktop effects" is enabled in the preferences.**
Here is what the console reports: **FileNotFoundError: [Errno 2] No such file or directory: 'qdbus'**
Arch Linux.
python-dbus is installed

</issue>
<code>
[start of lutris/util/display.py]
1 """Module to deal with various aspects of displays"""
2 # isort:skip_file
3 import enum
4 import os
5 import subprocess
6
7 try:
8 from dbus.exceptions import DBusException
9 DBUS_AVAILABLE = True
10 except ImportError:
11 DBUS_AVAILABLE = False
12
13 from gi.repository import Gdk, GLib, GnomeDesktop
14
15 from lutris.util import system
16 from lutris.util.graphics.displayconfig import MutterDisplayManager
17 from lutris.util.graphics.xrandr import LegacyDisplayManager, change_resolution, get_outputs
18 from lutris.util.log import logger
19
20
21 class NoScreenDetected(Exception):
22
23 """Raise this when unable to detect screens"""
24
25
26 def restore_gamma():
27 """Restores gamma to a normal level."""
28 xgamma_path = system.find_executable("xgamma")
29 try:
30 subprocess.Popen([xgamma_path, "-gamma", "1.0"])
31 except (FileNotFoundError, TypeError):
32 logger.warning("xgamma is not available on your system")
33 except PermissionError:
34 logger.warning("you do not have permission to call xgamma")
35
36
37 def _get_graphics_adapters():
38 """Return the list of graphics cards available on a system
39
40 Returns:
41 list: list of tuples containing PCI ID and description of the display controller
42 """
43 lspci_path = system.find_executable("lspci")
44 dev_subclasses = ["VGA", "XGA", "3D controller", "Display controller"]
45 if not lspci_path:
46 logger.warning("lspci is not available. List of graphics cards not available")
47 return []
48 return [
49 (pci_id, device_desc.split(": ")[1]) for pci_id, device_desc in [
50 line.split(maxsplit=1) for line in system.execute(lspci_path, timeout=3).split("\n")
51 if any(subclass in line for subclass in dev_subclasses)
52 ]
53 ]
54
55
56 class DisplayManager:
57
58 """Get display and resolution using GnomeDesktop"""
59
60 def __init__(self):
61 screen = Gdk.Screen.get_default()
62 if not screen:
63 raise NoScreenDetected
64 self.rr_screen = GnomeDesktop.RRScreen.new(screen)
65 self.rr_config = GnomeDesktop.RRConfig.new_current(self.rr_screen)
66 self.rr_config.load_current()
67
68 def get_display_names(self):
69 """Return names of connected displays"""
70 return [output_info.get_display_name() for output_info in self.rr_config.get_outputs()]
71
72 def get_resolutions(self):
73 """Return available resolutions"""
74 resolutions = ["%sx%s" % (mode.get_width(), mode.get_height()) for mode in self.rr_screen.list_modes()]
75 return sorted(set(resolutions), key=lambda x: int(x.split("x")[0]), reverse=True)
76
77 def _get_primary_output(self):
78 """Return the RROutput used as a primary display"""
79 for output in self.rr_screen.list_outputs():
80 if output.get_is_primary():
81 return output
82 return
83
84 def get_current_resolution(self):
85 """Return the current resolution for the primary display"""
86 output = self._get_primary_output()
87 if not output:
88 logger.error("Failed to get a default output")
89 return "", ""
90 current_mode = output.get_current_mode()
91 return str(current_mode.get_width()), str(current_mode.get_height())
92
93 @staticmethod
94 def set_resolution(resolution):
95 """Set the resolution of one or more displays.
96 The resolution can either be a string, which will be applied to the
97 primary display or a list of configurations as returned by `get_config`.
98 This method uses XrandR and will not work on Wayland.
99 """
100 return change_resolution(resolution)
101
102 @staticmethod
103 def get_config():
104 """Return the current display resolution
105 This method uses XrandR and will not work on wayland
106 The output can be fed in `set_resolution`
107 """
108 return get_outputs()
109
110
111 def get_display_manager():
112 """Return the appropriate display manager instance.
113 Defaults to Mutter if available. This is the only one to support Wayland.
114 """
115 if DBUS_AVAILABLE:
116 try:
117 return MutterDisplayManager()
118 except DBusException as ex:
119 logger.debug("Mutter DBus service not reachable: %s", ex)
120 except Exception as ex: # pylint: disable=broad-except
121 logger.exception("Failed to instanciate MutterDisplayConfig. Please report with exception: %s", ex)
122 else:
123 logger.error("DBus is not available, lutris was not properly installed.")
124 try:
125 return DisplayManager()
126 except (GLib.Error, NoScreenDetected):
127 return LegacyDisplayManager()
128
129
130 DISPLAY_MANAGER = get_display_manager()
131 USE_DRI_PRIME = len(_get_graphics_adapters()) > 1
132
133
134 class DesktopEnvironment(enum.Enum):
135
136 """Enum of desktop environments."""
137
138 PLASMA = 0
139 MATE = 1
140 XFCE = 2
141 DEEPIN = 3
142 UNKNOWN = 999
143
144
145 def get_desktop_environment():
146 """Converts the value of the DESKTOP_SESSION environment variable
147 to one of the constants in the DesktopEnvironment class.
148 Returns None if DESKTOP_SESSION is empty or unset.
149 """
150 desktop_session = os.environ.get("DESKTOP_SESSION", "").lower()
151 if not desktop_session:
152 return None
153 if desktop_session.endswith("plasma"):
154 return DesktopEnvironment.PLASMA
155 if desktop_session.endswith("mate"):
156 return DesktopEnvironment.MATE
157 if desktop_session.endswith("xfce"):
158 return DesktopEnvironment.XFCE
159 if desktop_session.endswith("deepin"):
160 return DesktopEnvironment.DEEPIN
161 return DesktopEnvironment.UNKNOWN
162
163
164 def _get_command_output(*command):
165 return subprocess.Popen(command, stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, close_fds=True).communicate()[0]
166
167
168 def is_compositing_enabled():
169 """Checks whether compositing is currently disabled or enabled.
170 Returns True for enabled, False for disabled, and None if unknown.
171 """
172 desktop_environment = get_desktop_environment()
173 if desktop_environment is DesktopEnvironment.PLASMA:
174 return _get_command_output(
175 "qdbus", "org.kde.KWin", "/Compositor", "org.kde.kwin.Compositing.active"
176 ) == b"true\n"
177 if desktop_environment is DesktopEnvironment.MATE:
178 return _get_command_output("gsettings", "get org.mate.Marco.general", "compositing-manager") == b"true\n"
179 if desktop_environment is DesktopEnvironment.XFCE:
180 return _get_command_output(
181 "xfconf-query", "--channel=xfwm4", "--property=/general/use_compositing"
182 ) == b"true\n"
183 if desktop_environment is DesktopEnvironment.DEEPIN:
184 return _get_command_output(
185 "dbus-send", "--session", "--dest=com.deepin.WMSwitcher", "--type=method_call",
186 "--print-reply=literal", "/com/deepin/WMSwitcher", "com.deepin.WMSwitcher.CurrentWM"
187 ) == b"deepin wm\n"
188 return None
189
190
191 # One element is appended to this for every invocation of disable_compositing:
192 # True if compositing has been disabled, False if not. enable_compositing
193 # removes the last element, and only re-enables compositing if that element
194 # was True.
195 _COMPOSITING_DISABLED_STACK = []
196
197
198 def _get_compositor_commands():
199 """Returns the commands to enable/disable compositing on the current
200 desktop environment as a 2-tuple.
201 """
202 start_compositor = None
203 stop_compositor = None
204 desktop_environment = get_desktop_environment()
205 if desktop_environment is DesktopEnvironment.PLASMA:
206 stop_compositor = ("qdbus", "org.kde.KWin", "/Compositor", "org.kde.kwin.Compositing.suspend")
207 start_compositor = ("qdbus", "org.kde.KWin", "/Compositor", "org.kde.kwin.Compositing.resume")
208 elif desktop_environment is DesktopEnvironment.MATE:
209 stop_compositor = ("gsettings", "set org.mate.Marco.general", "compositing-manager", "false")
210 start_compositor = ("gsettings", "set org.mate.Marco.general", "compositing-manager", "true")
211 elif desktop_environment is DesktopEnvironment.XFCE:
212 stop_compositor = ("xfconf-query", "--channel=xfwm4", "--property=/general/use_compositing", "--set=false")
213 start_compositor = ("xfconf-query", "--channel=xfwm4", "--property=/general/use_compositing", "--set=true")
214 elif desktop_environment is DesktopEnvironment.DEEPIN:
215 start_compositor = (
216 "dbus-send", "--session", "--dest=com.deepin.WMSwitcher", "--type=method_call",
217 "/com/deepin/WMSwitcher", "com.deepin.WMSwitcher.RequestSwitchWM",
218 )
219 stop_compositor = start_compositor
220 return start_compositor, stop_compositor
221
222
223 def _run_command(*command):
224 return subprocess.Popen(command, stdin=subprocess.DEVNULL, close_fds=True)
225
226
227 def disable_compositing():
228 """Disable compositing if not already disabled."""
229 compositing_enabled = is_compositing_enabled()
230 if compositing_enabled is None:
231 compositing_enabled = True
232 if any(_COMPOSITING_DISABLED_STACK):
233 compositing_enabled = False
234 _COMPOSITING_DISABLED_STACK.append(compositing_enabled)
235 if not compositing_enabled:
236 return
237 _, stop_compositor = _get_compositor_commands()
238 if stop_compositor:
239 _run_command(*stop_compositor)
240
241
242 def enable_compositing():
243 """Re-enable compositing if the corresponding call to disable_compositing
244 disabled it."""
245 compositing_disabled = _COMPOSITING_DISABLED_STACK.pop()
246 if not compositing_disabled:
247 return
248 start_compositor, _ = _get_compositor_commands()
249 if start_compositor:
250 _run_command(*start_compositor)
251
[end of lutris/util/display.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/util/display.py b/lutris/util/display.py
--- a/lutris/util/display.py
+++ b/lutris/util/display.py
@@ -162,7 +162,16 @@
def _get_command_output(*command):
- return subprocess.Popen(command, stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, close_fds=True).communicate()[0]
+ """Some rogue function that gives no shit about residing in the correct module"""
+ try:
+ return subprocess.Popen(
+ command,
+ stdin=subprocess.DEVNULL,
+ stdout=subprocess.PIPE,
+ close_fds=True
+ ).communicate()[0]
+ except FileNotFoundError:
+ logger.error("Unable to run command, %s not found", command[0])
def is_compositing_enabled():
@@ -221,7 +230,13 @@
def _run_command(*command):
- return subprocess.Popen(command, stdin=subprocess.DEVNULL, close_fds=True)
+ """Random _run_command lost in the middle of the project,
+ are you lost little _run_command?
+ """
+ try:
+ return subprocess.Popen(command, stdin=subprocess.DEVNULL, close_fds=True)
+ except FileNotFoundError:
+ logger.error("Oh no")
def disable_compositing():
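A note on what the guards above change in practice: when `qdbus` (or another compositor tool) is missing, `_get_command_output` now returns `None` instead of raising, `is_compositing_enabled()` therefore evaluates to a falsy result rather than crashing, and `disable_compositing()` simply skips the compositor toggle, so the game launch continues. A standalone sketch of the same guard pattern (the command is just the KDE example from the listing; `safe_output` is not a Lutris function):
```python
import subprocess

def safe_output(*command):
    # Mirror of the patched helper: a missing executable yields None
    # instead of a FileNotFoundError that would abort the game launch.
    try:
        return subprocess.Popen(
            command,
            stdin=subprocess.DEVNULL,
            stdout=subprocess.PIPE,
            close_fds=True,
        ).communicate()[0]
    except FileNotFoundError:
        return None

print(safe_output("qdbus", "org.kde.KWin", "/Compositor",
                  "org.kde.kwin.Compositing.active"))
```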
|
{"golden_diff": "diff --git a/lutris/util/display.py b/lutris/util/display.py\n--- a/lutris/util/display.py\n+++ b/lutris/util/display.py\n@@ -162,7 +162,16 @@\n \n \n def _get_command_output(*command):\n- return subprocess.Popen(command, stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, close_fds=True).communicate()[0]\n+ \"\"\"Some rogue function that gives no shit about residing in the correct module\"\"\"\n+ try:\n+ return subprocess.Popen(\n+ command,\n+ stdin=subprocess.DEVNULL,\n+ stdout=subprocess.PIPE,\n+ close_fds=True\n+ ).communicate()[0]\n+ except FileNotFoundError:\n+ logger.error(\"Unable to run command, %s not found\", command[0])\n \n \n def is_compositing_enabled():\n@@ -221,7 +230,13 @@\n \n \n def _run_command(*command):\n- return subprocess.Popen(command, stdin=subprocess.DEVNULL, close_fds=True)\n+ \"\"\"Random _run_command lost in the middle of the project,\n+ are you lost little _run_command?\n+ \"\"\"\n+ try:\n+ return subprocess.Popen(command, stdin=subprocess.DEVNULL, close_fds=True)\n+ except FileNotFoundError:\n+ logger.error(\"Oh no\")\n \n \n def disable_compositing():\n", "issue": "After the Lutris update, no games start.\nOn the previous version 0.5.6, all games were launched.\r\nAfter updating Lutris to version 0.5.7-1, **not a single game starts if \"Disable desktop effects\" is enabled in preference**\r\nHere is what the console writes. **FileNotFoundError: [Errno 2] No such file or directory: 'qdbus'**\r\nArchlinux.\r\npython-dbus is installed\r\n\r\n\n", "before_files": [{"content": "\"\"\"Module to deal with various aspects of displays\"\"\"\n# isort:skip_file\nimport enum\nimport os\nimport subprocess\n\ntry:\n from dbus.exceptions import DBusException\n DBUS_AVAILABLE = True\nexcept ImportError:\n DBUS_AVAILABLE = False\n\nfrom gi.repository import Gdk, GLib, GnomeDesktop\n\nfrom lutris.util import system\nfrom lutris.util.graphics.displayconfig import MutterDisplayManager\nfrom lutris.util.graphics.xrandr import LegacyDisplayManager, change_resolution, get_outputs\nfrom lutris.util.log import logger\n\n\nclass NoScreenDetected(Exception):\n\n \"\"\"Raise this when unable to detect screens\"\"\"\n\n\ndef restore_gamma():\n \"\"\"Restores gamma to a normal level.\"\"\"\n xgamma_path = system.find_executable(\"xgamma\")\n try:\n subprocess.Popen([xgamma_path, \"-gamma\", \"1.0\"])\n except (FileNotFoundError, TypeError):\n logger.warning(\"xgamma is not available on your system\")\n except PermissionError:\n logger.warning(\"you do not have permission to call xgamma\")\n\n\ndef _get_graphics_adapters():\n \"\"\"Return the list of graphics cards available on a system\n\n Returns:\n list: list of tuples containing PCI ID and description of the display controller\n \"\"\"\n lspci_path = system.find_executable(\"lspci\")\n dev_subclasses = [\"VGA\", \"XGA\", \"3D controller\", \"Display controller\"]\n if not lspci_path:\n logger.warning(\"lspci is not available. 
List of graphics cards not available\")\n return []\n return [\n (pci_id, device_desc.split(\": \")[1]) for pci_id, device_desc in [\n line.split(maxsplit=1) for line in system.execute(lspci_path, timeout=3).split(\"\\n\")\n if any(subclass in line for subclass in dev_subclasses)\n ]\n ]\n\n\nclass DisplayManager:\n\n \"\"\"Get display and resolution using GnomeDesktop\"\"\"\n\n def __init__(self):\n screen = Gdk.Screen.get_default()\n if not screen:\n raise NoScreenDetected\n self.rr_screen = GnomeDesktop.RRScreen.new(screen)\n self.rr_config = GnomeDesktop.RRConfig.new_current(self.rr_screen)\n self.rr_config.load_current()\n\n def get_display_names(self):\n \"\"\"Return names of connected displays\"\"\"\n return [output_info.get_display_name() for output_info in self.rr_config.get_outputs()]\n\n def get_resolutions(self):\n \"\"\"Return available resolutions\"\"\"\n resolutions = [\"%sx%s\" % (mode.get_width(), mode.get_height()) for mode in self.rr_screen.list_modes()]\n return sorted(set(resolutions), key=lambda x: int(x.split(\"x\")[0]), reverse=True)\n\n def _get_primary_output(self):\n \"\"\"Return the RROutput used as a primary display\"\"\"\n for output in self.rr_screen.list_outputs():\n if output.get_is_primary():\n return output\n return\n\n def get_current_resolution(self):\n \"\"\"Return the current resolution for the primary display\"\"\"\n output = self._get_primary_output()\n if not output:\n logger.error(\"Failed to get a default output\")\n return \"\", \"\"\n current_mode = output.get_current_mode()\n return str(current_mode.get_width()), str(current_mode.get_height())\n\n @staticmethod\n def set_resolution(resolution):\n \"\"\"Set the resolution of one or more displays.\n The resolution can either be a string, which will be applied to the\n primary display or a list of configurations as returned by `get_config`.\n This method uses XrandR and will not work on Wayland.\n \"\"\"\n return change_resolution(resolution)\n\n @staticmethod\n def get_config():\n \"\"\"Return the current display resolution\n This method uses XrandR and will not work on wayland\n The output can be fed in `set_resolution`\n \"\"\"\n return get_outputs()\n\n\ndef get_display_manager():\n \"\"\"Return the appropriate display manager instance.\n Defaults to Mutter if available. This is the only one to support Wayland.\n \"\"\"\n if DBUS_AVAILABLE:\n try:\n return MutterDisplayManager()\n except DBusException as ex:\n logger.debug(\"Mutter DBus service not reachable: %s\", ex)\n except Exception as ex: # pylint: disable=broad-except\n logger.exception(\"Failed to instanciate MutterDisplayConfig. 
Please report with exception: %s\", ex)\n else:\n logger.error(\"DBus is not available, lutris was not properly installed.\")\n try:\n return DisplayManager()\n except (GLib.Error, NoScreenDetected):\n return LegacyDisplayManager()\n\n\nDISPLAY_MANAGER = get_display_manager()\nUSE_DRI_PRIME = len(_get_graphics_adapters()) > 1\n\n\nclass DesktopEnvironment(enum.Enum):\n\n \"\"\"Enum of desktop environments.\"\"\"\n\n PLASMA = 0\n MATE = 1\n XFCE = 2\n DEEPIN = 3\n UNKNOWN = 999\n\n\ndef get_desktop_environment():\n \"\"\"Converts the value of the DESKTOP_SESSION environment variable\n to one of the constants in the DesktopEnvironment class.\n Returns None if DESKTOP_SESSION is empty or unset.\n \"\"\"\n desktop_session = os.environ.get(\"DESKTOP_SESSION\", \"\").lower()\n if not desktop_session:\n return None\n if desktop_session.endswith(\"plasma\"):\n return DesktopEnvironment.PLASMA\n if desktop_session.endswith(\"mate\"):\n return DesktopEnvironment.MATE\n if desktop_session.endswith(\"xfce\"):\n return DesktopEnvironment.XFCE\n if desktop_session.endswith(\"deepin\"):\n return DesktopEnvironment.DEEPIN\n return DesktopEnvironment.UNKNOWN\n\n\ndef _get_command_output(*command):\n return subprocess.Popen(command, stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, close_fds=True).communicate()[0]\n\n\ndef is_compositing_enabled():\n \"\"\"Checks whether compositing is currently disabled or enabled.\n Returns True for enabled, False for disabled, and None if unknown.\n \"\"\"\n desktop_environment = get_desktop_environment()\n if desktop_environment is DesktopEnvironment.PLASMA:\n return _get_command_output(\n \"qdbus\", \"org.kde.KWin\", \"/Compositor\", \"org.kde.kwin.Compositing.active\"\n ) == b\"true\\n\"\n if desktop_environment is DesktopEnvironment.MATE:\n return _get_command_output(\"gsettings\", \"get org.mate.Marco.general\", \"compositing-manager\") == b\"true\\n\"\n if desktop_environment is DesktopEnvironment.XFCE:\n return _get_command_output(\n \"xfconf-query\", \"--channel=xfwm4\", \"--property=/general/use_compositing\"\n ) == b\"true\\n\"\n if desktop_environment is DesktopEnvironment.DEEPIN:\n return _get_command_output(\n \"dbus-send\", \"--session\", \"--dest=com.deepin.WMSwitcher\", \"--type=method_call\",\n \"--print-reply=literal\", \"/com/deepin/WMSwitcher\", \"com.deepin.WMSwitcher.CurrentWM\"\n ) == b\"deepin wm\\n\"\n return None\n\n\n# One element is appended to this for every invocation of disable_compositing:\n# True if compositing has been disabled, False if not. 
enable_compositing\n# removes the last element, and only re-enables compositing if that element\n# was True.\n_COMPOSITING_DISABLED_STACK = []\n\n\ndef _get_compositor_commands():\n \"\"\"Returns the commands to enable/disable compositing on the current\n desktop environment as a 2-tuple.\n \"\"\"\n start_compositor = None\n stop_compositor = None\n desktop_environment = get_desktop_environment()\n if desktop_environment is DesktopEnvironment.PLASMA:\n stop_compositor = (\"qdbus\", \"org.kde.KWin\", \"/Compositor\", \"org.kde.kwin.Compositing.suspend\")\n start_compositor = (\"qdbus\", \"org.kde.KWin\", \"/Compositor\", \"org.kde.kwin.Compositing.resume\")\n elif desktop_environment is DesktopEnvironment.MATE:\n stop_compositor = (\"gsettings\", \"set org.mate.Marco.general\", \"compositing-manager\", \"false\")\n start_compositor = (\"gsettings\", \"set org.mate.Marco.general\", \"compositing-manager\", \"true\")\n elif desktop_environment is DesktopEnvironment.XFCE:\n stop_compositor = (\"xfconf-query\", \"--channel=xfwm4\", \"--property=/general/use_compositing\", \"--set=false\")\n start_compositor = (\"xfconf-query\", \"--channel=xfwm4\", \"--property=/general/use_compositing\", \"--set=true\")\n elif desktop_environment is DesktopEnvironment.DEEPIN:\n start_compositor = (\n \"dbus-send\", \"--session\", \"--dest=com.deepin.WMSwitcher\", \"--type=method_call\",\n \"/com/deepin/WMSwitcher\", \"com.deepin.WMSwitcher.RequestSwitchWM\",\n )\n stop_compositor = start_compositor\n return start_compositor, stop_compositor\n\n\ndef _run_command(*command):\n return subprocess.Popen(command, stdin=subprocess.DEVNULL, close_fds=True)\n\n\ndef disable_compositing():\n \"\"\"Disable compositing if not already disabled.\"\"\"\n compositing_enabled = is_compositing_enabled()\n if compositing_enabled is None:\n compositing_enabled = True\n if any(_COMPOSITING_DISABLED_STACK):\n compositing_enabled = False\n _COMPOSITING_DISABLED_STACK.append(compositing_enabled)\n if not compositing_enabled:\n return\n _, stop_compositor = _get_compositor_commands()\n if stop_compositor:\n _run_command(*stop_compositor)\n\n\ndef enable_compositing():\n \"\"\"Re-enable compositing if the corresponding call to disable_compositing\n disabled it.\"\"\"\n compositing_disabled = _COMPOSITING_DISABLED_STACK.pop()\n if not compositing_disabled:\n return\n start_compositor, _ = _get_compositor_commands()\n if start_compositor:\n _run_command(*start_compositor)\n", "path": "lutris/util/display.py"}]}
| 3,518 | 286 |
gh_patches_debug_19054
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-6948
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
python primitive scalar float fails as jit.rawkernel argument
### Description
Passing a Python primitive float as a jit.rawkernel argument does not work: numpy.float32 works, but a plain Python float does not.
### To Reproduce
```py
@jit.rawkernel()
def scalar_multiply(a, m, size):
tid = jit.blockIdx.x * jit.blockDim.x + jit.threadIdx.x
ntid = jit.gridDim.x * jit.blockDim.x
for i in range(tid, size, ntid):
a[i] = a[i] * m
size = 5
a = cp.random.random(size, dtype=np.float32)
print(a)
scalar_multiply((128,),(1024,),(a, np.float32(5.0), size))
print(a)
scalar_multiply((128,),(1024,),(a, 5.0, size))
print(a)
```
output:
[0.17959814 0.42873758 0.77541053 0.8213136 0.8284943 ]
[0.8979907 2.143688 3.8770528 4.1065683 4.1424713]
[0. 0. 0. 0. 0.]
### Installation
Wheel
### Environment
```
OS : Linux-5.15.0-41-generic-x86_64-with-glibc2.29
Python Version : 3.8.10
CuPy Version : 11.0.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 1.22.0
SciPy Version : 1.4.1
Cython Build Version : 0.29.24
Cython Runtime Version : 0.29.28
CUDA Root : /usr/local/cuda
nvcc PATH : /usr/local/cuda/bin/nvcc
CUDA Build Version : 11070
CUDA Driver Version : 11070
CUDA Runtime Version : 11070
cuBLAS Version : (available)
cuFFT Version : 10702
cuRAND Version : 10210
cuSOLVER Version : (11, 3, 5)
cuSPARSE Version : (available)
NVRTC Version : (11, 7)
Thrust Version : 101500
CUB Build Version : 101500
Jitify Build Version : 4a37de0
cuDNN Build Version : 8400
cuDNN Version : 8401
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce GTX 950
Device 0 Compute Capability : 52
Device 0 PCI Bus ID : 0000:01:00.0
```
### Additional Information
Thanks!
</issue>
<code>
[start of cupyx/jit/_interface.py]
1 import functools
2 import warnings
3
4 import numpy
5
6 from cupy_backends.cuda.api import runtime
7 import cupy
8 from cupy._core import core
9 from cupyx.jit import _compile
10 from cupyx.jit import _cuda_typerules
11 from cupyx.jit import _cuda_types
12 from cupyx.jit import _internal_types
13
14
15 class _CudaFunction:
16 """JIT cupy function object
17 """
18
19 def __init__(self, func, mode, device=False, inline=False):
20 self.attributes = []
21
22 if device:
23 self.attributes.append('__device__')
24 else:
25 self.attributes.append('__global__')
26
27 if inline:
28 self.attributes.append('inline')
29
30 self.name = getattr(func, 'name', func.__name__)
31 self.func = func
32 self.mode = mode
33
34 def __call__(self, *args, **kwargs):
35 raise NotImplementedError
36
37 def _emit_code_from_types(self, in_types, ret_type=None):
38 return _compile.transpile(
39 self.func, self.attributes, self.mode, in_types, ret_type)
40
41
42 class _JitRawKernel:
43 """JIT CUDA kernel object.
44
45 The decorator :func:``cupyx.jit.rawkernel`` converts the target function
46 to an object of this class. This class is not inteded to be instantiated
47 by users.
48 """
49
50 def __init__(self, func, mode, device):
51 self._func = func
52 self._mode = mode
53 self._device = device
54 self._cache = {}
55 self._cached_codes = {}
56
57 def __call__(
58 self, grid, block, args, shared_mem=0, stream=None):
59 """Calls the CUDA kernel.
60
61 The compilation will be deferred until the first function call.
62 CuPy's JIT compiler infers the types of arguments at the call
63 time, and will cache the compiled kernels for speeding up any
64 subsequent calls.
65
66 Args:
67 grid (tuple of int): Size of grid in blocks.
68 block (tuple of int): Dimensions of each thread block.
69 args (tuple):
70 Arguments of the kernel. The type of all elements must be
71 ``bool``, ``int``, ``float``, ``complex``, NumPy scalar or
72 ``cupy.ndarray``.
73 shared_mem (int):
74 Dynamic shared-memory size per thread block in bytes.
75 stream (cupy.cuda.Stream): CUDA stream.
76
77 .. seealso:: :ref:`jit_kernel_definition`
78 """
79 in_types = []
80 for x in args:
81 if isinstance(x, cupy.ndarray):
82 t = _cuda_types.CArray.from_ndarray(x)
83 elif numpy.isscalar(x):
84 t = _cuda_typerules.get_ctype_from_scalar(self._mode, x)
85 else:
86 raise TypeError(f'{type(x)} is not supported for RawKernel')
87 in_types.append(t)
88 in_types = tuple(in_types)
89 device_id = cupy.cuda.get_device_id()
90
91 kern, enable_cg = self._cache.get((in_types, device_id), (None, None))
92 if kern is None:
93 result = self._cached_codes.get(in_types)
94 if result is None:
95 result = _compile.transpile(
96 self._func,
97 ['extern "C"', '__global__'],
98 self._mode,
99 in_types,
100 _cuda_types.void,
101 )
102 self._cached_codes[in_types] = result
103
104 fname = result.func_name
105 enable_cg = result.enable_cooperative_groups
106 # workaround for hipRTC: as of ROCm 4.1.0 hipRTC still does not
107 # recognize "-D", so we have to compile using hipcc...
108 backend = 'nvcc' if runtime.is_hip else 'nvrtc'
109 module = core.compile_with_cache(
110 source=result.code,
111 options=('-DCUPY_JIT_MODE', '--std=c++14'),
112 backend=backend)
113 kern = module.get_function(fname)
114 self._cache[(in_types, device_id)] = (kern, enable_cg)
115
116 kern(grid, block, args, shared_mem, stream, enable_cg)
117
118 def __getitem__(self, grid_and_block):
119 """Numba-style kernel call.
120
121 .. seealso:: :ref:`jit_kernel_definition`
122 """
123 grid, block = grid_and_block
124 if not isinstance(grid, tuple):
125 grid = (grid, 1, 1)
126 if not isinstance(block, tuple):
127 block = (block, 1, 1)
128 return lambda *args, **kwargs: self(grid, block, args, **kwargs)
129
130 @property
131 def cached_codes(self):
132 """Returns a dict that has input types as keys and codes values.
133
134 This proprety method is for debugging purpose.
135 The return value is not guaranteed to keep backward compatibility.
136 """
137 if len(self._cached_codes) == 0:
138 warnings.warn(
139 'No codes are cached because compilation is deferred until '
140 'the first function call.')
141 return dict([(k, v.code) for k, v in self._cached_codes.items()])
142
143 @property
144 def cached_code(self):
145 """Returns `next(iter(self.cached_codes.values()))`.
146
147 This proprety method is for debugging purpose.
148 The return value is not guaranteed to keep backward compatibility.
149 """
150 codes = self.cached_codes
151 if len(codes) > 1:
152 warnings.warn(
153 'The input types of the kernel could not be inferred. '
154 'Please use `.cached_codes` instead.')
155 return next(iter(codes.values()))
156
157
158 def rawkernel(*, mode='cuda', device=False):
159 """A decorator compiles a Python function into CUDA kernel.
160 """
161 cupy._util.experimental('cupyx.jit.rawkernel')
162
163 def wrapper(func):
164 return functools.update_wrapper(
165 _JitRawKernel(func, mode, device), func)
166 return wrapper
167
168
169 threadIdx = _internal_types.Data('threadIdx', _cuda_types.dim3)
170 blockDim = _internal_types.Data('blockDim', _cuda_types.dim3)
171 blockIdx = _internal_types.Data('blockIdx', _cuda_types.dim3)
172 gridDim = _internal_types.Data('gridDim', _cuda_types.dim3)
173
174 warpsize = _internal_types.Data(
175 '64' if runtime.is_hip else '32', _cuda_types.uint32)
176 warpsize.__doc__ = r"""Returns the number of threads in a warp.
177
178 In CUDA this is always 32, and in ROCm/HIP always 64.
179
180 .. seealso:: :obj:`numba.cuda.warpsize`
181 """
182
[end of cupyx/jit/_interface.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupyx/jit/_interface.py b/cupyx/jit/_interface.py
--- a/cupyx/jit/_interface.py
+++ b/cupyx/jit/_interface.py
@@ -10,6 +10,7 @@
from cupyx.jit import _cuda_typerules
from cupyx.jit import _cuda_types
from cupyx.jit import _internal_types
+from cupyx.jit._cuda_types import Scalar
class _CudaFunction:
@@ -113,7 +114,16 @@
kern = module.get_function(fname)
self._cache[(in_types, device_id)] = (kern, enable_cg)
- kern(grid, block, args, shared_mem, stream, enable_cg)
+ new_args = []
+ for a, t in zip(args, in_types):
+ if isinstance(t, Scalar):
+ if t.dtype.char == 'e':
+ a = numpy.float32(a)
+ else:
+ a = t.dtype.type(a)
+ new_args.append(a)
+
+ kern(grid, block, tuple(new_args), shared_mem, stream, enable_cg)
def __getitem__(self, grid_and_block):
"""Numba-style kernel call.
|
{"golden_diff": "diff --git a/cupyx/jit/_interface.py b/cupyx/jit/_interface.py\n--- a/cupyx/jit/_interface.py\n+++ b/cupyx/jit/_interface.py\n@@ -10,6 +10,7 @@\n from cupyx.jit import _cuda_typerules\n from cupyx.jit import _cuda_types\n from cupyx.jit import _internal_types\n+from cupyx.jit._cuda_types import Scalar\n \n \n class _CudaFunction:\n@@ -113,7 +114,16 @@\n kern = module.get_function(fname)\n self._cache[(in_types, device_id)] = (kern, enable_cg)\n \n- kern(grid, block, args, shared_mem, stream, enable_cg)\n+ new_args = []\n+ for a, t in zip(args, in_types):\n+ if isinstance(t, Scalar):\n+ if t.dtype.char == 'e':\n+ a = numpy.float32(a)\n+ else:\n+ a = t.dtype.type(a)\n+ new_args.append(a)\n+\n+ kern(grid, block, tuple(new_args), shared_mem, stream, enable_cg)\n \n def __getitem__(self, grid_and_block):\n \"\"\"Numba-style kernel call.\n", "issue": "python primitive scalar float fails as jit.rawkernel argument\n### Description\r\n\r\nPassing python primitive float as a jit.rawkernel argument seems not to work. Numpy.float32 works, python float does not.\r\n\r\n### To Reproduce\r\n\r\n```py\r\[email protected]()\r\ndef scalar_multiply(a, m, size):\r\n tid = jit.blockIdx.x * jit.blockDim.x + jit.threadIdx.x\r\n ntid = jit.gridDim.x * jit.blockDim.x\r\n for i in range(tid, size, ntid):\r\n a[i] = a[i] * m\r\n\r\nsize = 5\r\na = cp.random.random(size, dtype=np.float32)\r\n\r\nprint(a)\r\nscalar_multiply((128,),(1024,),(a, np.float32(5.0), size))\r\nprint(a)\r\nscalar_multiply((128,),(1024,),(a, 5.0, size))\r\nprint(a)\r\n```\r\noutput:\r\n\r\n[0.17959814 0.42873758 0.77541053 0.8213136 0.8284943 ]\r\n[0.8979907 2.143688 3.8770528 4.1065683 4.1424713]\r\n[0. 0. 0. 0. 0.]\r\n\r\n### Installation\r\n\r\nWheel\r\n\r\n### Environment\r\n\r\n```\r\nOS : Linux-5.15.0-41-generic-x86_64-with-glibc2.29\r\nPython Version : 3.8.10\r\nCuPy Version : 11.0.0\r\nCuPy Platform : NVIDIA CUDA\r\nNumPy Version : 1.22.0\r\nSciPy Version : 1.4.1\r\nCython Build Version : 0.29.24\r\nCython Runtime Version : 0.29.28\r\nCUDA Root : /usr/local/cuda\r\nnvcc PATH : /usr/local/cuda/bin/nvcc\r\nCUDA Build Version : 11070\r\nCUDA Driver Version : 11070\r\nCUDA Runtime Version : 11070\r\ncuBLAS Version : (available)\r\ncuFFT Version : 10702\r\ncuRAND Version : 10210\r\ncuSOLVER Version : (11, 3, 5)\r\ncuSPARSE Version : (available)\r\nNVRTC Version : (11, 7)\r\nThrust Version : 101500\r\nCUB Build Version : 101500\r\nJitify Build Version : 4a37de0\r\ncuDNN Build Version : 8400\r\ncuDNN Version : 8401\r\nNCCL Build Version : None\r\nNCCL Runtime Version : None\r\ncuTENSOR Version : None\r\ncuSPARSELt Build Version : None\r\nDevice 0 Name : NVIDIA GeForce GTX 950\r\nDevice 0 Compute Capability : 52\r\nDevice 0 PCI Bus ID : 0000:01:00.0\r\n```\r\n\r\n\r\n### Additional Information\r\n\r\nThanks!\n", "before_files": [{"content": "import functools\nimport warnings\n\nimport numpy\n\nfrom cupy_backends.cuda.api import runtime\nimport cupy\nfrom cupy._core import core\nfrom cupyx.jit import _compile\nfrom cupyx.jit import _cuda_typerules\nfrom cupyx.jit import _cuda_types\nfrom cupyx.jit import _internal_types\n\n\nclass _CudaFunction:\n \"\"\"JIT cupy function object\n \"\"\"\n\n def __init__(self, func, mode, device=False, inline=False):\n self.attributes = []\n\n if device:\n self.attributes.append('__device__')\n else:\n self.attributes.append('__global__')\n\n if inline:\n self.attributes.append('inline')\n\n self.name = getattr(func, 'name', func.__name__)\n self.func = func\n self.mode = mode\n\n def 
__call__(self, *args, **kwargs):\n raise NotImplementedError\n\n def _emit_code_from_types(self, in_types, ret_type=None):\n return _compile.transpile(\n self.func, self.attributes, self.mode, in_types, ret_type)\n\n\nclass _JitRawKernel:\n \"\"\"JIT CUDA kernel object.\n\n The decorator :func:``cupyx.jit.rawkernel`` converts the target function\n to an object of this class. This class is not inteded to be instantiated\n by users.\n \"\"\"\n\n def __init__(self, func, mode, device):\n self._func = func\n self._mode = mode\n self._device = device\n self._cache = {}\n self._cached_codes = {}\n\n def __call__(\n self, grid, block, args, shared_mem=0, stream=None):\n \"\"\"Calls the CUDA kernel.\n\n The compilation will be deferred until the first function call.\n CuPy's JIT compiler infers the types of arguments at the call\n time, and will cache the compiled kernels for speeding up any\n subsequent calls.\n\n Args:\n grid (tuple of int): Size of grid in blocks.\n block (tuple of int): Dimensions of each thread block.\n args (tuple):\n Arguments of the kernel. The type of all elements must be\n ``bool``, ``int``, ``float``, ``complex``, NumPy scalar or\n ``cupy.ndarray``.\n shared_mem (int):\n Dynamic shared-memory size per thread block in bytes.\n stream (cupy.cuda.Stream): CUDA stream.\n\n .. seealso:: :ref:`jit_kernel_definition`\n \"\"\"\n in_types = []\n for x in args:\n if isinstance(x, cupy.ndarray):\n t = _cuda_types.CArray.from_ndarray(x)\n elif numpy.isscalar(x):\n t = _cuda_typerules.get_ctype_from_scalar(self._mode, x)\n else:\n raise TypeError(f'{type(x)} is not supported for RawKernel')\n in_types.append(t)\n in_types = tuple(in_types)\n device_id = cupy.cuda.get_device_id()\n\n kern, enable_cg = self._cache.get((in_types, device_id), (None, None))\n if kern is None:\n result = self._cached_codes.get(in_types)\n if result is None:\n result = _compile.transpile(\n self._func,\n ['extern \"C\"', '__global__'],\n self._mode,\n in_types,\n _cuda_types.void,\n )\n self._cached_codes[in_types] = result\n\n fname = result.func_name\n enable_cg = result.enable_cooperative_groups\n # workaround for hipRTC: as of ROCm 4.1.0 hipRTC still does not\n # recognize \"-D\", so we have to compile using hipcc...\n backend = 'nvcc' if runtime.is_hip else 'nvrtc'\n module = core.compile_with_cache(\n source=result.code,\n options=('-DCUPY_JIT_MODE', '--std=c++14'),\n backend=backend)\n kern = module.get_function(fname)\n self._cache[(in_types, device_id)] = (kern, enable_cg)\n\n kern(grid, block, args, shared_mem, stream, enable_cg)\n\n def __getitem__(self, grid_and_block):\n \"\"\"Numba-style kernel call.\n\n .. 
seealso:: :ref:`jit_kernel_definition`\n \"\"\"\n grid, block = grid_and_block\n if not isinstance(grid, tuple):\n grid = (grid, 1, 1)\n if not isinstance(block, tuple):\n block = (block, 1, 1)\n return lambda *args, **kwargs: self(grid, block, args, **kwargs)\n\n @property\n def cached_codes(self):\n \"\"\"Returns a dict that has input types as keys and codes values.\n\n This proprety method is for debugging purpose.\n The return value is not guaranteed to keep backward compatibility.\n \"\"\"\n if len(self._cached_codes) == 0:\n warnings.warn(\n 'No codes are cached because compilation is deferred until '\n 'the first function call.')\n return dict([(k, v.code) for k, v in self._cached_codes.items()])\n\n @property\n def cached_code(self):\n \"\"\"Returns `next(iter(self.cached_codes.values()))`.\n\n This proprety method is for debugging purpose.\n The return value is not guaranteed to keep backward compatibility.\n \"\"\"\n codes = self.cached_codes\n if len(codes) > 1:\n warnings.warn(\n 'The input types of the kernel could not be inferred. '\n 'Please use `.cached_codes` instead.')\n return next(iter(codes.values()))\n\n\ndef rawkernel(*, mode='cuda', device=False):\n \"\"\"A decorator compiles a Python function into CUDA kernel.\n \"\"\"\n cupy._util.experimental('cupyx.jit.rawkernel')\n\n def wrapper(func):\n return functools.update_wrapper(\n _JitRawKernel(func, mode, device), func)\n return wrapper\n\n\nthreadIdx = _internal_types.Data('threadIdx', _cuda_types.dim3)\nblockDim = _internal_types.Data('blockDim', _cuda_types.dim3)\nblockIdx = _internal_types.Data('blockIdx', _cuda_types.dim3)\ngridDim = _internal_types.Data('gridDim', _cuda_types.dim3)\n\nwarpsize = _internal_types.Data(\n '64' if runtime.is_hip else '32', _cuda_types.uint32)\nwarpsize.__doc__ = r\"\"\"Returns the number of threads in a warp.\n\nIn CUDA this is always 32, and in ROCm/HIP always 64.\n\n.. seealso:: :obj:`numba.cuda.warpsize`\n\"\"\"\n", "path": "cupyx/jit/_interface.py"}]}
| 3,151 | 274 |
gh_patches_debug_7545
|
rasdani/github-patches
|
git_diff
|
deeppavlov__DeepPavlov-861
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 3.7.0 support
DeepPavlov has the scikit-learn version pinned to v0.19.1, but its C-extension build fails on Python 3.7.0 (at least on macOS); please see the [scikit-learn issue](https://github.com/scikit-learn/scikit-learn/issues/11320).
This issue has been fixed in the scikit-learn v0.19.2 release, so you have to bump the pin to at least v0.19.2 to enable Python 3.7.0 support.
I can try python 3.7.0 compatibility of other packages and prepare a pull-request, if you want.
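As an editorial aside, the fix suggested above amounts to loosening the dependency pin. A hypothetical `setup.py` excerpt (illustrative only, not taken from the DeepPavlov repository) would look like:

```python
# Hedged sketch: relax the scikit-learn pin so its C extensions build on Python 3.7.
install_requires = [
    # "scikit-learn==0.19.1",        # fails to build on Python 3.7 (scikit-learn#11320)
    "scikit-learn>=0.19.2,<0.20",    # 0.19.2 is the first release that builds on 3.7
]
```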
</issue>
<code>
[start of deeppavlov/__init__.py]
1 # Copyright 2017 Neural Networks and Deep Learning lab, MIPT
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import sys
16 from pathlib import Path
17
18 from .core.common.log import init_logger
19
20 try:
21 from .configs import configs
22 # noinspection PyUnresolvedReferences
23 from .core.commands.infer import build_model
24 # noinspection PyUnresolvedReferences
25 from .core.commands.train import train_evaluate_model_from_config
26 from .download import deep_download
27 from .core.common.chainer import Chainer
28
29 # TODO: make better
30 def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:
31 train_evaluate_model_from_config(config, download=download, recursive=recursive)
32 return build_model(config, load_trained=True)
33
34 def evaluate_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> dict:
35 return train_evaluate_model_from_config(config, to_train=False, download=download, recursive=recursive)
36
37 except ImportError:
38 'Assuming that requirements are not yet installed'
39
40 __version__ = '0.4.0'
41 __author__ = 'Neural Networks and Deep Learning lab, MIPT'
42 __description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'
43 __keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']
44 __license__ = 'Apache License, Version 2.0'
45 __email__ = '[email protected]'
46
47 # check version
48 assert sys.hexversion >= 0x3060000, 'Does not work in python3.5 or lower'
49
50 # resolve conflicts with previous DeepPavlov installations versioned up to 0.0.9
51 dot_dp_path = Path('~/.deeppavlov').expanduser().resolve()
52 if dot_dp_path.is_file():
53 dot_dp_path.unlink()
54
55 # initiate logging
56 init_logger()
57
[end of deeppavlov/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/deeppavlov/__init__.py b/deeppavlov/__init__.py
--- a/deeppavlov/__init__.py
+++ b/deeppavlov/__init__.py
@@ -37,7 +37,7 @@
except ImportError:
'Assuming that requirements are not yet installed'
-__version__ = '0.4.0'
+__version__ = '0.5.0'
__author__ = 'Neural Networks and Deep Learning lab, MIPT'
__description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'
__keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']
|
{"golden_diff": "diff --git a/deeppavlov/__init__.py b/deeppavlov/__init__.py\n--- a/deeppavlov/__init__.py\n+++ b/deeppavlov/__init__.py\n@@ -37,7 +37,7 @@\n except ImportError:\n 'Assuming that requirements are not yet installed'\n \n-__version__ = '0.4.0'\n+__version__ = '0.5.0'\n __author__ = 'Neural Networks and Deep Learning lab, MIPT'\n __description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'\n __keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']\n", "issue": "Python 3.7.0 support\nDeepPavlov has scikit-learn version fixed to v0.19.1, but its c-extensions build fails on python 3.7.0 (at least on macOS), please see [scikit-learn issue](https://github.com/scikit-learn/scikit-learn/issues/11320).\r\n\r\nThis issue has been fixed in scikit-learn v0.19.2 release, so you have to up at least minor version to enable python 3.7.0 support.\r\n\r\nI can try python 3.7.0 compatibility of other packages and prepare a pull-request, if you want.\n", "before_files": [{"content": "# Copyright 2017 Neural Networks and Deep Learning lab, MIPT\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nfrom pathlib import Path\n\nfrom .core.common.log import init_logger\n\ntry:\n from .configs import configs\n # noinspection PyUnresolvedReferences\n from .core.commands.infer import build_model\n # noinspection PyUnresolvedReferences\n from .core.commands.train import train_evaluate_model_from_config\n from .download import deep_download\n from .core.common.chainer import Chainer\n\n # TODO: make better\n def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:\n train_evaluate_model_from_config(config, download=download, recursive=recursive)\n return build_model(config, load_trained=True)\n\n def evaluate_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> dict:\n return train_evaluate_model_from_config(config, to_train=False, download=download, recursive=recursive)\n\nexcept ImportError:\n 'Assuming that requirements are not yet installed'\n\n__version__ = '0.4.0'\n__author__ = 'Neural Networks and Deep Learning lab, MIPT'\n__description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'\n__keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']\n__license__ = 'Apache License, Version 2.0'\n__email__ = '[email protected]'\n\n# check version\nassert sys.hexversion >= 0x3060000, 'Does not work in python3.5 or lower'\n\n# resolve conflicts with previous DeepPavlov installations versioned up to 0.0.9\ndot_dp_path = Path('~/.deeppavlov').expanduser().resolve()\nif dot_dp_path.is_file():\n dot_dp_path.unlink()\n\n# initiate logging\ninit_logger()\n", "path": "deeppavlov/__init__.py"}]}
| 1,340 | 160 |
gh_patches_debug_30784
|
rasdani/github-patches
|
git_diff
|
mozilla__bugbug-3958
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[model:bugtype] ValueError: y should be a 1d array, got an array of shape (198635, 5) instead.
> I'm wondering if the error encountered during the training of the `bugtype` model is something we should investigate.
>
> ```bash
> Traceback (most recent call last):
> File "/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/runpy.py", line 196, in _run_module_as_main
> return _run_code(code, main_globals, None,
> File "/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/runpy.py", line 86, in _run_code
> exec(code, run_globals)
> File "/home/promisefru/mozilla/bugbug/scripts/trainer.py", line 145, in <module>
> main()
> File "/home/promisefru/mozilla/bugbug/scripts/trainer.py", line 141, in main
> retriever.go(args)
> File "/home/promisefru/mozilla/bugbug/scripts/trainer.py", line 41, in go
> metrics = model_obj.train(limit=args.limit)
> File "/home/promisefru/mozilla/bugbug/bugbug/model.py", line 377, in train
> self.le.fit(y)
> File "/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/site-packages/sklearn/preprocessing/_label.py", line 98, in fit
> y = column_or_1d(y, warn=True)
> File "/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/site-packages/sklearn/utils/validation.py", line 1156, in column_or_1d
> raise ValueError(
> ValueError: y should be a 1d array, got an array of shape (198635, 5) instead.
> ```
_Originally posted by @PromiseFru in https://github.com/mozilla/bugbug/issues/3928#issuecomment-1875673580_
See also: https://github.com/mozilla/bugbug/pull/3823#pullrequestreview-1746981626
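Editorial note: the traceback above is what happens when a 1d label encoder is fitted on a multilabel indicator matrix. The sketch below (plain scikit-learn, not bugbug code; the array values are made up) reproduces the error and shows the binarizer-style transformer that the patch below switches to:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, LabelBinarizer

# Multilabel target: one row per bug, one column per bug type -> shape (n_samples, 5).
y = np.array([[0, 1, 0, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0]])

try:
    LabelEncoder().fit(y)    # expects a 1d array of class labels
except ValueError as exc:
    print(exc)               # same "y should be a 1d array" error as in the traceback

LabelBinarizer().fit(y)      # accepts the 2d indicator matrix without complaint
```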
</issue>
<code>
[start of bugbug/models/rcatype.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import logging
7 import re
8
9 import numpy as np
10 import xgboost
11 from sklearn.compose import ColumnTransformer
12 from sklearn.feature_extraction import DictVectorizer
13 from sklearn.multiclass import OneVsRestClassifier
14 from sklearn.pipeline import Pipeline
15
16 from bugbug import bug_features, bugzilla, feature_cleanup, utils
17 from bugbug.model import BugModel
18
19 # For the moment, rca - XYZ is treated of bugtype XYZ,
20 # so we don't need to store it in a dictionary.
21 RCA_CATEGORIES = [
22 "requirementerror",
23 "poorarchitecture",
24 "designerror",
25 "codingerror",
26 "testingerror",
27 "externalsoftwareaffectingfirefox",
28 "performanceerror",
29 "standards",
30 "systemerror",
31 "localizationerror",
32 "memory",
33 "infrastructure/builderror",
34 "communicationissues",
35 "productdecision",
36 "undocumentedchange",
37 "cornercase",
38 ]
39
40 RCA_SUBCATEGORIES = [
41 "codingerror-syntaxerror",
42 "codingerror-logicalerror",
43 "codingerror-semanticerror",
44 "codingerror-runtimeerror",
45 "codingerror-unhandledexceptions",
46 "codingerror-internalapiissue",
47 "codingerror-networkissue",
48 "codingerror-compatibilityissue",
49 "codingerror-other",
50 ]
51
52 logger = logging.getLogger(__name__)
53
54
55 class RCATypeModel(BugModel):
56 def __init__(
57 self, lemmatization=False, historical=False, rca_subcategories_enabled=False
58 ):
59 BugModel.__init__(self, lemmatization)
60
61 self.calculate_importance = False
62 self.rca_subcategories_enabled = rca_subcategories_enabled
63
64 # should we consider only the main category or all sub categories
65 self.RCA_TYPES = (
66 RCA_SUBCATEGORIES + RCA_CATEGORIES
67 if rca_subcategories_enabled
68 else RCA_CATEGORIES
69 )
70
71 self.RCA_LIST = sorted(set(self.RCA_TYPES))
72
73 feature_extractors = [
74 bug_features.HasSTR(),
75 bug_features.Severity(),
76 bug_features.IsCoverityIssue(),
77 bug_features.HasCrashSignature(),
78 bug_features.HasURL(),
79 bug_features.HasW3CURL(),
80 bug_features.HasGithubURL(),
81 # Ignore whiteboards that would make the ML completely skewed
82 # bug_features.whiteboard(),
83 bug_features.Patches(),
84 bug_features.Landings(),
85 bug_features.BlockedBugsNumber(),
86 bug_features.EverAffected(),
87 bug_features.AffectedThenUnaffected(),
88 bug_features.Product(),
89 bug_features.Component(),
90 ]
91
92 cleanup_functions = [
93 feature_cleanup.url(),
94 feature_cleanup.fileref(),
95 feature_cleanup.synonyms(),
96 ]
97
98 self.extraction_pipeline = Pipeline(
99 [
100 (
101 "bug_extractor",
102 bug_features.BugExtractor(feature_extractors, cleanup_functions),
103 ),
104 ]
105 )
106
107 self.clf = Pipeline(
108 [
109 (
110 "union",
111 ColumnTransformer(
112 [
113 ("data", DictVectorizer(), "data"),
114 ("title", self.text_vectorizer(min_df=0.001), "title"),
115 (
116 "first_comment",
117 self.text_vectorizer(min_df=0.001),
118 "first_comment",
119 ),
120 (
121 "comments",
122 self.text_vectorizer(min_df=0.001),
123 "comments",
124 ),
125 ]
126 ),
127 ),
128 (
129 "estimator",
130 OneVsRestClassifier(
131 xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())
132 ),
133 ),
134 ]
135 )
136
137 # return rca from a whiteboard string
138 def get_rca_from_whiteboard(self, whiteboard_data):
139 rca = []
140 whiteboard_data = re.sub(" ", "", whiteboard_data).lower()
141 for whiteboard in whiteboard_data.split("["):
142 if not any(whiteboard.startswith(s) for s in ("rca-", "rca:")):
143 continue
144
145 rca_whiteboard = re.sub("]", "", whiteboard)
146
147 # Hybrid cases: rca:X-Y
148 rca_whiteboard = re.sub(":", "-", rca_whiteboard)
149
150 rca_whiteboard_split = (
151 rca_whiteboard.split("-", 1)
152 if self.rca_subcategories_enabled
153 else rca_whiteboard.split("-")
154 )
155
156 if rca_whiteboard_split[1] not in self.RCA_LIST:
157 logger.warning(rca_whiteboard_split[1] + " not in RCA_LIST")
158 else:
159 rca.append(rca_whiteboard_split[1])
160 return rca
161
162 def get_labels(self):
163 classes = {}
164 for bug in bugzilla.get_bugs():
165 target = np.zeros(len(self.RCA_LIST))
166 for rca in self.get_rca_from_whiteboard(bug["whiteboard"]):
167 target[self.RCA_LIST.index(rca)] = 1
168 classes[bug["id"]] = target
169 return classes, self.RCA_LIST
170
171 def get_feature_names(self):
172 return self.clf.named_steps["union"].get_feature_names_out()
173
174 def overwrite_classes(self, bugs, classes, probabilities):
175 rca_values = self.get_rca(bugs)
176 for i in len(classes):
177 for rca in rca_values[i]:
178 if rca in self.RCA_LIST:
179 if probabilities:
180 classes[i][self.RCA_LIST.index(rca)] = 1.0
181 else:
182 classes[i][self.RCA_LIST.index(rca)] = 1
183
184 return classes
185
[end of bugbug/models/rcatype.py]
[start of bugbug/models/bugtype.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import logging
7 from typing import Iterable
8
9 import numpy as np
10 import xgboost
11 from sklearn.compose import ColumnTransformer
12 from sklearn.feature_extraction import DictVectorizer
13 from sklearn.multiclass import OneVsRestClassifier
14 from sklearn.pipeline import Pipeline
15
16 from bugbug import bug_features, bugzilla, feature_cleanup, utils
17 from bugbug.model import BugModel
18
19 logger = logging.getLogger(__name__)
20
21
22 class BugTypeModel(BugModel):
23 def __init__(self, lemmatization=False, historical=False):
24 BugModel.__init__(self, lemmatization)
25
26 self.calculate_importance = False
27
28 self.bug_type_extractors = bug_features.BugTypes.bug_type_extractors
29
30 label_keyword_prefixes = {
31 keyword
32 for extractor in self.bug_type_extractors
33 for keyword in extractor.keyword_prefixes
34 }
35
36 feature_extractors = [
37 bug_features.HasSTR(),
38 bug_features.Severity(),
39 # Ignore keywords that would make the ML completely skewed
40 # (we are going to use them as 100% rules in the evaluation phase).
41 bug_features.Keywords(label_keyword_prefixes),
42 bug_features.IsCoverityIssue(),
43 bug_features.HasCrashSignature(),
44 bug_features.HasURL(),
45 bug_features.HasW3CURL(),
46 bug_features.HasGithubURL(),
47 bug_features.Whiteboard(),
48 bug_features.Patches(),
49 bug_features.Landings(),
50 bug_features.BlockedBugsNumber(),
51 bug_features.EverAffected(),
52 bug_features.AffectedThenUnaffected(),
53 bug_features.Product(),
54 bug_features.Component(),
55 ]
56
57 cleanup_functions = [
58 feature_cleanup.url(),
59 feature_cleanup.fileref(),
60 feature_cleanup.synonyms(),
61 ]
62
63 self.extraction_pipeline = Pipeline(
64 [
65 (
66 "bug_extractor",
67 bug_features.BugExtractor(feature_extractors, cleanup_functions),
68 ),
69 ]
70 )
71
72 self.clf = Pipeline(
73 [
74 (
75 "union",
76 ColumnTransformer(
77 [
78 ("data", DictVectorizer(), "data"),
79 ("title", self.text_vectorizer(min_df=0.001), "title"),
80 (
81 "first_comment",
82 self.text_vectorizer(min_df=0.001),
83 "first_comment",
84 ),
85 (
86 "comments",
87 self.text_vectorizer(min_df=0.001),
88 "comments",
89 ),
90 ]
91 ),
92 ),
93 (
94 "estimator",
95 OneVsRestClassifier(
96 xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())
97 ),
98 ),
99 ]
100 )
101
102 def get_labels(self) -> tuple[dict[int, np.ndarray], list[str]]:
103 classes = {}
104
105 bug_map = {bug["id"]: bug for bug in bugzilla.get_bugs()}
106
107 for bug_data in bug_map.values():
108 target = np.zeros(len(self.bug_type_extractors))
109 for i, is_type in enumerate(self.bug_type_extractors):
110 if is_type(bug_data, bug_map):
111 target[i] = 1
112
113 classes[int(bug_data["id"])] = target
114
115 bug_types = [extractor.type_name for extractor in self.bug_type_extractors]
116
117 for i, bug_type in enumerate(bug_types):
118 logger.info(
119 "%d %s bugs",
120 sum(target[i] for target in classes.values()),
121 bug_type,
122 )
123
124 return classes, bug_types
125
126 def get_feature_names(self):
127 return self.clf.named_steps["union"].get_feature_names_out()
128
129 def overwrite_classes(
130 self,
131 bugs: Iterable[bugzilla.BugDict],
132 classes: dict[int, np.ndarray],
133 probabilities: bool,
134 ):
135 bug_map = {bug["id"]: bug for bug in bugs}
136
137 for i, bug in enumerate(bugs):
138 for j, is_type_applicable in enumerate(self.bug_type_extractors):
139 if is_type_applicable(bug, bug_map):
140 if probabilities:
141 classes[i][j] = 1.0
142 else:
143 classes[i][j] = 1
144
145 return classes
146
[end of bugbug/models/bugtype.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bugbug/models/bugtype.py b/bugbug/models/bugtype.py
--- a/bugbug/models/bugtype.py
+++ b/bugbug/models/bugtype.py
@@ -12,6 +12,7 @@
from sklearn.feature_extraction import DictVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
+from sklearn.preprocessing import LabelBinarizer
from bugbug import bug_features, bugzilla, feature_cleanup, utils
from bugbug.model import BugModel
@@ -25,6 +26,8 @@
self.calculate_importance = False
+ self.le = LabelBinarizer()
+
self.bug_type_extractors = bug_features.BugTypes.bug_type_extractors
label_keyword_prefixes = {
diff --git a/bugbug/models/rcatype.py b/bugbug/models/rcatype.py
--- a/bugbug/models/rcatype.py
+++ b/bugbug/models/rcatype.py
@@ -12,6 +12,7 @@
from sklearn.feature_extraction import DictVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
+from sklearn.preprocessing import LabelBinarizer
from bugbug import bug_features, bugzilla, feature_cleanup, utils
from bugbug.model import BugModel
@@ -61,6 +62,8 @@
self.calculate_importance = False
self.rca_subcategories_enabled = rca_subcategories_enabled
+ self.le = LabelBinarizer()
+
# should we consider only the main category or all sub categories
self.RCA_TYPES = (
RCA_SUBCATEGORIES + RCA_CATEGORIES
|
{"golden_diff": "diff --git a/bugbug/models/bugtype.py b/bugbug/models/bugtype.py\n--- a/bugbug/models/bugtype.py\n+++ b/bugbug/models/bugtype.py\n@@ -12,6 +12,7 @@\n from sklearn.feature_extraction import DictVectorizer\n from sklearn.multiclass import OneVsRestClassifier\n from sklearn.pipeline import Pipeline\n+from sklearn.preprocessing import LabelBinarizer\n \n from bugbug import bug_features, bugzilla, feature_cleanup, utils\n from bugbug.model import BugModel\n@@ -25,6 +26,8 @@\n \n self.calculate_importance = False\n \n+ self.le = LabelBinarizer()\n+\n self.bug_type_extractors = bug_features.BugTypes.bug_type_extractors\n \n label_keyword_prefixes = {\ndiff --git a/bugbug/models/rcatype.py b/bugbug/models/rcatype.py\n--- a/bugbug/models/rcatype.py\n+++ b/bugbug/models/rcatype.py\n@@ -12,6 +12,7 @@\n from sklearn.feature_extraction import DictVectorizer\n from sklearn.multiclass import OneVsRestClassifier\n from sklearn.pipeline import Pipeline\n+from sklearn.preprocessing import LabelBinarizer\n \n from bugbug import bug_features, bugzilla, feature_cleanup, utils\n from bugbug.model import BugModel\n@@ -61,6 +62,8 @@\n self.calculate_importance = False\n self.rca_subcategories_enabled = rca_subcategories_enabled\n \n+ self.le = LabelBinarizer()\n+\n # should we consider only the main category or all sub categories\n self.RCA_TYPES = (\n RCA_SUBCATEGORIES + RCA_CATEGORIES\n", "issue": "[model:bugtype] ValueError: y should be a 1d array, got an array of shape (198635, 5) instead.\n> I'm wondering if the error encountered during the training of the `bugtype` model is something we should investigate.\r\n>\r\n> ```bash\r\n> Traceback (most recent call last):\r\n> File \"/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n> return _run_code(code, main_globals, None,\r\n> File \"/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/runpy.py\", line 86, in _run_code\r\n> exec(code, run_globals)\r\n> File \"/home/promisefru/mozilla/bugbug/scripts/trainer.py\", line 145, in <module>\r\n> main()\r\n> File \"/home/promisefru/mozilla/bugbug/scripts/trainer.py\", line 141, in main\r\n> retriever.go(args)\r\n> File \"/home/promisefru/mozilla/bugbug/scripts/trainer.py\", line 41, in go\r\n> metrics = model_obj.train(limit=args.limit)\r\n> File \"/home/promisefru/mozilla/bugbug/bugbug/model.py\", line 377, in train\r\n> self.le.fit(y)\r\n> File \"/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/site-packages/sklearn/preprocessing/_label.py\", line 98, in fit\r\n> y = column_or_1d(y, warn=True)\r\n> File \"/home/promisefru/anaconda3/envs/bugbug/lib/python3.10/site-packages/sklearn/utils/validation.py\", line 1156, in column_or_1d\r\n> raise ValueError(\r\n> ValueError: y should be a 1d array, got an array of shape (198635, 5) instead.\r\n> ```\r\n\r\n_Originally posted by @PromiseFru in https://github.com/mozilla/bugbug/issues/3928#issuecomment-1875673580_\r\n\r\nSee also: https://github.com/mozilla/bugbug/pull/3823#pullrequestreview-1746981626\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nimport re\n\nimport numpy as np\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features, bugzilla, feature_cleanup, utils\nfrom bugbug.model import BugModel\n\n# For the moment, rca - XYZ is treated of bugtype XYZ,\n# so we don't need to store it in a dictionary.\nRCA_CATEGORIES = [\n \"requirementerror\",\n \"poorarchitecture\",\n \"designerror\",\n \"codingerror\",\n \"testingerror\",\n \"externalsoftwareaffectingfirefox\",\n \"performanceerror\",\n \"standards\",\n \"systemerror\",\n \"localizationerror\",\n \"memory\",\n \"infrastructure/builderror\",\n \"communicationissues\",\n \"productdecision\",\n \"undocumentedchange\",\n \"cornercase\",\n]\n\nRCA_SUBCATEGORIES = [\n \"codingerror-syntaxerror\",\n \"codingerror-logicalerror\",\n \"codingerror-semanticerror\",\n \"codingerror-runtimeerror\",\n \"codingerror-unhandledexceptions\",\n \"codingerror-internalapiissue\",\n \"codingerror-networkissue\",\n \"codingerror-compatibilityissue\",\n \"codingerror-other\",\n]\n\nlogger = logging.getLogger(__name__)\n\n\nclass RCATypeModel(BugModel):\n def __init__(\n self, lemmatization=False, historical=False, rca_subcategories_enabled=False\n ):\n BugModel.__init__(self, lemmatization)\n\n self.calculate_importance = False\n self.rca_subcategories_enabled = rca_subcategories_enabled\n\n # should we consider only the main category or all sub categories\n self.RCA_TYPES = (\n RCA_SUBCATEGORIES + RCA_CATEGORIES\n if rca_subcategories_enabled\n else RCA_CATEGORIES\n )\n\n self.RCA_LIST = sorted(set(self.RCA_TYPES))\n\n feature_extractors = [\n bug_features.HasSTR(),\n bug_features.Severity(),\n bug_features.IsCoverityIssue(),\n bug_features.HasCrashSignature(),\n bug_features.HasURL(),\n bug_features.HasW3CURL(),\n bug_features.HasGithubURL(),\n # Ignore whiteboards that would make the ML completely skewed\n # bug_features.whiteboard(),\n bug_features.Patches(),\n bug_features.Landings(),\n bug_features.BlockedBugsNumber(),\n bug_features.EverAffected(),\n bug_features.AffectedThenUnaffected(),\n bug_features.Product(),\n bug_features.Component(),\n ]\n\n cleanup_functions = [\n feature_cleanup.url(),\n feature_cleanup.fileref(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"bug_extractor\",\n bug_features.BugExtractor(feature_extractors, cleanup_functions),\n ),\n ]\n )\n\n self.clf = Pipeline(\n [\n (\n \"union\",\n ColumnTransformer(\n [\n (\"data\", DictVectorizer(), \"data\"),\n (\"title\", self.text_vectorizer(min_df=0.001), \"title\"),\n (\n \"first_comment\",\n self.text_vectorizer(min_df=0.001),\n \"first_comment\",\n ),\n (\n \"comments\",\n self.text_vectorizer(min_df=0.001),\n \"comments\",\n ),\n ]\n ),\n ),\n (\n \"estimator\",\n OneVsRestClassifier(\n xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())\n ),\n ),\n ]\n )\n\n # return rca from a whiteboard string\n def get_rca_from_whiteboard(self, whiteboard_data):\n rca = []\n whiteboard_data = re.sub(\" \", \"\", whiteboard_data).lower()\n for whiteboard in whiteboard_data.split(\"[\"):\n if not any(whiteboard.startswith(s) for s in (\"rca-\", \"rca:\")):\n continue\n\n rca_whiteboard = re.sub(\"]\", \"\", whiteboard)\n\n # Hybrid cases: rca:X-Y\n rca_whiteboard = 
re.sub(\":\", \"-\", rca_whiteboard)\n\n rca_whiteboard_split = (\n rca_whiteboard.split(\"-\", 1)\n if self.rca_subcategories_enabled\n else rca_whiteboard.split(\"-\")\n )\n\n if rca_whiteboard_split[1] not in self.RCA_LIST:\n logger.warning(rca_whiteboard_split[1] + \" not in RCA_LIST\")\n else:\n rca.append(rca_whiteboard_split[1])\n return rca\n\n def get_labels(self):\n classes = {}\n for bug in bugzilla.get_bugs():\n target = np.zeros(len(self.RCA_LIST))\n for rca in self.get_rca_from_whiteboard(bug[\"whiteboard\"]):\n target[self.RCA_LIST.index(rca)] = 1\n classes[bug[\"id\"]] = target\n return classes, self.RCA_LIST\n\n def get_feature_names(self):\n return self.clf.named_steps[\"union\"].get_feature_names_out()\n\n def overwrite_classes(self, bugs, classes, probabilities):\n rca_values = self.get_rca(bugs)\n for i in len(classes):\n for rca in rca_values[i]:\n if rca in self.RCA_LIST:\n if probabilities:\n classes[i][self.RCA_LIST.index(rca)] = 1.0\n else:\n classes[i][self.RCA_LIST.index(rca)] = 1\n\n return classes\n", "path": "bugbug/models/rcatype.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nfrom typing import Iterable\n\nimport numpy as np\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features, bugzilla, feature_cleanup, utils\nfrom bugbug.model import BugModel\n\nlogger = logging.getLogger(__name__)\n\n\nclass BugTypeModel(BugModel):\n def __init__(self, lemmatization=False, historical=False):\n BugModel.__init__(self, lemmatization)\n\n self.calculate_importance = False\n\n self.bug_type_extractors = bug_features.BugTypes.bug_type_extractors\n\n label_keyword_prefixes = {\n keyword\n for extractor in self.bug_type_extractors\n for keyword in extractor.keyword_prefixes\n }\n\n feature_extractors = [\n bug_features.HasSTR(),\n bug_features.Severity(),\n # Ignore keywords that would make the ML completely skewed\n # (we are going to use them as 100% rules in the evaluation phase).\n bug_features.Keywords(label_keyword_prefixes),\n bug_features.IsCoverityIssue(),\n bug_features.HasCrashSignature(),\n bug_features.HasURL(),\n bug_features.HasW3CURL(),\n bug_features.HasGithubURL(),\n bug_features.Whiteboard(),\n bug_features.Patches(),\n bug_features.Landings(),\n bug_features.BlockedBugsNumber(),\n bug_features.EverAffected(),\n bug_features.AffectedThenUnaffected(),\n bug_features.Product(),\n bug_features.Component(),\n ]\n\n cleanup_functions = [\n feature_cleanup.url(),\n feature_cleanup.fileref(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"bug_extractor\",\n bug_features.BugExtractor(feature_extractors, cleanup_functions),\n ),\n ]\n )\n\n self.clf = Pipeline(\n [\n (\n \"union\",\n ColumnTransformer(\n [\n (\"data\", DictVectorizer(), \"data\"),\n (\"title\", self.text_vectorizer(min_df=0.001), \"title\"),\n (\n \"first_comment\",\n self.text_vectorizer(min_df=0.001),\n \"first_comment\",\n ),\n (\n \"comments\",\n self.text_vectorizer(min_df=0.001),\n \"comments\",\n ),\n ]\n ),\n ),\n (\n \"estimator\",\n OneVsRestClassifier(\n xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())\n ),\n ),\n ]\n )\n\n def 
get_labels(self) -> tuple[dict[int, np.ndarray], list[str]]:\n classes = {}\n\n bug_map = {bug[\"id\"]: bug for bug in bugzilla.get_bugs()}\n\n for bug_data in bug_map.values():\n target = np.zeros(len(self.bug_type_extractors))\n for i, is_type in enumerate(self.bug_type_extractors):\n if is_type(bug_data, bug_map):\n target[i] = 1\n\n classes[int(bug_data[\"id\"])] = target\n\n bug_types = [extractor.type_name for extractor in self.bug_type_extractors]\n\n for i, bug_type in enumerate(bug_types):\n logger.info(\n \"%d %s bugs\",\n sum(target[i] for target in classes.values()),\n bug_type,\n )\n\n return classes, bug_types\n\n def get_feature_names(self):\n return self.clf.named_steps[\"union\"].get_feature_names_out()\n\n def overwrite_classes(\n self,\n bugs: Iterable[bugzilla.BugDict],\n classes: dict[int, np.ndarray],\n probabilities: bool,\n ):\n bug_map = {bug[\"id\"]: bug for bug in bugs}\n\n for i, bug in enumerate(bugs):\n for j, is_type_applicable in enumerate(self.bug_type_extractors):\n if is_type_applicable(bug, bug_map):\n if probabilities:\n classes[i][j] = 1.0\n else:\n classes[i][j] = 1\n\n return classes\n", "path": "bugbug/models/bugtype.py"}]}
| 4,052 | 356 |
gh_patches_debug_12274
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-11223
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Report pages performance regression
### Issue Summary
Various report pages have a performance regression in Wagtail 5.2, which I've tracked down to:
https://github.com/wagtail/wagtail/commit/7ba1afb8a402a09be5838a026523be78f08ea877
https://github.com/wagtail/wagtail/pull/10822
On a few sites we've upgraded to Wagtail 5.2 - performance in the Site History report has been significantly reduced:
Before:
<img width="1717" alt="Screenshot 2023-11-11 at 21 12 02" src="https://github.com/wagtail/wagtail/assets/177332/79650e6b-9c96-4d21-bbdf-23b98c862bf4">
After:
<img width="1716" alt="Screenshot 2023-11-11 at 21 13 09" src="https://github.com/wagtail/wagtail/assets/177332/e719e250-5c9c-4dc8-823b-1e1c3b40a74c">
<img width="900" alt="Screenshot 2023-11-11 at 21 13 19" src="https://github.com/wagtail/wagtail/assets/177332/5623467b-a0ca-4472-aa46-540ff568ac82">
### Steps to Reproduce
Find an existing Wagtail project with lots of pages and log entries.
Check http://127.0.0.1:9000/admin/reports/site-history/ with the project running Wagtail 5.2 - page will probably be slow to load.
(Note: I did try to create a quick script to test this with Wagtail's starter project, but slow SQLite performance and the lack of a debug toolbar made it a bit tricky!).
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.11 / any
- Django version: 4.2 / any
- Wagtail version: 5.2 / main
- Browser version: n/a
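Editorial sketch of the slowdown (hypothetical decoration hook, not Wagtail code): when the object list is decorated before pagination, any per-row work in `decorate_paginated_queryset` runs for every log entry instead of only the 50 rows on the current page:

```python
calls = 0

def decorate(objects):
    global calls
    for obj in objects:
        calls += 1                  # stands in for per-row work such as user lookups
    return objects

log_entries = list(range(200_000))  # a large site-history report
page_size = 50

calls = 0
decorate(log_entries)[:page_size]   # Wagtail 5.2 ordering: decorate, then paginate
print(calls)                        # 200000 decorations for 50 visible rows

calls = 0
decorate(log_entries[:page_size])   # patched ordering: paginate, then decorate
print(calls)                        # 50
```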
</issue>
<code>
[start of wagtail/admin/views/reports/base.py]
1 from django.utils.translation import gettext_lazy as _
2
3 from wagtail.admin.views.generic.models import IndexView
4
5
6 class ReportView(IndexView):
7 template_name = "wagtailadmin/reports/base_report.html"
8 title = ""
9 paginate_by = 50
10
11 def get_filtered_queryset(self):
12 return self.filter_queryset(self.get_queryset())
13
14 def decorate_paginated_queryset(self, object_list):
15 # A hook point to allow rewriting the object list after pagination has been applied
16 return object_list
17
18 def get(self, request, *args, **kwargs):
19 self.filters, self.object_list = self.get_filtered_queryset()
20 self.object_list = self.decorate_paginated_queryset(self.object_list)
21 context = self.get_context_data()
22 return self.render_to_response(context)
23
24 def get_context_data(self, *args, **kwargs):
25 context = super().get_context_data(*args, **kwargs)
26 context["title"] = self.title
27 return context
28
29
30 class PageReportView(ReportView):
31 template_name = "wagtailadmin/reports/base_page_report.html"
32 export_headings = {
33 "latest_revision_created_at": _("Updated"),
34 "status_string": _("Status"),
35 "content_type.model_class._meta.verbose_name.title": _("Type"),
36 }
37 list_export = [
38 "title",
39 "latest_revision_created_at",
40 "status_string",
41 "content_type.model_class._meta.verbose_name.title",
42 ]
43
[end of wagtail/admin/views/reports/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/admin/views/reports/base.py b/wagtail/admin/views/reports/base.py
--- a/wagtail/admin/views/reports/base.py
+++ b/wagtail/admin/views/reports/base.py
@@ -17,8 +17,12 @@
def get(self, request, *args, **kwargs):
self.filters, self.object_list = self.get_filtered_queryset()
- self.object_list = self.decorate_paginated_queryset(self.object_list)
context = self.get_context_data()
+ # Decorate the queryset *after* Django's BaseListView has returned a paginated/reduced
+ # list of objects
+ context["object_list"] = self.decorate_paginated_queryset(
+ context["object_list"]
+ )
return self.render_to_response(context)
def get_context_data(self, *args, **kwargs):
|
{"golden_diff": "diff --git a/wagtail/admin/views/reports/base.py b/wagtail/admin/views/reports/base.py\n--- a/wagtail/admin/views/reports/base.py\n+++ b/wagtail/admin/views/reports/base.py\n@@ -17,8 +17,12 @@\n \n def get(self, request, *args, **kwargs):\n self.filters, self.object_list = self.get_filtered_queryset()\n- self.object_list = self.decorate_paginated_queryset(self.object_list)\n context = self.get_context_data()\n+ # Decorate the queryset *after* Django's BaseListView has returned a paginated/reduced\n+ # list of objects\n+ context[\"object_list\"] = self.decorate_paginated_queryset(\n+ context[\"object_list\"]\n+ )\n return self.render_to_response(context)\n \n def get_context_data(self, *args, **kwargs):\n", "issue": "Report pages performance regression\n### Issue Summary\r\n\r\nVarious report pages have a performance regression in Wagtail 5.2, which I've tracked down to:\r\n\r\nhttps://github.com/wagtail/wagtail/commit/7ba1afb8a402a09be5838a026523be78f08ea877\r\nhttps://github.com/wagtail/wagtail/pull/10822\r\n\r\nOn a few sites we've upgraded to Wagtail 5.2 - performance in the Site History report has been significantly reduced:\r\n\r\nBefore:\r\n<img width=\"1717\" alt=\"Screenshot 2023-11-11 at 21 12 02\" src=\"https://github.com/wagtail/wagtail/assets/177332/79650e6b-9c96-4d21-bbdf-23b98c862bf4\">\r\n\r\nAfter:\r\n<img width=\"1716\" alt=\"Screenshot 2023-11-11 at 21 13 09\" src=\"https://github.com/wagtail/wagtail/assets/177332/e719e250-5c9c-4dc8-823b-1e1c3b40a74c\">\r\n<img width=\"900\" alt=\"Screenshot 2023-11-11 at 21 13 19\" src=\"https://github.com/wagtail/wagtail/assets/177332/5623467b-a0ca-4472-aa46-540ff568ac82\">\r\n\r\n### Steps to Reproduce\r\n\r\nFind an existing Wagtail project with lots of pages, and log entries.\r\n\r\nCheck http://127.0.0.1:9000/admin/reports/site-history/ with the project running Wagtail 5.2 - page will probably be slow to load.\r\n\r\n(Note: I did try and create a quick script to test this with Wagtail's starter project - but the performance of SQLite and a lack of a debug toolbar slowing things down made it a bit tricky!).\r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n\r\n### Technical details\r\n\r\n- Python version: 3.11 / any\r\n- Django version: 4.2 / any\r\n- Wagtail version: 5.2 / main\r\n- Browser version: n/a\n", "before_files": [{"content": "from django.utils.translation import gettext_lazy as _\n\nfrom wagtail.admin.views.generic.models import IndexView\n\n\nclass ReportView(IndexView):\n template_name = \"wagtailadmin/reports/base_report.html\"\n title = \"\"\n paginate_by = 50\n\n def get_filtered_queryset(self):\n return self.filter_queryset(self.get_queryset())\n\n def decorate_paginated_queryset(self, object_list):\n # A hook point to allow rewriting the object list after pagination has been applied\n return object_list\n\n def get(self, request, *args, **kwargs):\n self.filters, self.object_list = self.get_filtered_queryset()\n self.object_list = self.decorate_paginated_queryset(self.object_list)\n context = self.get_context_data()\n return self.render_to_response(context)\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context[\"title\"] = self.title\n return context\n\n\nclass PageReportView(ReportView):\n template_name = \"wagtailadmin/reports/base_page_report.html\"\n export_headings = {\n \"latest_revision_created_at\": _(\"Updated\"),\n \"status_string\": _(\"Status\"),\n 
\"content_type.model_class._meta.verbose_name.title\": _(\"Type\"),\n }\n list_export = [\n \"title\",\n \"latest_revision_created_at\",\n \"status_string\",\n \"content_type.model_class._meta.verbose_name.title\",\n ]\n", "path": "wagtail/admin/views/reports/base.py"}]}
| 1,492 | 186 |
gh_patches_debug_24294
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-6989
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception raised in `TCPStore.__del__` upon process termination
### Description
`__del__` should not perform synchronization when called during process termination.
```
Exception ignored in: <function TCPStore.__del__ at 0x7fb939be23a0>
Traceback (most recent call last):
File "/home/maehashi/Development/cupy/cupyx/distributed/_store.py", line 49, in __del__
File "/home/maehashi/Development/cupy/cupyx/distributed/_store.py", line 97, in stop
File "/home/maehashi/Development/cupy/cupyx/distributed/_store.py", line 31, in join
File "/home/maehashi/.pyenv/versions/3.8.1/lib/python3.8/multiprocessing/connection.py", line 251, in recv
ModuleNotFoundError: import of builtins halted; None in sys.modules
```
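Editorial note: the patch below guards the destructor with an `atexit` flag. A generic, self-contained sketch of that pattern (names are illustrative, not CuPy's):

```python
import atexit

_exit_mode = False

@atexit.register
def _mark_interpreter_exit():
    global _exit_mode
    _exit_mode = True

class Store:
    def stop(self):
        print("joining helper process...")   # placeholder for real cross-process cleanup

    def __del__(self):
        # During interpreter shutdown other modules may already be torn down
        # (hence the ModuleNotFoundError above), so skip synchronization then.
        if not _exit_mode:
            self.stop()
```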
### To Reproduce
_No response_
### Installation
_No response_
### Environment
_No response_
### Additional Information
_No response_
</issue>
<code>
[start of cupyx/distributed/_store.py]
1 from ctypes import sizeof
2 import multiprocessing
3 import threading
4 import socket
5 import time
6
7 from cupyx.distributed import _klv_utils
8 from cupyx.distributed import _store_actions
9
10
11 _DEFAULT_HOST = '127.0.0.1'
12 _DEFAULT_PORT = 13333
13
14
15 class ExceptionAwareProcess(multiprocessing.Process):
16 def __init__(self, *args, **kwargs):
17 super().__init__(*args, **kwargs)
18 self._exception = None
19 self._parent_p, self._child_p = multiprocessing.Pipe()
20
21 def run(self):
22 try:
23 super().run()
24 self._child_p.send(None)
25 except Exception as e:
26 self._child_p.send(e)
27
28 def join(self):
29 super().join()
30 if self._parent_p.poll():
31 exception = self._parent_p.recv()
32 if exception is not None:
33 raise exception
34
35
36 class TCPStore:
37 # This is only used for initialization of nccl so we don't care
38 # too much about peformance
39 def __init__(self, world_size):
40 self.storage = {}
41 self._process = None
42 self._world_size = world_size
43 self._run = multiprocessing.Value('b', 1)
44 # For implementing a barrier
45 self._lock = threading.Lock()
46 self._current_barrier = None
47
48 def __del__(self):
49 self.stop()
50
51 def _set_process(self, process):
52 self._process = process
53
54 def _process_request(self, c_socket):
55 with c_socket:
56 # Receive in KLV format
57 action_bytes = c_socket.recv(sizeof(_klv_utils.action_t))
58 if len(action_bytes) > 0:
59 action_m = _klv_utils.action_t.from_buffer_copy(action_bytes)
60 if action_m.length > 256:
61 raise ValueError('Invalid length for message')
62 value = bytearray(action_m.value)[:action_m.length]
63 r = _store_actions.execute_action(action_m.action, value, self)
64 if r is not None:
65 c_socket.sendall(r.klv())
66
67 def _server_loop(self, host, port):
68 # This is for minimum info exchange during initialization
69 # a single connection allows to implement locking mechanics easily
70 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
71 s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
72 s.bind((host, port))
73 s.listen()
74 s.settimeout(0.5)
75 while self._run.value == 1:
76 try:
77 c_socket, addr = s.accept()
78 except socket.timeout:
79 continue
80
81 t = threading.Thread(
82 target=self._process_request,
83 args=(c_socket,), daemon=True)
84 t.start()
85
86 def run(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):
87 # Run the TCP store in a different process
88 p = ExceptionAwareProcess(
89 target=self._server_loop, args=(host, port))
90 p.start()
91 self._process = p
92
93 def stop(self):
94 if self._process is not None:
95 with self._run.get_lock():
96 self._run.value = 0
97 self._process.join()
98
99
100 class TCPStoreProxy:
101
102 MAX_NUM_RETRIES = 50
103 DELAY_FOR_RETRY = 0.5
104
105 def __init__(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):
106 self.host = host
107 self.port = port
108
109 def _send_recv(self, action):
110 # Retry several times in case the rank 0 has not established the
111 # main store yet
112 for i in range(TCPStoreProxy.MAX_NUM_RETRIES):
113 try:
114 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
115 # TODO retry connects
116 s.connect((self.host, self.port))
117 s.sendall(action.klv())
118 result_bytes = s.recv(sizeof(
119 _klv_utils.result_action_t))
120 if len(result_bytes) > 0:
121 result = _klv_utils.result_action_t.from_buffer_copy(
122 result_bytes)
123 value = bytearray(result.value)[:result.length]
124 if result.status == 0:
125 return action.decode_result(value)
126 else:
127 raise RuntimeError(value.decode('utf-8'))
128 except ConnectionRefusedError:
129 time.sleep(TCPStoreProxy.DELAY_FOR_RETRY)
130 raise RuntimeError('TCPStore is not available')
131
132 def __getitem__(self, key):
133 return self._send_recv(_store_actions.Get(key))
134
135 def __setitem__(self, key, value):
136 self._send_recv(_store_actions.Set(key, value))
137
138 def barrier(self):
139 # Barrier has special semantics
140 self._send_recv(_store_actions.Barrier())
141
[end of cupyx/distributed/_store.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupyx/distributed/_store.py b/cupyx/distributed/_store.py
--- a/cupyx/distributed/_store.py
+++ b/cupyx/distributed/_store.py
@@ -1,3 +1,4 @@
+import atexit
from ctypes import sizeof
import multiprocessing
import threading
@@ -11,6 +12,14 @@
_DEFAULT_HOST = '127.0.0.1'
_DEFAULT_PORT = 13333
+_exit_mode = False
+
+
[email protected]
+def _exit():
+ global _exit_mode
+ _exit_mode = True
+
class ExceptionAwareProcess(multiprocessing.Process):
def __init__(self, *args, **kwargs):
@@ -46,7 +55,8 @@
self._current_barrier = None
def __del__(self):
- self.stop()
+ if not _exit_mode:
+ self.stop()
def _set_process(self, process):
self._process = process
@@ -91,6 +101,8 @@
self._process = p
def stop(self):
+ if _exit_mode:
+ return # Prevent shutdown errors
if self._process is not None:
with self._run.get_lock():
self._run.value = 0
|
{"golden_diff": "diff --git a/cupyx/distributed/_store.py b/cupyx/distributed/_store.py\n--- a/cupyx/distributed/_store.py\n+++ b/cupyx/distributed/_store.py\n@@ -1,3 +1,4 @@\n+import atexit\n from ctypes import sizeof\n import multiprocessing\n import threading\n@@ -11,6 +12,14 @@\n _DEFAULT_HOST = '127.0.0.1'\n _DEFAULT_PORT = 13333\n \n+_exit_mode = False\n+\n+\[email protected]\n+def _exit():\n+ global _exit_mode\n+ _exit_mode = True\n+\n \n class ExceptionAwareProcess(multiprocessing.Process):\n def __init__(self, *args, **kwargs):\n@@ -46,7 +55,8 @@\n self._current_barrier = None\n \n def __del__(self):\n- self.stop()\n+ if not _exit_mode:\n+ self.stop()\n \n def _set_process(self, process):\n self._process = process\n@@ -91,6 +101,8 @@\n self._process = p\n \n def stop(self):\n+ if _exit_mode:\n+ return # Prevent shutdown errors\n if self._process is not None:\n with self._run.get_lock():\n self._run.value = 0\n", "issue": "Exception raised in `TCPStore.__del__` upon process termination\n### Description\n\n`__del__` should not perform syncronization when called during process termination.\r\n\r\n```\r\nException ignored in: <function TCPStore.__del__ at 0x7fb939be23a0>\r\nTraceback (most recent call last):\r\n File \"/home/maehashi/Development/cupy/cupyx/distributed/_store.py\", line 49, in __del__\r\n File \"/home/maehashi/Development/cupy/cupyx/distributed/_store.py\", line 97, in stop\r\n File \"/home/maehashi/Development/cupy/cupyx/distributed/_store.py\", line 31, in join\r\n File \"/home/maehashi/.pyenv/versions/3.8.1/lib/python3.8/multiprocessing/connection.py\", line 251, in recv\r\nModuleNotFoundError: import of builtins halted; None in sys.modules\r\n```\n\n### To Reproduce\n\n_No response_\n\n### Installation\n\n_No response_\n\n### Environment\n\n_No response_\n\n### Additional Information\n\n_No response_\n", "before_files": [{"content": "from ctypes import sizeof\nimport multiprocessing\nimport threading\nimport socket\nimport time\n\nfrom cupyx.distributed import _klv_utils\nfrom cupyx.distributed import _store_actions\n\n\n_DEFAULT_HOST = '127.0.0.1'\n_DEFAULT_PORT = 13333\n\n\nclass ExceptionAwareProcess(multiprocessing.Process):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._exception = None\n self._parent_p, self._child_p = multiprocessing.Pipe()\n\n def run(self):\n try:\n super().run()\n self._child_p.send(None)\n except Exception as e:\n self._child_p.send(e)\n\n def join(self):\n super().join()\n if self._parent_p.poll():\n exception = self._parent_p.recv()\n if exception is not None:\n raise exception\n\n\nclass TCPStore:\n # This is only used for initialization of nccl so we don't care\n # too much about peformance\n def __init__(self, world_size):\n self.storage = {}\n self._process = None\n self._world_size = world_size\n self._run = multiprocessing.Value('b', 1)\n # For implementing a barrier\n self._lock = threading.Lock()\n self._current_barrier = None\n\n def __del__(self):\n self.stop()\n\n def _set_process(self, process):\n self._process = process\n\n def _process_request(self, c_socket):\n with c_socket:\n # Receive in KLV format\n action_bytes = c_socket.recv(sizeof(_klv_utils.action_t))\n if len(action_bytes) > 0:\n action_m = _klv_utils.action_t.from_buffer_copy(action_bytes)\n if action_m.length > 256:\n raise ValueError('Invalid length for message')\n value = bytearray(action_m.value)[:action_m.length]\n r = _store_actions.execute_action(action_m.action, value, self)\n if r is not None:\n 
c_socket.sendall(r.klv())\n\n def _server_loop(self, host, port):\n # This is for minimum info exchange during initialization\n # a single connection allows to implement locking mechanics easily\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n s.bind((host, port))\n s.listen()\n s.settimeout(0.5)\n while self._run.value == 1:\n try:\n c_socket, addr = s.accept()\n except socket.timeout:\n continue\n\n t = threading.Thread(\n target=self._process_request,\n args=(c_socket,), daemon=True)\n t.start()\n\n def run(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):\n # Run the TCP store in a different process\n p = ExceptionAwareProcess(\n target=self._server_loop, args=(host, port))\n p.start()\n self._process = p\n\n def stop(self):\n if self._process is not None:\n with self._run.get_lock():\n self._run.value = 0\n self._process.join()\n\n\nclass TCPStoreProxy:\n\n MAX_NUM_RETRIES = 50\n DELAY_FOR_RETRY = 0.5\n\n def __init__(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):\n self.host = host\n self.port = port\n\n def _send_recv(self, action):\n # Retry several times in case the rank 0 has not established the\n # main store yet\n for i in range(TCPStoreProxy.MAX_NUM_RETRIES):\n try:\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n # TODO retry connects\n s.connect((self.host, self.port))\n s.sendall(action.klv())\n result_bytes = s.recv(sizeof(\n _klv_utils.result_action_t))\n if len(result_bytes) > 0:\n result = _klv_utils.result_action_t.from_buffer_copy(\n result_bytes)\n value = bytearray(result.value)[:result.length]\n if result.status == 0:\n return action.decode_result(value)\n else:\n raise RuntimeError(value.decode('utf-8'))\n except ConnectionRefusedError:\n time.sleep(TCPStoreProxy.DELAY_FOR_RETRY)\n raise RuntimeError('TCPStore is not available')\n\n def __getitem__(self, key):\n return self._send_recv(_store_actions.Get(key))\n\n def __setitem__(self, key, value):\n self._send_recv(_store_actions.Set(key, value))\n\n def barrier(self):\n # Barrier has special semantics\n self._send_recv(_store_actions.Barrier())\n", "path": "cupyx/distributed/_store.py"}]}
| 2,121 | 296 |
gh_patches_debug_14928
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.vmware-1084
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
vmware_host_lockdown crashes on failure
##### SUMMARY
Today, I wanted to enable lockdown mode on a host. This failed, although I haven't found out why yet. But that's not important. The bug is that the module imports `vim` from `pyvmomi` instead of `pyVmomi` and doesn't check that the import actually succeeds:
https://github.com/ansible-collections/community.vmware/blob/f418bdaa6a678c09b6fb9115d927d8c44d50060f/plugins/modules/vmware_host_lockdown.py#L123-L126
I think nobody ran into this issue yet because enabling or disabling lockdown seldom fails (in my experience) and `vim` is only used in this case:
https://github.com/ansible-collections/community.vmware/blob/f418bdaa6a678c09b6fb9115d927d8c44d50060f/plugins/modules/vmware_host_lockdown.py#L176-L182
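Editorial aside: a minimal illustration of the case-sensitive import, plus recording the failure instead of silently ignoring it (the flag name is illustrative, not from the collection):

```python
# The vSphere bindings live in the module "pyVmomi"; importing "pyvmomi" fails on
# case-sensitive filesystems, and a bare `except ImportError: pass` then leaves
# `vim` undefined until the first time it is referenced.
try:
    from pyVmomi import vim
    HAS_PYVMOMI = True
except ImportError:
    vim = None
    HAS_PYVMOMI = False   # callers can now report the missing dependency cleanly
```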
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_host_lockdown
##### ANSIBLE VERSION
```
ansible [core 2.11.6]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.1 (default, Aug 19 2021, 02:58:42) [GCC 10.2.0]
jinja version = 3.0.2
libyaml = True
```
##### COLLECTION VERSION
```
# /usr/lib/python3.9/site-packages/ansible_collections
Collection Version
---------------- -------
community.vmware 1.15.0
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
VMware Photon OS 4.0 and vSphere 7.0U2, but this is irrelevant.
##### STEPS TO REPRODUCE
Tricky. As I've said, enabling / disabling lockdown usually works.
##### EXPECTED RESULTS
A failure.
##### ACTUAL RESULTS
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NameError: name 'vim' is not defined
```
</issue>
<code>
[start of plugins/modules/vmware_host_lockdown.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
5 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
6
7 from __future__ import absolute_import, division, print_function
8 __metaclass__ = type
9
10
11 DOCUMENTATION = r'''
12 ---
13 module: vmware_host_lockdown
14 short_description: Manage administrator permission for the local administrative account for the ESXi host
15 description:
16 - This module can be used to manage administrator permission for the local administrative account for the host when ESXi hostname is given.
17 - All parameters and VMware objects values are case sensitive.
18 - This module is destructive as administrator permission are managed using APIs used, please read options carefully and proceed.
19 - Please specify C(hostname) as vCenter IP or hostname only, as lockdown operations are not possible from standalone ESXi server.
20 author:
21 - Abhijeet Kasurde (@Akasurde)
22 notes:
23 - Tested on vSphere 6.5
24 requirements:
25 - python >= 2.6
26 - PyVmomi
27 options:
28 cluster_name:
29 description:
30 - Name of cluster.
31 - All host systems from given cluster used to manage lockdown.
32 - Required parameter, if C(esxi_hostname) is not set.
33 type: str
34 esxi_hostname:
35 description:
36 - List of ESXi hostname to manage lockdown.
37 - Required parameter, if C(cluster_name) is not set.
38 - See examples for specifications.
39 type: list
40 elements: str
41 state:
42 description:
43 - State of hosts system
44 - If set to C(present), all host systems will be set in lockdown mode.
45 - If host system is already in lockdown mode and set to C(present), no action will be taken.
46 - If set to C(absent), all host systems will be removed from lockdown mode.
47 - If host system is already out of lockdown mode and set to C(absent), no action will be taken.
48 default: present
49 choices: [ present, absent ]
50 type: str
51 extends_documentation_fragment:
52 - community.vmware.vmware.documentation
53
54 '''
55
56 EXAMPLES = r'''
57 - name: Enter host system into lockdown mode
58 community.vmware.vmware_host_lockdown:
59 hostname: '{{ vcenter_hostname }}'
60 username: '{{ vcenter_username }}'
61 password: '{{ vcenter_password }}'
62 esxi_hostname: '{{ esxi_hostname }}'
63 state: present
64 delegate_to: localhost
65
66 - name: Exit host systems from lockdown mode
67 community.vmware.vmware_host_lockdown:
68 hostname: '{{ vcenter_hostname }}'
69 username: '{{ vcenter_username }}'
70 password: '{{ vcenter_password }}'
71 esxi_hostname: '{{ esxi_hostname }}'
72 state: absent
73 delegate_to: localhost
74
75 - name: Enter host systems into lockdown mode
76 community.vmware.vmware_host_lockdown:
77 hostname: '{{ vcenter_hostname }}'
78 username: '{{ vcenter_username }}'
79 password: '{{ vcenter_password }}'
80 esxi_hostname:
81 - '{{ esxi_hostname_1 }}'
82 - '{{ esxi_hostname_2 }}'
83 state: present
84 delegate_to: localhost
85
86 - name: Exit host systems from lockdown mode
87 community.vmware.vmware_host_lockdown:
88 hostname: '{{ vcenter_hostname }}'
89 username: '{{ vcenter_username }}'
90 password: '{{ vcenter_password }}'
91 esxi_hostname:
92 - '{{ esxi_hostname_1 }}'
93 - '{{ esxi_hostname_2 }}'
94 state: absent
95 delegate_to: localhost
96
97 - name: Enter all host system from cluster into lockdown mode
98 community.vmware.vmware_host_lockdown:
99 hostname: '{{ vcenter_hostname }}'
100 username: '{{ vcenter_username }}'
101 password: '{{ vcenter_password }}'
102 cluster_name: '{{ cluster_name }}'
103 state: present
104 delegate_to: localhost
105 '''
106
107 RETURN = r'''
108 results:
109 description: metadata about state of Host system lock down
110 returned: always
111 type: dict
112 sample: {
113 "host_lockdown_state": {
114 "DC0_C0": {
115 "current_state": "present",
116 "previous_state": "absent",
117 "desired_state": "present",
118 },
119 }
120 }
121 '''
122
123 try:
124 from pyvmomi import vim
125 except ImportError:
126 pass
127
128 from ansible.module_utils.basic import AnsibleModule
129 from ansible_collections.community.vmware.plugins.module_utils.vmware import vmware_argument_spec, PyVmomi
130 from ansible.module_utils._text import to_native
131
132
133 class VmwareLockdownManager(PyVmomi):
134 def __init__(self, module):
135 super(VmwareLockdownManager, self).__init__(module)
136 if not self.is_vcenter():
137 self.module.fail_json(msg="Lockdown operations are performed from vCenter only. "
138 "hostname %s is an ESXi server. Please specify hostname "
139 "as vCenter server." % self.module.params['hostname'])
140 cluster_name = self.params.get('cluster_name', None)
141 esxi_host_name = self.params.get('esxi_hostname', None)
142 self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
143
144 def ensure(self):
145 """
146 Function to manage internal state management
147 """
148 results = dict(changed=False, host_lockdown_state=dict())
149 change_list = []
150 desired_state = self.params.get('state')
151 for host in self.hosts:
152 results['host_lockdown_state'][host.name] = dict(current_state='',
153 desired_state=desired_state,
154 previous_state=''
155 )
156 changed = False
157 try:
158 if host.config.adminDisabled:
159 results['host_lockdown_state'][host.name]['previous_state'] = 'present'
160 if desired_state == 'absent':
161 if not self.module.check_mode:
162 host.ExitLockdownMode()
163 results['host_lockdown_state'][host.name]['current_state'] = 'absent'
164 changed = True
165 else:
166 results['host_lockdown_state'][host.name]['current_state'] = 'present'
167 elif not host.config.adminDisabled:
168 results['host_lockdown_state'][host.name]['previous_state'] = 'absent'
169 if desired_state == 'present':
170 if not self.module.check_mode:
171 host.EnterLockdownMode()
172 results['host_lockdown_state'][host.name]['current_state'] = 'present'
173 changed = True
174 else:
175 results['host_lockdown_state'][host.name]['current_state'] = 'absent'
176 except vim.fault.HostConfigFault as host_config_fault:
177 self.module.fail_json(msg="Failed to manage lockdown mode for esxi"
178 " hostname %s : %s" % (host.name, to_native(host_config_fault.msg)))
179 except vim.fault.AdminDisabled as admin_disabled:
180 self.module.fail_json(msg="Failed to manage lockdown mode as administrator "
181 "permission has been disabled for "
182 "esxi hostname %s : %s" % (host.name, to_native(admin_disabled.msg)))
183 except Exception as generic_exception:
184 self.module.fail_json(msg="Failed to manage lockdown mode due to generic exception for esxi "
185 "hostname %s : %s" % (host.name, to_native(generic_exception)))
186 change_list.append(changed)
187
188 if any(change_list):
189 results['changed'] = True
190
191 self.module.exit_json(**results)
192
193
194 def main():
195 argument_spec = vmware_argument_spec()
196 argument_spec.update(
197 cluster_name=dict(type='str', required=False),
198 esxi_hostname=dict(type='list', required=False, elements='str'),
199 state=dict(type='str', default='present', choices=['present', 'absent'], required=False),
200 )
201
202 module = AnsibleModule(
203 argument_spec=argument_spec,
204 supports_check_mode=True,
205 required_one_of=[
206 ['cluster_name', 'esxi_hostname'],
207 ]
208 )
209
210 vmware_lockdown_mgr = VmwareLockdownManager(module)
211 vmware_lockdown_mgr.ensure()
212
213
214 if __name__ == "__main__":
215 main()
216
[end of plugins/modules/vmware_host_lockdown.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/modules/vmware_host_lockdown.py b/plugins/modules/vmware_host_lockdown.py
--- a/plugins/modules/vmware_host_lockdown.py
+++ b/plugins/modules/vmware_host_lockdown.py
@@ -121,9 +121,10 @@
'''
try:
- from pyvmomi import vim
+ from pyVmomi import vim
+ HAS_PYVMOMI = True
except ImportError:
- pass
+ HAS_PYVMOMI = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.vmware.plugins.module_utils.vmware import vmware_argument_spec, PyVmomi
@@ -207,6 +208,9 @@
]
)
+ if not HAS_PYVMOMI:
+ module.fail_json(msg='pyvmomi required for this module')
+
vmware_lockdown_mgr = VmwareLockdownManager(module)
vmware_lockdown_mgr.ensure()
|
{"golden_diff": "diff --git a/plugins/modules/vmware_host_lockdown.py b/plugins/modules/vmware_host_lockdown.py\n--- a/plugins/modules/vmware_host_lockdown.py\n+++ b/plugins/modules/vmware_host_lockdown.py\n@@ -121,9 +121,10 @@\n '''\n \n try:\n- from pyvmomi import vim\n+ from pyVmomi import vim\n+ HAS_PYVMOMI = True\n except ImportError:\n- pass\n+ HAS_PYVMOMI = False\n \n from ansible.module_utils.basic import AnsibleModule\n from ansible_collections.community.vmware.plugins.module_utils.vmware import vmware_argument_spec, PyVmomi\n@@ -207,6 +208,9 @@\n ]\n )\n \n+ if not HAS_PYVMOMI:\n+ module.fail_json(msg='pyvmomi required for this module')\n+\n vmware_lockdown_mgr = VmwareLockdownManager(module)\n vmware_lockdown_mgr.ensure()\n", "issue": "vmware_host_lockdown crashes on failure\n##### SUMMARY\r\nToday, I wanted to enable lockdown mode on a host. This failed, although I didn't find out yet why. But that's not important. The bug is that the module includes `vim` from `pyvmomi` instead of `pyVmomi` and doesn't check that this works:\r\n\r\nhttps://github.com/ansible-collections/community.vmware/blob/f418bdaa6a678c09b6fb9115d927d8c44d50060f/plugins/modules/vmware_host_lockdown.py#L123-L126\r\n\r\nI think nobody ran into this issue yet because enabling or disabling lockdown seldom fails (in my experience) and `vim` is only used in this case:\r\n\r\nhttps://github.com/ansible-collections/community.vmware/blob/f418bdaa6a678c09b6fb9115d927d8c44d50060f/plugins/modules/vmware_host_lockdown.py#L176-L182\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nvmware_host_lockdown\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible [core 2.11.6] \r\n config file = None\r\n configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.9/site-packages/ansible\r\n ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.9.1 (default, Aug 19 2021, 02:58:42) [GCC 10.2.0]\r\n jinja version = 3.0.2\r\n libyaml = True\r\n```\r\n\r\n##### COLLECTION VERSION\r\n```\r\n# /usr/lib/python3.9/site-packages/ansible_collections\r\nCollection Version\r\n---------------- -------\r\ncommunity.vmware 1.15.0\r\n```\r\n\r\n##### CONFIGURATION\r\n```\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nVMware Photon OS 4.0 and vSphere 7.0U2, but this is irrelevant.\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\nTricky. As I've said, enabling / disabling lockdown usually works.\r\n\r\n##### EXPECTED RESULTS\r\nA failure.\r\n\r\n\r\n##### ACTUAL RESULTS\r\n```\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: NameError: name 'vim' is not defined\r\n```\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: vmware_host_lockdown\nshort_description: Manage administrator permission for the local administrative account for the ESXi host\ndescription:\n- This module can be used to manage administrator permission for the local administrative account for the host when ESXi hostname is given.\n- All parameters and VMware objects values are case sensitive.\n- This module is destructive as administrator permission are managed using APIs used, please read options carefully and proceed.\n- Please specify C(hostname) as vCenter IP or hostname only, as lockdown operations are not possible from standalone ESXi server.\nauthor:\n- Abhijeet Kasurde (@Akasurde)\nnotes:\n- Tested on vSphere 6.5\nrequirements:\n- python >= 2.6\n- PyVmomi\noptions:\n cluster_name:\n description:\n - Name of cluster.\n - All host systems from given cluster used to manage lockdown.\n - Required parameter, if C(esxi_hostname) is not set.\n type: str\n esxi_hostname:\n description:\n - List of ESXi hostname to manage lockdown.\n - Required parameter, if C(cluster_name) is not set.\n - See examples for specifications.\n type: list\n elements: str\n state:\n description:\n - State of hosts system\n - If set to C(present), all host systems will be set in lockdown mode.\n - If host system is already in lockdown mode and set to C(present), no action will be taken.\n - If set to C(absent), all host systems will be removed from lockdown mode.\n - If host system is already out of lockdown mode and set to C(absent), no action will be taken.\n default: present\n choices: [ present, absent ]\n type: str\nextends_documentation_fragment:\n- community.vmware.vmware.documentation\n\n'''\n\nEXAMPLES = r'''\n- name: Enter host system into lockdown mode\n community.vmware.vmware_host_lockdown:\n hostname: '{{ vcenter_hostname }}'\n username: '{{ vcenter_username }}'\n password: '{{ vcenter_password }}'\n esxi_hostname: '{{ esxi_hostname }}'\n state: present\n delegate_to: localhost\n\n- name: Exit host systems from lockdown mode\n community.vmware.vmware_host_lockdown:\n hostname: '{{ vcenter_hostname }}'\n username: '{{ vcenter_username }}'\n password: '{{ vcenter_password }}'\n esxi_hostname: '{{ esxi_hostname }}'\n state: absent\n delegate_to: localhost\n\n- name: Enter host systems into lockdown mode\n community.vmware.vmware_host_lockdown:\n hostname: '{{ vcenter_hostname }}'\n username: '{{ vcenter_username }}'\n password: '{{ vcenter_password }}'\n esxi_hostname:\n - '{{ esxi_hostname_1 }}'\n - '{{ esxi_hostname_2 }}'\n state: present\n delegate_to: localhost\n\n- name: Exit host systems from lockdown mode\n community.vmware.vmware_host_lockdown:\n hostname: '{{ vcenter_hostname }}'\n username: '{{ vcenter_username }}'\n password: '{{ vcenter_password }}'\n esxi_hostname:\n - '{{ esxi_hostname_1 }}'\n - '{{ esxi_hostname_2 }}'\n state: absent\n delegate_to: localhost\n\n- name: Enter all host system from cluster into lockdown mode\n community.vmware.vmware_host_lockdown:\n hostname: '{{ vcenter_hostname }}'\n username: '{{ vcenter_username }}'\n password: '{{ vcenter_password }}'\n cluster_name: '{{ cluster_name }}'\n state: 
present\n delegate_to: localhost\n'''\n\nRETURN = r'''\nresults:\n description: metadata about state of Host system lock down\n returned: always\n type: dict\n sample: {\n \"host_lockdown_state\": {\n \"DC0_C0\": {\n \"current_state\": \"present\",\n \"previous_state\": \"absent\",\n \"desired_state\": \"present\",\n },\n }\n }\n'''\n\ntry:\n from pyvmomi import vim\nexcept ImportError:\n pass\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.vmware.plugins.module_utils.vmware import vmware_argument_spec, PyVmomi\nfrom ansible.module_utils._text import to_native\n\n\nclass VmwareLockdownManager(PyVmomi):\n def __init__(self, module):\n super(VmwareLockdownManager, self).__init__(module)\n if not self.is_vcenter():\n self.module.fail_json(msg=\"Lockdown operations are performed from vCenter only. \"\n \"hostname %s is an ESXi server. Please specify hostname \"\n \"as vCenter server.\" % self.module.params['hostname'])\n cluster_name = self.params.get('cluster_name', None)\n esxi_host_name = self.params.get('esxi_hostname', None)\n self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)\n\n def ensure(self):\n \"\"\"\n Function to manage internal state management\n \"\"\"\n results = dict(changed=False, host_lockdown_state=dict())\n change_list = []\n desired_state = self.params.get('state')\n for host in self.hosts:\n results['host_lockdown_state'][host.name] = dict(current_state='',\n desired_state=desired_state,\n previous_state=''\n )\n changed = False\n try:\n if host.config.adminDisabled:\n results['host_lockdown_state'][host.name]['previous_state'] = 'present'\n if desired_state == 'absent':\n if not self.module.check_mode:\n host.ExitLockdownMode()\n results['host_lockdown_state'][host.name]['current_state'] = 'absent'\n changed = True\n else:\n results['host_lockdown_state'][host.name]['current_state'] = 'present'\n elif not host.config.adminDisabled:\n results['host_lockdown_state'][host.name]['previous_state'] = 'absent'\n if desired_state == 'present':\n if not self.module.check_mode:\n host.EnterLockdownMode()\n results['host_lockdown_state'][host.name]['current_state'] = 'present'\n changed = True\n else:\n results['host_lockdown_state'][host.name]['current_state'] = 'absent'\n except vim.fault.HostConfigFault as host_config_fault:\n self.module.fail_json(msg=\"Failed to manage lockdown mode for esxi\"\n \" hostname %s : %s\" % (host.name, to_native(host_config_fault.msg)))\n except vim.fault.AdminDisabled as admin_disabled:\n self.module.fail_json(msg=\"Failed to manage lockdown mode as administrator \"\n \"permission has been disabled for \"\n \"esxi hostname %s : %s\" % (host.name, to_native(admin_disabled.msg)))\n except Exception as generic_exception:\n self.module.fail_json(msg=\"Failed to manage lockdown mode due to generic exception for esxi \"\n \"hostname %s : %s\" % (host.name, to_native(generic_exception)))\n change_list.append(changed)\n\n if any(change_list):\n results['changed'] = True\n\n self.module.exit_json(**results)\n\n\ndef main():\n argument_spec = vmware_argument_spec()\n argument_spec.update(\n cluster_name=dict(type='str', required=False),\n esxi_hostname=dict(type='list', required=False, elements='str'),\n state=dict(type='str', default='present', choices=['present', 'absent'], required=False),\n )\n\n module = AnsibleModule(\n argument_spec=argument_spec,\n supports_check_mode=True,\n required_one_of=[\n ['cluster_name', 'esxi_hostname'],\n ]\n )\n\n vmware_lockdown_mgr = 
VmwareLockdownManager(module)\n vmware_lockdown_mgr.ensure()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "plugins/modules/vmware_host_lockdown.py"}]}
| 3,419 | 211 |
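The accepted patch in the record above hinges on a common optional-dependency pattern: attempt the third-party import under try/except, remember whether it succeeded, and fail the module cleanly at startup instead of letting a later `NameError` escape from an exception handler. Below is a minimal, generic sketch of that pattern, not the module's actual code; `somelib`, `run_module`, and `fail_json` are hypothetical names used only for illustration.

```
# Sketch of an optional-dependency guard; "somelib" is a hypothetical package.
try:
    import somelib  # third-party dependency that may be missing
    HAS_SOMELIB = True
except ImportError:
    HAS_SOMELIB = False


def run_module(fail_json):
    """Entry point that refuses to run when the dependency is absent."""
    if not HAS_SOMELIB:
        # Fail early with a clear message instead of crashing later
        # inside an exception handler, as described in the issue.
        fail_json(msg="somelib is required for this module")
        return
    # ... normal work that uses somelib would go here ...


if __name__ == "__main__":
    run_module(fail_json=lambda msg: print("FAILED:", msg))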
gh_patches_debug_1575
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-6086
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Site.hostname should be lowercase to prevent duplicates
### Issue Summary
Wagtail `Site.hostname` accepts duplicate hostnames.
It is possible to create `foo.com:80` and `Foo.com:80`, but these are technically the same host and therefore duplicates.
Site.hostname is a case-sensitive field, but hostnames and domain names are case-insensitive (https://tools.ietf.org/html/rfc4343).
`foo.com` and `Foo.com` should be treated as the same value and raise a validation error.
### Steps to Reproduce
1. Start a new project with `wagtail start myproject`
2. Go to Settings > Sites
3. Add site "foo.com" port 80
4. Add site "Foo.com" port 80
I expect a validation error: `Site with this Hostname and Port already exists.`
I added a clean method on wagtail.core.models.Site that makes the hostname lowercase on save.
```
def clean(self):
self.hostname = self.hostname.lower()
```
The form raises an error now, but the error isn't displayed in the templates! 🐞
wagtail/admin/templates/wagtailadmin/generic/create.html
wagtail/admin/templates/wagtailadmin/generic/edit.html
These templates need `{{ form.non_field_errors }}`.
<img width="750" alt="Screenshot 2020-05-29 at 01 34 54" src="https://user-images.githubusercontent.com/1969342/83204661-a7060f00-a14c-11ea-8152-8568c0acef83.png">
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
</issue>
<code>
[start of wagtail/models/sites.py]
1 from collections import namedtuple
2
3 from django.apps import apps
4 from django.conf import settings
5 from django.core.cache import cache
6 from django.core.exceptions import ValidationError
7 from django.db import models
8 from django.db.models import Case, IntegerField, Q, When
9 from django.db.models.functions import Lower
10 from django.http.request import split_domain_port
11 from django.utils.translation import gettext_lazy as _
12
13 MATCH_HOSTNAME_PORT = 0
14 MATCH_HOSTNAME_DEFAULT = 1
15 MATCH_DEFAULT = 2
16 MATCH_HOSTNAME = 3
17
18
19 def get_site_for_hostname(hostname, port):
20 """Return the wagtailcore.Site object for the given hostname and port."""
21 Site = apps.get_model("wagtailcore.Site")
22
23 sites = list(
24 Site.objects.annotate(
25 match=Case(
26 # annotate the results by best choice descending
27 # put exact hostname+port match first
28 When(hostname=hostname, port=port, then=MATCH_HOSTNAME_PORT),
29 # then put hostname+default (better than just hostname or just default)
30 When(
31 hostname=hostname, is_default_site=True, then=MATCH_HOSTNAME_DEFAULT
32 ),
33 # then match default with different hostname. there is only ever
34 # one default, so order it above (possibly multiple) hostname
35 # matches so we can use sites[0] below to access it
36 When(is_default_site=True, then=MATCH_DEFAULT),
37 # because of the filter below, if it's not default then its a hostname match
38 default=MATCH_HOSTNAME,
39 output_field=IntegerField(),
40 )
41 )
42 .filter(Q(hostname=hostname) | Q(is_default_site=True))
43 .order_by("match")
44 .select_related("root_page")
45 )
46
47 if sites:
48 # if there's a unique match or hostname (with port or default) match
49 if len(sites) == 1 or sites[0].match in (
50 MATCH_HOSTNAME_PORT,
51 MATCH_HOSTNAME_DEFAULT,
52 ):
53 return sites[0]
54
55 # if there is a default match with a different hostname, see if
56 # there are many hostname matches. if only 1 then use that instead
57 # otherwise we use the default
58 if sites[0].match == MATCH_DEFAULT:
59 return sites[len(sites) == 2]
60
61 raise Site.DoesNotExist()
62
63
64 class SiteManager(models.Manager):
65 def get_queryset(self):
66 return super(SiteManager, self).get_queryset().order_by(Lower("hostname"))
67
68 def get_by_natural_key(self, hostname, port):
69 return self.get(hostname=hostname, port=port)
70
71
72 SiteRootPath = namedtuple("SiteRootPath", "site_id root_path root_url language_code")
73
74 SITE_ROOT_PATHS_CACHE_KEY = "wagtail_site_root_paths"
75 # Increase the cache version whenever the structure SiteRootPath tuple changes
76 SITE_ROOT_PATHS_CACHE_VERSION = 2
77
78
79 class Site(models.Model):
80 hostname = models.CharField(
81 verbose_name=_("hostname"), max_length=255, db_index=True
82 )
83 port = models.IntegerField(
84 verbose_name=_("port"),
85 default=80,
86 help_text=_(
87 "Set this to something other than 80 if you need a specific port number to appear in URLs"
88 " (e.g. development on port 8000). Does not affect request handling (so port forwarding still works)."
89 ),
90 )
91 site_name = models.CharField(
92 verbose_name=_("site name"),
93 max_length=255,
94 blank=True,
95 help_text=_("Human-readable name for the site."),
96 )
97 root_page = models.ForeignKey(
98 "Page",
99 verbose_name=_("root page"),
100 related_name="sites_rooted_here",
101 on_delete=models.CASCADE,
102 )
103 is_default_site = models.BooleanField(
104 verbose_name=_("is default site"),
105 default=False,
106 help_text=_(
107 "If true, this site will handle requests for all other hostnames that do not have a site entry of their own"
108 ),
109 )
110
111 objects = SiteManager()
112
113 class Meta:
114 unique_together = ("hostname", "port")
115 verbose_name = _("site")
116 verbose_name_plural = _("sites")
117
118 def natural_key(self):
119 return (self.hostname, self.port)
120
121 def __str__(self):
122 default_suffix = " [{}]".format(_("default"))
123 if self.site_name:
124 return self.site_name + (default_suffix if self.is_default_site else "")
125 else:
126 return (
127 self.hostname
128 + ("" if self.port == 80 else (":%d" % self.port))
129 + (default_suffix if self.is_default_site else "")
130 )
131
132 @staticmethod
133 def find_for_request(request):
134 """
135 Find the site object responsible for responding to this HTTP
136 request object. Try:
137
138 * unique hostname first
139 * then hostname and port
140 * if there is no matching hostname at all, or no matching
141 hostname:port combination, fall back to the unique default site,
142 or raise an exception
143
144 NB this means that high-numbered ports on an extant hostname may
145 still be routed to a different hostname which is set as the default
146
147 The site will be cached via request._wagtail_site
148 """
149
150 if request is None:
151 return None
152
153 if not hasattr(request, "_wagtail_site"):
154 site = Site._find_for_request(request)
155 setattr(request, "_wagtail_site", site)
156 return request._wagtail_site
157
158 @staticmethod
159 def _find_for_request(request):
160 hostname = split_domain_port(request.get_host())[0]
161 port = request.get_port()
162 site = None
163 try:
164 site = get_site_for_hostname(hostname, port)
165 except Site.DoesNotExist:
166 pass
167 # copy old SiteMiddleware behaviour
168 return site
169
170 @property
171 def root_url(self):
172 if self.port == 80:
173 return "http://%s" % self.hostname
174 elif self.port == 443:
175 return "https://%s" % self.hostname
176 else:
177 return "http://%s:%d" % (self.hostname, self.port)
178
179 def clean_fields(self, exclude=None):
180 super().clean_fields(exclude)
181 # Only one site can have the is_default_site flag set
182 try:
183 default = Site.objects.get(is_default_site=True)
184 except Site.DoesNotExist:
185 pass
186 except Site.MultipleObjectsReturned:
187 raise
188 else:
189 if self.is_default_site and self.pk != default.pk:
190 raise ValidationError(
191 {
192 "is_default_site": [
193 _(
194 "%(hostname)s is already configured as the default site."
195 " You must unset that before you can save this site as default."
196 )
197 % {"hostname": default.hostname}
198 ]
199 }
200 )
201
202 @staticmethod
203 def get_site_root_paths():
204 """
205 Return a list of `SiteRootPath` instances, most specific path
206 first - used to translate url_paths into actual URLs with hostnames
207
208 Each root path is an instance of the `SiteRootPath` named tuple,
209 and have the following attributes:
210
211 - `site_id` - The ID of the Site record
212 - `root_path` - The internal URL path of the site's home page (for example '/home/')
213 - `root_url` - The scheme/domain name of the site (for example 'https://www.example.com/')
214 - `language_code` - The language code of the site (for example 'en')
215 """
216 result = cache.get(
217 SITE_ROOT_PATHS_CACHE_KEY, version=SITE_ROOT_PATHS_CACHE_VERSION
218 )
219
220 if result is None:
221 result = []
222
223 for site in Site.objects.select_related(
224 "root_page", "root_page__locale"
225 ).order_by("-root_page__url_path", "-is_default_site", "hostname"):
226 if getattr(settings, "WAGTAIL_I18N_ENABLED", False):
227 result.extend(
228 [
229 SiteRootPath(
230 site.id,
231 root_page.url_path,
232 site.root_url,
233 root_page.locale.language_code,
234 )
235 for root_page in site.root_page.get_translations(
236 inclusive=True
237 ).select_related("locale")
238 ]
239 )
240 else:
241 result.append(
242 SiteRootPath(
243 site.id,
244 site.root_page.url_path,
245 site.root_url,
246 site.root_page.locale.language_code,
247 )
248 )
249
250 cache.set(
251 SITE_ROOT_PATHS_CACHE_KEY,
252 result,
253 3600,
254 version=SITE_ROOT_PATHS_CACHE_VERSION,
255 )
256
257 else:
258 # Convert the cache result to a list of SiteRootPath tuples, as some
259 # cache backends (e.g. Redis) don't support named tuples.
260 result = [SiteRootPath(*result) for result in result]
261
262 return result
263
264 @staticmethod
265 def clear_site_root_paths_cache():
266 cache.delete(SITE_ROOT_PATHS_CACHE_KEY, version=SITE_ROOT_PATHS_CACHE_VERSION)
267
[end of wagtail/models/sites.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/models/sites.py b/wagtail/models/sites.py
--- a/wagtail/models/sites.py
+++ b/wagtail/models/sites.py
@@ -129,6 +129,9 @@
+ (default_suffix if self.is_default_site else "")
)
+ def clean(self):
+ self.hostname = self.hostname.lower()
+
@staticmethod
def find_for_request(request):
"""
|
{"golden_diff": "diff --git a/wagtail/models/sites.py b/wagtail/models/sites.py\n--- a/wagtail/models/sites.py\n+++ b/wagtail/models/sites.py\n@@ -129,6 +129,9 @@\n + (default_suffix if self.is_default_site else \"\")\n )\n \n+ def clean(self):\n+ self.hostname = self.hostname.lower()\n+\n @staticmethod\n def find_for_request(request):\n \"\"\"\n", "issue": "Site.hostname should be lowercase to prevent duplicates\n### Issue Summary\r\n\r\nWagtail `Site.hostname` accepts duplicate hostnames. \r\nIt possible to create `foo.com:80` and `Foo.com:80` but these are technically the same thus duplicates.\r\n\r\nSite.hostname is case sensitive field. Hostnames and domain names are a case insensitive. https://tools.ietf.org/html/rfc4343 \r\n\r\n`foo.com` and `Foo.com` should be treated as the same value and raise a validation error.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Start a new project with `wagtail start myproject`\r\n2. Go to Settings > Sites\r\n3. Add site \"foo.com\" port 80\r\n4. Add site \"Foo.com\" port 80\r\n\r\nI expect a validation error: `Site with this Hostname and Port already exists.`\r\n\r\nI added a clean method on wagtail.core.models.Site that makes the hostname lowercase on save.\r\n```\r\n def clean(self):\r\n self.hostname = self.hostname.lower()\r\n```\r\n\r\nThe form raises an error now, but the error isn't displayed in the templates! \ud83d\udc1e\r\nwagtail/admin/templates/wagtailadmin/generic/create.html\r\nwagtail/admin/templates/wagtailadmin/generic/edit.html\r\n\r\nThese templates need `{{ form.non_field_errors }}`.\r\n\r\n<img width=\"750\" alt=\"Screenshot 2020-05-29 at 01 34 54\" src=\"https://user-images.githubusercontent.com/1969342/83204661-a7060f00-a14c-11ea-8152-8568c0acef83.png\">\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\n", "before_files": [{"content": "from collections import namedtuple\n\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Case, IntegerField, Q, When\nfrom django.db.models.functions import Lower\nfrom django.http.request import split_domain_port\nfrom django.utils.translation import gettext_lazy as _\n\nMATCH_HOSTNAME_PORT = 0\nMATCH_HOSTNAME_DEFAULT = 1\nMATCH_DEFAULT = 2\nMATCH_HOSTNAME = 3\n\n\ndef get_site_for_hostname(hostname, port):\n \"\"\"Return the wagtailcore.Site object for the given hostname and port.\"\"\"\n Site = apps.get_model(\"wagtailcore.Site\")\n\n sites = list(\n Site.objects.annotate(\n match=Case(\n # annotate the results by best choice descending\n # put exact hostname+port match first\n When(hostname=hostname, port=port, then=MATCH_HOSTNAME_PORT),\n # then put hostname+default (better than just hostname or just default)\n When(\n hostname=hostname, is_default_site=True, then=MATCH_HOSTNAME_DEFAULT\n ),\n # then match default with different hostname. 
there is only ever\n # one default, so order it above (possibly multiple) hostname\n # matches so we can use sites[0] below to access it\n When(is_default_site=True, then=MATCH_DEFAULT),\n # because of the filter below, if it's not default then its a hostname match\n default=MATCH_HOSTNAME,\n output_field=IntegerField(),\n )\n )\n .filter(Q(hostname=hostname) | Q(is_default_site=True))\n .order_by(\"match\")\n .select_related(\"root_page\")\n )\n\n if sites:\n # if there's a unique match or hostname (with port or default) match\n if len(sites) == 1 or sites[0].match in (\n MATCH_HOSTNAME_PORT,\n MATCH_HOSTNAME_DEFAULT,\n ):\n return sites[0]\n\n # if there is a default match with a different hostname, see if\n # there are many hostname matches. if only 1 then use that instead\n # otherwise we use the default\n if sites[0].match == MATCH_DEFAULT:\n return sites[len(sites) == 2]\n\n raise Site.DoesNotExist()\n\n\nclass SiteManager(models.Manager):\n def get_queryset(self):\n return super(SiteManager, self).get_queryset().order_by(Lower(\"hostname\"))\n\n def get_by_natural_key(self, hostname, port):\n return self.get(hostname=hostname, port=port)\n\n\nSiteRootPath = namedtuple(\"SiteRootPath\", \"site_id root_path root_url language_code\")\n\nSITE_ROOT_PATHS_CACHE_KEY = \"wagtail_site_root_paths\"\n# Increase the cache version whenever the structure SiteRootPath tuple changes\nSITE_ROOT_PATHS_CACHE_VERSION = 2\n\n\nclass Site(models.Model):\n hostname = models.CharField(\n verbose_name=_(\"hostname\"), max_length=255, db_index=True\n )\n port = models.IntegerField(\n verbose_name=_(\"port\"),\n default=80,\n help_text=_(\n \"Set this to something other than 80 if you need a specific port number to appear in URLs\"\n \" (e.g. development on port 8000). Does not affect request handling (so port forwarding still works).\"\n ),\n )\n site_name = models.CharField(\n verbose_name=_(\"site name\"),\n max_length=255,\n blank=True,\n help_text=_(\"Human-readable name for the site.\"),\n )\n root_page = models.ForeignKey(\n \"Page\",\n verbose_name=_(\"root page\"),\n related_name=\"sites_rooted_here\",\n on_delete=models.CASCADE,\n )\n is_default_site = models.BooleanField(\n verbose_name=_(\"is default site\"),\n default=False,\n help_text=_(\n \"If true, this site will handle requests for all other hostnames that do not have a site entry of their own\"\n ),\n )\n\n objects = SiteManager()\n\n class Meta:\n unique_together = (\"hostname\", \"port\")\n verbose_name = _(\"site\")\n verbose_name_plural = _(\"sites\")\n\n def natural_key(self):\n return (self.hostname, self.port)\n\n def __str__(self):\n default_suffix = \" [{}]\".format(_(\"default\"))\n if self.site_name:\n return self.site_name + (default_suffix if self.is_default_site else \"\")\n else:\n return (\n self.hostname\n + (\"\" if self.port == 80 else (\":%d\" % self.port))\n + (default_suffix if self.is_default_site else \"\")\n )\n\n @staticmethod\n def find_for_request(request):\n \"\"\"\n Find the site object responsible for responding to this HTTP\n request object. 
Try:\n\n * unique hostname first\n * then hostname and port\n * if there is no matching hostname at all, or no matching\n hostname:port combination, fall back to the unique default site,\n or raise an exception\n\n NB this means that high-numbered ports on an extant hostname may\n still be routed to a different hostname which is set as the default\n\n The site will be cached via request._wagtail_site\n \"\"\"\n\n if request is None:\n return None\n\n if not hasattr(request, \"_wagtail_site\"):\n site = Site._find_for_request(request)\n setattr(request, \"_wagtail_site\", site)\n return request._wagtail_site\n\n @staticmethod\n def _find_for_request(request):\n hostname = split_domain_port(request.get_host())[0]\n port = request.get_port()\n site = None\n try:\n site = get_site_for_hostname(hostname, port)\n except Site.DoesNotExist:\n pass\n # copy old SiteMiddleware behaviour\n return site\n\n @property\n def root_url(self):\n if self.port == 80:\n return \"http://%s\" % self.hostname\n elif self.port == 443:\n return \"https://%s\" % self.hostname\n else:\n return \"http://%s:%d\" % (self.hostname, self.port)\n\n def clean_fields(self, exclude=None):\n super().clean_fields(exclude)\n # Only one site can have the is_default_site flag set\n try:\n default = Site.objects.get(is_default_site=True)\n except Site.DoesNotExist:\n pass\n except Site.MultipleObjectsReturned:\n raise\n else:\n if self.is_default_site and self.pk != default.pk:\n raise ValidationError(\n {\n \"is_default_site\": [\n _(\n \"%(hostname)s is already configured as the default site.\"\n \" You must unset that before you can save this site as default.\"\n )\n % {\"hostname\": default.hostname}\n ]\n }\n )\n\n @staticmethod\n def get_site_root_paths():\n \"\"\"\n Return a list of `SiteRootPath` instances, most specific path\n first - used to translate url_paths into actual URLs with hostnames\n\n Each root path is an instance of the `SiteRootPath` named tuple,\n and have the following attributes:\n\n - `site_id` - The ID of the Site record\n - `root_path` - The internal URL path of the site's home page (for example '/home/')\n - `root_url` - The scheme/domain name of the site (for example 'https://www.example.com/')\n - `language_code` - The language code of the site (for example 'en')\n \"\"\"\n result = cache.get(\n SITE_ROOT_PATHS_CACHE_KEY, version=SITE_ROOT_PATHS_CACHE_VERSION\n )\n\n if result is None:\n result = []\n\n for site in Site.objects.select_related(\n \"root_page\", \"root_page__locale\"\n ).order_by(\"-root_page__url_path\", \"-is_default_site\", \"hostname\"):\n if getattr(settings, \"WAGTAIL_I18N_ENABLED\", False):\n result.extend(\n [\n SiteRootPath(\n site.id,\n root_page.url_path,\n site.root_url,\n root_page.locale.language_code,\n )\n for root_page in site.root_page.get_translations(\n inclusive=True\n ).select_related(\"locale\")\n ]\n )\n else:\n result.append(\n SiteRootPath(\n site.id,\n site.root_page.url_path,\n site.root_url,\n site.root_page.locale.language_code,\n )\n )\n\n cache.set(\n SITE_ROOT_PATHS_CACHE_KEY,\n result,\n 3600,\n version=SITE_ROOT_PATHS_CACHE_VERSION,\n )\n\n else:\n # Convert the cache result to a list of SiteRootPath tuples, as some\n # cache backends (e.g. Redis) don't support named tuples.\n result = [SiteRootPath(*result) for result in result]\n\n return result\n\n @staticmethod\n def clear_site_root_paths_cache():\n cache.delete(SITE_ROOT_PATHS_CACHE_KEY, version=SITE_ROOT_PATHS_CACHE_VERSION)\n", "path": "wagtail/models/sites.py"}]}
| 3,556 | 94 |
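The fix in this record comes down to normalising a user-supplied hostname before a uniqueness check. Below is a small stand-alone sketch of the same idea in plain Python (no Django), so it runs on its own; `SiteRegistry` and `DuplicateSiteError` are made-up names, not Wagtail's API.

```
# Minimal sketch of case-insensitive hostname uniqueness.
class DuplicateSiteError(ValueError):
    pass


class SiteRegistry:
    def __init__(self):
        self._sites = set()

    def add(self, hostname, port=80):
        # Hostnames are case-insensitive (RFC 4343), so normalise first.
        key = (hostname.lower(), port)
        if key in self._sites:
            raise DuplicateSiteError(
                "Site with hostname %r and port %d already exists." % (key[0], port)
            )
        self._sites.add(key)


registry = SiteRegistry()
registry.add("foo.com", 80)
try:
    registry.add("Foo.com", 80)  # same site, different casing
except DuplicateSiteError as exc:
    print(exc)
```

Normalising at write time, as the patch's `clean()` method does, keeps a unique constraint on `(hostname, port)` meaningful without requiring a case-insensitive database index.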
gh_patches_debug_49770
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-17425
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Event migration 9.1.2 -> 10
<!--
Do you want to ask a question? Are you looking for support? The Sentry message
board is the best place for getting support: https://forum.sentry.io
-->
## Important Details
How are you running Sentry?
* [X] On-Premise docker [Version 9.1.2]
* [ ] Saas (sentry.io)
* [ ] Other [briefly describe your environment]
## Description
I followed the migration guide, along with all fixes and workarounds, and managed to get to the actual migration routine. Sentry tries to process all existing Postgres events but fails for every event:
```
An error occured while trying to instert the following event: <sentry.eventstore.models.Event object at 0x7f2f08e552d0>
.----
insert() takes at least 8 arguments (8 given)
[...]
Event migration done. Migrated 0 of 197988 events.
```
## Steps to Reproduce
1. Have a 9.1.2 onpremise setup and have event data
2. Upgrade to 10 (dev-master), run `install.sh` etc.
### What you expected to happen
The migration script succeeds and I have all event data in the new version.
### Possible Solution
Error message suggests a syntax error?
</issue>
<code>
[start of src/sentry/migrations/0024_auto_20191230_2052.py]
1 # -*- coding: utf-8 -*-
2 # Generated by Django 1.9.13 on 2019-12-30 20:52
3 from __future__ import unicode_literals, print_function
4
5 import os
6 import types
7 from datetime import timedelta, datetime
8
9 from django.db import migrations
10 from django.utils import timezone
11
12 from sentry import options
13 from sentry.eventstore.models import Event as NewEvent
14
15
16 def backfill_eventstream(apps, schema_editor):
17 """
18 Inserts Postgres events into the eventstream if there are recent events in Postgres.
19
20 This is for open source users migrating from 9.x who want to keep their events.
21 If there are no recent events in Postgres, skip the backfill.
22 """
23 from sentry import eventstore, eventstream
24 from sentry.utils.query import RangeQuerySetWrapper
25
26 Event = apps.get_model("sentry", "Event")
27 Group = apps.get_model("sentry", "Group")
28 Project = apps.get_model("sentry", "Project")
29
30 # Kill switch to skip this migration
31 skip_backfill = os.environ.get("SENTRY_SKIP_EVENTS_BACKFILL_FOR_10", False)
32
33 # Use 90 day retention if the option has not been set or set to 0
34 DEFAULT_RETENTION = 90
35 retention_days = options.get("system.event-retention-days") or DEFAULT_RETENTION
36
37 def get_events(last_days):
38 to_date = timezone.now()
39 from_date = to_date - timedelta(days=last_days)
40 return Event.objects.filter(
41 datetime__gte=from_date, datetime__lte=to_date, group_id__isnull=False
42 )
43
44 def _attach_related(_events):
45 project_ids = set()
46 group_ids = set()
47 for event in _events:
48 project_ids.add(event.project_id)
49 group_ids.add(event.group_id)
50 projects = {p.id: p for p in Project.objects.filter(id__in=project_ids)}
51 groups = {g.id: g for g in Group.objects.filter(id__in=group_ids)}
52
53 for event in _events:
54 event.project = projects.get(event.project_id)
55 event.group = groups.get(event.group_id)
56 eventstore.bind_nodes(_events, "data")
57
58 if skip_backfill:
59 print("Skipping backfill.\n")
60 return
61
62 events = get_events(retention_days)
63 count = events.count()
64
65 if count == 0:
66 print("Nothing to do, skipping migration.\n")
67 return
68
69 print("Events to process: {}\n".format(count))
70
71 processed = 0
72 for e in RangeQuerySetWrapper(events, step=100, callbacks=(_attach_related,)):
73 event = NewEvent(
74 project_id=e.project_id, event_id=e.event_id, group_id=e.group_id, data=e.data.data
75 )
76 primary_hash = event.get_primary_hash()
77 if event.project is None or event.group is None:
78 print("Skipped {} as group or project information is invalid.\n".format(event))
79 continue
80
81 try:
82 eventstream.insert(
83 group=event.group,
84 event=event,
85 is_new=False,
86 is_regression=False,
87 is_new_group_environment=False,
88 primary_hash=primary_hash,
89 skip_consume=True,
90 )
91 processed += 1
92 except Exception as error:
93 print(
94 "An error occured while trying to instert the following event: {}\n.----\n{}".format(
95 event, error
96 )
97 )
98
99 print("Event migration done. Migrated {} of {} events.\n".format(processed, count))
100
101
102 class Migration(migrations.Migration):
103 # This flag is used to mark that a migration shouldn't be automatically run in
104 # production. We set this to True for operations that we think are risky and want
105 # someone from ops to run manually and monitor.
106 # General advice is that if in doubt, mark your migration as `is_dangerous`.
107 # Some things you should always mark as dangerous:
108 # - Adding indexes to large tables. These indexes should be created concurrently,
109 # unfortunately we can't run migrations outside of a transaction until Django
110 # 1.10. So until then these should be run manually.
111 # - Large data migrations. Typically we want these to be run manually by ops so that
112 # they can be monitored. Since data migrations will now hold a transaction open
113 # this is even more important.
114 # - Adding columns to highly active tables, even ones that are NULL.
115 is_dangerous = True
116
117 dependencies = [
118 ("sentry", "0023_hide_environment_none_20191126"),
119 ]
120
121 operations = [
122 migrations.RunPython(backfill_eventstream, reverse_code=migrations.RunPython.noop),
123 ]
124
[end of src/sentry/migrations/0024_auto_20191230_2052.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/migrations/0024_auto_20191230_2052.py b/src/sentry/migrations/0024_auto_20191230_2052.py
--- a/src/sentry/migrations/0024_auto_20191230_2052.py
+++ b/src/sentry/migrations/0024_auto_20191230_2052.py
@@ -86,6 +86,8 @@
is_regression=False,
is_new_group_environment=False,
primary_hash=primary_hash,
+ received_timestamp=event.data.get("received")
+ or float(event.datetime.strftime("%s")),
skip_consume=True,
)
processed += 1
|
{"golden_diff": "diff --git a/src/sentry/migrations/0024_auto_20191230_2052.py b/src/sentry/migrations/0024_auto_20191230_2052.py\n--- a/src/sentry/migrations/0024_auto_20191230_2052.py\n+++ b/src/sentry/migrations/0024_auto_20191230_2052.py\n@@ -86,6 +86,8 @@\n is_regression=False,\n is_new_group_environment=False,\n primary_hash=primary_hash,\n+ received_timestamp=event.data.get(\"received\")\n+ or float(event.datetime.strftime(\"%s\")),\n skip_consume=True,\n )\n processed += 1\n", "issue": "Event migration 9.1.2 -> 10\n<!--\r\n\r\nDo you want to ask a question? Are you looking for support? The Sentry message\r\nboard is the best place for getting support: https://forum.sentry.io\r\n-->\r\n\r\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n* [X] On-Premise docker [Version 9.1.2]\r\n* [ ] Saas (sentry.io)\r\n* [ ] Other [briefly describe your environment]\r\n\r\n## Description\r\n\r\nI followed the migration guide, alongside all fixes and workaround and managed to get to the actual migration routine. Sentry tries to process all existing postgres events but fails to (for every event):\r\n\r\n```\r\nAn error occured while trying to instert the following event: <sentry.eventstore.models.Event object at 0x7f2f08e552d0>\r\n.----\r\ninsert() takes at least 8 arguments (8 given)\r\n[...]\r\nEvent migration done. Migrated 0 of 197988 events.\r\n```\r\n\r\n## Steps to Reproduce\r\n\r\n1. Have a 9.1.2 onpremise setup and have event data\r\n2. Upgrade to 10 (dev-master), run `install.sh` etc.\r\n\r\n### What you expected to happen\r\n\r\nMigration scripts succeeds and I have all event data in the new version.\r\n\r\n### Possible Solution\r\n\r\nError message suggests a syntax error?\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.13 on 2019-12-30 20:52\nfrom __future__ import unicode_literals, print_function\n\nimport os\nimport types\nfrom datetime import timedelta, datetime\n\nfrom django.db import migrations\nfrom django.utils import timezone\n\nfrom sentry import options\nfrom sentry.eventstore.models import Event as NewEvent\n\n\ndef backfill_eventstream(apps, schema_editor):\n \"\"\"\n Inserts Postgres events into the eventstream if there are recent events in Postgres.\n\n This is for open source users migrating from 9.x who want to keep their events.\n If there are no recent events in Postgres, skip the backfill.\n \"\"\"\n from sentry import eventstore, eventstream\n from sentry.utils.query import RangeQuerySetWrapper\n\n Event = apps.get_model(\"sentry\", \"Event\")\n Group = apps.get_model(\"sentry\", \"Group\")\n Project = apps.get_model(\"sentry\", \"Project\")\n\n # Kill switch to skip this migration\n skip_backfill = os.environ.get(\"SENTRY_SKIP_EVENTS_BACKFILL_FOR_10\", False)\n\n # Use 90 day retention if the option has not been set or set to 0\n DEFAULT_RETENTION = 90\n retention_days = options.get(\"system.event-retention-days\") or DEFAULT_RETENTION\n\n def get_events(last_days):\n to_date = timezone.now()\n from_date = to_date - timedelta(days=last_days)\n return Event.objects.filter(\n datetime__gte=from_date, datetime__lte=to_date, group_id__isnull=False\n )\n\n def _attach_related(_events):\n project_ids = set()\n group_ids = set()\n for event in _events:\n project_ids.add(event.project_id)\n group_ids.add(event.group_id)\n projects = {p.id: p for p in Project.objects.filter(id__in=project_ids)}\n groups = {g.id: g for g in Group.objects.filter(id__in=group_ids)}\n\n for event in _events:\n event.project = 
projects.get(event.project_id)\n event.group = groups.get(event.group_id)\n eventstore.bind_nodes(_events, \"data\")\n\n if skip_backfill:\n print(\"Skipping backfill.\\n\")\n return\n\n events = get_events(retention_days)\n count = events.count()\n\n if count == 0:\n print(\"Nothing to do, skipping migration.\\n\")\n return\n\n print(\"Events to process: {}\\n\".format(count))\n\n processed = 0\n for e in RangeQuerySetWrapper(events, step=100, callbacks=(_attach_related,)):\n event = NewEvent(\n project_id=e.project_id, event_id=e.event_id, group_id=e.group_id, data=e.data.data\n )\n primary_hash = event.get_primary_hash()\n if event.project is None or event.group is None:\n print(\"Skipped {} as group or project information is invalid.\\n\".format(event))\n continue\n\n try:\n eventstream.insert(\n group=event.group,\n event=event,\n is_new=False,\n is_regression=False,\n is_new_group_environment=False,\n primary_hash=primary_hash,\n skip_consume=True,\n )\n processed += 1\n except Exception as error:\n print(\n \"An error occured while trying to instert the following event: {}\\n.----\\n{}\".format(\n event, error\n )\n )\n\n print(\"Event migration done. Migrated {} of {} events.\\n\".format(processed, count))\n\n\nclass Migration(migrations.Migration):\n # This flag is used to mark that a migration shouldn't be automatically run in\n # production. We set this to True for operations that we think are risky and want\n # someone from ops to run manually and monitor.\n # General advice is that if in doubt, mark your migration as `is_dangerous`.\n # Some things you should always mark as dangerous:\n # - Adding indexes to large tables. These indexes should be created concurrently,\n # unfortunately we can't run migrations outside of a transaction until Django\n # 1.10. So until then these should be run manually.\n # - Large data migrations. Typically we want these to be run manually by ops so that\n # they can be monitored. Since data migrations will now hold a transaction open\n # this is even more important.\n # - Adding columns to highly active tables, even ones that are NULL.\n is_dangerous = True\n\n dependencies = [\n (\"sentry\", \"0023_hide_environment_none_20191126\"),\n ]\n\n operations = [\n migrations.RunPython(backfill_eventstream, reverse_code=migrations.RunPython.noop),\n ]\n", "path": "src/sentry/migrations/0024_auto_20191230_2052.py"}]}
| 2,191 | 181 |
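The accepted patch above adds a `received_timestamp` argument that prefers the event's `received` field and falls back to the event's datetime. A rough, self-contained illustration of just that fallback follows; `FakeEvent` is a stand-in object, not Sentry's event model, and the fallback here uses `datetime.timestamp()` rather than the patch's `strftime("%s")` form.

```
# Stand-alone sketch of the received-timestamp fallback.
from datetime import datetime, timezone


class FakeEvent:
    """Hypothetical stand-in for an event with optional 'received' data."""
    def __init__(self, data, when):
        self.data = data
        self.datetime = when


def received_timestamp(event):
    # Prefer the explicit 'received' field; fall back to the event datetime.
    return event.data.get("received") or event.datetime.timestamp()


e1 = FakeEvent({"received": 1577735520.0}, datetime(2019, 12, 30, tzinfo=timezone.utc))
e2 = FakeEvent({}, datetime(2019, 12, 30, tzinfo=timezone.utc))
print(received_timestamp(e1), received_timestamp(e2))
```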
gh_patches_debug_38940
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-205
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
picarto updated streamlink no longer works
Hey guys, Picarto no longer works because they said they updated the player so HTML5 can become the default soon.
When you run the program it says "Found matching plugin picarto for URL https://picarto.tv/picknamehere",
then it says "error: no stream on this URL: https://picarto.tv/picknamehere".
Thanks guys for the awesome program, hopefully it gets solved soon!
</issue>
<code>
[start of src/streamlink/plugins/picarto.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import http
5 from streamlink.stream import RTMPStream
6
7 API_CHANNEL_INFO = "https://picarto.tv/process/channel"
8 RTMP_URL = "rtmp://{}:1935/play/"
9 RTMP_PLAYPATH = "golive+{}?token={}"
10
11 _url_re = re.compile(r"""
12 https?://(\w+\.)?picarto\.tv/[^&?/]
13 """, re.VERBOSE)
14
15 _channel_casing_re = re.compile(r"""
16 <script>placeStreamChannel(Flash)?\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\);</script>
17 """, re.VERBOSE)
18
19
20 class Picarto(Plugin):
21 @classmethod
22 def can_handle_url(self, url):
23 return _url_re.match(url)
24
25 def _get_streams(self):
26 page_res = http.get(self.url)
27 match = _channel_casing_re.search(page_res.text)
28
29 if not match:
30 return {}
31
32 channel = match.group("channel")
33 visibility = match.group("visibility")
34
35 channel_server_res = http.post(API_CHANNEL_INFO, data={
36 "loadbalancinginfo": channel
37 })
38
39 streams = {}
40 streams["live"] = RTMPStream(self.session, {
41 "rtmp": RTMP_URL.format(channel_server_res.text),
42 "playpath": RTMP_PLAYPATH.format(channel, visibility),
43 "pageUrl": self.url,
44 "live": True
45 })
46 return streams
47
48 __plugin__ = Picarto
49
[end of src/streamlink/plugins/picarto.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py
--- a/src/streamlink/plugins/picarto.py
+++ b/src/streamlink/plugins/picarto.py
@@ -2,47 +2,69 @@
from streamlink.plugin import Plugin
from streamlink.plugin.api import http
+from streamlink.stream import HLSStream
from streamlink.stream import RTMPStream
API_CHANNEL_INFO = "https://picarto.tv/process/channel"
RTMP_URL = "rtmp://{}:1935/play/"
RTMP_PLAYPATH = "golive+{}?token={}"
+HLS_URL = "https://{}/hls/{}/index.m3u8?token={}"
_url_re = re.compile(r"""
https?://(\w+\.)?picarto\.tv/[^&?/]
""", re.VERBOSE)
+# placeStream(channel, playerID, product, offlineImage, online, token, tech)
_channel_casing_re = re.compile(r"""
- <script>placeStreamChannel(Flash)?\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\);</script>
+ <script>\s*placeStream\s*\((.*?)\);?\s*</script>
""", re.VERBOSE)
class Picarto(Plugin):
@classmethod
- def can_handle_url(self, url):
- return _url_re.match(url)
+ def can_handle_url(cls, url):
+ return _url_re.match(url) is not None
+
+ @staticmethod
+ def _get_stream_arguments(page):
+ match = _channel_casing_re.search(page.text)
+ if not match:
+ raise ValueError
+
+ # transform the arguments
+ channel, player_id, product, offline_image, online, visibility, is_flash = \
+ map(lambda a: a.strip("' \""), match.group(1).split(","))
+ player_id, product, offline_image, online, is_flash = \
+ map(lambda a: bool(int(a)), [player_id, product, offline_image, online, is_flash])
+
+ return channel, player_id, product, offline_image, online, visibility, is_flash
def _get_streams(self):
- page_res = http.get(self.url)
- match = _channel_casing_re.search(page_res.text)
+ page = http.get(self.url)
- if not match:
- return {}
+ try:
+ channel, _, _, _, online, visibility, is_flash = self._get_stream_arguments(page)
+ except ValueError:
+ return
- channel = match.group("channel")
- visibility = match.group("visibility")
+ if not online:
+ self.logger.error("This stream is currently offline")
+ return
channel_server_res = http.post(API_CHANNEL_INFO, data={
"loadbalancinginfo": channel
})
- streams = {}
- streams["live"] = RTMPStream(self.session, {
- "rtmp": RTMP_URL.format(channel_server_res.text),
- "playpath": RTMP_PLAYPATH.format(channel, visibility),
- "pageUrl": self.url,
- "live": True
- })
- return streams
+ if is_flash:
+ return {"live": RTMPStream(self.session, {
+ "rtmp": RTMP_URL.format(channel_server_res.text),
+ "playpath": RTMP_PLAYPATH.format(channel, visibility),
+ "pageUrl": self.url,
+ "live": True
+ })}
+ else:
+ return HLSStream.parse_variant_playlist(self.session,
+ HLS_URL.format(channel_server_res.text, channel, visibility),
+ verify=False)
__plugin__ = Picarto
|
{"golden_diff": "diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py\n--- a/src/streamlink/plugins/picarto.py\n+++ b/src/streamlink/plugins/picarto.py\n@@ -2,47 +2,69 @@\n \n from streamlink.plugin import Plugin\n from streamlink.plugin.api import http\n+from streamlink.stream import HLSStream\n from streamlink.stream import RTMPStream\n \n API_CHANNEL_INFO = \"https://picarto.tv/process/channel\"\n RTMP_URL = \"rtmp://{}:1935/play/\"\n RTMP_PLAYPATH = \"golive+{}?token={}\"\n+HLS_URL = \"https://{}/hls/{}/index.m3u8?token={}\"\n \n _url_re = re.compile(r\"\"\"\n https?://(\\w+\\.)?picarto\\.tv/[^&?/]\n \"\"\", re.VERBOSE)\n \n+# placeStream(channel, playerID, product, offlineImage, online, token, tech)\n _channel_casing_re = re.compile(r\"\"\"\n- <script>placeStreamChannel(Flash)?\\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\\);</script>\n+ <script>\\s*placeStream\\s*\\((.*?)\\);?\\s*</script>\n \"\"\", re.VERBOSE)\n \n \n class Picarto(Plugin):\n @classmethod\n- def can_handle_url(self, url):\n- return _url_re.match(url)\n+ def can_handle_url(cls, url):\n+ return _url_re.match(url) is not None\n+\n+ @staticmethod\n+ def _get_stream_arguments(page):\n+ match = _channel_casing_re.search(page.text)\n+ if not match:\n+ raise ValueError\n+\n+ # transform the arguments\n+ channel, player_id, product, offline_image, online, visibility, is_flash = \\\n+ map(lambda a: a.strip(\"' \\\"\"), match.group(1).split(\",\"))\n+ player_id, product, offline_image, online, is_flash = \\\n+ map(lambda a: bool(int(a)), [player_id, product, offline_image, online, is_flash])\n+\n+ return channel, player_id, product, offline_image, online, visibility, is_flash\n \n def _get_streams(self):\n- page_res = http.get(self.url)\n- match = _channel_casing_re.search(page_res.text)\n+ page = http.get(self.url)\n \n- if not match:\n- return {}\n+ try:\n+ channel, _, _, _, online, visibility, is_flash = self._get_stream_arguments(page)\n+ except ValueError:\n+ return\n \n- channel = match.group(\"channel\")\n- visibility = match.group(\"visibility\")\n+ if not online:\n+ self.logger.error(\"This stream is currently offline\")\n+ return\n \n channel_server_res = http.post(API_CHANNEL_INFO, data={\n \"loadbalancinginfo\": channel\n })\n \n- streams = {}\n- streams[\"live\"] = RTMPStream(self.session, {\n- \"rtmp\": RTMP_URL.format(channel_server_res.text),\n- \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n- \"pageUrl\": self.url,\n- \"live\": True\n- })\n- return streams\n+ if is_flash:\n+ return {\"live\": RTMPStream(self.session, {\n+ \"rtmp\": RTMP_URL.format(channel_server_res.text),\n+ \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n+ \"pageUrl\": self.url,\n+ \"live\": True\n+ })}\n+ else:\n+ return HLSStream.parse_variant_playlist(self.session,\n+ HLS_URL.format(channel_server_res.text, channel, visibility),\n+ verify=False)\n \n __plugin__ = Picarto\n", "issue": "picarto updated streamlink no longer works\nHey guys picarto no longer works because they said they updated the player so html5 can be default soon.\r\nwhen you run the program it says found matching plugin picarto for url https:// https://picarto.tv/picknamehere\r\nthen the it says error: no stream on this URL: https://picarto.tv/picknamehere.\r\nthanks guys for the awesome program hopefully it gets solved soon!\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import 
RTMPStream\n\nAPI_CHANNEL_INFO = \"https://picarto.tv/process/channel\"\nRTMP_URL = \"rtmp://{}:1935/play/\"\nRTMP_PLAYPATH = \"golive+{}?token={}\"\n\n_url_re = re.compile(r\"\"\"\n https?://(\\w+\\.)?picarto\\.tv/[^&?/]\n\"\"\", re.VERBOSE)\n\n_channel_casing_re = re.compile(r\"\"\"\n <script>placeStreamChannel(Flash)?\\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\\);</script>\n\"\"\", re.VERBOSE)\n\n\nclass Picarto(Plugin):\n @classmethod\n def can_handle_url(self, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n page_res = http.get(self.url)\n match = _channel_casing_re.search(page_res.text)\n\n if not match:\n return {}\n\n channel = match.group(\"channel\")\n visibility = match.group(\"visibility\")\n\n channel_server_res = http.post(API_CHANNEL_INFO, data={\n \"loadbalancinginfo\": channel\n })\n\n streams = {}\n streams[\"live\"] = RTMPStream(self.session, {\n \"rtmp\": RTMP_URL.format(channel_server_res.text),\n \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n \"pageUrl\": self.url,\n \"live\": True\n })\n return streams\n\n__plugin__ = Picarto\n", "path": "src/streamlink/plugins/picarto.py"}]}
| 1,075 | 826 |
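The patch in the record above works by pulling apart the arguments of the `placeStream(...)` call embedded in the channel page and then choosing RTMP or HLS based on the final `tech` flag. Below is a minimal, self-contained sketch of just that argument-parsing step; the sample page snippet, the argument names, and the `parse_place_stream` helper are illustrative assumptions and not part of Streamlink's actual API.

```python
import re

# Hypothetical sketch of the placeStream(...) argument parsing used above.
_place_stream_re = re.compile(r"<script>\s*placeStream\s*\((.*?)\);?\s*</script>")

def parse_place_stream(page_text):
    match = _place_stream_re.search(page_text)
    if not match:
        raise ValueError("placeStream(...) call not found")
    # Strip quotes/whitespace from each comma-separated argument.
    channel, player_id, product, offline_image, online, token, tech = \
        [arg.strip("' \"") for arg in match.group(1).split(",")]
    # The numeric flags arrive as "0"/"1" strings; turn them into booleans.
    return channel, bool(int(online)), token, bool(int(tech))

# Made-up page content, for demonstration only:
sample = "<script>placeStream('somechannel',1,1,'off.png',1,'abc123',0);</script>"
print(parse_place_stream(sample))  # ('somechannel', True, 'abc123', False)
```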
gh_patches_debug_4348
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-747
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bad unsigned casting
### Description
`pwndbg.memory.u` returns signed integers (with minus `-` sign).
### Steps to reproduce
```c
#include <stdio.h>
#include <stdint.h>
int main(int argc, char const *argv[])
{
uint64_t x = 0xb60ad86e8fb52ea8;
printf("%p\n", &x);
getc(stdin);
return 0;
}
```
```
clang bad_u.c -g -o bad_u
gdb ./bad_u
pwndbg> x/xg 0x7fffffffab18
0x7fffffffab18: 0xb60ad86e8fb52ea8
pwndbg> python-interactive
>>> pwndbg.memory.u(0x7fffffffab18)
-5329209239670542680
```
Idk why it doesn't break the pwndbg visibly. Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).
### My setup
```
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
python: 3.6.9 (default, Nov 7 2019, 10:44:02)
pwndbg: dev branch
```
</issue>
<code>
[start of pwndbg/inthook.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 This hook is necessary for compatibility with Python2.7 versions of GDB
5 since they cannot directly cast to integer a gdb.Value object that is
6 not already an integer type.
7 """
8 from __future__ import absolute_import
9 from __future__ import division
10 from __future__ import print_function
11 from __future__ import unicode_literals
12
13 import enum
14 import os
15
16 import gdb
17 import six
18 from future.utils import with_metaclass
19
20 import pwndbg.typeinfo
21
22 if six.PY2:
23 import __builtin__ as builtins
24 else:
25 import builtins
26
27 _int = builtins.int
28
29
30 # We need this class to get isinstance(7, xint) to return True
31 class IsAnInt(type):
32 def __instancecheck__(self, other):
33 return isinstance(other, _int)
34
35
36 class xint(with_metaclass(IsAnInt, builtins.int)):
37 def __new__(cls, value, *a, **kw):
38 if isinstance(value, gdb.Value):
39 if pwndbg.typeinfo.is_pointer(value):
40 value = value.cast(pwndbg.typeinfo.size_t)
41 else:
42 value = value.cast(pwndbg.typeinfo.ssize_t)
43
44 elif isinstance(value, gdb.Symbol):
45 symbol = value
46 value = symbol.value()
47 if symbol.is_function:
48 value = value.cast(pwndbg.typeinfo.size_t)
49
50 elif not isinstance(value, (six.string_types, six.integer_types)) \
51 or isinstance(cls, enum.EnumMeta):
52 # without check for EnumMeta math operations with enums were failing e.g.:
53 # pwndbg> py import re; flags = 1 | re.MULTILINE
54 return _int.__new__(cls, value, *a, **kw)
55
56 return _int(_int(value, *a, **kw))
57
58 # Do not hook 'int' if we are just generating documentation
59 if os.environ.get('SPHINX', None) is None:
60 builtins.int = xint
61 globals()['int'] = xint
62 if six.PY3:
63 builtins.long = xint
64 globals()['long'] = xint
65
[end of pwndbg/inthook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwndbg/inthook.py b/pwndbg/inthook.py
--- a/pwndbg/inthook.py
+++ b/pwndbg/inthook.py
@@ -39,7 +39,7 @@
if pwndbg.typeinfo.is_pointer(value):
value = value.cast(pwndbg.typeinfo.size_t)
else:
- value = value.cast(pwndbg.typeinfo.ssize_t)
+ return _int.__new__(cls, value, *a, **kw)
elif isinstance(value, gdb.Symbol):
symbol = value
|
{"golden_diff": "diff --git a/pwndbg/inthook.py b/pwndbg/inthook.py\n--- a/pwndbg/inthook.py\n+++ b/pwndbg/inthook.py\n@@ -39,7 +39,7 @@\n if pwndbg.typeinfo.is_pointer(value):\n value = value.cast(pwndbg.typeinfo.size_t)\n else:\n- value = value.cast(pwndbg.typeinfo.ssize_t)\n+ return _int.__new__(cls, value, *a, **kw)\n \n elif isinstance(value, gdb.Symbol):\n symbol = value\n", "issue": "Bad unsigned casting\n### Description\r\n\r\n`pwndbg.memory.u` returns signed integers (with minus `-` sign).\r\n\r\n### Steps to reproduce\r\n\r\n\r\n```c\r\n#include <stdio.h>\r\n#include <stdint.h>\r\n\r\nint main(int argc, char const *argv[])\r\n{\r\n uint64_t x = 0xb60ad86e8fb52ea8;\r\n printf(\"%p\\n\", &x);\r\n getc(stdin);\r\n return 0;\r\n}\r\n```\r\n\r\n```\r\nclang bad_u.c -g -o bad_u\r\ngdb ./bad_u\r\n\r\npwndbg> x/xg 0x7fffffffab18\r\n0x7fffffffab18:\t0xb60ad86e8fb52ea8\r\npwndbg> python-interactive \r\n>>> pwndbg.memory.u(0x7fffffffab18)\r\n-5329209239670542680\r\n```\r\n\r\nIdk why it doesn't break the pwndbg visibly. Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).\r\n\r\n### My setup\r\n\r\n```\r\nGNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git\r\npython: 3.6.9 (default, Nov 7 2019, 10:44:02)\r\npwndbg: dev branch\r\n```\nBad unsigned casting\n### Description\r\n\r\n`pwndbg.memory.u` returns signed integers (with minus `-` sign).\r\n\r\n### Steps to reproduce\r\n\r\n\r\n```c\r\n#include <stdio.h>\r\n#include <stdint.h>\r\n\r\nint main(int argc, char const *argv[])\r\n{\r\n uint64_t x = 0xb60ad86e8fb52ea8;\r\n printf(\"%p\\n\", &x);\r\n getc(stdin);\r\n return 0;\r\n}\r\n```\r\n\r\n```\r\nclang bad_u.c -g -o bad_u\r\ngdb ./bad_u\r\n\r\npwndbg> x/xg 0x7fffffffab18\r\n0x7fffffffab18:\t0xb60ad86e8fb52ea8\r\npwndbg> python-interactive \r\n>>> pwndbg.memory.u(0x7fffffffab18)\r\n-5329209239670542680\r\n```\r\n\r\nIdk why it doesn't break the pwndbg visibly. 
Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).\r\n\r\n### My setup\r\n\r\n```\r\nGNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git\r\npython: 3.6.9 (default, Nov 7 2019, 10:44:02)\r\npwndbg: dev branch\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nThis hook is necessary for compatibility with Python2.7 versions of GDB\nsince they cannot directly cast to integer a gdb.Value object that is\nnot already an integer type.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport enum\nimport os\n\nimport gdb\nimport six\nfrom future.utils import with_metaclass\n\nimport pwndbg.typeinfo\n\nif six.PY2:\n import __builtin__ as builtins\nelse:\n import builtins\n\n_int = builtins.int\n\n\n# We need this class to get isinstance(7, xint) to return True\nclass IsAnInt(type):\n def __instancecheck__(self, other):\n return isinstance(other, _int)\n\n\nclass xint(with_metaclass(IsAnInt, builtins.int)):\n def __new__(cls, value, *a, **kw):\n if isinstance(value, gdb.Value):\n if pwndbg.typeinfo.is_pointer(value):\n value = value.cast(pwndbg.typeinfo.size_t)\n else:\n value = value.cast(pwndbg.typeinfo.ssize_t)\n\n elif isinstance(value, gdb.Symbol):\n symbol = value\n value = symbol.value()\n if symbol.is_function:\n value = value.cast(pwndbg.typeinfo.size_t)\n\n elif not isinstance(value, (six.string_types, six.integer_types)) \\\n or isinstance(cls, enum.EnumMeta):\n # without check for EnumMeta math operations with enums were failing e.g.:\n # pwndbg> py import re; flags = 1 | re.MULTILINE\n return _int.__new__(cls, value, *a, **kw)\n\n return _int(_int(value, *a, **kw))\n\n# Do not hook 'int' if we are just generating documentation\nif os.environ.get('SPHINX', None) is None:\n builtins.int = xint\n globals()['int'] = xint\n if six.PY3:\n builtins.long = xint\n globals()['long'] = xint\n", "path": "pwndbg/inthook.py"}]}
| 1,766 | 126 |
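The negative number reported in this record is simply the signed (two's-complement) reading of the same 64-bit pattern that `x/xg` prints. The sketch below shows that relationship in plain Python, assuming a 64-bit target word; no gdb or pwndbg import is needed for the illustration.

```python
# The value printed by `x/xg` and the negative number returned by the buggy
# cast are the same 64-bit pattern, read unsigned vs. signed.
signed_value = -5329209239670542680      # what the ssize_t-style cast produced
word_bits = 64                           # assuming a 64-bit target

unsigned_value = signed_value & ((1 << word_bits) - 1)
print(hex(unsigned_value))               # 0xb60ad86e8fb52ea8

def to_signed(value, bits=64):
    """Reinterpret an unsigned bit pattern as a signed integer."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

print(to_signed(0xb60ad86e8fb52ea8))     # -5329209239670542680
```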
gh_patches_debug_2300
|
rasdani/github-patches
|
git_diff
|
pytorch__torchdynamo-1012
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dynamo WONT CONVERT for is_fx_tracing()
Probably the same as #1009. Repro:
```
import torchdynamo
from torch.fx._symbolic_trace import is_fx_tracing
def my_compiler(gm, inputs):
return gm.forward
@torchdynamo.optimize(my_compiler)
def fn(x, y):
if is_fx_tracing():
return x
else:
return y
fn(1, 2)
```
returns
```
torchdynamo.convert_frame: [ERROR] WON'T CONVERT fn /private/home/suo/scratch/test.py line 8
due to:
Traceback (most recent call last):
File "/raid/suo/torchdynamo/torchdynamo/variables/tensor.py", line 258, in create
assert (
AssertionError: torch.* op returned non-Tensor bool call_function <function is_fx_tracing at 0x7f08b681e700>
from user code:
File "/private/home/suo/scratch/test.py", line 10, in fn
if is_fx_tracing():
Set torchdynamo.config.verbose=True for more information
==========
```
</issue>
<code>
[start of torchdynamo/config.py]
1 import logging
2 import os
3 import sys
4 from os.path import abspath
5 from os.path import dirname
6 from types import ModuleType
7
8 import torch
9
10 try:
11 import torch._prims
12 import torch._refs
13
14 HAS_REFS_PRIMS = True
15 except ImportError:
16 HAS_REFS_PRIMS = False
17
18
19 class AccessLimitingConfig(ModuleType):
20 # log level (levels print what it says + all levels listed below it)
21 # DEBUG print full traces <-- lowest level + print tracing of every instruction
22 # INFO print compiled functions + graphs
23 # WARN print warnings (including graph breaks)
24 # ERROR print exceptions (and what user code was being processed when it occurred)
25 log_level = logging.WARNING
26 # Verbose will print full stack traces on warnings and errors
27 verbose = False
28
29 # verify the correctness of optimized backend
30 verify_correctness = False
31
32 # need this many ops to create an FX graph
33 minimum_call_count = 1
34
35 # turn on/off DCE pass
36 dead_code_elimination = True
37
38 # disable (for a function) when cache reaches this size
39 cache_size_limit = 64
40
41 # specializing int/float by default
42 specialize_int_float = True
43
44 # Assume these functions return constants
45 constant_functions = {
46 torch.jit.is_scripting: False,
47 torch.jit.is_tracing: False,
48 torch._C._get_tracing_state: None,
49 }
50
51 # root folder of the project
52 base_dir = dirname(dirname(abspath(__file__)))
53
54 # don't specialize on shapes and strides and put shape ops in graph
55 dynamic_shapes = os.environ.get("TORCHDYNAMO_DYNAMIC_SHAPES") == "1"
56
57 # Set this to False to assume nn.Modules() contents are immutable (similar assumption as freezing)
58 guard_nn_modules = False
59
60 # Run the FX graph as it is created to get better type information
61 dynamic_propagation = True
62
63 # Run the FX graph with FakeTensors
64 fake_tensor_propagation = True
65
66 # run FX normalization passes in optimizer
67 normalize_ir = True
68
69 # If a tensor subclass type is in this set, torchdynamo will inline the
70 # __torch_function__ logic of the subclass.
71 traceable_tensor_subclasses = set()
72
73 # Raise torchdynamo internal assertions
74 raise_on_assertion_error = False
75
76 # Propagate backend exceptions up to torchdynamo.optimize
77 raise_on_backend_error = True
78
79 # If a PyTorch module is in this allowlist, torchdynamo will be allowed
80 # to inline objects from it or its children.
81 skipfiles_inline_module_allowlist = {torch.nn, torch.distributions}
82 if HAS_REFS_PRIMS:
83 skipfiles_inline_module_allowlist |= {
84 torch._refs,
85 torch._prims,
86 torch._decomp,
87 }
88
89 # If a string representing a PyTorch module is in this ignorelist,
90 # the `allowed_functions.is_allowed` function will not consider it
91 # when creating a list of PyTorch functions that will appear in
92 # FX IR.
93 allowed_functions_module_string_ignorelist = {
94 "torch.distributions",
95 "torch.testing",
96 "torch._refs",
97 "torch._prims",
98 "torch._decomp",
99 }
100
101 # Compiler compilation debug info
102 # 0: Nothing printed out when compilation fails
103 # 1: Dump the graph out to repro.py if compilation fails
104 # 2: Dumps the graph out to minify_repro.py with a minifier if compilation fails
105 # 3: Always dumps the last graph ran out to minify_repro.py, useful for segfaults/irrecoverable errors
106 repro_level = int(os.environ.get("COMPILER_REPRO_LEVEL", 0))
107
108 # Not all backends support scalars. Some calls on torch.Tensor (like .item()) return a scalar type.
109 # When this flag is set to False, we introduce a graph break instead of capturing.
110 capture_scalar_outputs = False
111
112 def __setattr__(self, name, value):
113 if sys.version_info > (3, 8):
114 assert hasattr(
115 self, name
116 ), f"Trying to set {name} - this value does not exist in torchdynamo.config"
117 object.__setattr__(self, name, value)
118
119 def __delattr__(self, name):
120 if sys.version_info > (3, 8):
121 assert hasattr(
122 self, name
123 ), f"Trying to del {name} - this value does not exist in torchdynamo.config"
124 object.__delattr__(self, name)
125
126
127 sys.modules[__name__] = AccessLimitingConfig("config")
128
[end of torchdynamo/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchdynamo/config.py b/torchdynamo/config.py
--- a/torchdynamo/config.py
+++ b/torchdynamo/config.py
@@ -46,6 +46,8 @@
torch.jit.is_scripting: False,
torch.jit.is_tracing: False,
torch._C._get_tracing_state: None,
+ torch.fx._symbolic_trace.is_fx_tracing: False,
+ torch.onnx.is_in_onnx_export: False,
}
# root folder of the project
|
{"golden_diff": "diff --git a/torchdynamo/config.py b/torchdynamo/config.py\n--- a/torchdynamo/config.py\n+++ b/torchdynamo/config.py\n@@ -46,6 +46,8 @@\n torch.jit.is_scripting: False,\n torch.jit.is_tracing: False,\n torch._C._get_tracing_state: None,\n+ torch.fx._symbolic_trace.is_fx_tracing: False,\n+ torch.onnx.is_in_onnx_export: False,\n }\n \n # root folder of the project\n", "issue": "Dynamo WONT CONVERT for is_fx_tracing()\nProbably the same as #1009. Repro:\r\n```\r\nimport torchdynamo\r\nfrom torch.fx._symbolic_trace import is_fx_tracing\r\n\r\ndef my_compiler(gm, inputs):\r\n return gm.forward\r\n\r\[email protected](my_compiler)\r\ndef fn(x, y):\r\n if is_fx_tracing():\r\n return x\r\n else:\r\n return y\r\n\r\nfn(1, 2)\r\n```\r\nreturns\r\n```\r\ntorchdynamo.convert_frame: [ERROR] WON'T CONVERT fn /private/home/suo/scratch/test.py line 8\r\ndue to:\r\nTraceback (most recent call last):\r\n File \"/raid/suo/torchdynamo/torchdynamo/variables/tensor.py\", line 258, in create\r\n assert (\r\nAssertionError: torch.* op returned non-Tensor bool call_function <function is_fx_tracing at 0x7f08b681e700>\r\n\r\nfrom user code:\r\n File \"/private/home/suo/scratch/test.py\", line 10, in fn\r\n if is_fx_tracing():\r\n\r\nSet torchdynamo.config.verbose=True for more information\r\n==========\r\n```\n", "before_files": [{"content": "import logging\nimport os\nimport sys\nfrom os.path import abspath\nfrom os.path import dirname\nfrom types import ModuleType\n\nimport torch\n\ntry:\n import torch._prims\n import torch._refs\n\n HAS_REFS_PRIMS = True\nexcept ImportError:\n HAS_REFS_PRIMS = False\n\n\nclass AccessLimitingConfig(ModuleType):\n # log level (levels print what it says + all levels listed below it)\n # DEBUG print full traces <-- lowest level + print tracing of every instruction\n # INFO print compiled functions + graphs\n # WARN print warnings (including graph breaks)\n # ERROR print exceptions (and what user code was being processed when it occurred)\n log_level = logging.WARNING\n # Verbose will print full stack traces on warnings and errors\n verbose = False\n\n # verify the correctness of optimized backend\n verify_correctness = False\n\n # need this many ops to create an FX graph\n minimum_call_count = 1\n\n # turn on/off DCE pass\n dead_code_elimination = True\n\n # disable (for a function) when cache reaches this size\n cache_size_limit = 64\n\n # specializing int/float by default\n specialize_int_float = True\n\n # Assume these functions return constants\n constant_functions = {\n torch.jit.is_scripting: False,\n torch.jit.is_tracing: False,\n torch._C._get_tracing_state: None,\n }\n\n # root folder of the project\n base_dir = dirname(dirname(abspath(__file__)))\n\n # don't specialize on shapes and strides and put shape ops in graph\n dynamic_shapes = os.environ.get(\"TORCHDYNAMO_DYNAMIC_SHAPES\") == \"1\"\n\n # Set this to False to assume nn.Modules() contents are immutable (similar assumption as freezing)\n guard_nn_modules = False\n\n # Run the FX graph as it is created to get better type information\n dynamic_propagation = True\n\n # Run the FX graph with FakeTensors\n fake_tensor_propagation = True\n\n # run FX normalization passes in optimizer\n normalize_ir = True\n\n # If a tensor subclass type is in this set, torchdynamo will inline the\n # __torch_function__ logic of the subclass.\n traceable_tensor_subclasses = set()\n\n # Raise torchdynamo internal assertions\n raise_on_assertion_error = False\n\n # Propagate backend exceptions up to torchdynamo.optimize\n 
raise_on_backend_error = True\n\n # If a PyTorch module is in this allowlist, torchdynamo will be allowed\n # to inline objects from it or its children.\n skipfiles_inline_module_allowlist = {torch.nn, torch.distributions}\n if HAS_REFS_PRIMS:\n skipfiles_inline_module_allowlist |= {\n torch._refs,\n torch._prims,\n torch._decomp,\n }\n\n # If a string representing a PyTorch module is in this ignorelist,\n # the `allowed_functions.is_allowed` function will not consider it\n # when creating a list of PyTorch functions that will appear in\n # FX IR.\n allowed_functions_module_string_ignorelist = {\n \"torch.distributions\",\n \"torch.testing\",\n \"torch._refs\",\n \"torch._prims\",\n \"torch._decomp\",\n }\n\n # Compiler compilation debug info\n # 0: Nothing printed out when compilation fails\n # 1: Dump the graph out to repro.py if compilation fails\n # 2: Dumps the graph out to minify_repro.py with a minifier if compilation fails\n # 3: Always dumps the last graph ran out to minify_repro.py, useful for segfaults/irrecoverable errors\n repro_level = int(os.environ.get(\"COMPILER_REPRO_LEVEL\", 0))\n\n # Not all backends support scalars. Some calls on torch.Tensor (like .item()) return a scalar type.\n # When this flag is set to False, we introduce a graph break instead of capturing.\n capture_scalar_outputs = False\n\n def __setattr__(self, name, value):\n if sys.version_info > (3, 8):\n assert hasattr(\n self, name\n ), f\"Trying to set {name} - this value does not exist in torchdynamo.config\"\n object.__setattr__(self, name, value)\n\n def __delattr__(self, name):\n if sys.version_info > (3, 8):\n assert hasattr(\n self, name\n ), f\"Trying to del {name} - this value does not exist in torchdynamo.config\"\n object.__delattr__(self, name)\n\n\nsys.modules[__name__] = AccessLimitingConfig(\"config\")\n", "path": "torchdynamo/config.py"}]}
| 2,101 | 119 |
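The fix in this record extends a table of "functions whose return value is a known constant" so the tracer never has to represent the non-Tensor result in the graph. Below is a minimal sketch of that constant-folding idea using stand-in predicate functions; the names and the `evaluate_call` helper are illustrative only and do not reflect torchdynamo's internal API.

```python
# Stand-ins for the framework predicates named in the issue.
def is_fx_tracing():
    return False

def is_in_onnx_export():
    return False

# Table of callables whose result is pinned to a constant during tracing.
CONSTANT_FUNCTIONS = {
    is_fx_tracing: False,
    is_in_onnx_export: False,
}

def evaluate_call(fn, *args):
    """Return the pinned constant if fn is known, otherwise call it for real."""
    if not args and fn in CONSTANT_FUNCTIONS:
        return CONSTANT_FUNCTIONS[fn]
    return fn(*args)

# The user function from the issue then folds straight to the else branch.
def fn(x, y):
    return x if evaluate_call(is_fx_tracing) else y

print(fn(1, 2))  # 2
```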
gh_patches_debug_11149
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmocr-285
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LineStrParser: separator behaviour
I've a question regarding this snippet of code:
https://github.com/open-mmlab/mmocr/blob/01d8d63be945882fb2d9eaca5e1c1b39cb45f274/mmocr/datasets/utils/parser.py#L33-L36
Is there a particular reason to use these 4 lines of code instead of simply `line_str = line_str.split(self.separator)`?
I'm asking this because for my own use case I have:
- a TSV file with `filename` and `text` as keys for text recognition task
- some blank spaces in `filename` e.g. `my cropped image.png`
Hence, LineStrParser is configured as follows:
```python
parser=dict(
type='LineStrParser',
keys=['filename', 'text'],
keys_idx=[0, 1],
separator='\t'))
```
but with the 4-lines code snippet, the line parsing fails. Instead, with simply `line_str = line_str.split(self.separator)` everything works well.
</issue>
<code>
[start of mmocr/datasets/utils/parser.py]
1 import json
2
3 from mmocr.datasets.builder import PARSERS
4
5
6 @PARSERS.register_module()
7 class LineStrParser:
8 """Parse string of one line in annotation file to dict format.
9
10 Args:
11 keys (list[str]): Keys in result dict.
12 keys_idx (list[int]): Value index in sub-string list
13 for each key above.
14 separator (str): Separator to separate string to list of sub-string.
15 """
16
17 def __init__(self,
18 keys=['filename', 'text'],
19 keys_idx=[0, 1],
20 separator=' '):
21 assert isinstance(keys, list)
22 assert isinstance(keys_idx, list)
23 assert isinstance(separator, str)
24 assert len(keys) > 0
25 assert len(keys) == len(keys_idx)
26 self.keys = keys
27 self.keys_idx = keys_idx
28 self.separator = separator
29
30 def get_item(self, data_ret, index):
31 map_index = index % len(data_ret)
32 line_str = data_ret[map_index]
33 for split_key in self.separator:
34 if split_key != ' ':
35 line_str = line_str.replace(split_key, ' ')
36 line_str = line_str.split()
37 if len(line_str) <= max(self.keys_idx):
38 raise Exception(
39 f'key index: {max(self.keys_idx)} out of range: {line_str}')
40
41 line_info = {}
42 for i, key in enumerate(self.keys):
43 line_info[key] = line_str[self.keys_idx[i]]
44 return line_info
45
46
47 @PARSERS.register_module()
48 class LineJsonParser:
49 """Parse json-string of one line in annotation file to dict format.
50
51 Args:
52 keys (list[str]): Keys in both json-string and result dict.
53 """
54
55 def __init__(self, keys=[], **kwargs):
56 assert isinstance(keys, list)
57 assert len(keys) > 0
58 self.keys = keys
59
60 def get_item(self, data_ret, index):
61 map_index = index % len(data_ret)
62 line_json_obj = json.loads(data_ret[map_index])
63 line_info = {}
64 for key in self.keys:
65 if key not in line_json_obj:
66 raise Exception(f'key {key} not in line json {line_json_obj}')
67 line_info[key] = line_json_obj[key]
68
69 return line_info
70
[end of mmocr/datasets/utils/parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmocr/datasets/utils/parser.py b/mmocr/datasets/utils/parser.py
--- a/mmocr/datasets/utils/parser.py
+++ b/mmocr/datasets/utils/parser.py
@@ -30,10 +30,7 @@
def get_item(self, data_ret, index):
map_index = index % len(data_ret)
line_str = data_ret[map_index]
- for split_key in self.separator:
- if split_key != ' ':
- line_str = line_str.replace(split_key, ' ')
- line_str = line_str.split()
+ line_str = line_str.split(self.separator)
if len(line_str) <= max(self.keys_idx):
raise Exception(
f'key index: {max(self.keys_idx)} out of range: {line_str}')
|
{"golden_diff": "diff --git a/mmocr/datasets/utils/parser.py b/mmocr/datasets/utils/parser.py\n--- a/mmocr/datasets/utils/parser.py\n+++ b/mmocr/datasets/utils/parser.py\n@@ -30,10 +30,7 @@\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_str = data_ret[map_index]\n- for split_key in self.separator:\n- if split_key != ' ':\n- line_str = line_str.replace(split_key, ' ')\n- line_str = line_str.split()\n+ line_str = line_str.split(self.separator)\n if len(line_str) <= max(self.keys_idx):\n raise Exception(\n f'key index: {max(self.keys_idx)} out of range: {line_str}')\n", "issue": "LineStrParser: separator behaviour\nI've a question regarding this snippet of code:\r\nhttps://github.com/open-mmlab/mmocr/blob/01d8d63be945882fb2d9eaca5e1c1b39cb45f274/mmocr/datasets/utils/parser.py#L33-L36\r\n\r\nIs there a particular reason to use these 4 lines of code instead of simply `line_str = line_str.split(self.separator)`?\r\n\r\nI'm asking this because for my own use case I have:\r\n- a TSV file with `filename` and `text` as keys for text recognition task\r\n- some blank spaces in `filename` e.g. `my cropped image.png`\r\n \r\nHence, LineStrParser is configured as follows:\r\n```python\r\nparser=dict(\r\n type='LineStrParser',\r\n keys=['filename', 'text'],\r\n keys_idx=[0, 1],\r\n separator='\\t'))\r\n```\r\nbut with the 4-lines code snippet, the line parsing fails. Instead, with simply `line_str = line_str.split(self.separator)` everything works well.\n", "before_files": [{"content": "import json\n\nfrom mmocr.datasets.builder import PARSERS\n\n\[email protected]_module()\nclass LineStrParser:\n \"\"\"Parse string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in result dict.\n keys_idx (list[int]): Value index in sub-string list\n for each key above.\n separator (str): Separator to separate string to list of sub-string.\n \"\"\"\n\n def __init__(self,\n keys=['filename', 'text'],\n keys_idx=[0, 1],\n separator=' '):\n assert isinstance(keys, list)\n assert isinstance(keys_idx, list)\n assert isinstance(separator, str)\n assert len(keys) > 0\n assert len(keys) == len(keys_idx)\n self.keys = keys\n self.keys_idx = keys_idx\n self.separator = separator\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_str = data_ret[map_index]\n for split_key in self.separator:\n if split_key != ' ':\n line_str = line_str.replace(split_key, ' ')\n line_str = line_str.split()\n if len(line_str) <= max(self.keys_idx):\n raise Exception(\n f'key index: {max(self.keys_idx)} out of range: {line_str}')\n\n line_info = {}\n for i, key in enumerate(self.keys):\n line_info[key] = line_str[self.keys_idx[i]]\n return line_info\n\n\[email protected]_module()\nclass LineJsonParser:\n \"\"\"Parse json-string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in both json-string and result dict.\n \"\"\"\n\n def __init__(self, keys=[], **kwargs):\n assert isinstance(keys, list)\n assert len(keys) > 0\n self.keys = keys\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_json_obj = json.loads(data_ret[map_index])\n line_info = {}\n for key in self.keys:\n if key not in line_json_obj:\n raise Exception(f'key {key} not in line json {line_json_obj}')\n line_info[key] = line_json_obj[key]\n\n return line_info\n", "path": "mmocr/datasets/utils/parser.py"}]}
| 1,407 | 171 |
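The behaviour difference discussed in this record is easy to reproduce on a made-up TSV line: replacing the separator with a space and then calling `split()` fragments any filename that itself contains spaces, while splitting on the separator directly does not. The sample line below is invented purely for illustration.

```python
line = "my cropped image.png\tsome transcription"
separator = "\t"

# Old behaviour: map the separator to a space, then split on ANY whitespace.
old = line.replace(separator, " ").split()
print(old)  # ['my', 'cropped', 'image.png', 'some', 'transcription'] -> 5 fields

# Splitting on the separator itself keeps the filename intact.
new = line.split(separator)
print(new)  # ['my cropped image.png', 'some transcription'] -> 2 fields
```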
gh_patches_debug_52858
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-540
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
celery 3.0 causes import error (cannot import abbrtools from celery.utils)
Release of celery 3.0 causes an import error at runtime upon any request.
This is the stack trace:
```
ImportError: cannot import name abbrtask
Error handling request
Traceback (most recent call last):
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 107, in handle_request
for item in respiter:
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/raven/middleware.py", line 28, in __call__
for event in self.application(environ, start_response):
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
response = self.get_response(request)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py", line 179, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py", line 224, in handle_uncaught_exception
if resolver.urlconf_module is None:
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/urlresolvers.py", line 323, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/sentry/conf/urls.py", line 19, in <module>
admin.autodiscover()
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 29, in autodiscover
import_module('%s.admin' % app)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/djcelery/admin.py", line 19, in <module>
from celery.utils import abbrtask
ImportError: cannot import name abbrtask
```
Requirements line for celery should become:
celery>=2.5.3,<3.0.0
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 """
3 Sentry
4 ======
5
6 Sentry is a realtime event logging and aggregation platform. It specializes
7 in monitoring errors and extracting all the information needed to do a proper
8 post-mortem without any of the hassle of the standard user feedback loop.
9
10 Sentry is a Server
11 ------------------
12
13 The Sentry package, at its core, is just a simple server and web UI. It will
14 handle authentication clients (such as `Raven <https://github.com/dcramer/raven>`_)
15 and all of the logic behind storage and aggregation.
16
17 That said, Sentry is not limited to Python. The primary implementation is in
18 Python, but it contains a full API for sending events from any language, in
19 any application.
20
21 :copyright: (c) 2011-2012 by the Sentry Team, see AUTHORS for more details.
22 :license: BSD, see LICENSE for more details.
23 """
24
25 from setuptools import setup, find_packages
26
27 # Hack to prevent stupid "TypeError: 'NoneType' object is not callable" error
28 # in multiprocessing/util.py _exit_function when running `python
29 # setup.py test` (see
30 # http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
31 try:
32 import multiprocessing
33 except ImportError:
34 pass
35
36 tests_require = [
37 'django-nose==1.1',
38 'eventlet==0.9.16',
39 'nose==1.1.2',
40 'nydus==0.8.2',
41 'mock==0.8.0',
42 'pyflakes',
43 'pep8',
44 'redis',
45 'unittest2',
46 ]
47
48
49 install_requires = [
50 'cssutils>=0.9.9',
51 'BeautifulSoup>=3.2.1',
52 'django-celery>=2.5.5,<3.0',
53 'django-crispy-forms>=1.1.4',
54 'Django>=1.2,<1.5',
55 'django-indexer>=0.3.0',
56 'django-paging>=0.2.4',
57 'django-picklefield>=0.2.0',
58 'django-templatetag-sugar>=0.1.0',
59 'gunicorn>=0.13.4',
60 'logan>=0.3.1',
61 'pynliner>=0.4.0',
62 'python-dateutil>=1.5.0,<2.0.0',
63 'pytz>=2011n',
64 'raven>=2.0.0',
65 'simplejson>=2.3.0,<2.5.0',
66 'South>=0.7',
67 'httpagentparser>=1.0.5'
68 ]
69
70 dependency_links = [
71 'https://github.com/dcramer/pyflakes/tarball/master#egg=pyflakes',
72 ]
73
74 setup(
75 name='sentry',
76 version='4.8.1',
77 author='David Cramer',
78 author_email='[email protected]',
79 url='http://github.com/dcramer/sentry',
80 description='A realtime logging and aggregation server.',
81 long_description=__doc__,
82 packages=find_packages(exclude=['tests']),
83 zip_safe=False,
84 install_requires=install_requires,
85 tests_require=tests_require,
86 extras_require={'test': tests_require},
87 dependency_links=dependency_links,
88 test_suite='runtests.runtests',
89 license='BSD',
90 include_package_data=True,
91 entry_points={
92 'console_scripts': [
93 'sentry = sentry.utils.runner:main',
94 ],
95 },
96 classifiers=[
97 'Framework :: Django',
98 'Intended Audience :: Developers',
99 'Intended Audience :: System Administrators',
100 'Operating System :: OS Independent',
101 'Topic :: Software Development'
102 ],
103 )
104
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,6 +50,7 @@
'cssutils>=0.9.9',
'BeautifulSoup>=3.2.1',
'django-celery>=2.5.5,<3.0',
+ 'celery>=2.5.3,<3.0',
'django-crispy-forms>=1.1.4',
'Django>=1.2,<1.5',
'django-indexer>=0.3.0',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,6 +50,7 @@\n 'cssutils>=0.9.9',\n 'BeautifulSoup>=3.2.1',\n 'django-celery>=2.5.5,<3.0',\n+ 'celery>=2.5.3,<3.0',\n 'django-crispy-forms>=1.1.4',\n 'Django>=1.2,<1.5',\n 'django-indexer>=0.3.0',\n", "issue": "celery 3.0 causes import error (cannot import abbrtools from celery.utils)\nRelease of celery 3.0 causes an import error at runtime upon any request.\n\nThis is the stack trace:\n\n```\nImportError: cannot import name abbrtask\nError handling request\nTraceback (most recent call last):\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/gunicorn/workers/sync.py\", line 107, in handle_request\n for item in respiter:\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/raven/middleware.py\", line 28, in __call__\n for event in self.application(environ, start_response):\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/wsgi.py\", line 241, in __call__\n response = self.get_response(request)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py\", line 179, in get_response\n response = self.handle_uncaught_exception(request, resolver, sys.exc_info())\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py\", line 224, in handle_uncaught_exception\n if resolver.urlconf_module is None:\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/urlresolvers.py\", line 323, in urlconf_module\n self._urlconf_module = import_module(self.urlconf_name)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py\", line 35, in import_module\n __import__(name)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/sentry/conf/urls.py\", line 19, in <module>\n admin.autodiscover()\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/contrib/admin/__init__.py\", line 29, in autodiscover\n import_module('%s.admin' % app)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py\", line 35, in import_module\n __import__(name)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/djcelery/admin.py\", line 19, in <module>\n from celery.utils import abbrtask\nImportError: cannot import name abbrtask\n```\n\nRequirements line for celery should become:\n\ncelery>=2.5.3,<3.0.0\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nSentry\n======\n\nSentry is a realtime event logging and aggregation platform. It specializes\nin monitoring errors and extracting all the information needed to do a proper\npost-mortem without any of the hassle of the standard user feedback loop.\n\nSentry is a Server\n------------------\n\nThe Sentry package, at its core, is just a simple server and web UI. It will\nhandle authentication clients (such as `Raven <https://github.com/dcramer/raven>`_)\nand all of the logic behind storage and aggregation.\n\nThat said, Sentry is not limited to Python. 
The primary implementation is in\nPython, but it contains a full API for sending events from any language, in\nany application.\n\n:copyright: (c) 2011-2012 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\n\nfrom setuptools import setup, find_packages\n\n# Hack to prevent stupid \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when running `python\n# setup.py test` (see\n# http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing\nexcept ImportError:\n pass\n\ntests_require = [\n 'django-nose==1.1',\n 'eventlet==0.9.16',\n 'nose==1.1.2',\n 'nydus==0.8.2',\n 'mock==0.8.0',\n 'pyflakes',\n 'pep8',\n 'redis',\n 'unittest2',\n]\n\n\ninstall_requires = [\n 'cssutils>=0.9.9',\n 'BeautifulSoup>=3.2.1',\n 'django-celery>=2.5.5,<3.0',\n 'django-crispy-forms>=1.1.4',\n 'Django>=1.2,<1.5',\n 'django-indexer>=0.3.0',\n 'django-paging>=0.2.4',\n 'django-picklefield>=0.2.0',\n 'django-templatetag-sugar>=0.1.0',\n 'gunicorn>=0.13.4',\n 'logan>=0.3.1',\n 'pynliner>=0.4.0',\n 'python-dateutil>=1.5.0,<2.0.0',\n 'pytz>=2011n',\n 'raven>=2.0.0',\n 'simplejson>=2.3.0,<2.5.0',\n 'South>=0.7',\n 'httpagentparser>=1.0.5'\n]\n\ndependency_links = [\n 'https://github.com/dcramer/pyflakes/tarball/master#egg=pyflakes',\n]\n\nsetup(\n name='sentry',\n version='4.8.1',\n author='David Cramer',\n author_email='[email protected]',\n url='http://github.com/dcramer/sentry',\n description='A realtime logging and aggregation server.',\n long_description=__doc__,\n packages=find_packages(exclude=['tests']),\n zip_safe=False,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require={'test': tests_require},\n dependency_links=dependency_links,\n test_suite='runtests.runtests',\n license='BSD',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'sentry = sentry.utils.runner:main',\n ],\n },\n classifiers=[\n 'Framework :: Django',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development'\n ],\n)\n", "path": "setup.py"}]}
| 2,170 | 127 |
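A quick way to sanity-check that the pin proposed in this record really excludes the celery 3.0 release is to evaluate the version specifier directly. This sketch assumes the third-party `packaging` library is installed and uses the `<3.0` form that the merged diff adds; it is a verification aid, not part of the Sentry codebase.

```python
from packaging.specifiers import SpecifierSet  # pip install packaging

celery_pin = SpecifierSet(">=2.5.3,<3.0")

for candidate in ("2.5.3", "2.5.5", "3.0.0"):
    print(candidate, candidate in celery_pin)
# 2.5.3 True
# 2.5.5 True
# 3.0.0 False
```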
gh_patches_debug_23439
|
rasdani/github-patches
|
git_diff
|
ESMCI__cime-1490
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
python 3 format causing pylint fails
In PR #1471 I objected to the usage of python3 syntax format statements instead of %s as used throughout the code. Now I've noticed that this usage was introduced to the trunk in PR #1388
Master is failing pylint tests due to #1388 - while fixing the pylint test can you please revert to %s usage throughout?
</issue>
<code>
[start of scripts/lib/CIME/XML/compsets.py]
1 """
2 Common interface to XML files which follow the compsets format,
3 """
4
5 from CIME.XML.standard_module_setup import *
6 from CIME.XML.generic_xml import GenericXML
7 from CIME.XML.entry_id import EntryID
8 from CIME.XML.files import Files
9
10
11 logger = logging.getLogger(__name__)
12
13 class Compsets(GenericXML):
14
15 def __init__(self, infile=None, files=None):
16 if files is None:
17 files = Files()
18 schema = files.get_schema("COMPSETS_SPEC_FILE")
19 GenericXML.__init__(self, infile, schema=schema)
20 self.groups={}
21
22 def get_compset_match(self, name):
23 """
24 science support is used in cesm to determine if this compset and grid
25 is scientifically supported. science_support is returned as an array of grids for this compset
26 """
27 nodes = self.get_nodes("compset")
28 alias = None
29 lname = None
30
31 science_support = []
32
33 for node in nodes:
34 alias = self.get_element_text("alias",root=node)
35 lname = self.get_element_text("lname",root=node)
36 if alias == name or lname == name:
37 science_support_nodes = self.get_nodes("science_support", root=node)
38 for node in science_support_nodes:
39 science_support.append(node.get("grid"))
40 user_mods_node = self.get_optional_node("user_mods", root=node)
41 if user_mods_node is not None:
42 user_mods = user_mods_node.text
43 else:
44 user_mods = None
45 logger.debug("Found node match with alias: {} and lname: {}".format(alias, lname))
46 return (lname, alias, science_support, user_mods)
47 return (None, None, [False], None)
48
49 def get_compset_var_settings(self, compset, grid):
50 '''
51 Variables can be set in config_compsets.xml in entry id settings with compset and grid attributes
52 find and return id value pairs here
53 '''
54 nodes = self.get_nodes("entry")
55 # Get an empty entryid obj to use
56 entryidobj = EntryID()
57 result = []
58 for node in nodes:
59 value = entryidobj.get_default_value(node, {"grid":grid, "compset":compset})
60 if value is not None:
61 result.append((node.get("id"), value))
62 return result
63
64 def get_value(self, name, attribute=None, resolved=False, subgroup=None):
65 expect(subgroup is None, "This class does not support subgroups")
66 if name == "help":
67 rootnode = self.get_node("help")
68 helptext = rootnode.text
69 return helptext
70 else:
71 compsets = {}
72 nodes = self.get_nodes(nodename="compset")
73 for node in nodes:
74 for child in node:
75 logger.debug ("Here child is {} with value {}".format(child.tag,child.text))
76 if child.tag == "alias":
77 alias = child.text
78 if child.tag == "lname":
79 lname = child.text
80 compsets[alias] = lname
81 return compsets
82
83 def print_values(self, help=True):
84 help_text = self.get_value(name="help")
85 compsets_text = self.get_value("names")
86 if help:
87 logger.info(" {} ".format(help_text))
88
89 logger.info(" --------------------------------------")
90 logger.info(" Compset Alias: Compset Long Name ")
91 logger.info(" --------------------------------------")
92 for v in compsets_text.iteritems():
93 label, definition = v
94 logger.info(" {:20} : {}".format(label, definition))
95
[end of scripts/lib/CIME/XML/compsets.py]
[start of scripts/lib/CIME/code_checker.py]
1 """
2 Libraries for checking python code with pylint
3 """
4
5 from CIME.XML.standard_module_setup import *
6
7 from CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable
8
9 from multiprocessing.dummy import Pool as ThreadPool
10 from distutils.spawn import find_executable
11
12 logger = logging.getLogger(__name__)
13
14 ###############################################################################
15 def _run_pylint(on_file, interactive):
16 ###############################################################################
17 pylint = find_executable("pylint")
18
19 cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement"
20 cimeroot = get_cime_root()
21
22 if "scripts/Tools" in on_file:
23 cmd_options +=",relative-import"
24
25 # add init-hook option
26 cmd_options += " --init-hook='sys.path.extend((\"%s\",\"%s\",\"%s\"))'"%\
27 (os.path.join(cimeroot,"scripts","lib"),
28 os.path.join(cimeroot,"scripts","Tools"),
29 os.path.join(cimeroot,"scripts","fortran_unit_testing","python"))
30
31 cmd = "%s %s %s" % (pylint, cmd_options, on_file)
32 logger.debug("pylint command is %s"%cmd)
33 stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)
34 if stat != 0:
35 if interactive:
36 logger.info("File %s has pylint problems, please fix\n Use command: %s" % (on_file, cmd))
37 logger.info(out + "\n" + err)
38 return (on_file, out + "\n" + err)
39 else:
40 if interactive:
41 logger.info("File %s has no pylint problems" % on_file)
42 return (on_file, "")
43
44 ###############################################################################
45 def _matches(file_path, file_ends):
46 ###############################################################################
47 for file_end in file_ends:
48 if file_path.endswith(file_end):
49 return True
50
51 return False
52
53 ###############################################################################
54 def _should_pylint_skip(filepath):
55 ###############################################################################
56 # TODO - get rid of this
57 list_of_directories_to_ignore = ("xmlconvertors", "pointclm", "point_clm", "tools", "machines", "apidocs")
58 for dir_to_skip in list_of_directories_to_ignore:
59 if dir_to_skip in filepath:
60 return True
61
62 return False
63
64 ###############################################################################
65 def get_all_checkable_files():
66 ###############################################################################
67 cimeroot = get_cime_root()
68 all_git_files = run_cmd_no_fail("git ls-files", from_dir=cimeroot, verbose=False).splitlines()
69
70 files_to_test = [item for item in all_git_files
71 if ((item.endswith(".py") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]
72 return files_to_test
73
74 ###############################################################################
75 def check_code(files, num_procs=10, interactive=False):
76 ###############################################################################
77 """
78 Check all python files in the given directory
79
80 Returns True if all files had no problems
81 """
82 # Get list of files to check, we look to see if user-provided file argument
83 # is a valid file, if not, we search the repo for a file with similar name.
84 repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()
85 files_to_check = []
86 if files:
87 for filearg in files:
88 if os.path.exists(filearg):
89 files_to_check.append(os.path.abspath(filearg))
90 else:
91 found = False
92 for repo_file in repo_files:
93 if repo_file.endswith(filearg):
94 found = True
95 files_to_check.append(repo_file) # could have multiple matches
96
97 if not found:
98 logger.warning("Could not find file matching argument '%s'" % filearg)
99 else:
100 # Check every python file
101 files_to_check = get_all_checkable_files()
102
103 expect(len(files_to_check) > 0, "No matching files found")
104
105 # No point in using more threads than files
106 if len(files_to_check) < num_procs:
107 num_procs = len(files_to_check)
108
109 pool = ThreadPool(num_procs)
110 results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)
111 pool.close()
112 pool.join()
113 return dict(results)
114
[end of scripts/lib/CIME/code_checker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/lib/CIME/XML/compsets.py b/scripts/lib/CIME/XML/compsets.py
--- a/scripts/lib/CIME/XML/compsets.py
+++ b/scripts/lib/CIME/XML/compsets.py
@@ -80,10 +80,10 @@
compsets[alias] = lname
return compsets
- def print_values(self, help=True):
+ def print_values(self, arg_help=True):
help_text = self.get_value(name="help")
compsets_text = self.get_value("names")
- if help:
+ if arg_help:
logger.info(" {} ".format(help_text))
logger.info(" --------------------------------------")
diff --git a/scripts/lib/CIME/code_checker.py b/scripts/lib/CIME/code_checker.py
--- a/scripts/lib/CIME/code_checker.py
+++ b/scripts/lib/CIME/code_checker.py
@@ -16,7 +16,7 @@
###############################################################################
pylint = find_executable("pylint")
- cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement"
+ cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement,logging-format-interpolation"
cimeroot = get_cime_root()
if "scripts/Tools" in on_file:
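The two hunks address separate pylint messages rather than a single bug: naming a parameter `help` shadows the built-in of the same name (pylint's `redefined-builtin` warning), while calling `.format()` inside logging calls trips `logging-format-interpolation`, which the second hunk silences globally instead of rewriting every call site. A toy illustration of both messages (editorial sketch, not code from the repository):

```python
import logging

logger = logging.getLogger(__name__)

def print_values(help=True):                    # redefined-builtin: 'help'
    logger.info("flag is {}".format(help))      # logging-format-interpolation

def print_values_fixed(arg_help=True):          # rename clears redefined-builtin
    logger.info("flag is {}".format(arg_help))  # still flagged unless the check is disabled
```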
|
{"golden_diff": "diff --git a/scripts/lib/CIME/XML/compsets.py b/scripts/lib/CIME/XML/compsets.py\n--- a/scripts/lib/CIME/XML/compsets.py\n+++ b/scripts/lib/CIME/XML/compsets.py\n@@ -80,10 +80,10 @@\n compsets[alias] = lname\n return compsets\n \n- def print_values(self, help=True):\n+ def print_values(self, arg_help=True):\n help_text = self.get_value(name=\"help\")\n compsets_text = self.get_value(\"names\")\n- if help:\n+ if arg_help:\n logger.info(\" {} \".format(help_text))\n \n logger.info(\" --------------------------------------\")\ndiff --git a/scripts/lib/CIME/code_checker.py b/scripts/lib/CIME/code_checker.py\n--- a/scripts/lib/CIME/code_checker.py\n+++ b/scripts/lib/CIME/code_checker.py\n@@ -16,7 +16,7 @@\n ###############################################################################\n pylint = find_executable(\"pylint\")\n \n- cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n+ cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement,logging-format-interpolation\"\n cimeroot = get_cime_root()\n \n if \"scripts/Tools\" in on_file:\n", "issue": "python 3 format causing pylint fails \nIn PR #1471 I objected to the usage of python3 syntax format statements instead of %s as used throughout the code. Now I've noticed that this usage was introduced to the trunk in PR #1388\r\n\r\nMaster is failing pylint tests due to #1388 - while fixing the pylint test can you please revert to %s usage throughout?\n", "before_files": [{"content": "\"\"\"\nCommon interface to XML files which follow the compsets format,\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.generic_xml import GenericXML\nfrom CIME.XML.entry_id import EntryID\nfrom CIME.XML.files import Files\n\n\nlogger = logging.getLogger(__name__)\n\nclass Compsets(GenericXML):\n\n def __init__(self, infile=None, files=None):\n if files is None:\n files = Files()\n schema = files.get_schema(\"COMPSETS_SPEC_FILE\")\n GenericXML.__init__(self, infile, schema=schema)\n self.groups={}\n\n def get_compset_match(self, name):\n \"\"\"\n science support is used in cesm to determine if this compset and grid\n is scientifically supported. 
science_support is returned as an array of grids for this compset\n \"\"\"\n nodes = self.get_nodes(\"compset\")\n alias = None\n lname = None\n\n science_support = []\n\n for node in nodes:\n alias = self.get_element_text(\"alias\",root=node)\n lname = self.get_element_text(\"lname\",root=node)\n if alias == name or lname == name:\n science_support_nodes = self.get_nodes(\"science_support\", root=node)\n for node in science_support_nodes:\n science_support.append(node.get(\"grid\"))\n user_mods_node = self.get_optional_node(\"user_mods\", root=node)\n if user_mods_node is not None:\n user_mods = user_mods_node.text\n else:\n user_mods = None\n logger.debug(\"Found node match with alias: {} and lname: {}\".format(alias, lname))\n return (lname, alias, science_support, user_mods)\n return (None, None, [False], None)\n\n def get_compset_var_settings(self, compset, grid):\n '''\n Variables can be set in config_compsets.xml in entry id settings with compset and grid attributes\n find and return id value pairs here\n '''\n nodes = self.get_nodes(\"entry\")\n # Get an empty entryid obj to use\n entryidobj = EntryID()\n result = []\n for node in nodes:\n value = entryidobj.get_default_value(node, {\"grid\":grid, \"compset\":compset})\n if value is not None:\n result.append((node.get(\"id\"), value))\n return result\n\n def get_value(self, name, attribute=None, resolved=False, subgroup=None):\n expect(subgroup is None, \"This class does not support subgroups\")\n if name == \"help\":\n rootnode = self.get_node(\"help\")\n helptext = rootnode.text\n return helptext\n else:\n compsets = {}\n nodes = self.get_nodes(nodename=\"compset\")\n for node in nodes:\n for child in node:\n logger.debug (\"Here child is {} with value {}\".format(child.tag,child.text))\n if child.tag == \"alias\":\n alias = child.text\n if child.tag == \"lname\":\n lname = child.text\n compsets[alias] = lname\n return compsets\n\n def print_values(self, help=True):\n help_text = self.get_value(name=\"help\")\n compsets_text = self.get_value(\"names\")\n if help:\n logger.info(\" {} \".format(help_text))\n\n logger.info(\" --------------------------------------\")\n logger.info(\" Compset Alias: Compset Long Name \")\n logger.info(\" --------------------------------------\")\n for v in compsets_text.iteritems():\n label, definition = v\n logger.info(\" {:20} : {}\".format(label, definition))\n", "path": "scripts/lib/CIME/XML/compsets.py"}, {"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable\n\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cimeroot = get_cime_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"scripts\",\"lib\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"),\n 
os.path.join(cimeroot,\"scripts\",\"fortran_unit_testing\",\"python\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip in filepath:\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files\", from_dir=cimeroot, verbose=False).splitlines()\n\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n return files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()\n files_to_check = []\n if files:\n for filearg in files:\n if os.path.exists(filearg):\n files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n pool.close()\n pool.join()\n return dict(results)\n", "path": "scripts/lib/CIME/code_checker.py"}]}
| 2,797 | 339 |
gh_patches_debug_37689
|
rasdani/github-patches
|
git_diff
|
encode__httpx-215
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
How to configure TLS beyond client certificate and CA root certs?
Typically in requests you can use an __HTTPAdapter__ to hijack and set the SSL context and define what ciphers to use; how does one go about doing the same in __httpx__?
</issue>
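For orientation (editorial sketch, not part of the original report): the requests-side pattern being referred to is usually a small `HTTPAdapter` subclass that pushes a caller-built `ssl.SSLContext` down into the underlying urllib3 pool, roughly:

```python
import ssl

import requests
from requests.adapters import HTTPAdapter


class TLSAdapter(HTTPAdapter):
    """Transport adapter that forces a caller-supplied SSLContext."""

    def __init__(self, ssl_context=None, **kwargs):
        self._ssl_context = ssl_context
        super().__init__(**kwargs)

    def init_poolmanager(self, connections, maxsize, block=False, **pool_kwargs):
        pool_kwargs["ssl_context"] = self._ssl_context
        return super().init_poolmanager(connections, maxsize, block=block, **pool_kwargs)


context = ssl.create_default_context()
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

session = requests.Session()
session.mount("https://", TLSAdapter(ssl_context=context))
```

The question is what the equivalent hook is in httpx.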
<code>
[start of httpx/config.py]
1 import ssl
2 import typing
3 from pathlib import Path
4
5 import certifi
6
7 from .__version__ import __version__
8
9 CertTypes = typing.Union[str, typing.Tuple[str, str], typing.Tuple[str, str, str]]
10 VerifyTypes = typing.Union[str, bool]
11 TimeoutTypes = typing.Union[float, typing.Tuple[float, float, float], "TimeoutConfig"]
12
13
14 USER_AGENT = f"python-httpx/{__version__}"
15
16 DEFAULT_CIPHERS = ":".join(
17 [
18 "ECDHE+AESGCM",
19 "ECDHE+CHACHA20",
20 "DHE+AESGCM",
21 "DHE+CHACHA20",
22 "ECDH+AESGCM",
23 "DH+AESGCM",
24 "ECDH+AES",
25 "DH+AES",
26 "RSA+AESGCM",
27 "RSA+AES",
28 "!aNULL",
29 "!eNULL",
30 "!MD5",
31 "!DSS",
32 ]
33 )
34
35
36 class SSLConfig:
37 """
38 SSL Configuration.
39 """
40
41 def __init__(self, *, cert: CertTypes = None, verify: VerifyTypes = True):
42 self.cert = cert
43 self.verify = verify
44
45 self.ssl_context: typing.Optional[ssl.SSLContext] = None
46
47 def __eq__(self, other: typing.Any) -> bool:
48 return (
49 isinstance(other, self.__class__)
50 and self.cert == other.cert
51 and self.verify == other.verify
52 )
53
54 def __repr__(self) -> str:
55 class_name = self.__class__.__name__
56 return f"{class_name}(cert={self.cert}, verify={self.verify})"
57
58 def with_overrides(
59 self, cert: CertTypes = None, verify: VerifyTypes = None
60 ) -> "SSLConfig":
61 cert = self.cert if cert is None else cert
62 verify = self.verify if verify is None else verify
63 if (cert == self.cert) and (verify == self.verify):
64 return self
65 return SSLConfig(cert=cert, verify=verify)
66
67 def load_ssl_context(self) -> ssl.SSLContext:
68 if self.ssl_context is None:
69 self.ssl_context = (
70 self.load_ssl_context_verify()
71 if self.verify
72 else self.load_ssl_context_no_verify()
73 )
74
75 assert self.ssl_context is not None
76 return self.ssl_context
77
78 def load_ssl_context_no_verify(self) -> ssl.SSLContext:
79 """
80 Return an SSL context for unverified connections.
81 """
82 context = self._create_default_ssl_context()
83 context.verify_mode = ssl.CERT_NONE
84 context.check_hostname = False
85 return context
86
87 def load_ssl_context_verify(self) -> ssl.SSLContext:
88 """
89 Return an SSL context for verified connections.
90 """
91 if isinstance(self.verify, bool):
92 ca_bundle_path = DEFAULT_CA_BUNDLE_PATH
93 elif Path(self.verify).exists():
94 ca_bundle_path = Path(self.verify)
95 else:
96 raise IOError(
97 "Could not find a suitable TLS CA certificate bundle, "
98 "invalid path: {}".format(self.verify)
99 )
100
101 context = self._create_default_ssl_context()
102 context.verify_mode = ssl.CERT_REQUIRED
103 context.check_hostname = True
104
105 # Signal to server support for PHA in TLS 1.3. Raises an
106 # AttributeError if only read-only access is implemented.
107 try:
108 context.post_handshake_auth = True # type: ignore
109 except AttributeError: # pragma: nocover
110 pass
111
112 # Disable using 'commonName' for SSLContext.check_hostname
113 # when the 'subjectAltName' extension isn't available.
114 try:
115 context.hostname_checks_common_name = False # type: ignore
116 except AttributeError: # pragma: nocover
117 pass
118
119 if ca_bundle_path.is_file():
120 context.load_verify_locations(cafile=str(ca_bundle_path))
121 elif ca_bundle_path.is_dir():
122 context.load_verify_locations(capath=str(ca_bundle_path))
123
124 if self.cert is not None:
125 if isinstance(self.cert, str):
126 context.load_cert_chain(certfile=self.cert)
127 elif isinstance(self.cert, tuple) and len(self.cert) == 2:
128 context.load_cert_chain(certfile=self.cert[0], keyfile=self.cert[1])
129 elif isinstance(self.cert, tuple) and len(self.cert) == 3:
130 context.load_cert_chain(
131 certfile=self.cert[0],
132 keyfile=self.cert[1],
133 password=self.cert[2], # type: ignore
134 )
135
136 return context
137
138 def _create_default_ssl_context(self) -> ssl.SSLContext:
139 """
140 Creates the default SSLContext object that's used for both verified
141 and unverified connections.
142 """
143 context = ssl.SSLContext(ssl.PROTOCOL_TLS)
144 context.options |= ssl.OP_NO_SSLv2
145 context.options |= ssl.OP_NO_SSLv3
146 context.options |= ssl.OP_NO_TLSv1
147 context.options |= ssl.OP_NO_TLSv1_1
148 context.options |= ssl.OP_NO_COMPRESSION
149 context.set_ciphers(DEFAULT_CIPHERS)
150
151 if ssl.HAS_ALPN:
152 context.set_alpn_protocols(["h2", "http/1.1"])
153 if ssl.HAS_NPN:
154 context.set_npn_protocols(["h2", "http/1.1"])
155
156 return context
157
158
159 class TimeoutConfig:
160 """
161 Timeout values.
162 """
163
164 def __init__(
165 self,
166 timeout: TimeoutTypes = None,
167 *,
168 connect_timeout: float = None,
169 read_timeout: float = None,
170 write_timeout: float = None,
171 ):
172 if timeout is None:
173 self.connect_timeout = connect_timeout
174 self.read_timeout = read_timeout
175 self.write_timeout = write_timeout
176 else:
177 # Specified as a single timeout value
178 assert connect_timeout is None
179 assert read_timeout is None
180 assert write_timeout is None
181 if isinstance(timeout, TimeoutConfig):
182 self.connect_timeout = timeout.connect_timeout
183 self.read_timeout = timeout.read_timeout
184 self.write_timeout = timeout.write_timeout
185 elif isinstance(timeout, tuple):
186 self.connect_timeout = timeout[0]
187 self.read_timeout = timeout[1]
188 self.write_timeout = timeout[2]
189 else:
190 self.connect_timeout = timeout
191 self.read_timeout = timeout
192 self.write_timeout = timeout
193
194 def __eq__(self, other: typing.Any) -> bool:
195 return (
196 isinstance(other, self.__class__)
197 and self.connect_timeout == other.connect_timeout
198 and self.read_timeout == other.read_timeout
199 and self.write_timeout == other.write_timeout
200 )
201
202 def __repr__(self) -> str:
203 class_name = self.__class__.__name__
204 if len({self.connect_timeout, self.read_timeout, self.write_timeout}) == 1:
205 return f"{class_name}(timeout={self.connect_timeout})"
206 return (
207 f"{class_name}(connect_timeout={self.connect_timeout}, "
208 f"read_timeout={self.read_timeout}, write_timeout={self.write_timeout})"
209 )
210
211
212 class PoolLimits:
213 """
214 Limits on the number of connections in a connection pool.
215 """
216
217 def __init__(
218 self,
219 *,
220 soft_limit: int = None,
221 hard_limit: int = None,
222 pool_timeout: float = None,
223 ):
224 self.soft_limit = soft_limit
225 self.hard_limit = hard_limit
226 self.pool_timeout = pool_timeout
227
228 def __eq__(self, other: typing.Any) -> bool:
229 return (
230 isinstance(other, self.__class__)
231 and self.soft_limit == other.soft_limit
232 and self.hard_limit == other.hard_limit
233 and self.pool_timeout == other.pool_timeout
234 )
235
236 def __repr__(self) -> str:
237 class_name = self.__class__.__name__
238 return (
239 f"{class_name}(soft_limit={self.soft_limit}, "
240 f"hard_limit={self.hard_limit}, pool_timeout={self.pool_timeout})"
241 )
242
243
244 DEFAULT_SSL_CONFIG = SSLConfig(cert=None, verify=True)
245 DEFAULT_TIMEOUT_CONFIG = TimeoutConfig(timeout=5.0)
246 DEFAULT_POOL_LIMITS = PoolLimits(soft_limit=10, hard_limit=100, pool_timeout=5.0)
247 DEFAULT_CA_BUNDLE_PATH = Path(certifi.where())
248 DEFAULT_MAX_REDIRECTS = 20
249
[end of httpx/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/httpx/config.py b/httpx/config.py
--- a/httpx/config.py
+++ b/httpx/config.py
@@ -7,7 +7,7 @@
from .__version__ import __version__
CertTypes = typing.Union[str, typing.Tuple[str, str], typing.Tuple[str, str, str]]
-VerifyTypes = typing.Union[str, bool]
+VerifyTypes = typing.Union[str, bool, ssl.SSLContext]
TimeoutTypes = typing.Union[float, typing.Tuple[float, float, float], "TimeoutConfig"]
@@ -40,9 +40,17 @@
def __init__(self, *, cert: CertTypes = None, verify: VerifyTypes = True):
self.cert = cert
- self.verify = verify
- self.ssl_context: typing.Optional[ssl.SSLContext] = None
+ # Allow passing in our own SSLContext object that's pre-configured.
+ # If you do this we assume that you want verify=True as well.
+ ssl_context = None
+ if isinstance(verify, ssl.SSLContext):
+ ssl_context = verify
+ verify = True
+ self._load_client_certs(ssl_context)
+
+ self.ssl_context: typing.Optional[ssl.SSLContext] = ssl_context
+ self.verify: typing.Union[str, bool] = verify
def __eq__(self, other: typing.Any) -> bool:
return (
@@ -121,17 +129,7 @@
elif ca_bundle_path.is_dir():
context.load_verify_locations(capath=str(ca_bundle_path))
- if self.cert is not None:
- if isinstance(self.cert, str):
- context.load_cert_chain(certfile=self.cert)
- elif isinstance(self.cert, tuple) and len(self.cert) == 2:
- context.load_cert_chain(certfile=self.cert[0], keyfile=self.cert[1])
- elif isinstance(self.cert, tuple) and len(self.cert) == 3:
- context.load_cert_chain(
- certfile=self.cert[0],
- keyfile=self.cert[1],
- password=self.cert[2], # type: ignore
- )
+ self._load_client_certs(context)
return context
@@ -155,6 +153,22 @@
return context
+ def _load_client_certs(self, ssl_context: ssl.SSLContext) -> None:
+ """
+ Loads client certificates into our SSLContext object
+ """
+ if self.cert is not None:
+ if isinstance(self.cert, str):
+ ssl_context.load_cert_chain(certfile=self.cert)
+ elif isinstance(self.cert, tuple) and len(self.cert) == 2:
+ ssl_context.load_cert_chain(certfile=self.cert[0], keyfile=self.cert[1])
+ elif isinstance(self.cert, tuple) and len(self.cert) == 3:
+ ssl_context.load_cert_chain(
+ certfile=self.cert[0],
+ keyfile=self.cert[1],
+ password=self.cert[2], # type: ignore
+ )
+
class TimeoutConfig:
"""
|
{"golden_diff": "diff --git a/httpx/config.py b/httpx/config.py\n--- a/httpx/config.py\n+++ b/httpx/config.py\n@@ -7,7 +7,7 @@\n from .__version__ import __version__\n \n CertTypes = typing.Union[str, typing.Tuple[str, str], typing.Tuple[str, str, str]]\n-VerifyTypes = typing.Union[str, bool]\n+VerifyTypes = typing.Union[str, bool, ssl.SSLContext]\n TimeoutTypes = typing.Union[float, typing.Tuple[float, float, float], \"TimeoutConfig\"]\n \n \n@@ -40,9 +40,17 @@\n \n def __init__(self, *, cert: CertTypes = None, verify: VerifyTypes = True):\n self.cert = cert\n- self.verify = verify\n \n- self.ssl_context: typing.Optional[ssl.SSLContext] = None\n+ # Allow passing in our own SSLContext object that's pre-configured.\n+ # If you do this we assume that you want verify=True as well.\n+ ssl_context = None\n+ if isinstance(verify, ssl.SSLContext):\n+ ssl_context = verify\n+ verify = True\n+ self._load_client_certs(ssl_context)\n+\n+ self.ssl_context: typing.Optional[ssl.SSLContext] = ssl_context\n+ self.verify: typing.Union[str, bool] = verify\n \n def __eq__(self, other: typing.Any) -> bool:\n return (\n@@ -121,17 +129,7 @@\n elif ca_bundle_path.is_dir():\n context.load_verify_locations(capath=str(ca_bundle_path))\n \n- if self.cert is not None:\n- if isinstance(self.cert, str):\n- context.load_cert_chain(certfile=self.cert)\n- elif isinstance(self.cert, tuple) and len(self.cert) == 2:\n- context.load_cert_chain(certfile=self.cert[0], keyfile=self.cert[1])\n- elif isinstance(self.cert, tuple) and len(self.cert) == 3:\n- context.load_cert_chain(\n- certfile=self.cert[0],\n- keyfile=self.cert[1],\n- password=self.cert[2], # type: ignore\n- )\n+ self._load_client_certs(context)\n \n return context\n \n@@ -155,6 +153,22 @@\n \n return context\n \n+ def _load_client_certs(self, ssl_context: ssl.SSLContext) -> None:\n+ \"\"\"\n+ Loads client certificates into our SSLContext object\n+ \"\"\"\n+ if self.cert is not None:\n+ if isinstance(self.cert, str):\n+ ssl_context.load_cert_chain(certfile=self.cert)\n+ elif isinstance(self.cert, tuple) and len(self.cert) == 2:\n+ ssl_context.load_cert_chain(certfile=self.cert[0], keyfile=self.cert[1])\n+ elif isinstance(self.cert, tuple) and len(self.cert) == 3:\n+ ssl_context.load_cert_chain(\n+ certfile=self.cert[0],\n+ keyfile=self.cert[1],\n+ password=self.cert[2], # type: ignore\n+ )\n+\n \n class TimeoutConfig:\n \"\"\"\n", "issue": "How to configure TLS beyond client certificate and CA root certs?\nTypically in requests you can use a __HTTPAdapter__ to hijack and set the SSL context and define what ciphers to use, how does one go about doing the same in __httpx__?\n", "before_files": [{"content": "import ssl\nimport typing\nfrom pathlib import Path\n\nimport certifi\n\nfrom .__version__ import __version__\n\nCertTypes = typing.Union[str, typing.Tuple[str, str], typing.Tuple[str, str, str]]\nVerifyTypes = typing.Union[str, bool]\nTimeoutTypes = typing.Union[float, typing.Tuple[float, float, float], \"TimeoutConfig\"]\n\n\nUSER_AGENT = f\"python-httpx/{__version__}\"\n\nDEFAULT_CIPHERS = \":\".join(\n [\n \"ECDHE+AESGCM\",\n \"ECDHE+CHACHA20\",\n \"DHE+AESGCM\",\n \"DHE+CHACHA20\",\n \"ECDH+AESGCM\",\n \"DH+AESGCM\",\n \"ECDH+AES\",\n \"DH+AES\",\n \"RSA+AESGCM\",\n \"RSA+AES\",\n \"!aNULL\",\n \"!eNULL\",\n \"!MD5\",\n \"!DSS\",\n ]\n)\n\n\nclass SSLConfig:\n \"\"\"\n SSL Configuration.\n \"\"\"\n\n def __init__(self, *, cert: CertTypes = None, verify: VerifyTypes = True):\n self.cert = cert\n self.verify = verify\n\n self.ssl_context: 
typing.Optional[ssl.SSLContext] = None\n\n def __eq__(self, other: typing.Any) -> bool:\n return (\n isinstance(other, self.__class__)\n and self.cert == other.cert\n and self.verify == other.verify\n )\n\n def __repr__(self) -> str:\n class_name = self.__class__.__name__\n return f\"{class_name}(cert={self.cert}, verify={self.verify})\"\n\n def with_overrides(\n self, cert: CertTypes = None, verify: VerifyTypes = None\n ) -> \"SSLConfig\":\n cert = self.cert if cert is None else cert\n verify = self.verify if verify is None else verify\n if (cert == self.cert) and (verify == self.verify):\n return self\n return SSLConfig(cert=cert, verify=verify)\n\n def load_ssl_context(self) -> ssl.SSLContext:\n if self.ssl_context is None:\n self.ssl_context = (\n self.load_ssl_context_verify()\n if self.verify\n else self.load_ssl_context_no_verify()\n )\n\n assert self.ssl_context is not None\n return self.ssl_context\n\n def load_ssl_context_no_verify(self) -> ssl.SSLContext:\n \"\"\"\n Return an SSL context for unverified connections.\n \"\"\"\n context = self._create_default_ssl_context()\n context.verify_mode = ssl.CERT_NONE\n context.check_hostname = False\n return context\n\n def load_ssl_context_verify(self) -> ssl.SSLContext:\n \"\"\"\n Return an SSL context for verified connections.\n \"\"\"\n if isinstance(self.verify, bool):\n ca_bundle_path = DEFAULT_CA_BUNDLE_PATH\n elif Path(self.verify).exists():\n ca_bundle_path = Path(self.verify)\n else:\n raise IOError(\n \"Could not find a suitable TLS CA certificate bundle, \"\n \"invalid path: {}\".format(self.verify)\n )\n\n context = self._create_default_ssl_context()\n context.verify_mode = ssl.CERT_REQUIRED\n context.check_hostname = True\n\n # Signal to server support for PHA in TLS 1.3. Raises an\n # AttributeError if only read-only access is implemented.\n try:\n context.post_handshake_auth = True # type: ignore\n except AttributeError: # pragma: nocover\n pass\n\n # Disable using 'commonName' for SSLContext.check_hostname\n # when the 'subjectAltName' extension isn't available.\n try:\n context.hostname_checks_common_name = False # type: ignore\n except AttributeError: # pragma: nocover\n pass\n\n if ca_bundle_path.is_file():\n context.load_verify_locations(cafile=str(ca_bundle_path))\n elif ca_bundle_path.is_dir():\n context.load_verify_locations(capath=str(ca_bundle_path))\n\n if self.cert is not None:\n if isinstance(self.cert, str):\n context.load_cert_chain(certfile=self.cert)\n elif isinstance(self.cert, tuple) and len(self.cert) == 2:\n context.load_cert_chain(certfile=self.cert[0], keyfile=self.cert[1])\n elif isinstance(self.cert, tuple) and len(self.cert) == 3:\n context.load_cert_chain(\n certfile=self.cert[0],\n keyfile=self.cert[1],\n password=self.cert[2], # type: ignore\n )\n\n return context\n\n def _create_default_ssl_context(self) -> ssl.SSLContext:\n \"\"\"\n Creates the default SSLContext object that's used for both verified\n and unverified connections.\n \"\"\"\n context = ssl.SSLContext(ssl.PROTOCOL_TLS)\n context.options |= ssl.OP_NO_SSLv2\n context.options |= ssl.OP_NO_SSLv3\n context.options |= ssl.OP_NO_TLSv1\n context.options |= ssl.OP_NO_TLSv1_1\n context.options |= ssl.OP_NO_COMPRESSION\n context.set_ciphers(DEFAULT_CIPHERS)\n\n if ssl.HAS_ALPN:\n context.set_alpn_protocols([\"h2\", \"http/1.1\"])\n if ssl.HAS_NPN:\n context.set_npn_protocols([\"h2\", \"http/1.1\"])\n\n return context\n\n\nclass TimeoutConfig:\n \"\"\"\n Timeout values.\n \"\"\"\n\n def __init__(\n self,\n timeout: TimeoutTypes = None,\n 
*,\n connect_timeout: float = None,\n read_timeout: float = None,\n write_timeout: float = None,\n ):\n if timeout is None:\n self.connect_timeout = connect_timeout\n self.read_timeout = read_timeout\n self.write_timeout = write_timeout\n else:\n # Specified as a single timeout value\n assert connect_timeout is None\n assert read_timeout is None\n assert write_timeout is None\n if isinstance(timeout, TimeoutConfig):\n self.connect_timeout = timeout.connect_timeout\n self.read_timeout = timeout.read_timeout\n self.write_timeout = timeout.write_timeout\n elif isinstance(timeout, tuple):\n self.connect_timeout = timeout[0]\n self.read_timeout = timeout[1]\n self.write_timeout = timeout[2]\n else:\n self.connect_timeout = timeout\n self.read_timeout = timeout\n self.write_timeout = timeout\n\n def __eq__(self, other: typing.Any) -> bool:\n return (\n isinstance(other, self.__class__)\n and self.connect_timeout == other.connect_timeout\n and self.read_timeout == other.read_timeout\n and self.write_timeout == other.write_timeout\n )\n\n def __repr__(self) -> str:\n class_name = self.__class__.__name__\n if len({self.connect_timeout, self.read_timeout, self.write_timeout}) == 1:\n return f\"{class_name}(timeout={self.connect_timeout})\"\n return (\n f\"{class_name}(connect_timeout={self.connect_timeout}, \"\n f\"read_timeout={self.read_timeout}, write_timeout={self.write_timeout})\"\n )\n\n\nclass PoolLimits:\n \"\"\"\n Limits on the number of connections in a connection pool.\n \"\"\"\n\n def __init__(\n self,\n *,\n soft_limit: int = None,\n hard_limit: int = None,\n pool_timeout: float = None,\n ):\n self.soft_limit = soft_limit\n self.hard_limit = hard_limit\n self.pool_timeout = pool_timeout\n\n def __eq__(self, other: typing.Any) -> bool:\n return (\n isinstance(other, self.__class__)\n and self.soft_limit == other.soft_limit\n and self.hard_limit == other.hard_limit\n and self.pool_timeout == other.pool_timeout\n )\n\n def __repr__(self) -> str:\n class_name = self.__class__.__name__\n return (\n f\"{class_name}(soft_limit={self.soft_limit}, \"\n f\"hard_limit={self.hard_limit}, pool_timeout={self.pool_timeout})\"\n )\n\n\nDEFAULT_SSL_CONFIG = SSLConfig(cert=None, verify=True)\nDEFAULT_TIMEOUT_CONFIG = TimeoutConfig(timeout=5.0)\nDEFAULT_POOL_LIMITS = PoolLimits(soft_limit=10, hard_limit=100, pool_timeout=5.0)\nDEFAULT_CA_BUNDLE_PATH = Path(certifi.where())\nDEFAULT_MAX_REDIRECTS = 20\n", "path": "httpx/config.py"}]}
| 3,035 | 691 |
gh_patches_debug_50810
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-6134
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PubSub protobuf dependency requirements incorrect
I think the minimum version for the `protobuf` package dependency is not correct. google-cloud-python has the version requirements as protobuf>=3.0.0, but it fails to import when using version up to and including protobuf==3.3.0. I'm not sure what the exact correct version is, but the last version of google-cloud-pubsub to work with protobuf==3.3.0 is google-cloud-pubsub==0.35.4. I believe after this commit (https://github.com/GoogleCloudPlatform/google-cloud-python/commit/371333a51165e99d4d02876b1ef133618485b6fc#diff-29280288794caf553b0b008084a0e854), a protobuf version >3.3.0 is required:
Python version
```
$ python --version
Python 2.7.15rc1
```
Package versions:
```
$ pip list | grep -E '(cloud|protobuf)'
google-cloud-core 0.28.1
google-cloud-datastore 1.7.0
google-cloud-pubsub 0.38.0
google-cloud-storage 1.12.0
protobuf 3.3.0
```
Getting a stack trace just from importing pubsub (in IPython here)
```
In [1]: from google.cloud import pubsub
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-8fba37b708ad> in <module>()
----> 1 from google.cloud import pubsub
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub.py in <module>()
17 from __future__ import absolute_import
18
---> 19 from google.cloud.pubsub_v1 import PublisherClient
20 from google.cloud.pubsub_v1 import SubscriberClient
21 from google.cloud.pubsub_v1 import types
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/__init__.py in <module>()
15 from __future__ import absolute_import
16
---> 17 from google.cloud.pubsub_v1 import types
18 from google.cloud.pubsub_v1 import publisher
19 from google.cloud.pubsub_v1 import subscriber
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/types.py in <module>()
28
29 from google.api_core.protobuf_helpers import get_messages
---> 30 from google.cloud.pubsub_v1.proto import pubsub_pb2
31
32
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/proto/pubsub_pb2.py in <module>()
45 message_type=None, enum_type=None, containing_type=None,
46 is_extension=False, extension_scope=None,
---> 47 options=None, file=DESCRIPTOR),
48 ],
49 extensions=[
TypeError: __new__() got an unexpected keyword argument 'file'
```
Snipped the pubsub section from pipdeptree output showing the protobuf requirement is >=3.0.0:
```
- google-cloud-pubsub [required: Any, installed: 0.38.0]
- enum34 [required: Any, installed: 1.1.6]
- google-api-core [required: >=1.1.0,<2.0.0dev, installed: 1.4.0]
- futures [required: >=3.2.0, installed: 3.2.0]
- google-auth [required: >=0.4.0,<2.0.0dev, installed: 1.5.1]
- cachetools [required: >=2.0.0, installed: 2.1.0]
- pyasn1-modules [required: >=0.2.1, installed: 0.2.2]
- pyasn1 [required: >=0.4.1,<0.5.0, installed: 0.4.4]
- rsa [required: >=3.1.4, installed: 4.0]
- pyasn1 [required: >=0.1.3, installed: 0.4.4]
- six [required: >=1.9.0, installed: 1.11.0]
- googleapis-common-protos [required: >=1.5.3,<2.0dev, installed: 1.5.3]
- protobuf [required: >=3.0.0, installed: 3.3.0]
- setuptools [required: Any, installed: 40.4.3]
- six [required: >=1.9, installed: 1.11.0]
- protobuf [required: >=3.0.0, installed: 3.3.0]
- setuptools [required: Any, installed: 40.4.3]
- six [required: >=1.9, installed: 1.11.0]
```
</issue>
<code>
[start of pubsub/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = 'google-cloud-pubsub'
24 description = 'Google Cloud Pub/Sub API client library'
25 version = '0.38.0'
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = 'Development Status :: 4 - Beta'
31 dependencies = [
32 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
33 'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',
34 'enum34; python_version < "3.4"',
35 ]
36 extras = {
37 }
38
39
40 # Setup boilerplate below this line.
41
42 package_root = os.path.abspath(os.path.dirname(__file__))
43
44 readme_filename = os.path.join(package_root, 'README.rst')
45 with io.open(readme_filename, encoding='utf-8') as readme_file:
46 readme = readme_file.read()
47
48 # Only include packages under the 'google' namespace. Do not include tests,
49 # benchmarks, etc.
50 packages = [
51 package for package in setuptools.find_packages()
52 if package.startswith('google')]
53
54 # Determine which namespaces are needed.
55 namespaces = ['google']
56 if 'google.cloud' in packages:
57 namespaces.append('google.cloud')
58
59
60 setuptools.setup(
61 name=name,
62 version=version,
63 description=description,
64 long_description=readme,
65 author='Google LLC',
66 author_email='[email protected]',
67 license='Apache 2.0',
68 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
69 classifiers=[
70 release_status,
71 'Intended Audience :: Developers',
72 'License :: OSI Approved :: Apache Software License',
73 'Programming Language :: Python',
74 'Programming Language :: Python :: 2',
75 'Programming Language :: Python :: 2.7',
76 'Programming Language :: Python :: 3',
77 'Programming Language :: Python :: 3.4',
78 'Programming Language :: Python :: 3.5',
79 'Programming Language :: Python :: 3.6',
80 'Operating System :: OS Independent',
81 'Topic :: Internet',
82 ],
83 platforms='Posix; MacOS X; Windows',
84 packages=packages,
85 namespace_packages=namespaces,
86 install_requires=dependencies,
87 extras_require=extras,
88 include_package_data=True,
89 zip_safe=False,
90 )
91
[end of pubsub/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pubsub/setup.py b/pubsub/setup.py
--- a/pubsub/setup.py
+++ b/pubsub/setup.py
@@ -29,7 +29,7 @@
# 'Development Status :: 5 - Production/Stable'
release_status = 'Development Status :: 4 - Beta'
dependencies = [
- 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
+ 'google-api-core[grpc] >= 1.4.1, < 2.0.0dev',
'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',
'enum34; python_version < "3.4"',
]
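The fix raises the `google-api-core` floor instead of pinning `protobuf` directly; the working assumption (consistent with the traceback, though not stated in the diff) is that `google-api-core >= 1.4.1` pulls in a protobuf new enough to accept the `file=` keyword that the generated `pubsub_pb2.py` passes to its descriptors. A rough post-install sanity check along those lines (the exact version threshold is an assumption; the traceback only proves that 3.3.0 is too old):

```python
import pkg_resources

# protobuf 3.3.0 rejects the `file=` descriptor argument per the traceback above,
# so treat anything at or below 3.3.0 as too old.
protobuf_version = pkg_resources.get_distribution("protobuf").parsed_version
assert protobuf_version > pkg_resources.parse_version("3.3.0"), protobuf_version

from google.cloud import pubsub  # noqa: E402  -- raised TypeError under protobuf 3.3.0
```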
|
{"golden_diff": "diff --git a/pubsub/setup.py b/pubsub/setup.py\n--- a/pubsub/setup.py\n+++ b/pubsub/setup.py\n@@ -29,7 +29,7 @@\n # 'Development Status :: 5 - Production/Stable'\n release_status = 'Development Status :: 4 - Beta'\n dependencies = [\n- 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n+ 'google-api-core[grpc] >= 1.4.1, < 2.0.0dev',\n 'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',\n 'enum34; python_version < \"3.4\"',\n ]\n", "issue": "PubSub protobuf dependency requirements incorrect\nI think the minimum version for the `protobuf` package dependency is not correct. google-cloud-python has the version requirements as protobuf>=3.0.0, but it fails to import when using version up to and including protobuf==3.3.0. I'm not sure what the exact correct version is, but the last version of google-cloud-pubsub to work with protobuf==3.3.0 is google-cloud-pubsub==0.35.4. I believe after this commit (https://github.com/GoogleCloudPlatform/google-cloud-python/commit/371333a51165e99d4d02876b1ef133618485b6fc#diff-29280288794caf553b0b008084a0e854), a protobuf version >3.3.0 is required:\r\n\r\nPython version\r\n```\r\n$ python --version\r\nPython 2.7.15rc1\r\n```\r\n\r\nPackage versions:\r\n```\r\n$ pip list | grep -E '(cloud|protobuf)'\r\ngoogle-cloud-core 0.28.1 \r\ngoogle-cloud-datastore 1.7.0 \r\ngoogle-cloud-pubsub 0.38.0 \r\ngoogle-cloud-storage 1.12.0 \r\nprotobuf 3.3.0 \r\n```\r\n\r\nGetting a stack track just importing pubsub (in ipython here)\r\n```\r\nIn [1]: from google.cloud import pubsub\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-1-8fba37b708ad> in <module>()\r\n----> 1 from google.cloud import pubsub\r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub.py in <module>()\r\n 17 from __future__ import absolute_import\r\n 18 \r\n---> 19 from google.cloud.pubsub_v1 import PublisherClient\r\n 20 from google.cloud.pubsub_v1 import SubscriberClient\r\n 21 from google.cloud.pubsub_v1 import types\r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/__init__.py in <module>()\r\n 15 from __future__ import absolute_import\r\n 16 \r\n---> 17 from google.cloud.pubsub_v1 import types\r\n 18 from google.cloud.pubsub_v1 import publisher\r\n 19 from google.cloud.pubsub_v1 import subscriber\r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/types.py in <module>()\r\n 28 \r\n 29 from google.api_core.protobuf_helpers import get_messages\r\n---> 30 from google.cloud.pubsub_v1.proto import pubsub_pb2\r\n 31 \r\n 32 \r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/proto/pubsub_pb2.py in <module>()\r\n 45 message_type=None, enum_type=None, containing_type=None,\r\n 46 is_extension=False, extension_scope=None,\r\n---> 47 options=None, file=DESCRIPTOR),\r\n 48 ],\r\n 49 extensions=[\r\n\r\nTypeError: __new__() got an unexpected keyword argument 'file'\r\n```\r\n\r\nSnipped the pubsub section from pipdeptree output showing the protobuf requirement is >=3.0.0:\r\n```\r\n - google-cloud-pubsub [required: Any, installed: 0.38.0]\r\n - enum34 [required: Any, installed: 1.1.6]\r\n - google-api-core [required: >=1.1.0,<2.0.0dev, installed: 1.4.0]\r\n - futures [required: >=3.2.0, installed: 3.2.0]\r\n 
- google-auth [required: >=0.4.0,<2.0.0dev, installed: 1.5.1]\r\n - cachetools [required: >=2.0.0, installed: 2.1.0]\r\n - pyasn1-modules [required: >=0.2.1, installed: 0.2.2]\r\n - pyasn1 [required: >=0.4.1,<0.5.0, installed: 0.4.4]\r\n - rsa [required: >=3.1.4, installed: 4.0]\r\n - pyasn1 [required: >=0.1.3, installed: 0.4.4]\r\n - six [required: >=1.9.0, installed: 1.11.0]\r\n - googleapis-common-protos [required: >=1.5.3,<2.0dev, installed: 1.5.3]\r\n - protobuf [required: >=3.0.0, installed: 3.3.0]\r\n - setuptools [required: Any, installed: 40.4.3]\r\n - six [required: >=1.9, installed: 1.11.0]\r\n - protobuf [required: >=3.0.0, installed: 3.3.0]\r\n - setuptools [required: Any, installed: 40.4.3]\r\n - six [required: >=1.9, installed: 1.11.0]\r\n```\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-pubsub'\ndescription = 'Google Cloud Pub/Sub API client library'\nversion = '0.38.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 4 - Beta'\ndependencies = [\n 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n 'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',\n 'enum34; python_version < \"3.4\"',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "pubsub/setup.py"}]}
| 2,625 | 159 |
gh_patches_debug_7673
|
rasdani/github-patches
|
git_diff
|
translate__pootle-4132
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running update_tmserver with --dry-run but no existing LocalTM will fail
If we try to build the LocalTM but don't specify `--rebuild`, our initial import will fail because the revision counter does not exist.
``` pytb
$ pootle update_tmserver -v 3 --dry-run
2015-09-30 10:55:33,485 INFO Loading custom settings from '/root/.pootle/pootle.conf'...
2015-09-30 13:55:33,704 INFO Using Python PO
System check identified some issues:
WARNINGS:
?: (pootle.W017) There are user accounts with duplicate emails. This will not be allowed in Pootle 2.8.
HINT: Try using 'pootle find_duplicate_emails', and then update user emails with 'pootle update_user_email username email'. You might also want to consider using pootle merge_user or purge_user commands
Traceback (most recent call last):
File "/var/www/pootle/env/bin/pootle", line 11, in <module>
sys.exit(main())
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/runner.py", line 309, in main
django_settings_module='pootle.settings')
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/runner.py", line 289, in run_app
management.execute_from_command_line(command)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/apps/pootle_app/management/commands/update_tmserver.py", line 152, in handle
'field': 'revision'
File "/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 69, in _wrapped
return func(*args, params=params, **kwargs)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 506, in search
params=params, body=body)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/transport.py", line 307, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 89, in perform_request
self._raise_error(response.status, raw_data)
File "/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 105, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.RequestError: TransportError(400, u'SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][0]: SearchParseException[[translations][0]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"match_all": {}}, "facets": {"stat1": {"statistical": {"field": "revision"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][1]: SearchParseException[[translations][1]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"match_all": {}}, "facets": {"stat1": {"statistical": {"field": "revision"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][2]: SearchParseException[[translations][2]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"match_all": {}}, "facets": {"stat1": {"statistical": {"field": "revision"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][3]: SearchParseException[[translations][3]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"match_all": {}}, "facets": {"stat1": {"statistical": {"field": "revision"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][4]: SearchParseException[[translations][4]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"match_all": {}}, "facets": {"stat1": {"statistical": {"field": "revision"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }]')
```
</issue>
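For orientation (editorial note, not part of the reported issue): the call that fails is the last-indexed-revision lookup in `update_tmserver.py`, shown in full below. It runs an Elasticsearch statistical facet over the `revision` field, and when nothing has ever been indexed there is no mapping for that field, which is exactly the `FacetPhaseExecutionException[... No mapping found for field [revision]]` in the traceback. Trimmed to its essentials, the failing request is:

```python
# Trimmed from the handle() method below; `es` is the Elasticsearch client and
# INDEX_NAME comes from settings.POOTLE_TM_SERVER['default']['INDEX_NAME'].
result = es.search(
    index=INDEX_NAME,
    body={
        'query': {'match_all': {}},
        'facets': {'stat1': {'statistical': {'field': 'revision'}}},
    },
)
last_indexed_revision = result['facets']['stat1']['max']
```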
<code>
[start of pootle/apps/pootle_app/management/commands/update_tmserver.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 from hashlib import md5
11 from optparse import make_option
12 import os
13 import sys
14
15 # This must be run before importing Django.
16 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'
17
18 from elasticsearch import helpers, Elasticsearch
19
20 from django.conf import settings
21 from django.core.management.base import BaseCommand, CommandError
22
23 from pootle_store.models import Unit
24
25
26 BULK_CHUNK_SIZE = 5000
27
28
29 class Command(BaseCommand):
30 help = "Load Local Translation Memory"
31 option_list = BaseCommand.option_list + (
32 make_option('--overwrite',
33 action="store_true",
34 dest='overwrite',
35 default=False,
36 help='Process all items, not just the new ones (useful to '
37 'overwrite properties while keeping the index in a '
38 'working condition)'),
39 make_option('--rebuild',
40 action="store_true",
41 dest='rebuild',
42 default=False,
43 help='Drop the entire index on start and update '
44 'everything from scratch'),
45 make_option('--dry-run',
46 action="store_true",
47 dest='dry_run',
48 default=False,
49 help='Report only the number of translations to index '
50 'and quit'),
51 )
52
53 def _parse_translations(self, **options):
54
55 units_qs = Unit.simple_objects \
56 .exclude(target_f__isnull=True) \
57 .exclude(target_f__exact='') \
58 .filter(revision__gt=self.last_indexed_revision) \
59 .select_related(
60 'submitted_by',
61 'store',
62 'store__translation_project__project',
63 'store__translation_project__language'
64 ).values(
65 'id',
66 'revision',
67 'source_f',
68 'target_f',
69 'submitted_by__username',
70 'submitted_by__full_name',
71 'submitted_by__email',
72 'store__translation_project__project__fullname',
73 'store__pootle_path',
74 'store__translation_project__language__code'
75 ).order_by()
76
77 total = units_qs.count()
78
79 if total == 0:
80 self.stdout.write("No translations to index")
81 sys.exit()
82
83 self.stdout.write("%s translations to index" % total)
84
85 if options['dry_run']:
86 sys.exit()
87
88 self.stdout.write("")
89
90 for i, unit in enumerate(units_qs.iterator(), start=1):
91 fullname = (unit['submitted_by__full_name'] or
92 unit['submitted_by__username'])
93 project = unit['store__translation_project__project__fullname']
94
95 email_md5 = None
96 if unit['submitted_by__email']:
97 email_md5 = md5(unit['submitted_by__email']).hexdigest()
98
99 if (i % 1000 == 0) or (i == total):
100 percent = "%.1f" % (i * 100.0 / total)
101 self.stdout.write("%s (%s%%)" % (i, percent), ending='\r')
102 self.stdout.flush()
103
104 yield {
105 "_index": self.INDEX_NAME,
106 "_type": unit['store__translation_project__language__code'],
107 "_id": unit['id'],
108 'revision': int(unit['revision']),
109 'project': project,
110 'path': unit['store__pootle_path'],
111 'username': unit['submitted_by__username'],
112 'fullname': fullname,
113 'email_md5': email_md5,
114 'source': unit['source_f'],
115 'target': unit['target_f'],
116 }
117
118 if i != total:
119 self.stdout.write("Expected %d, loaded %d." % (total, i))
120
121
122 def handle(self, *args, **options):
123 if not getattr(settings, 'POOTLE_TM_SERVER', False):
124 raise CommandError("POOTLE_TM_SERVER is missing from your settings.")
125
126 self.INDEX_NAME = settings.POOTLE_TM_SERVER['default']['INDEX_NAME']
127 es = Elasticsearch([{
128 'host': settings.POOTLE_TM_SERVER['default']['HOST'],
129 'port': settings.POOTLE_TM_SERVER['default']['PORT']
130 }],
131 retry_on_timeout=True
132 )
133
134 self.last_indexed_revision = -1
135
136 if options['rebuild'] and not options['dry_run']:
137 if es.indices.exists(self.INDEX_NAME):
138 es.indices.delete(index=self.INDEX_NAME)
139
140 if (not options['rebuild'] and
141 not options['overwrite'] and
142 es.indices.exists(self.INDEX_NAME)):
143 result = es.search(
144 index=self.INDEX_NAME,
145 body={
146 'query': {
147 'match_all': {}
148 },
149 'facets': {
150 'stat1': {
151 'statistical': {
152 'field': 'revision'
153 }
154 }
155 }
156 }
157 )
158 self.last_indexed_revision = result['facets']['stat1']['max']
159
160 self.stdout.write("Last indexed revision = %s" % self.last_indexed_revision)
161
162 success, _ = helpers.bulk(es, self._parse_translations(**options))
163
[end of pootle/apps/pootle_app/management/commands/update_tmserver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pootle/apps/pootle_app/management/commands/update_tmserver.py b/pootle/apps/pootle_app/management/commands/update_tmserver.py
--- a/pootle/apps/pootle_app/management/commands/update_tmserver.py
+++ b/pootle/apps/pootle_app/management/commands/update_tmserver.py
@@ -137,6 +137,9 @@
if es.indices.exists(self.INDEX_NAME):
es.indices.delete(index=self.INDEX_NAME)
+ if not options['dry_run'] and not es.indices.exists(self.INDEX_NAME):
+ es.indices.create(index=self.INDEX_NAME)
+
if (not options['rebuild'] and
not options['overwrite'] and
es.indices.exists(self.INDEX_NAME)):
|
{"golden_diff": "diff --git a/pootle/apps/pootle_app/management/commands/update_tmserver.py b/pootle/apps/pootle_app/management/commands/update_tmserver.py\n--- a/pootle/apps/pootle_app/management/commands/update_tmserver.py\n+++ b/pootle/apps/pootle_app/management/commands/update_tmserver.py\n@@ -137,6 +137,9 @@\n if es.indices.exists(self.INDEX_NAME):\n es.indices.delete(index=self.INDEX_NAME)\n \n+ if not options['dry_run'] and not es.indices.exists(self.INDEX_NAME):\n+ es.indices.create(index=self.INDEX_NAME)\n+\n if (not options['rebuild'] and\n not options['overwrite'] and\n es.indices.exists(self.INDEX_NAME)):\n", "issue": "Running update_tmserver with --dry-run but no existing LocalTM will fail\nIf we try to build the LocalTM but don't specify `--rebuild` our initial import will fail because the revision counter does not exist.\n\n``` pytb\n$ pootle update_tmserver -v 3 --dry-run \n2015-09-30 10:55:33,485 INFO Loading custom settings from '/root/.pootle/pootle.conf'...\n2015-09-30 13:55:33,704 INFO Using Python PO\nSystem check identified some issues:\n\nWARNINGS:\n?: (pootle.W017) There are user accounts with duplicate emails. This will not be allowed in Pootle 2.8.\n HINT: Try using 'pootle find_duplicate_emails', and then update user emails with 'pootle update_user_email username email'. You might also want to consider using pootle merge_user or purge_user commands\nTraceback (most recent call last):\n File \"/var/www/pootle/env/bin/pootle\", line 11, in <module>\n sys.exit(main())\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/runner.py\", line 309, in main\n django_settings_module='pootle.settings')\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/runner.py\", line 289, in run_app\n management.execute_from_command_line(command)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 385, in execute_from_command_line\n utility.execute()\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 377, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/base.py\", line 288, in run_from_argv\n self.execute(*args, **options.__dict__)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/django/core/management/base.py\", line 338, in execute\n output = self.handle(*args, **options)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/pootle/apps/pootle_app/management/commands/update_tmserver.py\", line 152, in handle\n 'field': 'revision'\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/client/utils.py\", line 69, in _wrapped\n return func(*args, params=params, **kwargs)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/client/__init__.py\", line 506, in search\n params=params, body=body)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/transport.py\", line 307, in perform_request\n status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py\", line 89, in perform_request\n self._raise_error(response.status, raw_data)\n File \"/var/www/pootle/env/local/lib/python2.7/site-packages/elasticsearch/connection/base.py\", line 105, in _raise_error\n raise HTTP_EXCEPTIONS.get(status_code, 
TransportError)(status_code, error_message, additional_info)\nelasticsearch.exceptions.RequestError: TransportError(400, u'SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][0]: SearchParseException[[translations][0]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\": {\"match_all\": {}}, \"facets\": {\"stat1\": {\"statistical\": {\"field\": \"revision\"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][1]: SearchParseException[[translations][1]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\": {\"match_all\": {}}, \"facets\": {\"stat1\": {\"statistical\": {\"field\": \"revision\"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][2]: SearchParseException[[translations][2]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\": {\"match_all\": {}}, \"facets\": {\"stat1\": {\"statistical\": {\"field\": \"revision\"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][3]: SearchParseException[[translations][3]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\": {\"match_all\": {}}, \"facets\": {\"stat1\": {\"statistical\": {\"field\": \"revision\"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }{[aQHk0CPtT1K_ZZ2YJG8rjQ][translations][4]: SearchParseException[[translations][4]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\": {\"match_all\": {}}, \"facets\": {\"stat1\": {\"statistical\": {\"field\": \"revision\"}}}}]]]; nested: FacetPhaseExecutionException[Facet [stat1]: No mapping found for field [revision]]; }]')\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom hashlib import md5\nfrom optparse import make_option\nimport os\nimport sys\n\n# This must be run before importing Django.\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom elasticsearch import helpers, Elasticsearch\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom pootle_store.models import Unit\n\n\nBULK_CHUNK_SIZE = 5000\n\n\nclass Command(BaseCommand):\n help = \"Load Local Translation Memory\"\n option_list = BaseCommand.option_list + (\n make_option('--overwrite',\n action=\"store_true\",\n dest='overwrite',\n default=False,\n help='Process all items, not just the new ones (useful to '\n 'overwrite properties while keeping the index in a '\n 'working condition)'),\n make_option('--rebuild',\n action=\"store_true\",\n dest='rebuild',\n default=False,\n help='Drop the entire index on start and update '\n 'everything from scratch'),\n make_option('--dry-run',\n action=\"store_true\",\n dest='dry_run',\n default=False,\n help='Report only the number of translations to index '\n 'and quit'),\n )\n\n def _parse_translations(self, **options):\n\n units_qs = Unit.simple_objects \\\n .exclude(target_f__isnull=True) \\\n .exclude(target_f__exact='') \\\n .filter(revision__gt=self.last_indexed_revision) \\\n .select_related(\n 'submitted_by',\n 'store',\n 'store__translation_project__project',\n 'store__translation_project__language'\n ).values(\n 'id',\n 'revision',\n 'source_f',\n 'target_f',\n 'submitted_by__username',\n 'submitted_by__full_name',\n 'submitted_by__email',\n 'store__translation_project__project__fullname',\n 'store__pootle_path',\n 'store__translation_project__language__code'\n ).order_by()\n\n total = units_qs.count()\n\n if total == 0:\n self.stdout.write(\"No translations to index\")\n sys.exit()\n\n self.stdout.write(\"%s translations to index\" % total)\n\n if options['dry_run']:\n sys.exit()\n\n self.stdout.write(\"\")\n\n for i, unit in enumerate(units_qs.iterator(), start=1):\n fullname = (unit['submitted_by__full_name'] or\n unit['submitted_by__username'])\n project = unit['store__translation_project__project__fullname']\n\n email_md5 = None\n if unit['submitted_by__email']:\n email_md5 = md5(unit['submitted_by__email']).hexdigest()\n\n if (i % 1000 == 0) or (i == total):\n percent = \"%.1f\" % (i * 100.0 / total)\n self.stdout.write(\"%s (%s%%)\" % (i, percent), ending='\\r')\n self.stdout.flush()\n\n yield {\n \"_index\": self.INDEX_NAME,\n \"_type\": unit['store__translation_project__language__code'],\n \"_id\": unit['id'],\n 'revision': int(unit['revision']),\n 'project': project,\n 'path': unit['store__pootle_path'],\n 'username': unit['submitted_by__username'],\n 'fullname': fullname,\n 'email_md5': email_md5,\n 'source': unit['source_f'],\n 'target': unit['target_f'],\n }\n\n if i != total:\n self.stdout.write(\"Expected %d, loaded %d.\" % (total, i))\n\n\n def handle(self, *args, **options):\n if not getattr(settings, 'POOTLE_TM_SERVER', False):\n raise CommandError(\"POOTLE_TM_SERVER is missing from your settings.\")\n\n self.INDEX_NAME = settings.POOTLE_TM_SERVER['default']['INDEX_NAME']\n es = Elasticsearch([{\n 'host': settings.POOTLE_TM_SERVER['default']['HOST'],\n 'port': settings.POOTLE_TM_SERVER['default']['PORT']\n }],\n retry_on_timeout=True\n )\n\n self.last_indexed_revision = -1\n\n if options['rebuild'] and not options['dry_run']:\n if 
es.indices.exists(self.INDEX_NAME):\n es.indices.delete(index=self.INDEX_NAME)\n\n if (not options['rebuild'] and\n not options['overwrite'] and\n es.indices.exists(self.INDEX_NAME)):\n result = es.search(\n index=self.INDEX_NAME,\n body={\n 'query': {\n 'match_all': {}\n },\n 'facets': {\n 'stat1': {\n 'statistical': {\n 'field': 'revision'\n }\n }\n }\n }\n )\n self.last_indexed_revision = result['facets']['stat1']['max']\n\n self.stdout.write(\"Last indexed revision = %s\" % self.last_indexed_revision)\n\n success, _ = helpers.bulk(es, self._parse_translations(**options))\n", "path": "pootle/apps/pootle_app/management/commands/update_tmserver.py"}]}
| 3,501 | 170 |
gh_patches_debug_61640
|
rasdani/github-patches
|
git_diff
|
pallets__click-773
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dynamic bash autocompletion should escape blanks
Thanks for #755, wonderful feature! I found an issue when my completion callback returns entries that have (legitimate) blanks in them. In this case, entries get split into separate arguments, whereas I would have expected that blanks are escaped with a backslash (as bash does by default).
</issue>
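A minimal sketch of the splitting problem described in the issue above (the completion entries are made up for illustration): the program echoes one completion per line, but with bash's default `IFS` the words assigned to `COMPREPLY` are split on spaces as well as newlines, so an entry containing a blank becomes several suggestions. Restricting `IFS` to a newline inside the completion function (as the patch further down does) keeps each echoed line intact.

```python
# Illustration only: approximate bash's default word splitting in Python.
# The completion entries below are hypothetical.
items = ["My Documents", "notes.txt"]

echoed = "\n".join(items)   # what the click program prints for COMPREPLY
words = echoed.split()      # default IFS splits on spaces, tabs and newlines
print(words)                # ['My', 'Documents', 'notes.txt'] -- 3 words, not 2 entries
```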
<code>
[start of click/_bashcomplete.py]
1 import collections
2 import copy
3 import os
4 import re
5
6 from .utils import echo
7 from .parser import split_arg_string
8 from .core import MultiCommand, Option, Argument
9 from .types import Choice
10
11 WORDBREAK = '='
12
13 COMPLETION_SCRIPT = '''
14 %(complete_func)s() {
15 COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\
16 COMP_CWORD=$COMP_CWORD \\
17 %(autocomplete_var)s=complete $1 ) )
18 return 0
19 }
20
21 complete -F %(complete_func)s -o default %(script_names)s
22 '''
23
24 _invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')
25
26
27 def get_completion_script(prog_name, complete_var):
28 cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))
29 return (COMPLETION_SCRIPT % {
30 'complete_func': '_%s_completion' % cf_name,
31 'script_names': prog_name,
32 'autocomplete_var': complete_var,
33 }).strip() + ';'
34
35
36 def resolve_ctx(cli, prog_name, args):
37 """
38 Parse into a hierarchy of contexts. Contexts are connected through the parent variable.
39 :param cli: command definition
40 :param prog_name: the program that is running
41 :param args: full list of args
42 :return: the final context/command parsed
43 """
44 ctx = cli.make_context(prog_name, args, resilient_parsing=True)
45 while ctx.protected_args + ctx.args and isinstance(ctx.command, MultiCommand):
46 a = ctx.protected_args + ctx.args
47 cmd = ctx.command.get_command(ctx, a[0])
48 if cmd is None:
49 return None
50 ctx = cmd.make_context(a[0], a[1:], parent=ctx, resilient_parsing=True)
51 return ctx
52
53
54 def start_of_option(param_str):
55 """
56 :param param_str: param_str to check
57 :return: whether or not this is the start of an option declaration (i.e. starts "-" or "--")
58 """
59 return param_str and param_str[:1] == '-'
60
61
62 def is_incomplete_option(all_args, cmd_param):
63 """
64 :param all_args: the full original list of args supplied
65 :param cmd_param: the current command paramter
66 :return: whether or not the last option declaration (i.e. starts "-" or "--") is incomplete and
67 corresponds to this cmd_param. In other words whether this cmd_param option can still accept
68 values
69 """
70 if cmd_param.is_flag:
71 return False
72 last_option = None
73 for index, arg_str in enumerate(reversed([arg for arg in all_args if arg != WORDBREAK])):
74 if index + 1 > cmd_param.nargs:
75 break
76 if start_of_option(arg_str):
77 last_option = arg_str
78
79 return True if last_option and last_option in cmd_param.opts else False
80
81
82 def is_incomplete_argument(current_params, cmd_param):
83 """
84 :param current_params: the current params and values for this argument as already entered
85 :param cmd_param: the current command parameter
86 :return: whether or not the last argument is incomplete and corresponds to this cmd_param. In
87 other words whether or not the this cmd_param argument can still accept values
88 """
89 current_param_values = current_params[cmd_param.name]
90 if current_param_values is None:
91 return True
92 if cmd_param.nargs == -1:
93 return True
94 if isinstance(current_param_values, collections.Iterable) \
95 and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:
96 return True
97 return False
98
99 def get_user_autocompletions(ctx, args, incomplete, cmd_param):
100 """
101 :param ctx: context associated with the parsed command
102 :param args: full list of args
103 :param incomplete: the incomplete text to autocomplete
104 :param cmd_param: command definition
105 :return: all the possible user-specified completions for the param
106 """
107 if isinstance(cmd_param.type, Choice):
108 return cmd_param.type.choices
109 elif cmd_param.autocompletion is not None:
110 return cmd_param.autocompletion(ctx=ctx,
111 args=args,
112 incomplete=incomplete)
113 else:
114 return []
115
116 def get_choices(cli, prog_name, args, incomplete):
117 """
118 :param cli: command definition
119 :param prog_name: the program that is running
120 :param args: full list of args
121 :param incomplete: the incomplete text to autocomplete
122 :return: all the possible completions for the incomplete
123 """
124 all_args = copy.deepcopy(args)
125
126 ctx = resolve_ctx(cli, prog_name, args)
127 if ctx is None:
128 return
129
130 # In newer versions of bash long opts with '='s are partitioned, but it's easier to parse
131 # without the '='
132 if start_of_option(incomplete) and WORDBREAK in incomplete:
133 partition_incomplete = incomplete.partition(WORDBREAK)
134 all_args.append(partition_incomplete[0])
135 incomplete = partition_incomplete[2]
136 elif incomplete == WORDBREAK:
137 incomplete = ''
138
139 choices = []
140 found_param = False
141 if start_of_option(incomplete):
142 # completions for options
143 for param in ctx.command.params:
144 if isinstance(param, Option):
145 choices.extend([param_opt for param_opt in param.opts + param.secondary_opts
146 if param_opt not in all_args or param.multiple])
147 found_param = True
148 if not found_param:
149 # completion for option values by choices
150 for cmd_param in ctx.command.params:
151 if isinstance(cmd_param, Option) and is_incomplete_option(all_args, cmd_param):
152 choices.extend(get_user_autocompletions(ctx, all_args, incomplete, cmd_param))
153 found_param = True
154 break
155 if not found_param:
156 # completion for argument values by choices
157 for cmd_param in ctx.command.params:
158 if isinstance(cmd_param, Argument) and is_incomplete_argument(ctx.params, cmd_param):
159 choices.extend(get_user_autocompletions(ctx, all_args, incomplete, cmd_param))
160 found_param = True
161 break
162
163 if not found_param and isinstance(ctx.command, MultiCommand):
164 # completion for any subcommands
165 choices.extend(ctx.command.list_commands(ctx))
166
167 for item in choices:
168 if item.startswith(incomplete):
169 yield item
170
171
172 def do_complete(cli, prog_name):
173 cwords = split_arg_string(os.environ['COMP_WORDS'])
174 cword = int(os.environ['COMP_CWORD'])
175 args = cwords[1:cword]
176 try:
177 incomplete = cwords[cword]
178 except IndexError:
179 incomplete = ''
180
181 for item in get_choices(cli, prog_name, args, incomplete):
182 echo(item)
183
184 return True
185
186
187 def bashcomplete(cli, prog_name, complete_var, complete_instr):
188 if complete_instr == 'source':
189 echo(get_completion_script(prog_name, complete_var))
190 return True
191 elif complete_instr == 'complete':
192 return do_complete(cli, prog_name)
193 return False
194
[end of click/_bashcomplete.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py
--- a/click/_bashcomplete.py
+++ b/click/_bashcomplete.py
@@ -12,6 +12,7 @@
COMPLETION_SCRIPT = '''
%(complete_func)s() {
+ local IFS=$'\n'
COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\
COMP_CWORD=$COMP_CWORD \\
%(autocomplete_var)s=complete $1 ) )
|
{"golden_diff": "diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py\n--- a/click/_bashcomplete.py\n+++ b/click/_bashcomplete.py\n@@ -12,6 +12,7 @@\n \n COMPLETION_SCRIPT = '''\n %(complete_func)s() {\n+ local IFS=$'\\n'\n COMPREPLY=( $( env COMP_WORDS=\"${COMP_WORDS[*]}\" \\\\\n COMP_CWORD=$COMP_CWORD \\\\\n %(autocomplete_var)s=complete $1 ) )\n", "issue": "Dynamic bash autocompletion should escape blanks\nThanks for #755, wonderful feature! I found an issue when my completion callback returns entries that have (legitimate) blanks in them. In this case, entries get split into separate arguments, whereas I would have expected that blanks are escaped with a backslash (as bash does by default).\n", "before_files": [{"content": "import collections\nimport copy\nimport os\nimport re\n\nfrom .utils import echo\nfrom .parser import split_arg_string\nfrom .core import MultiCommand, Option, Argument\nfrom .types import Choice\n\nWORDBREAK = '='\n\nCOMPLETION_SCRIPT = '''\n%(complete_func)s() {\n COMPREPLY=( $( env COMP_WORDS=\"${COMP_WORDS[*]}\" \\\\\n COMP_CWORD=$COMP_CWORD \\\\\n %(autocomplete_var)s=complete $1 ) )\n return 0\n}\n\ncomplete -F %(complete_func)s -o default %(script_names)s\n'''\n\n_invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')\n\n\ndef get_completion_script(prog_name, complete_var):\n cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))\n return (COMPLETION_SCRIPT % {\n 'complete_func': '_%s_completion' % cf_name,\n 'script_names': prog_name,\n 'autocomplete_var': complete_var,\n }).strip() + ';'\n\n\ndef resolve_ctx(cli, prog_name, args):\n \"\"\"\n Parse into a hierarchy of contexts. Contexts are connected through the parent variable.\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :return: the final context/command parsed\n \"\"\"\n ctx = cli.make_context(prog_name, args, resilient_parsing=True)\n while ctx.protected_args + ctx.args and isinstance(ctx.command, MultiCommand):\n a = ctx.protected_args + ctx.args\n cmd = ctx.command.get_command(ctx, a[0])\n if cmd is None:\n return None\n ctx = cmd.make_context(a[0], a[1:], parent=ctx, resilient_parsing=True)\n return ctx\n\n\ndef start_of_option(param_str):\n \"\"\"\n :param param_str: param_str to check\n :return: whether or not this is the start of an option declaration (i.e. starts \"-\" or \"--\")\n \"\"\"\n return param_str and param_str[:1] == '-'\n\n\ndef is_incomplete_option(all_args, cmd_param):\n \"\"\"\n :param all_args: the full original list of args supplied\n :param cmd_param: the current command paramter\n :return: whether or not the last option declaration (i.e. starts \"-\" or \"--\") is incomplete and\n corresponds to this cmd_param. In other words whether this cmd_param option can still accept\n values\n \"\"\"\n if cmd_param.is_flag:\n return False\n last_option = None\n for index, arg_str in enumerate(reversed([arg for arg in all_args if arg != WORDBREAK])):\n if index + 1 > cmd_param.nargs:\n break\n if start_of_option(arg_str):\n last_option = arg_str\n\n return True if last_option and last_option in cmd_param.opts else False\n\n\ndef is_incomplete_argument(current_params, cmd_param):\n \"\"\"\n :param current_params: the current params and values for this argument as already entered\n :param cmd_param: the current command parameter\n :return: whether or not the last argument is incomplete and corresponds to this cmd_param. 
In\n other words whether or not the this cmd_param argument can still accept values\n \"\"\"\n current_param_values = current_params[cmd_param.name]\n if current_param_values is None:\n return True\n if cmd_param.nargs == -1:\n return True\n if isinstance(current_param_values, collections.Iterable) \\\n and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:\n return True\n return False\n\ndef get_user_autocompletions(ctx, args, incomplete, cmd_param):\n \"\"\"\n :param ctx: context associated with the parsed command\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :param cmd_param: command definition\n :return: all the possible user-specified completions for the param\n \"\"\"\n if isinstance(cmd_param.type, Choice):\n return cmd_param.type.choices\n elif cmd_param.autocompletion is not None:\n return cmd_param.autocompletion(ctx=ctx,\n args=args,\n incomplete=incomplete)\n else:\n return []\n\ndef get_choices(cli, prog_name, args, incomplete):\n \"\"\"\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :return: all the possible completions for the incomplete\n \"\"\"\n all_args = copy.deepcopy(args)\n\n ctx = resolve_ctx(cli, prog_name, args)\n if ctx is None:\n return\n\n # In newer versions of bash long opts with '='s are partitioned, but it's easier to parse\n # without the '='\n if start_of_option(incomplete) and WORDBREAK in incomplete:\n partition_incomplete = incomplete.partition(WORDBREAK)\n all_args.append(partition_incomplete[0])\n incomplete = partition_incomplete[2]\n elif incomplete == WORDBREAK:\n incomplete = ''\n\n choices = []\n found_param = False\n if start_of_option(incomplete):\n # completions for options\n for param in ctx.command.params:\n if isinstance(param, Option):\n choices.extend([param_opt for param_opt in param.opts + param.secondary_opts\n if param_opt not in all_args or param.multiple])\n found_param = True\n if not found_param:\n # completion for option values by choices\n for cmd_param in ctx.command.params:\n if isinstance(cmd_param, Option) and is_incomplete_option(all_args, cmd_param):\n choices.extend(get_user_autocompletions(ctx, all_args, incomplete, cmd_param))\n found_param = True\n break\n if not found_param:\n # completion for argument values by choices\n for cmd_param in ctx.command.params:\n if isinstance(cmd_param, Argument) and is_incomplete_argument(ctx.params, cmd_param):\n choices.extend(get_user_autocompletions(ctx, all_args, incomplete, cmd_param))\n found_param = True\n break\n\n if not found_param and isinstance(ctx.command, MultiCommand):\n # completion for any subcommands\n choices.extend(ctx.command.list_commands(ctx))\n\n for item in choices:\n if item.startswith(incomplete):\n yield item\n\n\ndef do_complete(cli, prog_name):\n cwords = split_arg_string(os.environ['COMP_WORDS'])\n cword = int(os.environ['COMP_CWORD'])\n args = cwords[1:cword]\n try:\n incomplete = cwords[cword]\n except IndexError:\n incomplete = ''\n\n for item in get_choices(cli, prog_name, args, incomplete):\n echo(item)\n\n return True\n\n\ndef bashcomplete(cli, prog_name, complete_var, complete_instr):\n if complete_instr == 'source':\n echo(get_completion_script(prog_name, complete_var))\n return True\n elif complete_instr == 'complete':\n return do_complete(cli, prog_name)\n return False\n", "path": "click/_bashcomplete.py"}]}
| 2,606 | 105 |
gh_patches_debug_20325
|
rasdani/github-patches
|
git_diff
|
dask__distributed-3253
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cannot set WRITEABLE flag to True of this array
My colleague is seeing the following error on distributed 2.8.0. It looks related to #3180 and this set of lines
https://github.com/dask/distributed/blob/35551998d7350cd5ae6a5c24970d8437fd8d521d/distributed/protocol/numpy.py#L110-L114
```
distributed.worker - INFO - Start worker at: tcp://10.12.205.19:34307
distributed.worker - INFO - Listening to: tcp://10.12.205.19:34307
distributed.worker - INFO - dashboard at: 10.12.205.19:34094
distributed.worker - INFO - Waiting to connect to: tcp://128.117.181.211:37309
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 25.00 GB
distributed.worker - INFO - Local Directory: /glade/scratch/deppenme/dask-tmp/worker-k05sxxku
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://128.117.181.211:37309
distributed.worker - INFO - -------------------------------------------------
distributed.worker - ERROR - cannot set WRITEABLE flag to True of this array
Traceback (most recent call last):
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/worker.py", line 894, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/core.py", line 447, in handle_stream
    msgs = await comm.read()
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/comm/tcp.py", line 208, in read
    frames, deserialize=self.deserialize, deserializers=deserializers
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/comm/utils.py", line 63, in from_frames
    res = await offload(_from_frames)
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/utils.py", line 1371, in offload
    return await loop.run_in_executor(_offload_executor, fn, *args, **kwargs)
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/comm/utils.py", line 51, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/core.py", line 124, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 268, in deserialize
    return loads(header, frames)
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 54, in dask_loads
    return loads(header, frames)
  File "/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/numpy.py", line 113, in deserialize_numpy_ndarray
    x.setflags(write=writeable)
ValueError: cannot set WRITEABLE flag to True of this array
```
This is coming out of a complicated analysis pipeline with xarray, zarr and dask so we don't have a minimal example yet. We could work to find one if you have some pointers on what to look for.
</issue>
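A NumPy-only sketch of the failure in the traceback above (independent of dask and of the reporter's data): `np.broadcast_to` returns a read-only, zero-strided view, and recent NumPy releases refuse to turn the WRITEABLE flag back on for an array that does not own its data, which is what the `x.setflags(write=writeable)` call in `deserialize_numpy_ndarray` attempts when the header says the original was writeable.

```python
# Minimal reproduction of the ValueError itself, outside of distributed.
import numpy as np

base = np.arange(3)
view = np.broadcast_to(base, (4, 3))   # read-only view with a zero stride
try:
    view.setflags(write=True)          # mirrors x.setflags(write=writeable)
except ValueError as err:
    print(err)   # cannot set WRITEABLE flag to True of this array
```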
<code>
[start of distributed/protocol/numpy.py]
1 import math
2 import numpy as np
3
4 from .utils import frame_split_size, merge_frames
5 from .serialize import dask_serialize, dask_deserialize
6 from . import pickle
7
8 from ..utils import log_errors
9
10
11 def itemsize(dt):
12 """ Itemsize of dtype
13
14 Try to return the itemsize of the base element, return 8 as a fallback
15 """
16 result = dt.base.itemsize
17 if result > 255:
18 result = 8
19 return result
20
21
22 @dask_serialize.register(np.ndarray)
23 def serialize_numpy_ndarray(x):
24 if x.dtype.hasobject:
25 header = {"pickle": True}
26 frames = [pickle.dumps(x)]
27 return header, frames
28
29 # We cannot blindly pickle the dtype as some may fail pickling,
30 # so we have a mixture of strategies.
31 if x.dtype.kind == "V":
32 # Preserving all the information works best when pickling
33 try:
34 # Only use stdlib pickle as cloudpickle is slow when failing
35 # (microseconds instead of nanoseconds)
36 dt = (1, pickle.pickle.dumps(x.dtype))
37 pickle.loads(dt[1]) # does it unpickle fine?
38 except Exception:
39 # dtype fails pickling => fall back on the descr if reasonable.
40 if x.dtype.type is not np.void or x.dtype.alignment != 1:
41 raise
42 else:
43 dt = (0, x.dtype.descr)
44 else:
45 dt = (0, x.dtype.str)
46
47 # Only serialize non-broadcasted data for arrays with zero strided axes
48 if 0 in x.strides:
49 broadcast_to = (x.shape, x.flags.writeable)
50 x = x[tuple(slice(None) if s != 0 else slice(1) for s in x.strides)]
51 else:
52 broadcast_to = None
53
54 if not x.shape:
55 # 0d array
56 strides = x.strides
57 data = x.ravel()
58 elif x.flags.c_contiguous or x.flags.f_contiguous:
59 # Avoid a copy and respect order when unserializing
60 strides = x.strides
61 data = x.ravel(order="K")
62 else:
63 x = np.ascontiguousarray(x)
64 strides = x.strides
65 data = x.ravel()
66
67 if data.dtype.fields or data.dtype.itemsize > 8:
68 data = data.view("u%d" % math.gcd(x.dtype.itemsize, 8))
69
70 try:
71 data = data.data
72 except ValueError:
73 # "ValueError: cannot include dtype 'M' in a buffer"
74 data = data.view("u%d" % math.gcd(x.dtype.itemsize, 8)).data
75
76 header = {"dtype": dt, "shape": x.shape, "strides": strides}
77
78 if broadcast_to is not None:
79 header["broadcast_to"] = broadcast_to
80
81 if x.nbytes > 1e5:
82 frames = frame_split_size([data])
83 else:
84 frames = [data]
85
86 header["lengths"] = [x.nbytes]
87
88 return header, frames
89
90
91 @dask_deserialize.register(np.ndarray)
92 def deserialize_numpy_ndarray(header, frames):
93 with log_errors():
94 if len(frames) > 1:
95 frames = merge_frames(header, frames)
96
97 if header.get("pickle"):
98 return pickle.loads(frames[0])
99
100 is_custom, dt = header["dtype"]
101 if is_custom:
102 dt = pickle.loads(dt)
103 else:
104 dt = np.dtype(dt)
105
106 x = np.ndarray(
107 header["shape"], dtype=dt, buffer=frames[0], strides=header["strides"]
108 )
109
110 if header.get("broadcast_to"):
111 shape, writeable = header["broadcast_to"]
112 x = np.broadcast_to(x, shape)
113 x.setflags(write=writeable)
114
115 return x
116
117
118 @dask_serialize.register(np.ma.core.MaskedConstant)
119 def serialize_numpy_ma_masked(x):
120 return {}, []
121
122
123 @dask_deserialize.register(np.ma.core.MaskedConstant)
124 def deserialize_numpy_ma_masked(header, frames):
125 return np.ma.masked
126
127
128 @dask_serialize.register(np.ma.core.MaskedArray)
129 def serialize_numpy_maskedarray(x):
130 data_header, frames = serialize_numpy_ndarray(x.data)
131 header = {"data-header": data_header, "nframes": len(frames)}
132
133 # Serialize mask if present
134 if x.mask is not np.ma.nomask:
135 mask_header, mask_frames = serialize_numpy_ndarray(x.mask)
136 header["mask-header"] = mask_header
137 frames += mask_frames
138
139 # Only a few dtypes have python equivalents msgpack can serialize
140 if isinstance(x.fill_value, (np.integer, np.floating, np.bool_)):
141 serialized_fill_value = (False, x.fill_value.item())
142 else:
143 serialized_fill_value = (True, pickle.dumps(x.fill_value))
144 header["fill-value"] = serialized_fill_value
145
146 return header, frames
147
148
149 @dask_deserialize.register(np.ma.core.MaskedArray)
150 def deserialize_numpy_maskedarray(header, frames):
151 data_header = header["data-header"]
152 data_frames = frames[: header["nframes"]]
153 data = deserialize_numpy_ndarray(data_header, data_frames)
154
155 if "mask-header" in header:
156 mask_header = header["mask-header"]
157 mask_frames = frames[header["nframes"] :]
158 mask = deserialize_numpy_ndarray(mask_header, mask_frames)
159 else:
160 mask = np.ma.nomask
161
162 pickled_fv, fill_value = header["fill-value"]
163 if pickled_fv:
164 fill_value = pickle.loads(fill_value)
165
166 return np.ma.masked_array(data, mask=mask, fill_value=fill_value)
167
[end of distributed/protocol/numpy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/protocol/numpy.py b/distributed/protocol/numpy.py
--- a/distributed/protocol/numpy.py
+++ b/distributed/protocol/numpy.py
@@ -46,7 +46,7 @@
# Only serialize non-broadcasted data for arrays with zero strided axes
if 0 in x.strides:
- broadcast_to = (x.shape, x.flags.writeable)
+ broadcast_to = x.shape
x = x[tuple(slice(None) if s != 0 else slice(1) for s in x.strides)]
else:
broadcast_to = None
@@ -103,14 +103,12 @@
else:
dt = np.dtype(dt)
- x = np.ndarray(
- header["shape"], dtype=dt, buffer=frames[0], strides=header["strides"]
- )
-
if header.get("broadcast_to"):
- shape, writeable = header["broadcast_to"]
- x = np.broadcast_to(x, shape)
- x.setflags(write=writeable)
+ shape = header["broadcast_to"]
+ else:
+ shape = header["shape"]
+
+ x = np.ndarray(shape, dtype=dt, buffer=frames[0], strides=header["strides"])
return x
|
{"golden_diff": "diff --git a/distributed/protocol/numpy.py b/distributed/protocol/numpy.py\n--- a/distributed/protocol/numpy.py\n+++ b/distributed/protocol/numpy.py\n@@ -46,7 +46,7 @@\n \n # Only serialize non-broadcasted data for arrays with zero strided axes\n if 0 in x.strides:\n- broadcast_to = (x.shape, x.flags.writeable)\n+ broadcast_to = x.shape\n x = x[tuple(slice(None) if s != 0 else slice(1) for s in x.strides)]\n else:\n broadcast_to = None\n@@ -103,14 +103,12 @@\n else:\n dt = np.dtype(dt)\n \n- x = np.ndarray(\n- header[\"shape\"], dtype=dt, buffer=frames[0], strides=header[\"strides\"]\n- )\n-\n if header.get(\"broadcast_to\"):\n- shape, writeable = header[\"broadcast_to\"]\n- x = np.broadcast_to(x, shape)\n- x.setflags(write=writeable)\n+ shape = header[\"broadcast_to\"]\n+ else:\n+ shape = header[\"shape\"]\n+\n+ x = np.ndarray(shape, dtype=dt, buffer=frames[0], strides=header[\"strides\"])\n \n return x\n", "issue": "cannot set WRITEABLE flag to True of this array\nMy colleague is seeing the following error on distirbuted 2.8.0. It looks related to #3180 and this set of lines\r\n\r\nhttps://github.com/dask/distributed/blob/35551998d7350cd5ae6a5c24970d8437fd8d521d/distributed/protocol/numpy.py#L110-L114\r\n\r\n\r\n```\r\ndistributed.worker - INFO - Start worker at: tcp://10.12.205.19:34307\r\n\r\ndistributed.worker - INFO - Listening to: tcp://10.12.205.19:34307\r\n\r\ndistributed.worker - INFO - dashboard at: 10.12.205.19:34094\r\n\r\ndistributed.worker - INFO - Waiting to connect to: tcp://128.117.181.211:37309\r\n\r\ndistributed.worker - INFO - -------------------------------------------------\r\n\r\ndistributed.worker - INFO - Threads: 1\r\n\r\ndistributed.worker - INFO - Memory: 25.00 GB\r\n\r\ndistributed.worker - INFO - Local Directory: /glade/scratch/deppenme/dask-tmp/worker-k05sxxku\r\n\r\ndistributed.worker - INFO - -------------------------------------------------\r\n\r\ndistributed.worker - INFO - Registered to: tcp://128.117.181.211:37309\r\n\r\ndistributed.worker - INFO - -------------------------------------------------\r\n\r\ndistributed.worker - ERROR - cannot set WRITEABLE flag to True of this array \r\n\r\nTraceback (most recent call last): File \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/worker.py\", line 894, in handle_scheduler comm, every_cycle=[self.ensure_communicating, self.ensure_computing] File \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/core.py\", line 447, in handle_stream msgs = await comm.read() File \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/comm/tcp.py\", line 208, in read frames, deserialize=self.deserialize, deserializers=deserializers \r\nFile \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/comm/utils.py\", line 63, in from_frames res = await offload(_from_frames) File \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/utils.py\", line 1371, in offload return await loop.run_in_executor(_offload_executor, fn, *args, **kwargs) File \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/concurrent/futures/thread.py\", line 57, in run result = self.fn(*self.args, **self.kwargs) \r\nFile \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/comm/utils.py\", line 51, in _from_frames frames, deserialize=deserialize, deserializers=deserializers File 
\"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/core.py\", line 124, in loads value = _deserialize(head, fs, deserializers=deserializers) \r\nFile \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/serialize.py\", line 268, in deserialize return loads(header, frames) \r\nFile \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/serialize.py\", line 54, in dask_loads return loads(header, frames) \r\nFile \"/glade/work/deppenme/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/protocol/numpy.py\",\r\n line 113, in deserialize_numpy_ndarray x.setflags(write=writeable) ValueError: cannot set WRITEABLE flag to True of this array\r\n```\r\n\r\nThis is coming out of a complicated analysis pipeline with xarray, zarr and dask so we don't have a minimal example yet. We could work to find one if you have some pointers on what to look for.\n", "before_files": [{"content": "import math\nimport numpy as np\n\nfrom .utils import frame_split_size, merge_frames\nfrom .serialize import dask_serialize, dask_deserialize\nfrom . import pickle\n\nfrom ..utils import log_errors\n\n\ndef itemsize(dt):\n \"\"\" Itemsize of dtype\n\n Try to return the itemsize of the base element, return 8 as a fallback\n \"\"\"\n result = dt.base.itemsize\n if result > 255:\n result = 8\n return result\n\n\n@dask_serialize.register(np.ndarray)\ndef serialize_numpy_ndarray(x):\n if x.dtype.hasobject:\n header = {\"pickle\": True}\n frames = [pickle.dumps(x)]\n return header, frames\n\n # We cannot blindly pickle the dtype as some may fail pickling,\n # so we have a mixture of strategies.\n if x.dtype.kind == \"V\":\n # Preserving all the information works best when pickling\n try:\n # Only use stdlib pickle as cloudpickle is slow when failing\n # (microseconds instead of nanoseconds)\n dt = (1, pickle.pickle.dumps(x.dtype))\n pickle.loads(dt[1]) # does it unpickle fine?\n except Exception:\n # dtype fails pickling => fall back on the descr if reasonable.\n if x.dtype.type is not np.void or x.dtype.alignment != 1:\n raise\n else:\n dt = (0, x.dtype.descr)\n else:\n dt = (0, x.dtype.str)\n\n # Only serialize non-broadcasted data for arrays with zero strided axes\n if 0 in x.strides:\n broadcast_to = (x.shape, x.flags.writeable)\n x = x[tuple(slice(None) if s != 0 else slice(1) for s in x.strides)]\n else:\n broadcast_to = None\n\n if not x.shape:\n # 0d array\n strides = x.strides\n data = x.ravel()\n elif x.flags.c_contiguous or x.flags.f_contiguous:\n # Avoid a copy and respect order when unserializing\n strides = x.strides\n data = x.ravel(order=\"K\")\n else:\n x = np.ascontiguousarray(x)\n strides = x.strides\n data = x.ravel()\n\n if data.dtype.fields or data.dtype.itemsize > 8:\n data = data.view(\"u%d\" % math.gcd(x.dtype.itemsize, 8))\n\n try:\n data = data.data\n except ValueError:\n # \"ValueError: cannot include dtype 'M' in a buffer\"\n data = data.view(\"u%d\" % math.gcd(x.dtype.itemsize, 8)).data\n\n header = {\"dtype\": dt, \"shape\": x.shape, \"strides\": strides}\n\n if broadcast_to is not None:\n header[\"broadcast_to\"] = broadcast_to\n\n if x.nbytes > 1e5:\n frames = frame_split_size([data])\n else:\n frames = [data]\n\n header[\"lengths\"] = [x.nbytes]\n\n return header, frames\n\n\n@dask_deserialize.register(np.ndarray)\ndef deserialize_numpy_ndarray(header, frames):\n with log_errors():\n if len(frames) > 1:\n frames = merge_frames(header, frames)\n\n if 
header.get(\"pickle\"):\n return pickle.loads(frames[0])\n\n is_custom, dt = header[\"dtype\"]\n if is_custom:\n dt = pickle.loads(dt)\n else:\n dt = np.dtype(dt)\n\n x = np.ndarray(\n header[\"shape\"], dtype=dt, buffer=frames[0], strides=header[\"strides\"]\n )\n\n if header.get(\"broadcast_to\"):\n shape, writeable = header[\"broadcast_to\"]\n x = np.broadcast_to(x, shape)\n x.setflags(write=writeable)\n\n return x\n\n\n@dask_serialize.register(np.ma.core.MaskedConstant)\ndef serialize_numpy_ma_masked(x):\n return {}, []\n\n\n@dask_deserialize.register(np.ma.core.MaskedConstant)\ndef deserialize_numpy_ma_masked(header, frames):\n return np.ma.masked\n\n\n@dask_serialize.register(np.ma.core.MaskedArray)\ndef serialize_numpy_maskedarray(x):\n data_header, frames = serialize_numpy_ndarray(x.data)\n header = {\"data-header\": data_header, \"nframes\": len(frames)}\n\n # Serialize mask if present\n if x.mask is not np.ma.nomask:\n mask_header, mask_frames = serialize_numpy_ndarray(x.mask)\n header[\"mask-header\"] = mask_header\n frames += mask_frames\n\n # Only a few dtypes have python equivalents msgpack can serialize\n if isinstance(x.fill_value, (np.integer, np.floating, np.bool_)):\n serialized_fill_value = (False, x.fill_value.item())\n else:\n serialized_fill_value = (True, pickle.dumps(x.fill_value))\n header[\"fill-value\"] = serialized_fill_value\n\n return header, frames\n\n\n@dask_deserialize.register(np.ma.core.MaskedArray)\ndef deserialize_numpy_maskedarray(header, frames):\n data_header = header[\"data-header\"]\n data_frames = frames[: header[\"nframes\"]]\n data = deserialize_numpy_ndarray(data_header, data_frames)\n\n if \"mask-header\" in header:\n mask_header = header[\"mask-header\"]\n mask_frames = frames[header[\"nframes\"] :]\n mask = deserialize_numpy_ndarray(mask_header, mask_frames)\n else:\n mask = np.ma.nomask\n\n pickled_fv, fill_value = header[\"fill-value\"]\n if pickled_fv:\n fill_value = pickle.loads(fill_value)\n\n return np.ma.masked_array(data, mask=mask, fill_value=fill_value)\n", "path": "distributed/protocol/numpy.py"}]}
| 3,169 | 290 |
gh_patches_debug_22167
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-3159
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
compatibility issue of `erfinv` and `erfcinv`
For `erfinv`, the valid domain is [-1, 1], and at the boundary -1 and +1 the values are -Inf and +Inf, respectively. But outside the boundary, the values are NaN in SciPy: see [here](https://github.com/scipy/scipy/blob/59347ae8b86bcc92c339efe213128f64ab6df98c/scipy/special/cephes/ndtri.c#L146-L149) (the `ndtri` function is the underlying workhorse).
Reproducer:
```python
>>> from cupyx.scipy.special import erfinv
>>> import cupy as cp
>>>
>>> a = (cp.arange(6) + 1).reshape(2,3)
>>> a
array([[1, 2, 3],
[4, 5, 6]])
>>> erfinv(a)
array([[inf, inf, inf],
[inf, inf, inf]])
>>>
>>> import scipy.special as scp
>>> scp.erfinv(cp.asnumpy(a))
array([[inf, nan, nan],
[nan, nan, nan]])
```
Reproducer 2:
```bash
$ pytest -v tests/cupyx_tests/scipy_tests/special_tests/test_erf.py
========================================================================= test session starts =========================================================================
platform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.12.0 -- /home/leofang/miniconda3/envs/cupy_dev/bin/python
cachedir: .pytest_cache
rootdir: /home/leofang/cupy, inifile: setup.cfg
collected 10 items
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erf PASSED [ 10%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfc PASSED [ 20%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcinv FAILED [ 30%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcx PASSED [ 40%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfinv FAILED [ 50%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erf PASSED [ 60%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfc PASSED [ 70%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcinv FAILED [ 80%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcx PASSED [ 90%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfinv FAILED [100%]
=============================================================== 4 failed, 6 passed, 1 warning in 0.74s ================================================================
```
I am a bit surprised to learn this, as the CI doesn't seem to complain at all, so it is likely that this behavior changed in a recent SciPy release (I'm using v1.4.1, btw).
The fix should be simple: just add another `else if` branch handling the out-of-bounds behavior to the ufunc here: https://github.com/cupy/cupy/blob/84343ce8a87d34928abef65d8930ba590189f43f/cupyx/scipy/special/erf.py#L37-L43
I have not dug into `erfcinv`, but presumably the source of the error is similar.
</issue>
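For reference, a CPU-side sketch of the behavior the failing tests compare against, assuming SciPy 1.4+ (the version the reporter uses): the boundaries of the domain map to +/-inf and everything outside it to NaN, for both functions.

```python
# Reference behaviour on the CPU; the sample points are arbitrary.
import numpy as np
from scipy import special

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(special.erfinv(x))         # [ nan -inf   0.  inf  nan]
print(special.erfcinv(x + 1.0))  # domain of erfcinv is [0, 2]; outside it -> nan
```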
<code>
[start of cupyx/scipy/special/erf.py]
1 from cupy import core
2
3
4 erf = core.create_ufunc(
5 'cupyx_scipy_erf', ('f->f', 'd->d'),
6 'out0 = erf(in0)',
7 doc='''Error function.
8
9 .. seealso:: :meth:`scipy.special.erf`
10
11 ''')
12
13
14 erfc = core.create_ufunc(
15 'cupyx_scipy_erfc', ('f->f', 'd->d'),
16 'out0 = erfc(in0)',
17 doc='''Complementary error function.
18
19 .. seealso:: :meth:`scipy.special.erfc`
20
21 ''')
22
23
24 erfcx = core.create_ufunc(
25 'cupyx_scipy_erfcx', ('f->f', 'd->d'),
26 'out0 = erfcx(in0)',
27 doc='''Scaled complementary error function.
28
29 .. seealso:: :meth:`scipy.special.erfcx`
30
31 ''')
32
33
34 erfinv = core.create_ufunc(
35 'cupyx_scipy_erfinv', ('f->f', 'd->d'),
36 '''
37 if (in0 < -1) {
38 out0 = -1.0 / 0.0;
39 } else if (in0 > 1) {
40 out0 = 1.0 / 0.0;
41 } else {
42 out0 = erfinv(in0);
43 }
44 ''',
45 doc='''Inverse function of error function.
46
47 .. seealso:: :meth:`scipy.special.erfinv`
48
49 ''')
50
51
52 erfcinv = core.create_ufunc(
53 'cupyx_scipy_erfcinv', ('f->f', 'd->d'),
54 '''
55 if (in0 < 0) {
56 out0 = 1.0 / 0.0;
57 } else if (in0 > 2) {
58 out0 = -1.0 / 0.0;
59 } else {
60 out0 = erfcinv(in0);
61 }
62 ''',
63 doc='''Inverse function of complementary error function.
64
65 .. seealso:: :meth:`scipy.special.erfcinv`
66
67 ''')
68
[end of cupyx/scipy/special/erf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupyx/scipy/special/erf.py b/cupyx/scipy/special/erf.py
--- a/cupyx/scipy/special/erf.py
+++ b/cupyx/scipy/special/erf.py
@@ -33,35 +33,27 @@
erfinv = core.create_ufunc(
'cupyx_scipy_erfinv', ('f->f', 'd->d'),
- '''
- if (in0 < -1) {
- out0 = -1.0 / 0.0;
- } else if (in0 > 1) {
- out0 = 1.0 / 0.0;
- } else {
- out0 = erfinv(in0);
- }
- ''',
+ 'out0 = erfinv(in0);',
doc='''Inverse function of error function.
.. seealso:: :meth:`scipy.special.erfinv`
+ .. note::
+ The behavior close to (and outside) the domain follows that of
+ SciPy v1.4.0+.
+
''')
erfcinv = core.create_ufunc(
'cupyx_scipy_erfcinv', ('f->f', 'd->d'),
- '''
- if (in0 < 0) {
- out0 = 1.0 / 0.0;
- } else if (in0 > 2) {
- out0 = -1.0 / 0.0;
- } else {
- out0 = erfcinv(in0);
- }
- ''',
+ 'out0 = erfcinv(in0);',
doc='''Inverse function of complementary error function.
.. seealso:: :meth:`scipy.special.erfcinv`
+ .. note::
+ The behavior close to (and outside) the domain follows that of
+ SciPy v1.4.0+.
+
''')
|
{"golden_diff": "diff --git a/cupyx/scipy/special/erf.py b/cupyx/scipy/special/erf.py\n--- a/cupyx/scipy/special/erf.py\n+++ b/cupyx/scipy/special/erf.py\n@@ -33,35 +33,27 @@\n \n erfinv = core.create_ufunc(\n 'cupyx_scipy_erfinv', ('f->f', 'd->d'),\n- '''\n- if (in0 < -1) {\n- out0 = -1.0 / 0.0;\n- } else if (in0 > 1) {\n- out0 = 1.0 / 0.0;\n- } else {\n- out0 = erfinv(in0);\n- }\n- ''',\n+ 'out0 = erfinv(in0);',\n doc='''Inverse function of error function.\n \n .. seealso:: :meth:`scipy.special.erfinv`\n \n+ .. note::\n+ The behavior close to (and outside) the domain follows that of\n+ SciPy v1.4.0+.\n+\n ''')\n \n \n erfcinv = core.create_ufunc(\n 'cupyx_scipy_erfcinv', ('f->f', 'd->d'),\n- '''\n- if (in0 < 0) {\n- out0 = 1.0 / 0.0;\n- } else if (in0 > 2) {\n- out0 = -1.0 / 0.0;\n- } else {\n- out0 = erfcinv(in0);\n- }\n- ''',\n+ 'out0 = erfcinv(in0);',\n doc='''Inverse function of complementary error function.\n \n .. seealso:: :meth:`scipy.special.erfcinv`\n \n+ .. note::\n+ The behavior close to (and outside) the domain follows that of\n+ SciPy v1.4.0+.\n+\n ''')\n", "issue": "compatibility issue of `erfinv` and `erfcinv` \nFor `erfinv`, the valid domain is [-1, 1], and at the boundary -1 and +1 the values are -Inf and +Inf, respectively. But outside the boundary, the values are NaN in SciPy: see [here](https://github.com/scipy/scipy/blob/59347ae8b86bcc92c339efe213128f64ab6df98c/scipy/special/cephes/ndtri.c#L146-L149) (the `ndtri` function is the underlying workhorse).\r\n\r\nReproducer:\r\n```python\r\n>>> from cupyx.scipy.special import erfinv\r\n>>> import cupy as cp\r\n>>> \r\n>>> a = (cp.arange(6) + 1).reshape(2,3)\r\n>>> a\r\narray([[1, 2, 3],\r\n [4, 5, 6]])\r\n>>> erfinv(a)\r\narray([[inf, inf, inf],\r\n [inf, inf, inf]])\r\n>>>\r\n>>> import scipy.special as scp\r\n>>> scp.erfinv(cp.asnumpy(a))\r\narray([[inf, nan, nan],\r\n [nan, nan, nan]])\r\n```\r\n\r\nReproducer 2:\r\n```bash\r\n$ pytest -v tests/cupyx_tests/scipy_tests/special_tests/test_erf.py\r\n========================================================================= test session starts =========================================================================\r\nplatform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.12.0 -- /home/leofang/miniconda3/envs/cupy_dev/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /home/leofang/cupy, inifile: setup.cfg\r\ncollected 10 items \r\n\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erf PASSED [ 10%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfc PASSED [ 20%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcinv FAILED [ 30%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcx PASSED [ 40%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfinv FAILED [ 50%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erf PASSED [ 60%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfc PASSED [ 70%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcinv FAILED [ 80%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcx PASSED [ 90%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfinv FAILED [100%]\r\n\r\n=============================================================== 4 failed, 6 passed, 1 warning in 0.74s 
================================================================\r\n```\r\n\r\nI am a bit surprised to learn this, as the CI doesn't seem to complain at all, so it is likely the behavior is changed in recent SciPy? (I'm using v1.4.1 btw.) \r\n\r\nThe fix should be simple: just add another `else if` branch handling the out of boundary behavior to the ufunc here : https://github.com/cupy/cupy/blob/84343ce8a87d34928abef65d8930ba590189f43f/cupyx/scipy/special/erf.py#L37-L43\r\n\r\nI have not dug into `erfcinv` but presumably the source of error is similar. \n", "before_files": [{"content": "from cupy import core\n\n\nerf = core.create_ufunc(\n 'cupyx_scipy_erf', ('f->f', 'd->d'),\n 'out0 = erf(in0)',\n doc='''Error function.\n\n .. seealso:: :meth:`scipy.special.erf`\n\n ''')\n\n\nerfc = core.create_ufunc(\n 'cupyx_scipy_erfc', ('f->f', 'd->d'),\n 'out0 = erfc(in0)',\n doc='''Complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfc`\n\n ''')\n\n\nerfcx = core.create_ufunc(\n 'cupyx_scipy_erfcx', ('f->f', 'd->d'),\n 'out0 = erfcx(in0)',\n doc='''Scaled complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfcx`\n\n ''')\n\n\nerfinv = core.create_ufunc(\n 'cupyx_scipy_erfinv', ('f->f', 'd->d'),\n '''\n if (in0 < -1) {\n out0 = -1.0 / 0.0;\n } else if (in0 > 1) {\n out0 = 1.0 / 0.0;\n } else {\n out0 = erfinv(in0);\n }\n ''',\n doc='''Inverse function of error function.\n\n .. seealso:: :meth:`scipy.special.erfinv`\n\n ''')\n\n\nerfcinv = core.create_ufunc(\n 'cupyx_scipy_erfcinv', ('f->f', 'd->d'),\n '''\n if (in0 < 0) {\n out0 = 1.0 / 0.0;\n } else if (in0 > 2) {\n out0 = -1.0 / 0.0;\n } else {\n out0 = erfcinv(in0);\n }\n ''',\n doc='''Inverse function of complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfcinv`\n\n ''')\n", "path": "cupyx/scipy/special/erf.py"}]}
| 2,058 | 447 |
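For reference, a small sketch of the SciPy behaviour the patch in the record above aligns CuPy with. It assumes NumPy and SciPy 1.4+ are installed and is written for this record, not taken from the CuPy test suite; the printed values are approximate.

```python
import numpy as np
from scipy import special

# SciPy >= 1.4 reference behaviour for the inverse error functions:
# values inside the domain are finite, the endpoints map to +/-inf,
# and anything outside the domain maps to NaN (not +/-inf).
x = np.array([-2.0, -1.0, 0.0, 0.5, 1.0, 2.0])
print(special.erfinv(x))    # approximately [nan, -inf, 0.0, 0.4769, inf, nan]

y = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
print(special.erfcinv(y))   # approximately [nan, inf, 0.0, -inf, nan]
```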
gh_patches_debug_17996
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-768
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong hook executed when a tilde-suffixed file of the same name exists
- Cookiecutter version: 1.4.0
- Template project url: https://github.com/thorgate/django-project-template
- Python version: 3.4
- Operating System: Ubuntu 15.10 wily
### Description:
When using gedit or some other text editor that pollutes the directory with backup files ending with a tilde, cookiecutter mistakes that backup file for the "real" hook it should run. In my case this resulted in cookiecutter running a ridiculously outdated version of my pre-gen hook.
The obvious solution is to just remove `pre_gen_project.py~`, which works, but I believe ideally cookiecutter shouldn't be running it in the first place.
### What I've run:
```
gedit django-template/hooks/pre_gen_project.py
cookiecutter django-template
```
</issue>
<code>
[start of cookiecutter/hooks.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 cookiecutter.hooks
5 ------------------
6
7 Functions for discovering and executing various cookiecutter hooks.
8 """
9
10 import io
11 import logging
12 import os
13 import subprocess
14 import sys
15 import tempfile
16
17 from jinja2 import Template
18
19 from cookiecutter import utils
20 from .exceptions import FailedHookException
21
22
23 _HOOKS = [
24 'pre_gen_project',
25 'post_gen_project',
26 # TODO: other hooks should be listed here
27 ]
28 EXIT_SUCCESS = 0
29
30
31 def find_hooks():
32 """
33 Must be called with the project template as the current working directory.
34 Returns a dict of all hook scripts provided.
35 Dict's key will be the hook/script's name, without extension, while
36 values will be the absolute path to the script.
37 Missing scripts will not be included in the returned dict.
38 """
39 hooks_dir = 'hooks'
40 r = {}
41 logging.debug('hooks_dir is {0}'.format(hooks_dir))
42 if not os.path.isdir(hooks_dir):
43 logging.debug('No hooks/ dir in template_dir')
44 return r
45 for f in os.listdir(hooks_dir):
46 basename = os.path.splitext(os.path.basename(f))[0]
47 if basename in _HOOKS:
48 r[basename] = os.path.abspath(os.path.join(hooks_dir, f))
49 return r
50
51
52 def run_script(script_path, cwd='.'):
53 """
54 Executes a script from a working directory.
55
56 :param script_path: Absolute path to the script to run.
57 :param cwd: The directory to run the script from.
58 """
59 run_thru_shell = sys.platform.startswith('win')
60 if script_path.endswith('.py'):
61 script_command = [sys.executable, script_path]
62 else:
63 script_command = [script_path]
64
65 utils.make_executable(script_path)
66
67 proc = subprocess.Popen(
68 script_command,
69 shell=run_thru_shell,
70 cwd=cwd
71 )
72 exit_status = proc.wait()
73 if exit_status != EXIT_SUCCESS:
74 raise FailedHookException(
75 "Hook script failed (exit status: %d)" % exit_status)
76
77
78 def run_script_with_context(script_path, cwd, context):
79 """
80 Executes a script after rendering with it Jinja.
81
82 :param script_path: Absolute path to the script to run.
83 :param cwd: The directory to run the script from.
84 :param context: Cookiecutter project template context.
85 """
86 _, extension = os.path.splitext(script_path)
87
88 contents = io.open(script_path, 'r', encoding='utf-8').read()
89
90 with tempfile.NamedTemporaryFile(
91 delete=False,
92 mode='wb',
93 suffix=extension
94 ) as temp:
95 output = Template(contents).render(**context)
96 temp.write(output.encode('utf-8'))
97
98 run_script(temp.name, cwd)
99
100
101 def run_hook(hook_name, project_dir, context):
102 """
103 Try to find and execute a hook from the specified project directory.
104
105 :param hook_name: The hook to execute.
106 :param project_dir: The directory to execute the script from.
107 :param context: Cookiecutter project context.
108 """
109 script = find_hooks().get(hook_name)
110 if script is None:
111 logging.debug('No hooks found')
112 return
113 run_script_with_context(script, project_dir, context)
114
[end of cookiecutter/hooks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py
--- a/cookiecutter/hooks.py
+++ b/cookiecutter/hooks.py
@@ -37,16 +37,20 @@
Missing scripts will not be included in the returned dict.
"""
hooks_dir = 'hooks'
- r = {}
+ hooks = {}
logging.debug('hooks_dir is {0}'.format(hooks_dir))
+
if not os.path.isdir(hooks_dir):
logging.debug('No hooks/ dir in template_dir')
- return r
+ return hooks
+
for f in os.listdir(hooks_dir):
- basename = os.path.splitext(os.path.basename(f))[0]
- if basename in _HOOKS:
- r[basename] = os.path.abspath(os.path.join(hooks_dir, f))
- return r
+ filename = os.path.basename(f)
+ basename = os.path.splitext(filename)[0]
+
+ if basename in _HOOKS and not filename.endswith('~'):
+ hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))
+ return hooks
def run_script(script_path, cwd='.'):
|
{"golden_diff": "diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py\n--- a/cookiecutter/hooks.py\n+++ b/cookiecutter/hooks.py\n@@ -37,16 +37,20 @@\n Missing scripts will not be included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n- r = {}\n+ hooks = {}\n logging.debug('hooks_dir is {0}'.format(hooks_dir))\n+\n if not os.path.isdir(hooks_dir):\n logging.debug('No hooks/ dir in template_dir')\n- return r\n+ return hooks\n+\n for f in os.listdir(hooks_dir):\n- basename = os.path.splitext(os.path.basename(f))[0]\n- if basename in _HOOKS:\n- r[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n- return r\n+ filename = os.path.basename(f)\n+ basename = os.path.splitext(filename)[0]\n+\n+ if basename in _HOOKS and not filename.endswith('~'):\n+ hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n+ return hooks\n \n \n def run_script(script_path, cwd='.'):\n", "issue": "Wrong hook executed when a tilde-suffixed file of the same name exists\n- Cookiecutter version: 1.4.0\n- Template project url: https://github.com/thorgate/django-project-template\n- Python version: 3.4\n- Operating System: Ubuntu 15.10 wily\n### Description:\n\nWhen using gedit or some other text editor that pollutes the directory with backup files ending with a tilde, cookiecutter mistakes that for the \"real\" hook it should run. This resulted in cookiecutter running a ridiculously outdated version of my pre-gen hook.\n\nThe obvious solution is to just remove `pre_gen_project.py~`, which works, but I believe ideally cookiecutter shouldn't be running it in the first place.\n### What I've run:\n\n```\ngedit django-template/hooks/pre_gen_project.py\ncookiecutter django-template\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.hooks\n------------------\n\nFunctions for discovering and executing various cookiecutter hooks.\n\"\"\"\n\nimport io\nimport logging\nimport os\nimport subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom cookiecutter import utils\nfrom .exceptions import FailedHookException\n\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n # TODO: other hooks should be listed here\n]\nEXIT_SUCCESS = 0\n\n\ndef find_hooks():\n \"\"\"\n Must be called with the project template as the current working directory.\n Returns a dict of all hook scripts provided.\n Dict's key will be the hook/script's name, without extension, while\n values will be the absolute path to the script.\n Missing scripts will not be included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n r = {}\n logging.debug('hooks_dir is {0}'.format(hooks_dir))\n if not os.path.isdir(hooks_dir):\n logging.debug('No hooks/ dir in template_dir')\n return r\n for f in os.listdir(hooks_dir):\n basename = os.path.splitext(os.path.basename(f))[0]\n if basename in _HOOKS:\n r[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n return r\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"\n Executes a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n \"Hook script failed 
(exit status: %d)\" % exit_status)\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"\n Executes a script after rendering with it Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hooks().get(hook_name)\n if script is None:\n logging.debug('No hooks found')\n return\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}]}
| 1,678 | 256 |
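A self-contained sketch of the filtering idea in the diff above: ignore editor backup files such as `pre_gen_project.py~` when collecting hooks. The function below is an illustration written for this record, not the actual `cookiecutter.hooks` module.

```python
import os

_HOOKS = ("pre_gen_project", "post_gen_project")


def find_hooks(hooks_dir="hooks"):
    """Return a dict mapping hook name to script path, skipping editor backups.

    os.path.splitext("pre_gen_project.py~") still yields the basename
    "pre_gen_project", which is why the raw filename has to be checked
    for a trailing "~" before the script is accepted.
    """
    hooks = {}
    if not os.path.isdir(hooks_dir):
        return hooks
    for entry in os.listdir(hooks_dir):
        filename = os.path.basename(entry)
        basename = os.path.splitext(filename)[0]
        if basename in _HOOKS and not filename.endswith("~"):
            hooks[basename] = os.path.abspath(os.path.join(hooks_dir, entry))
    return hooks
```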
gh_patches_debug_29442
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-466
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reminder details in confirmation message.
The reminder command currently only replies with `Your reminder has been created successfully!`.
This can be improved by including the actual reminder contents in the confirmation, along with the datetime or humanized timedelta at which the reminder will be due, so the user can verify that the intended duration was processed.
For the message sent once the reminder is due, it would be good to add a jump URL to the original command message so we can jump back into the context of the conversation in which the reminder was made. (Previously had mixed up wording for this section.)
This would do well as a first issue if it is not gotten to soon.
</issue>
<code>
[start of bot/cogs/reminders.py]
1 import asyncio
2 import logging
3 import random
4 import textwrap
5 from datetime import datetime
6 from operator import itemgetter
7 from typing import Optional
8
9 from dateutil.relativedelta import relativedelta
10 from discord import Colour, Embed, Message
11 from discord.ext.commands import Bot, Cog, Context, group
12
13 from bot.constants import Channels, Icons, NEGATIVE_REPLIES, POSITIVE_REPLIES, STAFF_ROLES
14 from bot.converters import Duration
15 from bot.pagination import LinePaginator
16 from bot.utils.checks import without_role_check
17 from bot.utils.scheduling import Scheduler
18 from bot.utils.time import humanize_delta, wait_until
19
20 log = logging.getLogger(__name__)
21
22 WHITELISTED_CHANNELS = (Channels.bot,)
23 MAXIMUM_REMINDERS = 5
24
25
26 class Reminders(Scheduler, Cog):
27 """Provide in-channel reminder functionality."""
28
29 def __init__(self, bot: Bot):
30 self.bot = bot
31 super().__init__()
32
33 self.bot.loop.create_task(self.reschedule_reminders())
34
35 async def reschedule_reminders(self) -> None:
36 """Get all current reminders from the API and reschedule them."""
37 await self.bot.wait_until_ready()
38 response = await self.bot.api_client.get(
39 'bot/reminders',
40 params={'active': 'true'}
41 )
42
43 now = datetime.utcnow()
44 loop = asyncio.get_event_loop()
45
46 for reminder in response:
47 remind_at = datetime.fromisoformat(reminder['expiration'][:-1])
48
49 # If the reminder is already overdue ...
50 if remind_at < now:
51 late = relativedelta(now, remind_at)
52 await self.send_reminder(reminder, late)
53
54 else:
55 self.schedule_task(loop, reminder["id"], reminder)
56
57 @staticmethod
58 async def _send_confirmation(ctx: Context, on_success: str) -> None:
59 """Send an embed confirming the reminder change was made successfully."""
60 embed = Embed()
61 embed.colour = Colour.green()
62 embed.title = random.choice(POSITIVE_REPLIES)
63 embed.description = on_success
64 await ctx.send(embed=embed)
65
66 async def _scheduled_task(self, reminder: dict) -> None:
67 """A coroutine which sends the reminder once the time is reached, and cancels the running task."""
68 reminder_id = reminder["id"]
69 reminder_datetime = datetime.fromisoformat(reminder['expiration'][:-1])
70
71 # Send the reminder message once the desired duration has passed
72 await wait_until(reminder_datetime)
73 await self.send_reminder(reminder)
74
75 log.debug(f"Deleting reminder {reminder_id} (the user has been reminded).")
76 await self._delete_reminder(reminder_id)
77
78 # Now we can begone with it from our schedule list.
79 self.cancel_task(reminder_id)
80
81 async def _delete_reminder(self, reminder_id: str) -> None:
82 """Delete a reminder from the database, given its ID, and cancel the running task."""
83 await self.bot.api_client.delete('bot/reminders/' + str(reminder_id))
84
85 # Now we can remove it from the schedule list
86 self.cancel_task(reminder_id)
87
88 async def _reschedule_reminder(self, reminder: dict) -> None:
89 """Reschedule a reminder object."""
90 loop = asyncio.get_event_loop()
91
92 self.cancel_task(reminder["id"])
93 self.schedule_task(loop, reminder["id"], reminder)
94
95 async def send_reminder(self, reminder: dict, late: relativedelta = None) -> None:
96 """Send the reminder."""
97 channel = self.bot.get_channel(reminder["channel_id"])
98 user = self.bot.get_user(reminder["author"])
99
100 embed = Embed()
101 embed.colour = Colour.blurple()
102 embed.set_author(
103 icon_url=Icons.remind_blurple,
104 name="It has arrived!"
105 )
106
107 embed.description = f"Here's your reminder: `{reminder['content']}`"
108
109 if late:
110 embed.colour = Colour.red()
111 embed.set_author(
112 icon_url=Icons.remind_red,
113 name=f"Sorry it arrived {humanize_delta(late, max_units=2)} late!"
114 )
115
116 await channel.send(
117 content=user.mention,
118 embed=embed
119 )
120 await self._delete_reminder(reminder["id"])
121
122 @group(name="remind", aliases=("reminder", "reminders"), invoke_without_command=True)
123 async def remind_group(self, ctx: Context, expiration: Duration, *, content: str) -> None:
124 """Commands for managing your reminders."""
125 await ctx.invoke(self.new_reminder, expiration=expiration, content=content)
126
127 @remind_group.command(name="new", aliases=("add", "create"))
128 async def new_reminder(self, ctx: Context, expiration: Duration, *, content: str) -> Optional[Message]:
129 """
130 Set yourself a simple reminder.
131
132 Expiration is parsed per: http://strftime.org/
133 """
134 embed = Embed()
135
136 # If the user is not staff, we need to verify whether or not to make a reminder at all.
137 if without_role_check(ctx, *STAFF_ROLES):
138
139 # If they don't have permission to set a reminder in this channel
140 if ctx.channel.id not in WHITELISTED_CHANNELS:
141 embed.colour = Colour.red()
142 embed.title = random.choice(NEGATIVE_REPLIES)
143 embed.description = "Sorry, you can't do that here!"
144
145 return await ctx.send(embed=embed)
146
147 # Get their current active reminders
148 active_reminders = await self.bot.api_client.get(
149 'bot/reminders',
150 params={
151 'author__id': str(ctx.author.id)
152 }
153 )
154
155 # Let's limit this, so we don't get 10 000
156 # reminders from kip or something like that :P
157 if len(active_reminders) > MAXIMUM_REMINDERS:
158 embed.colour = Colour.red()
159 embed.title = random.choice(NEGATIVE_REPLIES)
160 embed.description = "You have too many active reminders!"
161
162 return await ctx.send(embed=embed)
163
164 # Now we can attempt to actually set the reminder.
165 reminder = await self.bot.api_client.post(
166 'bot/reminders',
167 json={
168 'author': ctx.author.id,
169 'channel_id': ctx.message.channel.id,
170 'content': content,
171 'expiration': expiration.isoformat()
172 }
173 )
174
175 # Confirm to the user that it worked.
176 await self._send_confirmation(
177 ctx, on_success="Your reminder has been created successfully!"
178 )
179
180 loop = asyncio.get_event_loop()
181 self.schedule_task(loop, reminder["id"], reminder)
182
183 @remind_group.command(name="list")
184 async def list_reminders(self, ctx: Context) -> Optional[Message]:
185 """View a paginated embed of all reminders for your user."""
186 # Get all the user's reminders from the database.
187 data = await self.bot.api_client.get(
188 'bot/reminders',
189 params={'author__id': str(ctx.author.id)}
190 )
191
192 now = datetime.utcnow()
193
194 # Make a list of tuples so it can be sorted by time.
195 reminders = sorted(
196 (
197 (rem['content'], rem['expiration'], rem['id'])
198 for rem in data
199 ),
200 key=itemgetter(1)
201 )
202
203 lines = []
204
205 for content, remind_at, id_ in reminders:
206 # Parse and humanize the time, make it pretty :D
207 remind_datetime = datetime.fromisoformat(remind_at[:-1])
208 time = humanize_delta(relativedelta(remind_datetime, now))
209
210 text = textwrap.dedent(f"""
211 **Reminder #{id_}:** *expires in {time}* (ID: {id_})
212 {content}
213 """).strip()
214
215 lines.append(text)
216
217 embed = Embed()
218 embed.colour = Colour.blurple()
219 embed.title = f"Reminders for {ctx.author}"
220
221 # Remind the user that they have no reminders :^)
222 if not lines:
223 embed.description = "No active reminders could be found."
224 return await ctx.send(embed=embed)
225
226 # Construct the embed and paginate it.
227 embed.colour = Colour.blurple()
228
229 await LinePaginator.paginate(
230 lines,
231 ctx, embed,
232 max_lines=3,
233 empty=True
234 )
235
236 @remind_group.group(name="edit", aliases=("change", "modify"), invoke_without_command=True)
237 async def edit_reminder_group(self, ctx: Context) -> None:
238 """Commands for modifying your current reminders."""
239 await ctx.invoke(self.bot.get_command("help"), "reminders", "edit")
240
241 @edit_reminder_group.command(name="duration", aliases=("time",))
242 async def edit_reminder_duration(self, ctx: Context, id_: int, expiration: Duration) -> None:
243 """
244 Edit one of your reminder's expiration.
245
246 Expiration is parsed per: http://strftime.org/
247 """
248 # Send the request to update the reminder in the database
249 reminder = await self.bot.api_client.patch(
250 'bot/reminders/' + str(id_),
251 json={'expiration': expiration.isoformat()}
252 )
253
254 # Send a confirmation message to the channel
255 await self._send_confirmation(
256 ctx, on_success="That reminder has been edited successfully!"
257 )
258
259 await self._reschedule_reminder(reminder)
260
261 @edit_reminder_group.command(name="content", aliases=("reason",))
262 async def edit_reminder_content(self, ctx: Context, id_: int, *, content: str) -> None:
263 """Edit one of your reminder's content."""
264 # Send the request to update the reminder in the database
265 reminder = await self.bot.api_client.patch(
266 'bot/reminders/' + str(id_),
267 json={'content': content}
268 )
269
270 # Send a confirmation message to the channel
271 await self._send_confirmation(
272 ctx, on_success="That reminder has been edited successfully!"
273 )
274 await self._reschedule_reminder(reminder)
275
276 @remind_group.command("delete", aliases=("remove",))
277 async def delete_reminder(self, ctx: Context, id_: int) -> None:
278 """Delete one of your active reminders."""
279 await self._delete_reminder(id_)
280 await self._send_confirmation(
281 ctx, on_success="That reminder has been deleted successfully!"
282 )
283
284
285 def setup(bot: Bot) -> None:
286 """Reminders cog load."""
287 bot.add_cog(Reminders(bot))
288 log.info("Cog loaded: Reminders")
289
[end of bot/cogs/reminders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bot/cogs/reminders.py b/bot/cogs/reminders.py
--- a/bot/cogs/reminders.py
+++ b/bot/cogs/reminders.py
@@ -2,7 +2,7 @@
import logging
import random
import textwrap
-from datetime import datetime
+from datetime import datetime, timedelta
from operator import itemgetter
from typing import Optional
@@ -104,7 +104,10 @@
name="It has arrived!"
)
- embed.description = f"Here's your reminder: `{reminder['content']}`"
+ embed.description = f"Here's your reminder: `{reminder['content']}`."
+
+ if reminder.get("jump_url"): # keep backward compatibility
+ embed.description += f"\n[Jump back to when you created the reminder]({reminder['jump_url']})"
if late:
embed.colour = Colour.red()
@@ -167,14 +170,18 @@
json={
'author': ctx.author.id,
'channel_id': ctx.message.channel.id,
+ 'jump_url': ctx.message.jump_url,
'content': content,
'expiration': expiration.isoformat()
}
)
+ now = datetime.utcnow() - timedelta(seconds=1)
+
# Confirm to the user that it worked.
await self._send_confirmation(
- ctx, on_success="Your reminder has been created successfully!"
+ ctx,
+ on_success=f"Your reminder will arrive in {humanize_delta(relativedelta(expiration, now))}!"
)
loop = asyncio.get_event_loop()
|
{"golden_diff": "diff --git a/bot/cogs/reminders.py b/bot/cogs/reminders.py\n--- a/bot/cogs/reminders.py\n+++ b/bot/cogs/reminders.py\n@@ -2,7 +2,7 @@\n import logging\n import random\n import textwrap\n-from datetime import datetime\n+from datetime import datetime, timedelta\n from operator import itemgetter\n from typing import Optional\n \n@@ -104,7 +104,10 @@\n name=\"It has arrived!\"\n )\n \n- embed.description = f\"Here's your reminder: `{reminder['content']}`\"\n+ embed.description = f\"Here's your reminder: `{reminder['content']}`.\"\n+\n+ if reminder.get(\"jump_url\"): # keep backward compatibility\n+ embed.description += f\"\\n[Jump back to when you created the reminder]({reminder['jump_url']})\"\n \n if late:\n embed.colour = Colour.red()\n@@ -167,14 +170,18 @@\n json={\n 'author': ctx.author.id,\n 'channel_id': ctx.message.channel.id,\n+ 'jump_url': ctx.message.jump_url,\n 'content': content,\n 'expiration': expiration.isoformat()\n }\n )\n \n+ now = datetime.utcnow() - timedelta(seconds=1)\n+\n # Confirm to the user that it worked.\n await self._send_confirmation(\n- ctx, on_success=\"Your reminder has been created successfully!\"\n+ ctx,\n+ on_success=f\"Your reminder will arrive in {humanize_delta(relativedelta(expiration, now))}!\"\n )\n \n loop = asyncio.get_event_loop()\n", "issue": "Reminder details in confirmation message.\nThe reminder command currently only replies with `Your reminder has been created successfully!`. \r\n\r\nThis can be improved by providing the actual reminder contents in the confirmation and the datetime / humanized timedelta the reminder will be due to ensure the intended duration was processed. \r\n\r\nFor the message sent once the reminder is due, a jump url to the original command message so we can jump back into context of the conversation when the reminder was made would be good to add. 
(Previously had mixed up wording for this section).\r\n\r\nThis would do well as a first issue if it's not gotten to in short time.\n", "before_files": [{"content": "import asyncio\nimport logging\nimport random\nimport textwrap\nfrom datetime import datetime\nfrom operator import itemgetter\nfrom typing import Optional\n\nfrom dateutil.relativedelta import relativedelta\nfrom discord import Colour, Embed, Message\nfrom discord.ext.commands import Bot, Cog, Context, group\n\nfrom bot.constants import Channels, Icons, NEGATIVE_REPLIES, POSITIVE_REPLIES, STAFF_ROLES\nfrom bot.converters import Duration\nfrom bot.pagination import LinePaginator\nfrom bot.utils.checks import without_role_check\nfrom bot.utils.scheduling import Scheduler\nfrom bot.utils.time import humanize_delta, wait_until\n\nlog = logging.getLogger(__name__)\n\nWHITELISTED_CHANNELS = (Channels.bot,)\nMAXIMUM_REMINDERS = 5\n\n\nclass Reminders(Scheduler, Cog):\n \"\"\"Provide in-channel reminder functionality.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n super().__init__()\n\n self.bot.loop.create_task(self.reschedule_reminders())\n\n async def reschedule_reminders(self) -> None:\n \"\"\"Get all current reminders from the API and reschedule them.\"\"\"\n await self.bot.wait_until_ready()\n response = await self.bot.api_client.get(\n 'bot/reminders',\n params={'active': 'true'}\n )\n\n now = datetime.utcnow()\n loop = asyncio.get_event_loop()\n\n for reminder in response:\n remind_at = datetime.fromisoformat(reminder['expiration'][:-1])\n\n # If the reminder is already overdue ...\n if remind_at < now:\n late = relativedelta(now, remind_at)\n await self.send_reminder(reminder, late)\n\n else:\n self.schedule_task(loop, reminder[\"id\"], reminder)\n\n @staticmethod\n async def _send_confirmation(ctx: Context, on_success: str) -> None:\n \"\"\"Send an embed confirming the reminder change was made successfully.\"\"\"\n embed = Embed()\n embed.colour = Colour.green()\n embed.title = random.choice(POSITIVE_REPLIES)\n embed.description = on_success\n await ctx.send(embed=embed)\n\n async def _scheduled_task(self, reminder: dict) -> None:\n \"\"\"A coroutine which sends the reminder once the time is reached, and cancels the running task.\"\"\"\n reminder_id = reminder[\"id\"]\n reminder_datetime = datetime.fromisoformat(reminder['expiration'][:-1])\n\n # Send the reminder message once the desired duration has passed\n await wait_until(reminder_datetime)\n await self.send_reminder(reminder)\n\n log.debug(f\"Deleting reminder {reminder_id} (the user has been reminded).\")\n await self._delete_reminder(reminder_id)\n\n # Now we can begone with it from our schedule list.\n self.cancel_task(reminder_id)\n\n async def _delete_reminder(self, reminder_id: str) -> None:\n \"\"\"Delete a reminder from the database, given its ID, and cancel the running task.\"\"\"\n await self.bot.api_client.delete('bot/reminders/' + str(reminder_id))\n\n # Now we can remove it from the schedule list\n self.cancel_task(reminder_id)\n\n async def _reschedule_reminder(self, reminder: dict) -> None:\n \"\"\"Reschedule a reminder object.\"\"\"\n loop = asyncio.get_event_loop()\n\n self.cancel_task(reminder[\"id\"])\n self.schedule_task(loop, reminder[\"id\"], reminder)\n\n async def send_reminder(self, reminder: dict, late: relativedelta = None) -> None:\n \"\"\"Send the reminder.\"\"\"\n channel = self.bot.get_channel(reminder[\"channel_id\"])\n user = self.bot.get_user(reminder[\"author\"])\n\n embed = Embed()\n embed.colour = Colour.blurple()\n 
embed.set_author(\n icon_url=Icons.remind_blurple,\n name=\"It has arrived!\"\n )\n\n embed.description = f\"Here's your reminder: `{reminder['content']}`\"\n\n if late:\n embed.colour = Colour.red()\n embed.set_author(\n icon_url=Icons.remind_red,\n name=f\"Sorry it arrived {humanize_delta(late, max_units=2)} late!\"\n )\n\n await channel.send(\n content=user.mention,\n embed=embed\n )\n await self._delete_reminder(reminder[\"id\"])\n\n @group(name=\"remind\", aliases=(\"reminder\", \"reminders\"), invoke_without_command=True)\n async def remind_group(self, ctx: Context, expiration: Duration, *, content: str) -> None:\n \"\"\"Commands for managing your reminders.\"\"\"\n await ctx.invoke(self.new_reminder, expiration=expiration, content=content)\n\n @remind_group.command(name=\"new\", aliases=(\"add\", \"create\"))\n async def new_reminder(self, ctx: Context, expiration: Duration, *, content: str) -> Optional[Message]:\n \"\"\"\n Set yourself a simple reminder.\n\n Expiration is parsed per: http://strftime.org/\n \"\"\"\n embed = Embed()\n\n # If the user is not staff, we need to verify whether or not to make a reminder at all.\n if without_role_check(ctx, *STAFF_ROLES):\n\n # If they don't have permission to set a reminder in this channel\n if ctx.channel.id not in WHITELISTED_CHANNELS:\n embed.colour = Colour.red()\n embed.title = random.choice(NEGATIVE_REPLIES)\n embed.description = \"Sorry, you can't do that here!\"\n\n return await ctx.send(embed=embed)\n\n # Get their current active reminders\n active_reminders = await self.bot.api_client.get(\n 'bot/reminders',\n params={\n 'author__id': str(ctx.author.id)\n }\n )\n\n # Let's limit this, so we don't get 10 000\n # reminders from kip or something like that :P\n if len(active_reminders) > MAXIMUM_REMINDERS:\n embed.colour = Colour.red()\n embed.title = random.choice(NEGATIVE_REPLIES)\n embed.description = \"You have too many active reminders!\"\n\n return await ctx.send(embed=embed)\n\n # Now we can attempt to actually set the reminder.\n reminder = await self.bot.api_client.post(\n 'bot/reminders',\n json={\n 'author': ctx.author.id,\n 'channel_id': ctx.message.channel.id,\n 'content': content,\n 'expiration': expiration.isoformat()\n }\n )\n\n # Confirm to the user that it worked.\n await self._send_confirmation(\n ctx, on_success=\"Your reminder has been created successfully!\"\n )\n\n loop = asyncio.get_event_loop()\n self.schedule_task(loop, reminder[\"id\"], reminder)\n\n @remind_group.command(name=\"list\")\n async def list_reminders(self, ctx: Context) -> Optional[Message]:\n \"\"\"View a paginated embed of all reminders for your user.\"\"\"\n # Get all the user's reminders from the database.\n data = await self.bot.api_client.get(\n 'bot/reminders',\n params={'author__id': str(ctx.author.id)}\n )\n\n now = datetime.utcnow()\n\n # Make a list of tuples so it can be sorted by time.\n reminders = sorted(\n (\n (rem['content'], rem['expiration'], rem['id'])\n for rem in data\n ),\n key=itemgetter(1)\n )\n\n lines = []\n\n for content, remind_at, id_ in reminders:\n # Parse and humanize the time, make it pretty :D\n remind_datetime = datetime.fromisoformat(remind_at[:-1])\n time = humanize_delta(relativedelta(remind_datetime, now))\n\n text = textwrap.dedent(f\"\"\"\n **Reminder #{id_}:** *expires in {time}* (ID: {id_})\n {content}\n \"\"\").strip()\n\n lines.append(text)\n\n embed = Embed()\n embed.colour = Colour.blurple()\n embed.title = f\"Reminders for {ctx.author}\"\n\n # Remind the user that they have no reminders :^)\n if 
not lines:\n embed.description = \"No active reminders could be found.\"\n return await ctx.send(embed=embed)\n\n # Construct the embed and paginate it.\n embed.colour = Colour.blurple()\n\n await LinePaginator.paginate(\n lines,\n ctx, embed,\n max_lines=3,\n empty=True\n )\n\n @remind_group.group(name=\"edit\", aliases=(\"change\", \"modify\"), invoke_without_command=True)\n async def edit_reminder_group(self, ctx: Context) -> None:\n \"\"\"Commands for modifying your current reminders.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"reminders\", \"edit\")\n\n @edit_reminder_group.command(name=\"duration\", aliases=(\"time\",))\n async def edit_reminder_duration(self, ctx: Context, id_: int, expiration: Duration) -> None:\n \"\"\"\n Edit one of your reminder's expiration.\n\n Expiration is parsed per: http://strftime.org/\n \"\"\"\n # Send the request to update the reminder in the database\n reminder = await self.bot.api_client.patch(\n 'bot/reminders/' + str(id_),\n json={'expiration': expiration.isoformat()}\n )\n\n # Send a confirmation message to the channel\n await self._send_confirmation(\n ctx, on_success=\"That reminder has been edited successfully!\"\n )\n\n await self._reschedule_reminder(reminder)\n\n @edit_reminder_group.command(name=\"content\", aliases=(\"reason\",))\n async def edit_reminder_content(self, ctx: Context, id_: int, *, content: str) -> None:\n \"\"\"Edit one of your reminder's content.\"\"\"\n # Send the request to update the reminder in the database\n reminder = await self.bot.api_client.patch(\n 'bot/reminders/' + str(id_),\n json={'content': content}\n )\n\n # Send a confirmation message to the channel\n await self._send_confirmation(\n ctx, on_success=\"That reminder has been edited successfully!\"\n )\n await self._reschedule_reminder(reminder)\n\n @remind_group.command(\"delete\", aliases=(\"remove\",))\n async def delete_reminder(self, ctx: Context, id_: int) -> None:\n \"\"\"Delete one of your active reminders.\"\"\"\n await self._delete_reminder(id_)\n await self._send_confirmation(\n ctx, on_success=\"That reminder has been deleted successfully!\"\n )\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Reminders cog load.\"\"\"\n bot.add_cog(Reminders(bot))\n log.info(\"Cog loaded: Reminders\")\n", "path": "bot/cogs/reminders.py"}]}
| 3,706 | 354 |
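The patch above builds its confirmation text from `humanize_delta(relativedelta(expiration, now))`. Below is a rough, self-contained stand-in for that helper; it is not the bot's actual `bot.utils.time.humanize_delta`, just an illustration of the idea, and it assumes `python-dateutil` is installed.

```python
from datetime import datetime, timedelta

from dateutil.relativedelta import relativedelta


def humanize_delta(delta, max_units=2):
    """Render a relativedelta as text, largest units first, e.g. '1 day and 3 hours'."""
    parts = []
    for unit in ("years", "months", "days", "hours", "minutes", "seconds"):
        value = getattr(delta, unit)
        if value:
            name = unit if abs(value) != 1 else unit[:-1]
            parts.append(f"{value} {name}")
        if len(parts) == max_units:
            break
    return " and ".join(parts) if parts else "a moment"


expiration = datetime.utcnow() + timedelta(days=1, hours=3)
now = datetime.utcnow() - timedelta(seconds=1)  # mirrors the one-second offset used in the patch
print(f"Your reminder will arrive in {humanize_delta(relativedelta(expiration, now))}!")
```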
gh_patches_debug_11047
|
rasdani/github-patches
|
git_diff
|
astronomer__astro-sdk-62
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to load_file using parquet
Version: Astro 0.2.0
Python: 3.8, 3.9
Astro is unable to run the task `load_file` with a parquet file.
It raises the following exception:
```
Traceback (most recent call last):
File "pyarrow/io.pxi", line 1511, in pyarrow.lib.get_native_file
File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/util.py", line 99, in _stringify_path
raise TypeError("not a path-like object")
TypeError: not a path-like object
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pyarrow/io.pxi", line 1517, in pyarrow.lib.get_native_file
File "pyarrow/io.pxi", line 729, in pyarrow.lib.PythonFile.__cinit__
TypeError: binary file expected, got text file
warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
```
</issue>
<code>
[start of src/astro/sql/operators/agnostic_load_file.py]
1 """
2 Copyright Astronomer, Inc.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 """
16
17 import os
18 from typing import Union
19 from urllib.parse import urlparse
20
21 import pandas as pd
22 from airflow.hooks.base import BaseHook
23 from airflow.models import BaseOperator
24 from smart_open import open
25
26 from astro.sql.table import Table, TempTable, create_table_name
27 from astro.utils.cloud_storage_creds import gcs_client, s3fs_creds
28 from astro.utils.load_dataframe import move_dataframe_to_sql
29 from astro.utils.schema_util import get_schema
30 from astro.utils.task_id_helper import get_task_id
31
32
33 class AgnosticLoadFile(BaseOperator):
34 """Load S3/local table to postgres/snowflake database.
35
36 :param path: File path.
37 :type path: str
38 :param output_table_name: Name of table to create.
39 :type output_table_name: str
40 :param file_conn_id: Airflow connection id of input file (optional)
41 :type file_conn_id: str
42 :param output_conn_id: Database connection id.
43 :type output_conn_id: str
44 """
45
46 def __init__(
47 self,
48 path,
49 output_table: Union[TempTable, Table],
50 file_conn_id="",
51 chunksize=None,
52 **kwargs,
53 ) -> None:
54 super().__init__(**kwargs)
55 self.output_table: Union[TempTable, Table] = output_table
56 self.path = path
57 self.chunksize = chunksize
58 self.file_conn_id = file_conn_id
59 self.kwargs = kwargs
60 self.output_table = output_table
61
62 def execute(self, context):
63 """Loads csv/parquet table from local/S3/GCS with Pandas.
64
65 Infers SQL database type based on connection then loads table to db.
66 """
67
68 # Read file with Pandas load method based on `file_type` (S3 or local).
69 df = self._load_dataframe(self.path)
70
71 # Retrieve conn type
72 conn = BaseHook.get_connection(self.output_table.conn_id)
73 if type(self.output_table) == TempTable:
74 self.output_table = self.output_table.to_table(
75 create_table_name(context=context), get_schema()
76 )
77 else:
78 self.output_table.schema = self.output_table.schema or get_schema()
79 move_dataframe_to_sql(
80 output_table_name=self.output_table.table_name,
81 conn_id=self.output_table.conn_id,
82 database=self.output_table.database,
83 warehouse=self.output_table.warehouse,
84 schema=self.output_table.schema,
85 df=df,
86 conn_type=conn.conn_type,
87 user=conn.login,
88 )
89 self.log.info(f"returning table {self.output_table}")
90 return self.output_table
91
92 @staticmethod
93 def validate_path(path):
94 """Validate a URL or local file path"""
95 try:
96 result = urlparse(path)
97 return all([result.scheme, result.netloc]) or os.path.isfile(path)
98 except:
99 return False
100
101 def _load_dataframe(self, path):
102 """Read file with Pandas.
103
104 Select method based on `file_type` (S3 or local).
105 """
106
107 if not AgnosticLoadFile.validate_path(path):
108 raise ValueError("Invalid path: {}".format(path))
109
110 file_type = path.split(".")[-1]
111 transport_params = {
112 "s3": s3fs_creds,
113 "gs": gcs_client,
114 "": lambda: None,
115 }[urlparse(path).scheme]()
116 deserialiser = {
117 "parquet": pd.read_parquet,
118 "csv": pd.read_csv,
119 "json": pd.read_json,
120 "ndjson": pd.read_json,
121 }
122 deserialiser_params = {"ndjson": {"lines": True}}
123 with open(path, transport_params=transport_params) as stream:
124 return deserialiser[file_type](
125 stream, **deserialiser_params.get(file_type, {})
126 )
127
128
129 def load_file(
130 path,
131 output_table=None,
132 file_conn_id=None,
133 task_id=None,
134 **kwargs,
135 ):
136 """Convert AgnosticLoadFile into a function.
137
138 Returns an XComArg object.
139
140 :param path: File path.
141 :type path: str
142 :param output_table: Table to create
143 :type output_table: Table
144 :param file_conn_id: Airflow connection id of input file (optional)
145 :type file_conn_id: str
146 :param task_id: task id, optional.
147 :type task_id: str
148 """
149
150 task_id = task_id if task_id is not None else get_task_id("load_file", path)
151
152 return AgnosticLoadFile(
153 task_id=task_id,
154 path=path,
155 output_table=output_table,
156 file_conn_id=file_conn_id,
157 **kwargs,
158 ).output
159
[end of src/astro/sql/operators/agnostic_load_file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/astro/sql/operators/agnostic_load_file.py b/src/astro/sql/operators/agnostic_load_file.py
--- a/src/astro/sql/operators/agnostic_load_file.py
+++ b/src/astro/sql/operators/agnostic_load_file.py
@@ -119,8 +119,11 @@
"json": pd.read_json,
"ndjson": pd.read_json,
}
+ mode = {"parquet": "rb"}
deserialiser_params = {"ndjson": {"lines": True}}
- with open(path, transport_params=transport_params) as stream:
+ with open(
+ path, mode=mode.get(file_type, "r"), transport_params=transport_params
+ ) as stream:
return deserialiser[file_type](
stream, **deserialiser_params.get(file_type, {})
)
|
{"golden_diff": "diff --git a/src/astro/sql/operators/agnostic_load_file.py b/src/astro/sql/operators/agnostic_load_file.py\n--- a/src/astro/sql/operators/agnostic_load_file.py\n+++ b/src/astro/sql/operators/agnostic_load_file.py\n@@ -119,8 +119,11 @@\n \"json\": pd.read_json,\n \"ndjson\": pd.read_json,\n }\n+ mode = {\"parquet\": \"rb\"}\n deserialiser_params = {\"ndjson\": {\"lines\": True}}\n- with open(path, transport_params=transport_params) as stream:\n+ with open(\n+ path, mode=mode.get(file_type, \"r\"), transport_params=transport_params\n+ ) as stream:\n return deserialiser[file_type](\n stream, **deserialiser_params.get(file_type, {})\n )\n", "issue": "Unable to load_file using parquet\nVersion: Astro 0.2.0\r\nPython: 3.8, 3.9\r\n\r\nAstro is unable to run the task `load_file` with a parquet file.\r\n\r\nIt raises the following exception:\r\n```\r\n Traceback (most recent call last):\r\n File \"pyarrow/io.pxi\", line 1511, in pyarrow.lib.get_native_file\r\n File \"/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/util.py\", line 99, in _stringify_path\r\n raise TypeError(\"not a path-like object\")\r\n TypeError: not a path-like object\r\n \r\n During handling of the above exception, another exception occurred:\r\n \r\n Traceback (most recent call last):\r\n File \"pyarrow/io.pxi\", line 1517, in pyarrow.lib.get_native_file\r\n File \"pyarrow/io.pxi\", line 729, in pyarrow.lib.PythonFile.__cinit__\r\n TypeError: binary file expected, got text file\r\n \r\n warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Astronomer, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\"\"\"\n\nimport os\nfrom typing import Union\nfrom urllib.parse import urlparse\n\nimport pandas as pd\nfrom airflow.hooks.base import BaseHook\nfrom airflow.models import BaseOperator\nfrom smart_open import open\n\nfrom astro.sql.table import Table, TempTable, create_table_name\nfrom astro.utils.cloud_storage_creds import gcs_client, s3fs_creds\nfrom astro.utils.load_dataframe import move_dataframe_to_sql\nfrom astro.utils.schema_util import get_schema\nfrom astro.utils.task_id_helper import get_task_id\n\n\nclass AgnosticLoadFile(BaseOperator):\n \"\"\"Load S3/local table to postgres/snowflake database.\n\n :param path: File path.\n :type path: str\n :param output_table_name: Name of table to create.\n :type output_table_name: str\n :param file_conn_id: Airflow connection id of input file (optional)\n :type file_conn_id: str\n :param output_conn_id: Database connection id.\n :type output_conn_id: str\n \"\"\"\n\n def __init__(\n self,\n path,\n output_table: Union[TempTable, Table],\n file_conn_id=\"\",\n chunksize=None,\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n self.output_table: Union[TempTable, Table] = output_table\n self.path = path\n self.chunksize = chunksize\n self.file_conn_id = file_conn_id\n self.kwargs = kwargs\n self.output_table = output_table\n\n def execute(self, context):\n \"\"\"Loads csv/parquet table from 
local/S3/GCS with Pandas.\n\n Infers SQL database type based on connection then loads table to db.\n \"\"\"\n\n # Read file with Pandas load method based on `file_type` (S3 or local).\n df = self._load_dataframe(self.path)\n\n # Retrieve conn type\n conn = BaseHook.get_connection(self.output_table.conn_id)\n if type(self.output_table) == TempTable:\n self.output_table = self.output_table.to_table(\n create_table_name(context=context), get_schema()\n )\n else:\n self.output_table.schema = self.output_table.schema or get_schema()\n move_dataframe_to_sql(\n output_table_name=self.output_table.table_name,\n conn_id=self.output_table.conn_id,\n database=self.output_table.database,\n warehouse=self.output_table.warehouse,\n schema=self.output_table.schema,\n df=df,\n conn_type=conn.conn_type,\n user=conn.login,\n )\n self.log.info(f\"returning table {self.output_table}\")\n return self.output_table\n\n @staticmethod\n def validate_path(path):\n \"\"\"Validate a URL or local file path\"\"\"\n try:\n result = urlparse(path)\n return all([result.scheme, result.netloc]) or os.path.isfile(path)\n except:\n return False\n\n def _load_dataframe(self, path):\n \"\"\"Read file with Pandas.\n\n Select method based on `file_type` (S3 or local).\n \"\"\"\n\n if not AgnosticLoadFile.validate_path(path):\n raise ValueError(\"Invalid path: {}\".format(path))\n\n file_type = path.split(\".\")[-1]\n transport_params = {\n \"s3\": s3fs_creds,\n \"gs\": gcs_client,\n \"\": lambda: None,\n }[urlparse(path).scheme]()\n deserialiser = {\n \"parquet\": pd.read_parquet,\n \"csv\": pd.read_csv,\n \"json\": pd.read_json,\n \"ndjson\": pd.read_json,\n }\n deserialiser_params = {\"ndjson\": {\"lines\": True}}\n with open(path, transport_params=transport_params) as stream:\n return deserialiser[file_type](\n stream, **deserialiser_params.get(file_type, {})\n )\n\n\ndef load_file(\n path,\n output_table=None,\n file_conn_id=None,\n task_id=None,\n **kwargs,\n):\n \"\"\"Convert AgnosticLoadFile into a function.\n\n Returns an XComArg object.\n\n :param path: File path.\n :type path: str\n :param output_table: Table to create\n :type output_table: Table\n :param file_conn_id: Airflow connection id of input file (optional)\n :type file_conn_id: str\n :param task_id: task id, optional.\n :type task_id: str\n \"\"\"\n\n task_id = task_id if task_id is not None else get_task_id(\"load_file\", path)\n\n return AgnosticLoadFile(\n task_id=task_id,\n path=path,\n output_table=output_table,\n file_conn_id=file_conn_id,\n **kwargs,\n ).output\n", "path": "src/astro/sql/operators/agnostic_load_file.py"}]}
| 2,293 | 181 |
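A minimal sketch of the fix above, outside the operator: pick the stream mode from the file extension so parquet is handed to pandas as a binary stream. It assumes pandas, pyarrow and smart_open are installed; the path in the usage line is a placeholder, and this is an illustration rather than the Astro operator itself.

```python
import pandas as pd
from smart_open import open as smart_open_file

_READERS = {
    "parquet": pd.read_parquet,
    "csv": pd.read_csv,
    "json": pd.read_json,
    "ndjson": pd.read_json,
}
_MODES = {"parquet": "rb"}              # pyarrow expects a binary file object
_READER_KWARGS = {"ndjson": {"lines": True}}


def load_dataframe(path):
    """Load a local or remote file into a DataFrame, using binary mode for parquet."""
    file_type = path.rsplit(".", 1)[-1]
    with smart_open_file(path, mode=_MODES.get(file_type, "r")) as stream:
        return _READERS[file_type](stream, **_READER_KWARGS.get(file_type, {}))


# df = load_dataframe("s3://some-bucket/example.parquet")  # placeholder path
```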
gh_patches_debug_1174
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-5225
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[info] NumPy/SciPy new version pinning recommendation
See:
- https://github.com/numpy/numpy/pull/18505
- scipy/scipy#12862
The most important takeaway is that NumPy/SciPy now recommend that downstream distributions pin the upper-bound version when NumPy/SciPy are runtime dependencies. (For example, if the latest NumPy out there is 1.20, one should pin to `<1.23`; the notation used in the docs, `<1.xx+3.0`, is a bit confusing, see the clarification in https://github.com/scipy/scipy/pull/12862#discussion_r575790007.) There are other suggestions too, but I think this is potentially the most impactful one.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import glob
4 import os
5 from setuptools import setup, find_packages
6 import sys
7
8 import cupy_setup_build
9
10
11 for submodule in ('cupy/_core/include/cupy/cub/',
12 'cupy/_core/include/cupy/jitify'):
13 if len(os.listdir(submodule)) == 0:
14 msg = '''
15 The folder %s is a git submodule but is
16 currently empty. Please use the command
17
18 git submodule update --init
19
20 to populate the folder before building from source.
21 ''' % submodule
22 print(msg, file=sys.stderr)
23 sys.exit(1)
24
25
26 requirements = {
27 # TODO(kmaehashi): migrate to pyproject.toml (see #4727, #4619)
28 'setup': [
29 'Cython>=0.29.22',
30 'fastrlock>=0.5',
31 ],
32
33 'install': [
34 'numpy>=1.17',
35 'fastrlock>=0.5',
36 ],
37 'all': [
38 'scipy>=1.4',
39 'optuna>=2.0',
40 ],
41
42 'stylecheck': [
43 'autopep8==1.5.5',
44 'flake8==3.8.4',
45 'pbr==5.5.1',
46 'pycodestyle==2.6.0',
47 ],
48 'test': [
49 # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.
50 'pytest>=6.2',
51 ],
52 'jenkins': [
53 '-r test',
54 'pytest-timeout',
55 'pytest-cov',
56 'coveralls',
57 'codecov',
58 'coverage<5', # Otherwise, Python must be built with sqlite
59 ],
60 }
61
62
63 def reduce_requirements(key):
64 # Resolve recursive requirements notation (-r)
65 reqs = requirements[key]
66 resolved_reqs = []
67 for req in reqs:
68 if req.startswith('-r'):
69 depend_key = req[2:].lstrip()
70 reduce_requirements(depend_key)
71 resolved_reqs += requirements[depend_key]
72 else:
73 resolved_reqs.append(req)
74 requirements[key] = resolved_reqs
75
76
77 for k in requirements.keys():
78 reduce_requirements(k)
79
80
81 extras_require = {k: v for k, v in requirements.items() if k != 'install'}
82
83
84 setup_requires = requirements['setup']
85 install_requires = requirements['install']
86 tests_require = requirements['test']
87
88 # List of files that needs to be in the distribution (sdist/wheel).
89 # Notes:
90 # - Files only needed in sdist should be added to `MANIFEST.in`.
91 # - The following glob (`**`) ignores items starting with `.`.
92 cupy_package_data = [
93 'cupy/cuda/cupy_thrust.cu',
94 'cupy/cuda/cupy_cub.cu',
95 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback
96 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback
97 'cupy/cuda/cupy_cufft.h', # for cuFFT callback
98 'cupy/cuda/cufft.pxd', # for cuFFT callback
99 'cupy/cuda/cufft.pyx', # for cuFFT callback
100 'cupy/random/cupy_distributions.cu',
101 'cupy/random/cupy_distributions.cuh',
102 ] + [
103 x for x in glob.glob('cupy/_core/include/cupy/**', recursive=True)
104 if os.path.isfile(x)
105 ]
106
107 package_data = {
108 'cupy': [
109 os.path.relpath(x, 'cupy') for x in cupy_package_data
110 ],
111 }
112
113 package_data['cupy'] += cupy_setup_build.prepare_wheel_libs()
114
115 package_name = cupy_setup_build.get_package_name()
116 long_description = cupy_setup_build.get_long_description()
117 ext_modules = cupy_setup_build.get_ext_modules()
118 build_ext = cupy_setup_build.custom_build_ext
119
120 here = os.path.abspath(os.path.dirname(__file__))
121 # Get __version__ variable
122 with open(os.path.join(here, 'cupy', '_version.py')) as f:
123 exec(f.read())
124
125 CLASSIFIERS = """\
126 Development Status :: 5 - Production/Stable
127 Intended Audience :: Science/Research
128 Intended Audience :: Developers
129 License :: OSI Approved :: MIT License
130 Programming Language :: Python
131 Programming Language :: Python :: 3
132 Programming Language :: Python :: 3.6
133 Programming Language :: Python :: 3.7
134 Programming Language :: Python :: 3.8
135 Programming Language :: Python :: 3.9
136 Programming Language :: Python :: 3 :: Only
137 Programming Language :: Cython
138 Topic :: Software Development
139 Topic :: Scientific/Engineering
140 Operating System :: POSIX
141 Operating System :: Microsoft :: Windows
142 """
143
144
145 setup(
146 name=package_name,
147 version=__version__, # NOQA
148 description='CuPy: A NumPy-compatible array library accelerated by CUDA',
149 long_description=long_description,
150 author='Seiya Tokui',
151 author_email='[email protected]',
152 url='https://cupy.dev/',
153 license='MIT License',
154 project_urls={
155 "Bug Tracker": "https://github.com/cupy/cupy/issues",
156 "Documentation": "https://docs.cupy.dev/",
157 "Source Code": "https://github.com/cupy/cupy",
158 },
159 classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f],
160 packages=find_packages(exclude=['install', 'tests']),
161 package_data=package_data,
162 zip_safe=False,
163 python_requires='>=3.6.0',
164 setup_requires=setup_requires,
165 install_requires=install_requires,
166 tests_require=tests_require,
167 extras_require=extras_require,
168 ext_modules=ext_modules,
169 cmdclass={'build_ext': build_ext},
170 )
171
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,11 +31,11 @@
],
'install': [
- 'numpy>=1.17',
+ 'numpy>=1.17,<1.23', # see #4773
'fastrlock>=0.5',
],
'all': [
- 'scipy>=1.4',
+ 'scipy>=1.4,<1.9', # see #4773
'optuna>=2.0',
],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,11 +31,11 @@\n ],\n \n 'install': [\n- 'numpy>=1.17',\n+ 'numpy>=1.17,<1.23', # see #4773\n 'fastrlock>=0.5',\n ],\n 'all': [\n- 'scipy>=1.4',\n+ 'scipy>=1.4,<1.9', # see #4773\n 'optuna>=2.0',\n ],\n", "issue": "[info] NumPy/SciPy new version pinning recommendation\nSee:\r\n- https://github.com/numpy/numpy/pull/18505\r\n- scipy/scipy#12862\r\n\r\nThe most important takeaway is that NumPy/SciPy now recommend downstream distributions to pin the upper bound version if NumPy/Scipy are runtime dependencies. (The example is if the latest NumPy out there is 1.20, one should pin to `<1.23`; the notation used in the docs `<1.xx+3.0` is a bit confusing, see the clarification in https://github.com/scipy/scipy/pull/12862#discussion_r575790007.) There are other suggestions too, but I think this is potentially the most impactful one.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport glob\nimport os\nfrom setuptools import setup, find_packages\nimport sys\n\nimport cupy_setup_build\n\n\nfor submodule in ('cupy/_core/include/cupy/cub/',\n 'cupy/_core/include/cupy/jitify'):\n if len(os.listdir(submodule)) == 0:\n msg = '''\n The folder %s is a git submodule but is\n currently empty. Please use the command\n\n git submodule update --init\n\n to populate the folder before building from source.\n ''' % submodule\n print(msg, file=sys.stderr)\n sys.exit(1)\n\n\nrequirements = {\n # TODO(kmaehashi): migrate to pyproject.toml (see #4727, #4619)\n 'setup': [\n 'Cython>=0.29.22',\n 'fastrlock>=0.5',\n ],\n\n 'install': [\n 'numpy>=1.17',\n 'fastrlock>=0.5',\n ],\n 'all': [\n 'scipy>=1.4',\n 'optuna>=2.0',\n ],\n\n 'stylecheck': [\n 'autopep8==1.5.5',\n 'flake8==3.8.4',\n 'pbr==5.5.1',\n 'pycodestyle==2.6.0',\n ],\n 'test': [\n # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.\n 'pytest>=6.2',\n ],\n 'jenkins': [\n '-r test',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n 'coverage<5', # Otherwise, Python must be built with sqlite\n ],\n}\n\n\ndef reduce_requirements(key):\n # Resolve recursive requirements notation (-r)\n reqs = requirements[key]\n resolved_reqs = []\n for req in reqs:\n if req.startswith('-r'):\n depend_key = req[2:].lstrip()\n reduce_requirements(depend_key)\n resolved_reqs += requirements[depend_key]\n else:\n resolved_reqs.append(req)\n requirements[key] = resolved_reqs\n\n\nfor k in requirements.keys():\n reduce_requirements(k)\n\n\nextras_require = {k: v for k, v in requirements.items() if k != 'install'}\n\n\nsetup_requires = requirements['setup']\ninstall_requires = requirements['install']\ntests_require = requirements['test']\n\n# List of files that needs to be in the distribution (sdist/wheel).\n# Notes:\n# - Files only needed in sdist should be added to `MANIFEST.in`.\n# - The following glob (`**`) ignores items starting with `.`.\ncupy_package_data = [\n 'cupy/cuda/cupy_thrust.cu',\n 'cupy/cuda/cupy_cub.cu',\n 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback\n 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback\n 'cupy/cuda/cupy_cufft.h', # for cuFFT callback\n 'cupy/cuda/cufft.pxd', # for cuFFT callback\n 'cupy/cuda/cufft.pyx', # for cuFFT callback\n 'cupy/random/cupy_distributions.cu',\n 'cupy/random/cupy_distributions.cuh',\n] + [\n x for x in glob.glob('cupy/_core/include/cupy/**', recursive=True)\n if os.path.isfile(x)\n]\n\npackage_data = {\n 'cupy': [\n os.path.relpath(x, 'cupy') for x in cupy_package_data\n ],\n}\n\npackage_data['cupy'] 
+= cupy_setup_build.prepare_wheel_libs()\n\npackage_name = cupy_setup_build.get_package_name()\nlong_description = cupy_setup_build.get_long_description()\next_modules = cupy_setup_build.get_ext_modules()\nbuild_ext = cupy_setup_build.custom_build_ext\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nwith open(os.path.join(here, 'cupy', '_version.py')) as f:\n exec(f.read())\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.6\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3.8\nProgramming Language :: Python :: 3.9\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: POSIX\nOperating System :: Microsoft :: Windows\n\"\"\"\n\n\nsetup(\n name=package_name,\n version=__version__, # NOQA\n description='CuPy: A NumPy-compatible array library accelerated by CUDA',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://cupy.dev/',\n license='MIT License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs.cupy.dev/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=find_packages(exclude=['install', 'tests']),\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.6.0',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext},\n)\n", "path": "setup.py"}]}
| 2,392 | 137 |
gh_patches_debug_30339
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-2182
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Search and Dataset List Pages: Move the filter button
Move the filter button on /search and /dataset so that it apears as part of the search input in main nav:
https://cloud.githubusercontent.com/assets/1654485/5780030/a25eaeb6-9da7-11e4-9e5e-bdca79e549ab.png
When open, the filter button remains in the same place but has a "-" added to indicate that it can be closed. SVGs will be linked below shortly.
- Open filter button: https://drive.google.com/file/d/0Bx1KWNTx8Ij3SW42UEFNaTdFQXc/view?usp=sharing
- Close filter button: https://drive.google.com/file/d/0Bx1KWNTx8Ij3NzdEXzFmRlJZZU0/view?usp=sharing
Let me know if these SVGs cause any problem. They are filled black, but I can easily change that.
Default Country Page: graphs section
Blocked by #2102 and #2103
Annotated (very annotated) design is here:
https://docs.google.com/drawings/d/1qOBKZ7IO7zkEMHh2g3ZiAynh5PiAbO-_-SL4rd9uO_M/edit
Implement the section with 5 graphs
</issue>
<code>
[start of ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py]
1 '''
2 Created on Jan 13, 2015
3
4 @author: alexandru-m-g
5 '''
6 import json
7
8 import logging
9 import datetime as dt
10
11 import ckan.lib.base as base
12 import ckan.logic as logic
13 import ckan.model as model
14 import ckan.common as common
15 import ckan.controllers.group as group
16
17 render = base.render
18 abort = base.abort
19 NotFound = logic.NotFound
20 NotAuthorized = logic.NotAuthorized
21 get_action = logic.get_action
22 c = common.c
23 request = common.request
24 _ = common._
25
26
27 log = logging.getLogger(__name__)
28
29 group_type = 'group'
30
31 indicators_4_charts = ['PVH140', 'PVN010', 'PVW010', 'PVF020',
32 'PSE160', 'PCX051', 'PVE130', 'PCX060',
33 'RW002', 'PVE110', 'PVN050', 'PVN070',
34 'PVW040']
35 # http://localhost:8080/public/api2/values?it=PSP120&it=PSP090&l=CHN&sorting=INDICATOR_TYPE_ASC
36
37 indicators_4_top_line = ['PSP120', 'PSP090', 'PSE220', 'PSE030',
38 'CG300']
39 # http://localhost:8080/public/api2/values?it=PSP120&l=CHN&periodType=LATEST_YEAR
40
41
42 class CountryController(group.GroupController):
43
44 def read(self, id):
45 self.get_country(id)
46 self.get_dataset_results(c.group_dict.get('name', id))
47
48 # activity stream
49 context = {'model': model, 'session': model.Session,
50 'user': c.user or c.author,
51 'for_view': True}
52 country_uuid = c.group_dict.get('id', id)
53 self.get_activity_stream(context, country_uuid)
54
55 return render('country/country.html')
56
57 def get_country(self, id):
58 if group_type != self.group_type:
59 abort(404, _('Incorrect group type'))
60
61 context = {'model': model, 'session': model.Session,
62 'user': c.user or c.author,
63 'schema': self._db_to_form_schema(group_type=group_type),
64 'for_view': True}
65 data_dict = {'id': id}
66
67 try:
68 context['include_datasets'] = False
69 c.group_dict = self._action('group_show')(context, data_dict)
70 c.group = context['group']
71 except NotFound:
72 abort(404, _('Group not found'))
73 except NotAuthorized:
74 abort(401, _('Unauthorized to read group %s') % id)
75
76
77 def get_dataset_results(self, country_id):
78 upper_case_id = country_id.upper()
79 top_line_results = self._get_top_line_num(upper_case_id)
80 top_line_data = top_line_results.get('results', [])
81
82 if not top_line_data:
83 log.warn('No top line numbers found for country: {}'.format(country_id))
84
85 sorted_top_line_data = sorted(top_line_data,
86 key=lambda x: indicators_4_top_line.index(x['indicatorTypeCode']))
87
88 c.top_line_data_list = sorted_top_line_data
89
90 chart_results = self._get_chart_data(upper_case_id)
91 chart_data = chart_results.get('results', [])
92 if not chart_data:
93 log.warn('No chart data found for country: {}'.format(country_id))
94 chart_data_dict = {}
95
96 # for el in chart_data:
97 # ind_type = el.get('indicatorTypeCode', None)
98 # if ind_type:
99 # d = dt.datetime.strptime(el.get('time', ''), '%Y-%m-%d')
100 # el['datetime'] = d
101 # if ind_type in chart_data_dict:
102 # chart_data_dict[ind_type].append(el)
103 # else:
104 # chart_data_dict[ind_type] = [el]
105
106 for el in chart_data:
107 ind_type = el.get('indicatorTypeCode', None)
108 if ind_type:
109 # d = dt.datetime.strptime(el.get('time', ''), '%Y-%m-%d')
110 val = {
111 'date': el.get('time'),
112 'value': el.get('value')
113 }
114
115 if ind_type in chart_data_dict:
116 chart_data_dict[ind_type]['data'].append(val);
117 else:
118 newel = {
119 'title': el.get('unitName'),
120 'code': ind_type,
121 'data': [val]
122 }
123 chart_data_dict[ind_type] = newel
124
125
126
127 # for code in chart_data_dict.keys():
128 # chart_data_dict[code] = sorted(chart_data_dict[code], key=lambda x: x.get('datetime', None))
129
130 for code in chart_data_dict.keys():
131 chart_data_dict[code]['data'] = json.dumps(chart_data_dict[code]['data'])
132
133 chart_data_list = []
134 for code in indicators_4_charts:
135 if code in chart_data_dict and len(chart_data_list) < 5:
136 chart_data_list.append(chart_data_dict[code])
137
138 c.chart_data_list = chart_data_list
139
140 # c.chart_data_dict = chart_data_dict
141
142 def _get_chart_data(self, country_id):
143 data_dict = {
144 'sorting': 'INDICATOR_TYPE_ASC',
145 'l': country_id,
146 'it': indicators_4_charts
147 }
148 result = get_action('hdx_get_indicator_values')({}, data_dict)
149 return result
150
151 def _get_top_line_num(self, country_id):
152 data_dict = {
153 'periodType': 'LATEST_YEAR',
154 'l': country_id,
155 'it': indicators_4_top_line
156 }
157 result = get_action('hdx_get_indicator_values')({}, data_dict)
158 return result
159
160 def get_activity_stream(self, context, country_id):
161 act_data_dict = {'id': country_id, 'limit': 7}
162 c.hdx_group_activities = get_action(
163 'hdx_get_group_activity_list')(context, act_data_dict)
164
[end of ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py b/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py
--- a/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py
+++ b/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py
@@ -107,18 +107,35 @@
ind_type = el.get('indicatorTypeCode', None)
if ind_type:
# d = dt.datetime.strptime(el.get('time', ''), '%Y-%m-%d')
+ el_time = el.get('time')
+ el_value = el.get('value')
val = {
- 'date': el.get('time'),
- 'value': el.get('value')
+ 'date': el_time,
+ 'value': el_value
}
if ind_type in chart_data_dict:
chart_data_dict[ind_type]['data'].append(val);
+
+ last_date = dt.datetime.strptime(chart_data_dict[ind_type]['lastDate'], '%Y-%m-%d')
+ curr_date = dt.datetime.strptime(el_time, '%Y-%m-%d')
+
+ if last_date < curr_date:
+ chart_data_dict[ind_type]['lastDate'] = el_time
+ chart_data_dict[ind_type]['lastValue'] = el_value
+
else:
newel = {
- 'title': el.get('unitName'),
+ 'title': el.get('indicatorTypeName'),
+ 'sourceName': el.get('sourceName'),
+ 'sourceCode': el.get('sourceCode'),
+ 'lastDate': el_time,
+ 'lastValue': el_value,
+ 'unit': el.get('unitName'),
'code': ind_type,
- 'data': [val]
+ 'data': [val],
+ 'datasetLink': '/todo/changeme',
+ 'datasetUpdateDate': 'Jun 21, 1985'
}
chart_data_dict[ind_type] = newel
|
{"golden_diff": "diff --git a/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py b/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py\n--- a/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py\n+++ b/ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py\n@@ -107,18 +107,35 @@\n ind_type = el.get('indicatorTypeCode', None)\n if ind_type:\n # d = dt.datetime.strptime(el.get('time', ''), '%Y-%m-%d')\n+ el_time = el.get('time')\n+ el_value = el.get('value')\n val = {\n- 'date': el.get('time'),\n- 'value': el.get('value')\n+ 'date': el_time,\n+ 'value': el_value\n }\n \n if ind_type in chart_data_dict:\n chart_data_dict[ind_type]['data'].append(val);\n+\n+ last_date = dt.datetime.strptime(chart_data_dict[ind_type]['lastDate'], '%Y-%m-%d')\n+ curr_date = dt.datetime.strptime(el_time, '%Y-%m-%d')\n+\n+ if last_date < curr_date:\n+ chart_data_dict[ind_type]['lastDate'] = el_time\n+ chart_data_dict[ind_type]['lastValue'] = el_value\n+\n else:\n newel = {\n- 'title': el.get('unitName'),\n+ 'title': el.get('indicatorTypeName'),\n+ 'sourceName': el.get('sourceName'),\n+ 'sourceCode': el.get('sourceCode'),\n+ 'lastDate': el_time,\n+ 'lastValue': el_value,\n+ 'unit': el.get('unitName'),\n 'code': ind_type,\n- 'data': [val]\n+ 'data': [val],\n+ 'datasetLink': '/todo/changeme',\n+ 'datasetUpdateDate': 'Jun 21, 1985'\n }\n chart_data_dict[ind_type] = newel\n", "issue": "Search and Dataset List Pages: Move the filter button\nMove the filter button on /search and /dataset so that it apears as part of the search input in main nav:\nhttps://cloud.githubusercontent.com/assets/1654485/5780030/a25eaeb6-9da7-11e4-9e5e-bdca79e549ab.png\n\nWhen open, the filter button remains in the same place but has a \"-\" added to indicate that it can be closed. SVGs will be linked below shortly.\n- Open filter button: https://drive.google.com/file/d/0Bx1KWNTx8Ij3SW42UEFNaTdFQXc/view?usp=sharing\n- Close filter button: https://drive.google.com/file/d/0Bx1KWNTx8Ij3NzdEXzFmRlJZZU0/view?usp=sharing\n\nLet me know if these SVGs cause any problem. They are filled black, but I can easily change that. 
\n\nDefault Country Page: graphs section \nBlocked by #2102 and #2103 \nAnnotated (very annotated) design is here: \nhttps://docs.google.com/drawings/d/1qOBKZ7IO7zkEMHh2g3ZiAynh5PiAbO-_-SL4rd9uO_M/edit\nImplement the section with 5 graphs\n\n", "before_files": [{"content": "'''\nCreated on Jan 13, 2015\n\n@author: alexandru-m-g\n'''\nimport json\n\nimport logging\nimport datetime as dt\n\nimport ckan.lib.base as base\nimport ckan.logic as logic\nimport ckan.model as model\nimport ckan.common as common\nimport ckan.controllers.group as group\n\nrender = base.render\nabort = base.abort\nNotFound = logic.NotFound\nNotAuthorized = logic.NotAuthorized\nget_action = logic.get_action\nc = common.c\nrequest = common.request\n_ = common._\n\n\nlog = logging.getLogger(__name__)\n\ngroup_type = 'group'\n\nindicators_4_charts = ['PVH140', 'PVN010', 'PVW010', 'PVF020',\n 'PSE160', 'PCX051', 'PVE130', 'PCX060',\n 'RW002', 'PVE110', 'PVN050', 'PVN070',\n 'PVW040']\n# http://localhost:8080/public/api2/values?it=PSP120&it=PSP090&l=CHN&sorting=INDICATOR_TYPE_ASC\n\nindicators_4_top_line = ['PSP120', 'PSP090', 'PSE220', 'PSE030',\n 'CG300']\n# http://localhost:8080/public/api2/values?it=PSP120&l=CHN&periodType=LATEST_YEAR\n\n\nclass CountryController(group.GroupController):\n\n def read(self, id):\n self.get_country(id)\n self.get_dataset_results(c.group_dict.get('name', id))\n\n # activity stream\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author,\n 'for_view': True}\n country_uuid = c.group_dict.get('id', id)\n self.get_activity_stream(context, country_uuid)\n\n return render('country/country.html')\n\n def get_country(self, id):\n if group_type != self.group_type:\n abort(404, _('Incorrect group type'))\n\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author,\n 'schema': self._db_to_form_schema(group_type=group_type),\n 'for_view': True}\n data_dict = {'id': id}\n\n try:\n context['include_datasets'] = False\n c.group_dict = self._action('group_show')(context, data_dict)\n c.group = context['group']\n except NotFound:\n abort(404, _('Group not found'))\n except NotAuthorized:\n abort(401, _('Unauthorized to read group %s') % id)\n\n\n def get_dataset_results(self, country_id):\n upper_case_id = country_id.upper()\n top_line_results = self._get_top_line_num(upper_case_id)\n top_line_data = top_line_results.get('results', [])\n\n if not top_line_data:\n log.warn('No top line numbers found for country: {}'.format(country_id))\n\n sorted_top_line_data = sorted(top_line_data,\n key=lambda x: indicators_4_top_line.index(x['indicatorTypeCode']))\n\n c.top_line_data_list = sorted_top_line_data\n\n chart_results = self._get_chart_data(upper_case_id)\n chart_data = chart_results.get('results', [])\n if not chart_data:\n log.warn('No chart data found for country: {}'.format(country_id))\n chart_data_dict = {}\n\n # for el in chart_data:\n # ind_type = el.get('indicatorTypeCode', None)\n # if ind_type:\n # d = dt.datetime.strptime(el.get('time', ''), '%Y-%m-%d')\n # el['datetime'] = d\n # if ind_type in chart_data_dict:\n # chart_data_dict[ind_type].append(el)\n # else:\n # chart_data_dict[ind_type] = [el]\n\n for el in chart_data:\n ind_type = el.get('indicatorTypeCode', None)\n if ind_type:\n # d = dt.datetime.strptime(el.get('time', ''), '%Y-%m-%d')\n val = {\n 'date': el.get('time'),\n 'value': el.get('value')\n }\n\n if ind_type in chart_data_dict:\n chart_data_dict[ind_type]['data'].append(val);\n else:\n newel = {\n 'title': el.get('unitName'),\n 
'code': ind_type,\n 'data': [val]\n }\n chart_data_dict[ind_type] = newel\n\n\n\n # for code in chart_data_dict.keys():\n # chart_data_dict[code] = sorted(chart_data_dict[code], key=lambda x: x.get('datetime', None))\n\n for code in chart_data_dict.keys():\n chart_data_dict[code]['data'] = json.dumps(chart_data_dict[code]['data'])\n\n chart_data_list = []\n for code in indicators_4_charts:\n if code in chart_data_dict and len(chart_data_list) < 5:\n chart_data_list.append(chart_data_dict[code])\n\n c.chart_data_list = chart_data_list\n\n # c.chart_data_dict = chart_data_dict\n\n def _get_chart_data(self, country_id):\n data_dict = {\n 'sorting': 'INDICATOR_TYPE_ASC',\n 'l': country_id,\n 'it': indicators_4_charts\n }\n result = get_action('hdx_get_indicator_values')({}, data_dict)\n return result\n\n def _get_top_line_num(self, country_id):\n data_dict = {\n 'periodType': 'LATEST_YEAR',\n 'l': country_id,\n 'it': indicators_4_top_line\n }\n result = get_action('hdx_get_indicator_values')({}, data_dict)\n return result\n\n def get_activity_stream(self, context, country_id):\n act_data_dict = {'id': country_id, 'limit': 7}\n c.hdx_group_activities = get_action(\n 'hdx_get_group_activity_list')(context, act_data_dict)\n", "path": "ckanext-hdx_org_group/ckanext/hdx_org_group/controllers/country_controller.py"}]}
| 2,621 | 468 |
gh_patches_debug_56255
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-1377
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
</issue>
<code>
[start of starlite/events/emitter.py]
1 from __future__ import annotations
2
3 from abc import ABC, abstractmethod
4 from asyncio import CancelledError, Queue, Task, create_task
5 from collections import defaultdict
6 from contextlib import suppress
7 from typing import TYPE_CHECKING, Any, DefaultDict, Sequence
8
9 import sniffio
10
11 from starlite.exceptions import ImproperlyConfiguredException
12
13 __all__ = ("BaseEventEmitterBackend", "SimpleEventEmitter")
14
15
16 if TYPE_CHECKING:
17 from starlite.events.listener import EventListener
18
19
20 class BaseEventEmitterBackend(ABC):
21 """Abstract class used to define event emitter backends."""
22
23 __slots__ = ("listeners",)
24
25 listeners: DefaultDict[str, set[EventListener]]
26
27 def __init__(self, listeners: Sequence[EventListener]):
28 """Create an event emitter instance.
29
30 Args:
31 listeners: A list of listeners.
32 """
33 self.listeners = defaultdict(set)
34 for listener in listeners:
35 for event_id in listener.event_ids:
36 self.listeners[event_id].add(listener)
37
38 @abstractmethod
39 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None: # pragma: no cover
40 """Emit an event to all attached listeners.
41
42 Args:
43 event_id: The ID of the event to emit, e.g 'my_event'.
44 *args: args to pass to the listener(s).
45 **kwargs: kwargs to pass to the listener(s)
46
47 Returns:
48 None
49 """
50 raise NotImplementedError("not implemented")
51
52 @abstractmethod
53 async def on_startup(self) -> None: # pragma: no cover
54 """Hook called on application startup, used to establish connection or perform other async operations.
55
56 Returns:
57 None
58 """
59 raise NotImplementedError("not implemented")
60
61 @abstractmethod
62 async def on_shutdown(self) -> None: # pragma: no cover
63 """Hook called on application shutdown, used to perform cleanup.
64
65 Returns:
66 None
67 """
68 raise NotImplementedError("not implemented")
69
70
71 class SimpleEventEmitter(BaseEventEmitterBackend):
72 """Event emitter the works only in the current process"""
73
74 __slots__ = ("_queue", "_worker_task")
75
76 _worker_task: Task | None
77
78 def __init__(self, listeners: Sequence[EventListener]):
79 """Create an event emitter instance.
80
81 Args:
82 listeners: A list of listeners.
83 """
84 super().__init__(listeners=listeners)
85 self._queue: Queue | None = None
86 self._worker_task = None
87
88 async def _worker(self) -> None:
89 """Worker that runs in a separate task and continuously pulls events from asyncio queue.
90
91 Returns:
92 None
93 """
94 while self._queue:
95 fn, args, kwargs = await self._queue.get()
96 await fn(*args, *kwargs)
97 self._queue.task_done()
98
99 async def on_startup(self) -> None:
100 """Hook called on application startup, used to establish connection or perform other async operations.
101
102 Returns:
103 None
104 """
105 if sniffio.current_async_library() != "asyncio":
106 return
107
108 self._queue = Queue()
109 self._worker_task = create_task(self._worker())
110
111 async def on_shutdown(self) -> None:
112 """Hook called on application shutdown, used to perform cleanup.
113
114 Returns:
115 None
116 """
117
118 if self._queue:
119 await self._queue.join()
120
121 if self._worker_task:
122 self._worker_task.cancel()
123 with suppress(CancelledError):
124 await self._worker_task
125
126 self._worker_task = None
127 self._queue = None
128
129 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:
130 """Emit an event to all attached listeners.
131
132 Args:
133 event_id: The ID of the event to emit, e.g 'my_event'.
134 *args: args to pass to the listener(s).
135 **kwargs: kwargs to pass to the listener(s)
136
137 Returns:
138 None
139 """
140 if not (self._worker_task and self._queue):
141 if sniffio.current_async_library() != "asyncio":
142 raise ImproperlyConfiguredException("{type(self).__name__} only supports 'asyncio' based event loops")
143
144 raise ImproperlyConfiguredException("Worker not running")
145
146 if listeners := self.listeners.get(event_id):
147 for listener in listeners:
148 self._queue.put_nowait((listener.fn, args, kwargs))
149 return
150 raise ImproperlyConfiguredException(f"no event listeners are registered for event ID: {event_id}")
151
[end of starlite/events/emitter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/starlite/events/emitter.py b/starlite/events/emitter.py
--- a/starlite/events/emitter.py
+++ b/starlite/events/emitter.py
@@ -93,7 +93,7 @@
"""
while self._queue:
fn, args, kwargs = await self._queue.get()
- await fn(*args, *kwargs)
+ await fn(*args, **kwargs)
self._queue.task_done()
async def on_startup(self) -> None:
|
{"golden_diff": "diff --git a/starlite/events/emitter.py b/starlite/events/emitter.py\n--- a/starlite/events/emitter.py\n+++ b/starlite/events/emitter.py\n@@ -93,7 +93,7 @@\n \"\"\"\n while self._queue:\n fn, args, kwargs = await self._queue.get()\n- await fn(*args, *kwargs)\n+ await fn(*args, **kwargs)\n self._queue.task_done()\n \n async def on_startup(self) -> None:\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom asyncio import CancelledError, Queue, Task, create_task\nfrom collections import defaultdict\nfrom contextlib import suppress\nfrom typing import TYPE_CHECKING, Any, DefaultDict, Sequence\n\nimport sniffio\n\nfrom starlite.exceptions import ImproperlyConfiguredException\n\n__all__ = (\"BaseEventEmitterBackend\", \"SimpleEventEmitter\")\n\n\nif TYPE_CHECKING:\n from starlite.events.listener import EventListener\n\n\nclass BaseEventEmitterBackend(ABC):\n \"\"\"Abstract class used to define event emitter backends.\"\"\"\n\n __slots__ = (\"listeners\",)\n\n listeners: DefaultDict[str, set[EventListener]]\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n self.listeners = defaultdict(set)\n for listener in listeners:\n for event_id in listener.event_ids:\n self.listeners[event_id].add(listener)\n\n @abstractmethod\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None: # pragma: no cover\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_startup(self) -> None: # pragma: no cover\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_shutdown(self) -> None: # pragma: no cover\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n\nclass SimpleEventEmitter(BaseEventEmitterBackend):\n \"\"\"Event emitter the works only in the current process\"\"\"\n\n __slots__ = (\"_queue\", \"_worker_task\")\n\n _worker_task: Task | None\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n super().__init__(listeners=listeners)\n self._queue: Queue | None = None\n self._worker_task = None\n\n async def _worker(self) -> None:\n \"\"\"Worker that runs in a separate task and continuously pulls events from asyncio queue.\n\n Returns:\n None\n 
\"\"\"\n while self._queue:\n fn, args, kwargs = await self._queue.get()\n await fn(*args, *kwargs)\n self._queue.task_done()\n\n async def on_startup(self) -> None:\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n if sniffio.current_async_library() != \"asyncio\":\n return\n\n self._queue = Queue()\n self._worker_task = create_task(self._worker())\n\n async def on_shutdown(self) -> None:\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n\n if self._queue:\n await self._queue.join()\n\n if self._worker_task:\n self._worker_task.cancel()\n with suppress(CancelledError):\n await self._worker_task\n\n self._worker_task = None\n self._queue = None\n\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n if not (self._worker_task and self._queue):\n if sniffio.current_async_library() != \"asyncio\":\n raise ImproperlyConfiguredException(\"{type(self).__name__} only supports 'asyncio' based event loops\")\n\n raise ImproperlyConfiguredException(\"Worker not running\")\n\n if listeners := self.listeners.get(event_id):\n for listener in listeners:\n self._queue.put_nowait((listener.fn, args, kwargs))\n return\n raise ImproperlyConfiguredException(f\"no event listeners are registered for event ID: {event_id}\")\n", "path": "starlite/events/emitter.py"}]}
| 2,044 | 107 |
gh_patches_debug_19682
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-325
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
All strings should be unicode
I'm quite sure there are some strings still missing the u. We should have a look at this, as it can cause trouble.
</issue>
<code>
[start of apps/authentication/models.py]
1 # -*- coding: utf-8 -*-
2
3 import datetime
4
5 from django.conf import settings
6 from django.contrib.auth.models import AbstractUser
7 from django.db import models
8 from django.utils.translation import ugettext as _
9
10
11 # If this list is changed, remember to check that the year property on
12 # OnlineUser is still correct!
13 FIELD_OF_STUDY_CHOICES = [
14 (0, _(u'Gjest')),
15 (1, _(u'Bachelor i Informatikk (BIT)')),
16 # master degrees take up the interval [10,30>
17 (10, _(u'Software (SW)')),
18 (11, _(u'Informasjonsforvaltning (DIF)')),
19 (12, _(u'Komplekse Datasystemer (KDS)')),
20 (13, _(u'Spillteknologi (SPT)')),
21 (14, _(u'Intelligente Systemer (IRS)')),
22 (15, _(u'Helseinformatikk (MSMEDTEK)')),
23 (30, _(u'Annen mastergrad')),
24 (80, _(u'PhD')),
25 (90, _(u'International')),
26 (100, _(u'Annet Onlinemedlem')),
27 ]
28
29 class OnlineUser(AbstractUser):
30
31 IMAGE_FOLDER = "images/profiles"
32 IMAGE_EXTENSIONS = ['.jpg', '.jpeg', '.gif', '.png']
33
34 # Online related fields
35 field_of_study = models.SmallIntegerField(_(u"studieretning"), choices=FIELD_OF_STUDY_CHOICES, default=0)
36 started_date = models.DateField(_(u"startet studie"), default=datetime.datetime.now())
37 compiled = models.BooleanField(_(u"kompilert"), default=False)
38
39 # Email
40 infomail = models.BooleanField(_(u"vil ha infomail"), default=True)
41
42 # Address
43 phone_number = models.CharField(_(u"telefonnummer"), max_length=20, blank=True, null=True)
44 address = models.CharField(_(u"adresse"), max_length=30, blank=True, null=True)
45 zip_code = models.CharField(_(u"postnummer"), max_length=4, blank=True, null=True)
46
47 # Other
48 allergies = models.TextField(_(u"allergier"), blank=True, null=True)
49 mark_rules = models.BooleanField(_(u"godtatt prikkeregler"), default=False)
50 rfid = models.CharField(_(u"RFID"), max_length=50, blank=True, null=True)
51 nickname = models.CharField(_(u"nickname"), max_length=50, blank=True, null=True)
52 website = models.CharField(_(u"hjemmeside"), max_length=50, blank=True, null=True)
53
54 image = models.ImageField(_(u"bilde"), max_length=200, upload_to=IMAGE_FOLDER, blank=True, null=True,
55 default=settings.DEFAULT_PROFILE_PICTURE_URL)
56
57 # NTNU credentials
58 ntnu_username = models.CharField(_(u"NTNU-brukernavn"), max_length=10, blank=True, null=True)
59
60 # TODO profile pictures
61 # TODO checkbox for forwarding of @online.ntnu.no mail
62
63 @property
64 def is_member(self):
65 """
66 Returns true if the User object is associated with Online.
67 """
68 if AllowedUsername.objects.filter(username=self.ntnu_username).filter(expiration_date__gte=datetime.datetime.now()).count() > 0:
69 return True
70 return False
71
72 def get_full_name(self):
73 """
74 Returns the first_name plus the last_name, with a space in between.
75 """
76 full_name = u'%s %s' % (self.first_name, self.last_name)
77 return full_name.strip()
78
79 def get_email(self):
80 return self.get_emails().filter(primary = True)[0]
81
82 def get_emails(self):
83 return Email.objects.all().filter(user = self)
84
85 @property
86 def year(self):
87 today = datetime.datetime.now().date()
88 started = self.started_date
89
90 # We say that a year is 360 days incase we are a bit slower to
91 # add users one year.
92 year = ((today - started).days / 360) + 1
93
94 if self.field_of_study == 0 or self.field_of_study == 100: # others
95 return 0
96 # dont return a bachelor student as 4th or 5th grade
97 elif self.field_of_study == 1: # bachelor
98 if year > 3:
99 return 3
100 return year
101 elif 9 < self.field_of_study < 30: # 10-29 is considered master
102 if year >= 2:
103 return 5
104 return 4
105 elif self.field_of_study == 80: # phd
106 return year + 5
107 elif self.field_of_study == 90: # international
108 if year == 1:
109 return 1
110 return 4
111
112 def __unicode__(self):
113 return self.username
114
115 class Meta:
116 verbose_name = _(u"brukerprofil")
117 verbose_name_plural = _(u"brukerprofiler")
118
119
120 class Email(models.Model):
121 user = models.ForeignKey(OnlineUser, related_name="email_user")
122 email = models.EmailField(_(u"epostadresse"), unique=True)
123 primary = models.BooleanField(_(u"aktiv"), default=False)
124 verified = models.BooleanField(_(u"verifisert"), default=False)
125
126 def __unicode__(self):
127 return self.email
128
129 class Meta:
130 verbose_name = _(u"epostadresse")
131 verbose_name_plural = _(u"epostadresser")
132
133
134 class RegisterToken(models.Model):
135 user = models.ForeignKey(OnlineUser, related_name="register_user")
136 email = models.EmailField(_("epost"), max_length=254)
137 token = models.CharField(_("token"), max_length=32)
138 created = models.DateTimeField(_("opprettet dato"), editable=False, auto_now_add=True, default=datetime.datetime.now())
139
140 @property
141 def is_valid(self):
142 valid_period = datetime.timedelta(days=1)
143 now = datetime.datetime.now()
144 return now < self.created + valid_period
145
146
147 class AllowedUsername(models.Model):
148 """
149 Holds usernames that are considered valid members of Online and the time they expire.
150 """
151 username = models.CharField(_(u"brukernavn"), max_length=10)
152 registered = models.DateField(_(u"registrert"))
153 note = models.CharField(_(u"notat"), max_length=100)
154 description = models.TextField(_(u"beskrivelse"), blank=True, null=True)
155 expiration_date = models.DateField(_(u"utløpsdato"))
156
157 @property
158 def is_active(self):
159 return datetime.datetime.now() < self.expiration_date
160
161 def __unicode__(self):
162 return self.username
163
164 class Meta:
165 verbose_name = _("tillatt brukernavn")
166 verbose_name_plural = _("tillatte brukernavn")
167 ordering = ("username",)
168
[end of apps/authentication/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/authentication/models.py b/apps/authentication/models.py
--- a/apps/authentication/models.py
+++ b/apps/authentication/models.py
@@ -133,9 +133,9 @@
class RegisterToken(models.Model):
user = models.ForeignKey(OnlineUser, related_name="register_user")
- email = models.EmailField(_("epost"), max_length=254)
- token = models.CharField(_("token"), max_length=32)
- created = models.DateTimeField(_("opprettet dato"), editable=False, auto_now_add=True, default=datetime.datetime.now())
+ email = models.EmailField(_(u"epost"), max_length=254)
+ token = models.CharField(_(u"token"), max_length=32)
+ created = models.DateTimeField(_(u"opprettet dato"), editable=False, auto_now_add=True, default=datetime.datetime.now())
@property
def is_valid(self):
@@ -162,6 +162,6 @@
return self.username
class Meta:
- verbose_name = _("tillatt brukernavn")
- verbose_name_plural = _("tillatte brukernavn")
- ordering = ("username",)
+ verbose_name = _(u"tillatt brukernavn")
+ verbose_name_plural = _(u"tillatte brukernavn")
+ ordering = (u"username",)
|
{"golden_diff": "diff --git a/apps/authentication/models.py b/apps/authentication/models.py\n--- a/apps/authentication/models.py\n+++ b/apps/authentication/models.py\n@@ -133,9 +133,9 @@\n \n class RegisterToken(models.Model):\n user = models.ForeignKey(OnlineUser, related_name=\"register_user\")\n- email = models.EmailField(_(\"epost\"), max_length=254)\n- token = models.CharField(_(\"token\"), max_length=32)\n- created = models.DateTimeField(_(\"opprettet dato\"), editable=False, auto_now_add=True, default=datetime.datetime.now())\n+ email = models.EmailField(_(u\"epost\"), max_length=254)\n+ token = models.CharField(_(u\"token\"), max_length=32)\n+ created = models.DateTimeField(_(u\"opprettet dato\"), editable=False, auto_now_add=True, default=datetime.datetime.now())\n \n @property\n def is_valid(self):\n@@ -162,6 +162,6 @@\n return self.username\n \n class Meta:\n- verbose_name = _(\"tillatt brukernavn\")\n- verbose_name_plural = _(\"tillatte brukernavn\")\n- ordering = (\"username\",)\n+ verbose_name = _(u\"tillatt brukernavn\")\n+ verbose_name_plural = _(u\"tillatte brukernavn\")\n+ ordering = (u\"username\",)\n", "issue": "All strings should be unicode\nI'm quite sure there are some strings still missing the u. Should have a look at this as it can cause trouble. \n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import AbstractUser\nfrom django.db import models\nfrom django.utils.translation import ugettext as _\n\n\n# If this list is changed, remember to check that the year property on\n# OnlineUser is still correct!\nFIELD_OF_STUDY_CHOICES = [\n (0, _(u'Gjest')),\n (1, _(u'Bachelor i Informatikk (BIT)')),\n # master degrees take up the interval [10,30>\n (10, _(u'Software (SW)')),\n (11, _(u'Informasjonsforvaltning (DIF)')),\n (12, _(u'Komplekse Datasystemer (KDS)')),\n (13, _(u'Spillteknologi (SPT)')),\n (14, _(u'Intelligente Systemer (IRS)')),\n (15, _(u'Helseinformatikk (MSMEDTEK)')),\n (30, _(u'Annen mastergrad')),\n (80, _(u'PhD')),\n (90, _(u'International')),\n (100, _(u'Annet Onlinemedlem')),\n]\n\nclass OnlineUser(AbstractUser):\n\n IMAGE_FOLDER = \"images/profiles\"\n IMAGE_EXTENSIONS = ['.jpg', '.jpeg', '.gif', '.png']\n \n # Online related fields\n field_of_study = models.SmallIntegerField(_(u\"studieretning\"), choices=FIELD_OF_STUDY_CHOICES, default=0)\n started_date = models.DateField(_(u\"startet studie\"), default=datetime.datetime.now())\n compiled = models.BooleanField(_(u\"kompilert\"), default=False)\n\n # Email\n infomail = models.BooleanField(_(u\"vil ha infomail\"), default=True)\n\n # Address\n phone_number = models.CharField(_(u\"telefonnummer\"), max_length=20, blank=True, null=True)\n address = models.CharField(_(u\"adresse\"), max_length=30, blank=True, null=True)\n zip_code = models.CharField(_(u\"postnummer\"), max_length=4, blank=True, null=True)\n\n # Other\n allergies = models.TextField(_(u\"allergier\"), blank=True, null=True)\n mark_rules = models.BooleanField(_(u\"godtatt prikkeregler\"), default=False)\n rfid = models.CharField(_(u\"RFID\"), max_length=50, blank=True, null=True)\n nickname = models.CharField(_(u\"nickname\"), max_length=50, blank=True, null=True)\n website = models.CharField(_(u\"hjemmeside\"), max_length=50, blank=True, null=True)\n\n image = models.ImageField(_(u\"bilde\"), max_length=200, upload_to=IMAGE_FOLDER, blank=True, null=True,\n default=settings.DEFAULT_PROFILE_PICTURE_URL)\n\n # NTNU credentials\n ntnu_username = 
models.CharField(_(u\"NTNU-brukernavn\"), max_length=10, blank=True, null=True)\n\n # TODO profile pictures\n # TODO checkbox for forwarding of @online.ntnu.no mail\n \n @property\n def is_member(self):\n \"\"\"\n Returns true if the User object is associated with Online.\n \"\"\"\n if AllowedUsername.objects.filter(username=self.ntnu_username).filter(expiration_date__gte=datetime.datetime.now()).count() > 0:\n return True\n return False\n\n def get_full_name(self):\n \"\"\"\n Returns the first_name plus the last_name, with a space in between.\n \"\"\"\n full_name = u'%s %s' % (self.first_name, self.last_name)\n return full_name.strip()\n\n def get_email(self):\n return self.get_emails().filter(primary = True)[0]\n\n def get_emails(self):\n return Email.objects.all().filter(user = self)\n\n @property\n def year(self):\n today = datetime.datetime.now().date()\n started = self.started_date\n\n # We say that a year is 360 days incase we are a bit slower to\n # add users one year.\n year = ((today - started).days / 360) + 1\n\n if self.field_of_study == 0 or self.field_of_study == 100: # others\n return 0\n # dont return a bachelor student as 4th or 5th grade\n elif self.field_of_study == 1: # bachelor\n if year > 3:\n return 3\n return year\n elif 9 < self.field_of_study < 30: # 10-29 is considered master\n if year >= 2:\n return 5\n return 4\n elif self.field_of_study == 80: # phd\n return year + 5\n elif self.field_of_study == 90: # international\n if year == 1:\n return 1\n return 4\n\n def __unicode__(self):\n return self.username\n\n class Meta:\n verbose_name = _(u\"brukerprofil\")\n verbose_name_plural = _(u\"brukerprofiler\")\n\n\nclass Email(models.Model):\n user = models.ForeignKey(OnlineUser, related_name=\"email_user\")\n email = models.EmailField(_(u\"epostadresse\"), unique=True)\n primary = models.BooleanField(_(u\"aktiv\"), default=False)\n verified = models.BooleanField(_(u\"verifisert\"), default=False)\n\n def __unicode__(self):\n return self.email\n\n class Meta:\n verbose_name = _(u\"epostadresse\")\n verbose_name_plural = _(u\"epostadresser\")\n\n\nclass RegisterToken(models.Model):\n user = models.ForeignKey(OnlineUser, related_name=\"register_user\")\n email = models.EmailField(_(\"epost\"), max_length=254)\n token = models.CharField(_(\"token\"), max_length=32)\n created = models.DateTimeField(_(\"opprettet dato\"), editable=False, auto_now_add=True, default=datetime.datetime.now())\n\n @property\n def is_valid(self):\n valid_period = datetime.timedelta(days=1)\n now = datetime.datetime.now()\n return now < self.created + valid_period \n\n\nclass AllowedUsername(models.Model):\n \"\"\"\n Holds usernames that are considered valid members of Online and the time they expire.\n \"\"\"\n username = models.CharField(_(u\"brukernavn\"), max_length=10)\n registered = models.DateField(_(u\"registrert\"))\n note = models.CharField(_(u\"notat\"), max_length=100)\n description = models.TextField(_(u\"beskrivelse\"), blank=True, null=True)\n expiration_date = models.DateField(_(u\"utl\u00f8psdato\"))\n\n @property\n def is_active(self):\n return datetime.datetime.now() < self.expiration_date\n\n def __unicode__(self):\n return self.username\n\n class Meta:\n verbose_name = _(\"tillatt brukernavn\")\n verbose_name_plural = _(\"tillatte brukernavn\")\n ordering = (\"username\",)\n", "path": "apps/authentication/models.py"}]}
| 2,529 | 301 |
gh_patches_debug_5646
|
rasdani/github-patches
|
git_diff
|
napari__napari-5726
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
3D camera rotation is broken
## 🐛 Bug
It seems we introduced a bug in #5701, according to `git bisect`. The 3D camera rotation is "absolute" instead of relative (i.e., the point you press on the canvas determines the starting point of that rotation).
## To Reproduce
```py
import napari
import numpy as np
v = napari.Viewer(ndisplay=3)
pl = v.add_points(np.random.rand(10, 3) * 100)
```
Then move the camera a few times.
</issue>
<code>
[start of napari/_vispy/camera.py]
1 from typing import Type
2
3 import numpy as np
4 from vispy.scene import ArcballCamera, BaseCamera, PanZoomCamera
5
6 from napari._vispy.utils.quaternion import quaternion2euler
7
8
9 class VispyCamera:
10 """Vipsy camera for both 2D and 3D rendering.
11
12 Parameters
13 ----------
14 view : vispy.scene.widgets.viewbox.ViewBox
15 Viewbox for current scene.
16 camera : napari.components.Camera
17 napari camera model.
18 dims : napari.components.Dims
19 napari dims model.
20 """
21
22 def __init__(self, view, camera, dims) -> None:
23 self._view = view
24 self._camera = camera
25 self._dims = dims
26
27 # Create 2D camera
28 self._2D_camera = MouseToggledPanZoomCamera(aspect=1)
29 # flip y-axis to have correct alignment
30 self._2D_camera.flip = (0, 1, 0)
31 self._2D_camera.viewbox_key_event = viewbox_key_event
32
33 # Create 3D camera
34 self._3D_camera = MouseToggledArcballCamera(fov=0)
35 self._3D_camera.viewbox_key_event = viewbox_key_event
36
37 # Set 2D camera by default
38 self._view.camera = self._2D_camera
39
40 self._dims.events.ndisplay.connect(
41 self._on_ndisplay_change, position='first'
42 )
43
44 self._camera.events.center.connect(self._on_center_change)
45 self._camera.events.zoom.connect(self._on_zoom_change)
46 self._camera.events.angles.connect(self._on_angles_change)
47 self._camera.events.perspective.connect(self._on_perspective_change)
48 self._camera.events.mouse_pan.connect(self._on_mouse_toggles_change)
49 self._camera.events.mouse_zoom.connect(self._on_mouse_toggles_change)
50
51 self._on_ndisplay_change()
52
53 @property
54 def angles(self):
55 """3-tuple: Euler angles of camera in 3D viewing, in degrees.
56 Note that angles might be different than the ones that might have generated the quaternion.
57 """
58
59 if self._view.camera == self._3D_camera:
60 # Do conversion from quaternion representation to euler angles
61 angles = quaternion2euler(
62 self._view.camera._quaternion, degrees=True
63 )
64 else:
65 angles = (0, 0, 90)
66 return angles
67
68 @angles.setter
69 def angles(self, angles):
70 if self.angles == tuple(angles):
71 return
72
73 # Only update angles if current camera is 3D camera
74 if self._view.camera == self._3D_camera:
75 # Create and set quaternion
76 quat = self._view.camera._quaternion.create_from_euler_angles(
77 *angles,
78 degrees=True,
79 )
80 self._view.camera._quaternion = quat
81 self._view.camera.view_changed()
82
83 @property
84 def center(self):
85 """tuple: Center point of camera view for 2D or 3D viewing."""
86 if self._view.camera == self._3D_camera:
87 center = tuple(self._view.camera.center)
88 else:
89 # in 2D, we arbitrarily choose 0.0 as the center in z
90 center = (*self._view.camera.center[:2], 0.0)
91 # switch from VisPy xyz ordering to NumPy prc ordering
92 return center[::-1]
93
94 @center.setter
95 def center(self, center):
96 if self.center == tuple(center):
97 return
98 self._view.camera.center = center[::-1]
99 self._view.camera.view_changed()
100
101 @property
102 def zoom(self):
103 """float: Scale from canvas pixels to world pixels."""
104 canvas_size = np.array(self._view.canvas.size)
105 if self._view.camera == self._3D_camera:
106 # For fov = 0.0 normalize scale factor by canvas size to get scale factor.
107 # Note that the scaling is stored in the `_projection` property of the
108 # camera which is updated in vispy here
109 # https://github.com/vispy/vispy/blob/v0.6.5/vispy/scene/cameras/perspective.py#L301-L313
110 scale = self._view.camera.scale_factor
111 else:
112 scale = np.array(
113 [self._view.camera.rect.width, self._view.camera.rect.height]
114 )
115 scale[np.isclose(scale, 0)] = 1 # fix for #2875
116 zoom = np.min(canvas_size / scale)
117 return zoom
118
119 @zoom.setter
120 def zoom(self, zoom):
121 if self.zoom == zoom:
122 return
123 scale = np.array(self._view.canvas.size) / zoom
124 if self._view.camera == self._3D_camera:
125 self._view.camera.scale_factor = np.min(scale)
126 else:
127 # Set view rectangle, as left, right, width, height
128 corner = np.subtract(self._view.camera.center[:2], scale / 2)
129 self._view.camera.rect = tuple(corner) + tuple(scale)
130
131 @property
132 def perspective(self):
133 """Field of view of camera (only visible in 3D mode)."""
134 return self._3D_camera.fov
135
136 @perspective.setter
137 def perspective(self, perspective):
138 if self.perspective == perspective:
139 return
140 self._3D_camera.fov = perspective
141 self._view.camera.view_changed()
142
143 @property
144 def mouse_zoom(self) -> bool:
145 return self._view.camera.mouse_zoom
146
147 @mouse_zoom.setter
148 def mouse_zoom(self, mouse_zoom: bool):
149 self._view.camera.mouse_zoom = mouse_zoom
150
151 @property
152 def mouse_pan(self) -> bool:
153 return self._view.camera.mouse_pan
154
155 @mouse_pan.setter
156 def mouse_pan(self, mouse_pan: bool):
157 self._view.camera.mouse_pan = mouse_pan
158
159 def _on_ndisplay_change(self):
160 if self._dims.ndisplay == 3:
161 self._view.camera = self._3D_camera
162 else:
163 self._view.camera = self._2D_camera
164
165 self._on_mouse_toggles_change()
166 self._on_center_change()
167 self._on_zoom_change()
168 self._on_angles_change()
169
170 def _on_mouse_toggles_change(self):
171 self.mouse_pan = self._camera.mouse_pan
172 self.mouse_zoom = self._camera.mouse_zoom
173
174 def _on_center_change(self):
175 self.center = self._camera.center[-self._dims.ndisplay :]
176
177 def _on_zoom_change(self):
178 self.zoom = self._camera.zoom
179
180 def _on_perspective_change(self):
181 self.perspective = self._camera.perspective
182
183 def _on_angles_change(self):
184 self.angles = self._camera.angles
185
186 def on_draw(self, _event):
187 """Called whenever the canvas is drawn.
188
189 Update camera model angles, center, and zoom.
190 """
191 with self._camera.events.angles.blocker(self._on_angles_change):
192 self._camera.angles = self.angles
193 with self._camera.events.center.blocker(self._on_center_change):
194 self._camera.center = self.center
195 with self._camera.events.zoom.blocker(self._on_zoom_change):
196 self._camera.zoom = self.zoom
197 with self._camera.events.perspective.blocker(
198 self._on_perspective_change
199 ):
200 self._camera.perspective = self.perspective
201
202
203 def viewbox_key_event(event):
204 """ViewBox key event handler.
205
206 Parameters
207 ----------
208 event : vispy.util.event.Event
209 The vispy event that triggered this method.
210 """
211 return
212
213
214 def add_mouse_pan_zoom_toggles(
215 vispy_camera_cls: Type[BaseCamera],
216 ) -> Type[BaseCamera]:
217 """Add separate mouse pan and mouse zoom toggles to VisPy.
218
219 By default, VisPy uses an ``interactive`` toggle that turns *both*
220 panning and zooming on and off. This decorator adds separate toggles,
221 ``mouse_pan`` and ``mouse_zoom``, to enable controlling them
222 separately.
223
224 Parameters
225 ----------
226 vispy_camera_cls : Type[vispy.scene.cameras.BaseCamera]
227 A VisPy camera class to decorate.
228
229 Returns
230 -------
231 A decorated VisPy camera class.
232 """
233
234 class _vispy_camera_cls(vispy_camera_cls):
235 def __init__(self, **kwargs):
236 super().__init__(**kwargs)
237 self.mouse_pan = True
238 self.mouse_zoom = True
239
240 def viewbox_mouse_event(self, event):
241 if (
242 self.mouse_zoom
243 and event.type == 'mouse_wheel'
244 or self.mouse_pan
245 and event.type in ('mouse_move', 'mouse_press')
246 ):
247 super().viewbox_mouse_event(event)
248 else:
249 event.handled = False
250
251 return _vispy_camera_cls
252
253
254 MouseToggledPanZoomCamera = add_mouse_pan_zoom_toggles(PanZoomCamera)
255 MouseToggledArcballCamera = add_mouse_pan_zoom_toggles(ArcballCamera)
256
[end of napari/_vispy/camera.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/napari/_vispy/camera.py b/napari/_vispy/camera.py
--- a/napari/_vispy/camera.py
+++ b/napari/_vispy/camera.py
@@ -242,7 +242,8 @@
self.mouse_zoom
and event.type == 'mouse_wheel'
or self.mouse_pan
- and event.type in ('mouse_move', 'mouse_press')
+ and event.type
+ in ('mouse_move', 'mouse_press', 'mouse_release')
):
super().viewbox_mouse_event(event)
else:
|
{"golden_diff": "diff --git a/napari/_vispy/camera.py b/napari/_vispy/camera.py\n--- a/napari/_vispy/camera.py\n+++ b/napari/_vispy/camera.py\n@@ -242,7 +242,8 @@\n self.mouse_zoom\n and event.type == 'mouse_wheel'\n or self.mouse_pan\n- and event.type in ('mouse_move', 'mouse_press')\n+ and event.type\n+ in ('mouse_move', 'mouse_press', 'mouse_release')\n ):\n super().viewbox_mouse_event(event)\n else:\n", "issue": "3D camera rotation is broken\n## \ud83d\udc1b Bug\r\n\r\nIt seems we introduced a bug in #5701, according to `git bisect`. The 3D camera rotation is \"absolute\" instead of relative (i.e: the point you press on the canvas determines the starting point of that rotation).\r\n\r\n## To Reproduce\r\n\r\n```py\r\nimport napari\r\nimport numpy as np\r\nv = napari.Viewer(ndisplay=3)\r\npl = v.add_points(np.random.rand(10, 3) * 100)\r\n```\r\n\r\nThen move the camera a few times.\n", "before_files": [{"content": "from typing import Type\n\nimport numpy as np\nfrom vispy.scene import ArcballCamera, BaseCamera, PanZoomCamera\n\nfrom napari._vispy.utils.quaternion import quaternion2euler\n\n\nclass VispyCamera:\n \"\"\"Vipsy camera for both 2D and 3D rendering.\n\n Parameters\n ----------\n view : vispy.scene.widgets.viewbox.ViewBox\n Viewbox for current scene.\n camera : napari.components.Camera\n napari camera model.\n dims : napari.components.Dims\n napari dims model.\n \"\"\"\n\n def __init__(self, view, camera, dims) -> None:\n self._view = view\n self._camera = camera\n self._dims = dims\n\n # Create 2D camera\n self._2D_camera = MouseToggledPanZoomCamera(aspect=1)\n # flip y-axis to have correct alignment\n self._2D_camera.flip = (0, 1, 0)\n self._2D_camera.viewbox_key_event = viewbox_key_event\n\n # Create 3D camera\n self._3D_camera = MouseToggledArcballCamera(fov=0)\n self._3D_camera.viewbox_key_event = viewbox_key_event\n\n # Set 2D camera by default\n self._view.camera = self._2D_camera\n\n self._dims.events.ndisplay.connect(\n self._on_ndisplay_change, position='first'\n )\n\n self._camera.events.center.connect(self._on_center_change)\n self._camera.events.zoom.connect(self._on_zoom_change)\n self._camera.events.angles.connect(self._on_angles_change)\n self._camera.events.perspective.connect(self._on_perspective_change)\n self._camera.events.mouse_pan.connect(self._on_mouse_toggles_change)\n self._camera.events.mouse_zoom.connect(self._on_mouse_toggles_change)\n\n self._on_ndisplay_change()\n\n @property\n def angles(self):\n \"\"\"3-tuple: Euler angles of camera in 3D viewing, in degrees.\n Note that angles might be different than the ones that might have generated the quaternion.\n \"\"\"\n\n if self._view.camera == self._3D_camera:\n # Do conversion from quaternion representation to euler angles\n angles = quaternion2euler(\n self._view.camera._quaternion, degrees=True\n )\n else:\n angles = (0, 0, 90)\n return angles\n\n @angles.setter\n def angles(self, angles):\n if self.angles == tuple(angles):\n return\n\n # Only update angles if current camera is 3D camera\n if self._view.camera == self._3D_camera:\n # Create and set quaternion\n quat = self._view.camera._quaternion.create_from_euler_angles(\n *angles,\n degrees=True,\n )\n self._view.camera._quaternion = quat\n self._view.camera.view_changed()\n\n @property\n def center(self):\n \"\"\"tuple: Center point of camera view for 2D or 3D viewing.\"\"\"\n if self._view.camera == self._3D_camera:\n center = tuple(self._view.camera.center)\n else:\n # in 2D, we arbitrarily choose 0.0 as the center in z\n center = 
(*self._view.camera.center[:2], 0.0)\n # switch from VisPy xyz ordering to NumPy prc ordering\n return center[::-1]\n\n @center.setter\n def center(self, center):\n if self.center == tuple(center):\n return\n self._view.camera.center = center[::-1]\n self._view.camera.view_changed()\n\n @property\n def zoom(self):\n \"\"\"float: Scale from canvas pixels to world pixels.\"\"\"\n canvas_size = np.array(self._view.canvas.size)\n if self._view.camera == self._3D_camera:\n # For fov = 0.0 normalize scale factor by canvas size to get scale factor.\n # Note that the scaling is stored in the `_projection` property of the\n # camera which is updated in vispy here\n # https://github.com/vispy/vispy/blob/v0.6.5/vispy/scene/cameras/perspective.py#L301-L313\n scale = self._view.camera.scale_factor\n else:\n scale = np.array(\n [self._view.camera.rect.width, self._view.camera.rect.height]\n )\n scale[np.isclose(scale, 0)] = 1 # fix for #2875\n zoom = np.min(canvas_size / scale)\n return zoom\n\n @zoom.setter\n def zoom(self, zoom):\n if self.zoom == zoom:\n return\n scale = np.array(self._view.canvas.size) / zoom\n if self._view.camera == self._3D_camera:\n self._view.camera.scale_factor = np.min(scale)\n else:\n # Set view rectangle, as left, right, width, height\n corner = np.subtract(self._view.camera.center[:2], scale / 2)\n self._view.camera.rect = tuple(corner) + tuple(scale)\n\n @property\n def perspective(self):\n \"\"\"Field of view of camera (only visible in 3D mode).\"\"\"\n return self._3D_camera.fov\n\n @perspective.setter\n def perspective(self, perspective):\n if self.perspective == perspective:\n return\n self._3D_camera.fov = perspective\n self._view.camera.view_changed()\n\n @property\n def mouse_zoom(self) -> bool:\n return self._view.camera.mouse_zoom\n\n @mouse_zoom.setter\n def mouse_zoom(self, mouse_zoom: bool):\n self._view.camera.mouse_zoom = mouse_zoom\n\n @property\n def mouse_pan(self) -> bool:\n return self._view.camera.mouse_pan\n\n @mouse_pan.setter\n def mouse_pan(self, mouse_pan: bool):\n self._view.camera.mouse_pan = mouse_pan\n\n def _on_ndisplay_change(self):\n if self._dims.ndisplay == 3:\n self._view.camera = self._3D_camera\n else:\n self._view.camera = self._2D_camera\n\n self._on_mouse_toggles_change()\n self._on_center_change()\n self._on_zoom_change()\n self._on_angles_change()\n\n def _on_mouse_toggles_change(self):\n self.mouse_pan = self._camera.mouse_pan\n self.mouse_zoom = self._camera.mouse_zoom\n\n def _on_center_change(self):\n self.center = self._camera.center[-self._dims.ndisplay :]\n\n def _on_zoom_change(self):\n self.zoom = self._camera.zoom\n\n def _on_perspective_change(self):\n self.perspective = self._camera.perspective\n\n def _on_angles_change(self):\n self.angles = self._camera.angles\n\n def on_draw(self, _event):\n \"\"\"Called whenever the canvas is drawn.\n\n Update camera model angles, center, and zoom.\n \"\"\"\n with self._camera.events.angles.blocker(self._on_angles_change):\n self._camera.angles = self.angles\n with self._camera.events.center.blocker(self._on_center_change):\n self._camera.center = self.center\n with self._camera.events.zoom.blocker(self._on_zoom_change):\n self._camera.zoom = self.zoom\n with self._camera.events.perspective.blocker(\n self._on_perspective_change\n ):\n self._camera.perspective = self.perspective\n\n\ndef viewbox_key_event(event):\n \"\"\"ViewBox key event handler.\n\n Parameters\n ----------\n event : vispy.util.event.Event\n The vispy event that triggered this method.\n \"\"\"\n return\n\n\ndef 
add_mouse_pan_zoom_toggles(\n vispy_camera_cls: Type[BaseCamera],\n) -> Type[BaseCamera]:\n \"\"\"Add separate mouse pan and mouse zoom toggles to VisPy.\n\n By default, VisPy uses an ``interactive`` toggle that turns *both*\n panning and zooming on and off. This decorator adds separate toggles,\n ``mouse_pan`` and ``mouse_zoom``, to enable controlling them\n separately.\n\n Parameters\n ----------\n vispy_camera_cls : Type[vispy.scene.cameras.BaseCamera]\n A VisPy camera class to decorate.\n\n Returns\n -------\n A decorated VisPy camera class.\n \"\"\"\n\n class _vispy_camera_cls(vispy_camera_cls):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.mouse_pan = True\n self.mouse_zoom = True\n\n def viewbox_mouse_event(self, event):\n if (\n self.mouse_zoom\n and event.type == 'mouse_wheel'\n or self.mouse_pan\n and event.type in ('mouse_move', 'mouse_press')\n ):\n super().viewbox_mouse_event(event)\n else:\n event.handled = False\n\n return _vispy_camera_cls\n\n\nMouseToggledPanZoomCamera = add_mouse_pan_zoom_toggles(PanZoomCamera)\nMouseToggledArcballCamera = add_mouse_pan_zoom_toggles(ArcballCamera)\n", "path": "napari/_vispy/camera.py"}]}
| 3,313 | 131 |
gh_patches_debug_26955
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3411
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BMO Harris Bank
https://branchlocator.bmoharris.com/
</issue>
<code>
[start of locations/spiders/bmo_harris.py]
1 import html
2 import json
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 class BMOHarrisSpider(scrapy.Spider):
10 name = "bmo-harris"
11 item_attributes = { 'brand': "BMO Harris Bank" }
12 allowed_domains = ["branches.bmoharris.com"]
13 download_delay = 0.5
14 start_urls = (
15 'https://branches.bmoharris.com/',
16 )
17
18 def parse_store(self, response):
19 properties = {
20 'addr_full': response.xpath('//meta[@property="business:contact_data:street_address"]/@content').extract_first(),
21 'phone': response.xpath('//meta[@property="business:contact_data:phone_number"]/@content').extract_first(),
22 'city': response.xpath('//meta[@property="business:contact_data:locality"]/@content').extract_first(),
23 'state': response.xpath('//meta[@property="business:contact_data:region"]/@content').extract_first(),
24 'postcode': response.xpath('//meta[@property="business:contact_data:postal_code"]/@content').extract_first(),
25 'country': response.xpath('//meta[@property="business:contact_data:country_name"]/@content').extract_first(),
26 'ref': response.url,
27 'website': response.url,
28 'lat': response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first(),
29 'lon': response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first(),
30 }
31
32 yield GeojsonPointItem(**properties)
33
34 def parse(self, response):
35 # Step into hierarchy of place
36 for url in response.xpath("//div[@class='itemlist']/p/a/@href").extract():
37 yield scrapy.Request(response.urljoin(url))
38
39 # Look for links to stores
40 for url in response.xpath("//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href").extract():
41 yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
42
[end of locations/spiders/bmo_harris.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/bmo_harris.py b/locations/spiders/bmo_harris.py
--- a/locations/spiders/bmo_harris.py
+++ b/locations/spiders/bmo_harris.py
@@ -7,13 +7,14 @@
class BMOHarrisSpider(scrapy.Spider):
- name = "bmo-harris"
- item_attributes = { 'brand': "BMO Harris Bank" }
+ name = "bmo_harris"
+ item_attributes = {'brand': "BMO Harris Bank", 'brand_wikidata': "Q4835981"}
allowed_domains = ["branches.bmoharris.com"]
download_delay = 0.5
start_urls = (
'https://branches.bmoharris.com/',
)
+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
def parse_store(self, response):
properties = {
@@ -33,9 +34,9 @@
def parse(self, response):
# Step into hierarchy of place
- for url in response.xpath("//div[@class='itemlist']/p/a/@href").extract():
+ for url in response.xpath("//ul[@class='itemlist']/li/a/@href").extract():
yield scrapy.Request(response.urljoin(url))
# Look for links to stores
- for url in response.xpath("//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href").extract():
+ for url in response.xpath("//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href").extract():
yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
|
{"golden_diff": "diff --git a/locations/spiders/bmo_harris.py b/locations/spiders/bmo_harris.py\n--- a/locations/spiders/bmo_harris.py\n+++ b/locations/spiders/bmo_harris.py\n@@ -7,13 +7,14 @@\n \n \n class BMOHarrisSpider(scrapy.Spider):\n- name = \"bmo-harris\"\n- item_attributes = { 'brand': \"BMO Harris Bank\" }\n+ name = \"bmo_harris\"\n+ item_attributes = {'brand': \"BMO Harris Bank\", 'brand_wikidata': \"Q4835981\"}\n allowed_domains = [\"branches.bmoharris.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://branches.bmoharris.com/',\n )\n+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'\n \n def parse_store(self, response):\n properties = {\n@@ -33,9 +34,9 @@\n \n def parse(self, response):\n # Step into hierarchy of place\n- for url in response.xpath(\"//div[@class='itemlist']/p/a/@href\").extract():\n+ for url in response.xpath(\"//ul[@class='itemlist']/li/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url))\n \n # Look for links to stores\n- for url in response.xpath(\"//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href\").extract():\n+ for url in response.xpath(\"//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n", "issue": "BMO Harris Bank\nhttps://branchlocator.bmoharris.com/\n", "before_files": [{"content": "import html\nimport json\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass BMOHarrisSpider(scrapy.Spider):\n name = \"bmo-harris\"\n item_attributes = { 'brand': \"BMO Harris Bank\" }\n allowed_domains = [\"branches.bmoharris.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://branches.bmoharris.com/',\n )\n\n def parse_store(self, response):\n properties = {\n 'addr_full': response.xpath('//meta[@property=\"business:contact_data:street_address\"]/@content').extract_first(),\n 'phone': response.xpath('//meta[@property=\"business:contact_data:phone_number\"]/@content').extract_first(),\n 'city': response.xpath('//meta[@property=\"business:contact_data:locality\"]/@content').extract_first(),\n 'state': response.xpath('//meta[@property=\"business:contact_data:region\"]/@content').extract_first(),\n 'postcode': response.xpath('//meta[@property=\"business:contact_data:postal_code\"]/@content').extract_first(),\n 'country': response.xpath('//meta[@property=\"business:contact_data:country_name\"]/@content').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n 'lat': response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first(),\n 'lon': response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first(),\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n # Step into hierarchy of place\n for url in response.xpath(\"//div[@class='itemlist']/p/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url))\n\n # Look for links to stores\n for url in response.xpath(\"//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n", "path": "locations/spiders/bmo_harris.py"}]}
| 1,066 | 421 |
gh_patches_debug_5298
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-2845
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_ModuleWithDeprecations doesn't handle patching properly.
`_ModuleWithDeprecations` catches `__getattr__` and `__setattr__` to patch through to the underlying module, but does not intercept `__delattr__`. That means that if you're using something like `mock.patch`, the mock successfully lands in place, but cannot be removed: the mock was applied to the underlying module, but the delete comes from the proxy.
Should be easily fixed.
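For illustration, a minimal sketch of the kind of passthrough that appears to be missing — assuming the proxy keeps the wrapped module in `self._module`, as the code below does; the deprecation-warning handling is omitted here:

```python
class _ModuleWithDeprecations(object):
    # ... existing __getattr__ / __setattr__ passthroughs ...

    def __delattr__(self, attr):
        # Forward deletion to the real module so that e.g. mock.patch can
        # cleanly undo a patch it applied through this proxy.
        delattr(self._module, attr)
```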
</issue>
<code>
[start of src/cryptography/utils.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 import binascii
9 import inspect
10 import struct
11 import sys
12 import warnings
13
14
15 # the functions deprecated in 1.0 are on an arbitrarily extended deprecation
16 # cycle and should not be removed until we agree on when that cycle ends.
17 DeprecatedIn10 = DeprecationWarning
18 DeprecatedIn12 = DeprecationWarning
19
20
21 def read_only_property(name):
22 return property(lambda self: getattr(self, name))
23
24
25 def register_interface(iface):
26 def register_decorator(klass):
27 verify_interface(iface, klass)
28 iface.register(klass)
29 return klass
30 return register_decorator
31
32
33 if hasattr(int, "from_bytes"):
34 int_from_bytes = int.from_bytes
35 else:
36 def int_from_bytes(data, byteorder, signed=False):
37 assert byteorder == 'big'
38 assert not signed
39
40 if len(data) % 4 != 0:
41 data = (b'\x00' * (4 - (len(data) % 4))) + data
42
43 result = 0
44
45 while len(data) > 0:
46 digit, = struct.unpack('>I', data[:4])
47 result = (result << 32) + digit
48 # TODO: this is quadratic in the length of data
49 data = data[4:]
50
51 return result
52
53
54 def int_to_bytes(integer, length=None):
55 hex_string = '%x' % integer
56 if length is None:
57 n = len(hex_string)
58 else:
59 n = length * 2
60 return binascii.unhexlify(hex_string.zfill(n + (n & 1)))
61
62
63 class InterfaceNotImplemented(Exception):
64 pass
65
66
67 if hasattr(inspect, "signature"):
68 signature = inspect.signature
69 else:
70 signature = inspect.getargspec
71
72
73 def verify_interface(iface, klass):
74 for method in iface.__abstractmethods__:
75 if not hasattr(klass, method):
76 raise InterfaceNotImplemented(
77 "{0} is missing a {1!r} method".format(klass, method)
78 )
79 if isinstance(getattr(iface, method), abc.abstractproperty):
80 # Can't properly verify these yet.
81 continue
82 sig = signature(getattr(iface, method))
83 actual = signature(getattr(klass, method))
84 if sig != actual:
85 raise InterfaceNotImplemented(
86 "{0}.{1}'s signature differs from the expected. Expected: "
87 "{2!r}. Received: {3!r}".format(
88 klass, method, sig, actual
89 )
90 )
91
92
93 if sys.version_info >= (2, 7):
94 def bit_length(x):
95 return x.bit_length()
96 else:
97 def bit_length(x):
98 return len(bin(x)) - (2 + (x <= 0))
99
100
101 class _DeprecatedValue(object):
102 def __init__(self, value, message, warning_class):
103 self.value = value
104 self.message = message
105 self.warning_class = warning_class
106
107
108 class _ModuleWithDeprecations(object):
109 def __init__(self, module):
110 self.__dict__["_module"] = module
111
112 def __getattr__(self, attr):
113 obj = getattr(self._module, attr)
114 if isinstance(obj, _DeprecatedValue):
115 warnings.warn(obj.message, obj.warning_class, stacklevel=2)
116 obj = obj.value
117 return obj
118
119 def __setattr__(self, attr, value):
120 setattr(self._module, attr, value)
121
122 def __dir__(self):
123 return ["_module"] + dir(self._module)
124
125
126 def deprecated(value, module_name, message, warning_class):
127 module = sys.modules[module_name]
128 if not isinstance(module, _ModuleWithDeprecations):
129 sys.modules[module_name] = module = _ModuleWithDeprecations(module)
130 return _DeprecatedValue(value, message, warning_class)
131
[end of src/cryptography/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py
--- a/src/cryptography/utils.py
+++ b/src/cryptography/utils.py
@@ -119,6 +119,13 @@
def __setattr__(self, attr, value):
setattr(self._module, attr, value)
+ def __delattr__(self, attr):
+ obj = getattr(self._module, attr)
+ if isinstance(obj, _DeprecatedValue):
+ warnings.warn(obj.message, obj.warning_class, stacklevel=2)
+
+ delattr(self._module, attr)
+
def __dir__(self):
return ["_module"] + dir(self._module)
|
{"golden_diff": "diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py\n--- a/src/cryptography/utils.py\n+++ b/src/cryptography/utils.py\n@@ -119,6 +119,13 @@\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n \n+ def __delattr__(self, attr):\n+ obj = getattr(self._module, attr)\n+ if isinstance(obj, _DeprecatedValue):\n+ warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n+\n+ delattr(self._module, attr)\n+\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n", "issue": "_ModuleWithDeprecations doesn't handle patching properly.\n`_ModuleWithDeprecations` catches `__getattr__` and `__setattr__` to patch through to the underlying module, but does not intercept `__delattr__`. That means that if you're using something like `mock.patch`, the mock successfully lands in place, but cannot be removed: the mock was applied to the underlying module, but the delete comes from the proxy.\n\nShould be easily fixed.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nimport binascii\nimport inspect\nimport struct\nimport sys\nimport warnings\n\n\n# the functions deprecated in 1.0 are on an arbitrarily extended deprecation\n# cycle and should not be removed until we agree on when that cycle ends.\nDeprecatedIn10 = DeprecationWarning\nDeprecatedIn12 = DeprecationWarning\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\nif hasattr(int, \"from_bytes\"):\n int_from_bytes = int.from_bytes\nelse:\n def int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n # TODO: this is quadratic in the length of data\n data = data[4:]\n\n return result\n\n\ndef int_to_bytes(integer, length=None):\n hex_string = '%x' % integer\n if length is None:\n n = len(hex_string)\n else:\n n = length * 2\n return binascii.unhexlify(hex_string.zfill(n + (n & 1)))\n\n\nclass InterfaceNotImplemented(Exception):\n pass\n\n\nif hasattr(inspect, \"signature\"):\n signature = inspect.signature\nelse:\n signature = inspect.getargspec\n\n\ndef verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n raise InterfaceNotImplemented(\n \"{0} is missing a {1!r} method\".format(klass, method)\n )\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n sig = signature(getattr(iface, method))\n actual = signature(getattr(klass, method))\n if sig != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. 
Received: {3!r}\".format(\n klass, method, sig, actual\n )\n )\n\n\nif sys.version_info >= (2, 7):\n def bit_length(x):\n return x.bit_length()\nelse:\n def bit_length(x):\n return len(bin(x)) - (2 + (x <= 0))\n\n\nclass _DeprecatedValue(object):\n def __init__(self, value, message, warning_class):\n self.value = value\n self.message = message\n self.warning_class = warning_class\n\n\nclass _ModuleWithDeprecations(object):\n def __init__(self, module):\n self.__dict__[\"_module\"] = module\n\n def __getattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n obj = obj.value\n return obj\n\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n\n\ndef deprecated(value, module_name, message, warning_class):\n module = sys.modules[module_name]\n if not isinstance(module, _ModuleWithDeprecations):\n sys.modules[module_name] = module = _ModuleWithDeprecations(module)\n return _DeprecatedValue(value, message, warning_class)\n", "path": "src/cryptography/utils.py"}]}
| 1,800 | 148 |
gh_patches_debug_6466
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-1417
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Return HTTP errors in proper format
Proposer: Eric Brehault
Seconder:
# Motivation
When a page does not exist, or has an error, or is not allowed for the user, Plone returns the appropriate HTTP error (404, 500, ...), and the response is a human readable page, properly skinned, which is nice for the user.
And if the requested resource is not a page (an image, a JS file, an AJAX call, etc.), Plone also returns this human readable page.
It is useless because the page will not be displayed, and it produces many problems:
- the response is very heavy,
- it involves a lot of processing (Plone will render an entire page for nothing),
- for AJAX calls, the response cannot be easily interpreted,
- it might produce a cascade of errors (for instance: the regular response is not supposed to be rendered via Diazo, as it is not an HTML page, but the error is rendered by Diazo, and it might produce another error).
# Proposed solution
We could display the human readable error page only if the current request `HTTP_ACCEPT` parameter contains `text/html`; in other cases, we would just return a simple JSON error response.
# Proposed implementation
Test the `HTTP_ACCEPT` value in `Products/CMFPlone/skins/plone_templates/standard_error_message.py`, and call the existing template or make a JSON response accordingly.
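For instance, a rough sketch of that dispatch (assuming the request and response are reachable as `context.REQUEST` / `RESPONSE` in the skin script, and treating the exact JSON payload as a placeholder):

```python
# Sketch: only render the skinned error page for clients asking for HTML.
accept = context.REQUEST.getHeader('Accept', '')
if 'text/html' not in accept:
    context.REQUEST.RESPONSE.setHeader('Content-Type', 'application/json')
    return '{"error_type": "%s"}' % error_type
# otherwise fall through to the existing default_error_message() rendering
```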
# Risks
No identified risks.
</issue>
<code>
[start of Products/CMFPlone/skins/plone_templates/standard_error_message.py]
1 ## Script (Python) "standard_error_message"
2 ##bind container=container
3 ##bind context=context
4 ##bind namespace=
5 ##bind script=script
6 ##bind subpath=traverse_subpath
7 ##parameters=**kwargs
8 ##title=Dispatches to relevant error view
9
10 ## by default we handle everything in 1 PageTemplate.
11 # you could easily check for the error_type and
12 # dispatch to an appropriate PageTemplate.
13
14 # Check if the object is traversable, if not it might be a view, get its parent
15 # because we need to render the error on an actual content object
16 from AccessControl import Unauthorized
17 try:
18 while not hasattr(context.aq_explicit, 'restrictedTraverse'):
19 context = context.aq_parent
20 except (Unauthorized, AttributeError):
21 context = context.portal_url.getPortalObject()
22
23 error_type = kwargs.get('error_type', None)
24 error_message = kwargs.get('error_message', None)
25 error_log_url = kwargs.get('error_log_url', None)
26 error_tb = kwargs.get('error_tb', None)
27 error_traceback = kwargs.get('error_traceback', None)
28 error_value = kwargs.get('error_value', None)
29
30 if error_log_url:
31 error_log_id = error_log_url.split('?id=')[1]
32 else:
33 error_log_id = None
34
35
36 no_actions = {'folder': [], 'user': [], 'global': [], 'workflow': []}
37 error_page = context.default_error_message(
38 error_type=error_type,
39 error_message=error_message,
40 error_tb=error_tb,
41 error_value=error_value,
42 error_log_url=error_log_url,
43 error_log_id=error_log_id,
44 no_portlets=True,
45 actions=no_actions)
46
47 return error_page
48
[end of Products/CMFPlone/skins/plone_templates/standard_error_message.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Products/CMFPlone/skins/plone_templates/standard_error_message.py b/Products/CMFPlone/skins/plone_templates/standard_error_message.py
--- a/Products/CMFPlone/skins/plone_templates/standard_error_message.py
+++ b/Products/CMFPlone/skins/plone_templates/standard_error_message.py
@@ -27,6 +27,10 @@
error_traceback = kwargs.get('error_traceback', None)
error_value = kwargs.get('error_value', None)
+if "text/html" not in context.REQUEST.getHeader('Accept', ''):
+ context.REQUEST.RESPONSE.setHeader("Content-Type", "application/json")
+ return '{"error_type": "{0:s}"}'.format(error_type)
+
if error_log_url:
error_log_id = error_log_url.split('?id=')[1]
else:
|
{"golden_diff": "diff --git a/Products/CMFPlone/skins/plone_templates/standard_error_message.py b/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n--- a/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n+++ b/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n@@ -27,6 +27,10 @@\n error_traceback = kwargs.get('error_traceback', None)\n error_value = kwargs.get('error_value', None)\n \n+if \"text/html\" not in context.REQUEST.getHeader('Accept', ''):\n+ context.REQUEST.RESPONSE.setHeader(\"Content-Type\", \"application/json\")\n+ return '{\"error_type\": \"{0:s}\"}'.format(error_type)\n+\n if error_log_url:\n error_log_id = error_log_url.split('?id=')[1]\n else:\n", "issue": "Return HTTP errors in proper format\nProposer: Eric Brehault\nSeconder:\n# Motivation\n\nWhen a page does not exist, or has an error, or is not allowed for the user, Plone returns the appropriate HTTP error (404, 500, ...), and the response is a human readable page, properly skinned, which nice for the user.\nAnd if the requested resource is not a page (an image, a JS file, an AJAX call, etc.), Plone also returns this human readable page.\nIt is useless because the page will not be displayed, and it produces many problems:\n- the response is very heavy,\n- it involves a lot of processing (Plone will render an entire page for nothing),\n- for AJAX call, the response cannot be easily interperted,\n- it might produce a cascade of errors (for instance: the regular response is not supposed to be rendered via Diazo, as it is not an HTML page, but the error is rendered by Diazo, and it might produce another error).\n# Proposed solution\n\nWe could display the human readable error page only if the current request `HTTP_ACCEPT` parameter contains `text/html`, in other cases, we would just return a simple JSON error reponse.\n# Proposed implementation\n\nTest the `HTTP_ACCEPT` value in `Products/CMFPlone/skins/plone_templates/standard_error_message.py`, and call the existing template or make a JSON response accordingly.\n# Risks\n\nNo identified risks.\n\n", "before_files": [{"content": "## Script (Python) \"standard_error_message\"\n##bind container=container\n##bind context=context\n##bind namespace=\n##bind script=script\n##bind subpath=traverse_subpath\n##parameters=**kwargs\n##title=Dispatches to relevant error view\n\n## by default we handle everything in 1 PageTemplate.\n# you could easily check for the error_type and\n# dispatch to an appropriate PageTemplate.\n\n# Check if the object is traversable, if not it might be a view, get its parent\n# because we need to render the error on an actual content object\nfrom AccessControl import Unauthorized\ntry:\n while not hasattr(context.aq_explicit, 'restrictedTraverse'):\n context = context.aq_parent\nexcept (Unauthorized, AttributeError):\n context = context.portal_url.getPortalObject()\n\nerror_type = kwargs.get('error_type', None)\nerror_message = kwargs.get('error_message', None)\nerror_log_url = kwargs.get('error_log_url', None)\nerror_tb = kwargs.get('error_tb', None)\nerror_traceback = kwargs.get('error_traceback', None)\nerror_value = kwargs.get('error_value', None)\n\nif error_log_url:\n error_log_id = error_log_url.split('?id=')[1]\nelse:\n error_log_id = None\n\n\nno_actions = {'folder': [], 'user': [], 'global': [], 'workflow': []}\nerror_page = context.default_error_message(\n error_type=error_type,\n error_message=error_message,\n error_tb=error_tb,\n error_value=error_value,\n error_log_url=error_log_url,\n 
error_log_id=error_log_id,\n no_portlets=True,\n actions=no_actions)\n\nreturn error_page\n", "path": "Products/CMFPlone/skins/plone_templates/standard_error_message.py"}]}
| 1,321 | 190 |
gh_patches_debug_3520
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-1328
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No `python_requires` defined
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
It seems that no `python_requires` is defined for the `uvicorn` package, which in turn results in the latest version being installed in a Python 3.6 (CI) environment (that subsequently fails).
If `python_requires` were defined to restrict the package to supported versions of the interpreter, I would have got an older version (that supported `py36`) instead.
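For example, one plausible declaration in `setup.py` would let pip on an older interpreter fall back to a compatible release (the exact lower bound is an assumption here):

```python
# setup.py (sketch)
from setuptools import setup

setup(
    name="uvicorn",
    python_requires=">=3.7",  # pip on Python 3.6 then resolves an older compatible release
    # ... remaining metadata unchanged ...
)
```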
### Steps to reproduce the bug
In a `py36` environment
```
pip install uvicorn
# Run uvicorn
# ...
```
### Expected behavior
An older version is installed that works.
### Actual behavior
`uvicorn` errors out, says `py36` is unsupported.
### Debugging material
_No response_
### Environment
CPython 3.6
### Additional context
_No response_
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 path = os.path.join(package, "__init__.py")
15 init_py = open(path, "r", encoding="utf8").read()
16 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
17
18
19 def get_long_description():
20 """
21 Return the README.
22 """
23 return open("README.md", "r", encoding="utf8").read()
24
25
26 def get_packages(package):
27 """
28 Return root package and all sub-packages.
29 """
30 return [
31 dirpath
32 for dirpath, dirnames, filenames in os.walk(package)
33 if os.path.exists(os.path.join(dirpath, "__init__.py"))
34 ]
35
36
37 env_marker_cpython = (
38 "sys_platform != 'win32'"
39 " and (sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'PyPy')"
41 )
42
43 env_marker_win = "sys_platform == 'win32'"
44 env_marker_below_38 = "python_version < '3.8'"
45
46 minimal_requirements = [
47 "asgiref>=3.4.0",
48 "click>=7.0",
49 "h11>=0.8",
50 "typing-extensions;" + env_marker_below_38,
51 ]
52
53
54 extra_requirements = [
55 "websockets>=10.0",
56 "httptools>=0.2.0,<0.4.0",
57 "uvloop>=0.14.0,!=0.15.0,!=0.15.1; " + env_marker_cpython,
58 "colorama>=0.4;" + env_marker_win,
59 "watchgod>=0.6",
60 "python-dotenv>=0.13",
61 "PyYAML>=5.1",
62 ]
63
64
65 setup(
66 name="uvicorn",
67 version=get_version("uvicorn"),
68 url="https://www.uvicorn.org/",
69 license="BSD",
70 description="The lightning-fast ASGI server.",
71 long_description=get_long_description(),
72 long_description_content_type="text/markdown",
73 author="Tom Christie",
74 author_email="[email protected]",
75 packages=get_packages("uvicorn"),
76 install_requires=minimal_requirements,
77 extras_require={"standard": extra_requirements},
78 include_package_data=True,
79 classifiers=[
80 "Development Status :: 4 - Beta",
81 "Environment :: Web Environment",
82 "Intended Audience :: Developers",
83 "License :: OSI Approved :: BSD License",
84 "Operating System :: OS Independent",
85 "Topic :: Internet :: WWW/HTTP",
86 "Programming Language :: Python :: 3",
87 "Programming Language :: Python :: 3.7",
88 "Programming Language :: Python :: 3.8",
89 "Programming Language :: Python :: 3.9",
90 "Programming Language :: Python :: 3.10",
91 "Programming Language :: Python :: Implementation :: CPython",
92 "Programming Language :: Python :: Implementation :: PyPy",
93 ],
94 entry_points="""
95 [console_scripts]
96 uvicorn=uvicorn.main:main
97 """,
98 project_urls={
99 "Funding": "https://github.com/sponsors/encode",
100 "Source": "https://github.com/encode/uvicorn",
101 "Changelog": "https://github.com/encode/uvicorn/blob/master/CHANGELOG.md",
102 },
103 )
104
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -73,6 +73,7 @@
author="Tom Christie",
author_email="[email protected]",
packages=get_packages("uvicorn"),
+ python_requires=">=3.7",
install_requires=minimal_requirements,
extras_require={"standard": extra_requirements},
include_package_data=True,
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -73,6 +73,7 @@\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n+ python_requires=\">=3.7\",\n install_requires=minimal_requirements,\n extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n", "issue": "No `python_requires` defined\n### Checklist\r\n\r\n- [X] The bug is reproducible against the latest release or `master`.\r\n- [X] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\nIt seems that no `python_requires` is defined for the `uvicorn` package, which in turn results in the latest version being installed in a Python 3.6 (CI) environment (that subsequently fails).\r\n\r\nIf `python_requires` were defined to restrict the package to supported versions of the interpreter, I would have got an older version (that supported `py36`) instead.\r\n\r\n### Steps to reproduce the bug\r\n\r\nIn a `py36` environment\r\n```\r\npip install uvicorn\r\n# Run uvicorn\r\n# ...\r\n```\r\n\r\n### Expected behavior\r\n\r\nAn older version is installed that works.\r\n\r\n### Actual behavior\r\n\r\n`uvicorn` errors out, says `py36` is unsupported.\r\n\r\n### Debugging material\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\nCPython 3.6\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, \"__init__.py\")\n init_py = open(path, \"r\", encoding=\"utf8\").read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open(\"README.md\", \"r\", encoding=\"utf8\").read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [\n dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, \"__init__.py\"))\n ]\n\n\nenv_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and (sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy')\"\n)\n\nenv_marker_win = \"sys_platform == 'win32'\"\nenv_marker_below_38 = \"python_version < '3.8'\"\n\nminimal_requirements = [\n \"asgiref>=3.4.0\",\n \"click>=7.0\",\n \"h11>=0.8\",\n \"typing-extensions;\" + env_marker_below_38,\n]\n\n\nextra_requirements = [\n \"websockets>=10.0\",\n \"httptools>=0.2.0,<0.4.0\",\n \"uvloop>=0.14.0,!=0.15.0,!=0.15.1; \" + env_marker_cpython,\n \"colorama>=0.4;\" + env_marker_win,\n \"watchgod>=0.6\",\n \"python-dotenv>=0.13\",\n \"PyYAML>=5.1\",\n]\n\n\nsetup(\n name=\"uvicorn\",\n version=get_version(\"uvicorn\"),\n url=\"https://www.uvicorn.org/\",\n license=\"BSD\",\n description=\"The lightning-fast ASGI server.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n install_requires=minimal_requirements,\n extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS 
Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\",\n project_urls={\n \"Funding\": \"https://github.com/sponsors/encode\",\n \"Source\": \"https://github.com/encode/uvicorn\",\n \"Changelog\": \"https://github.com/encode/uvicorn/blob/master/CHANGELOG.md\",\n },\n)\n", "path": "setup.py"}]}
| 1,723 | 89 |
gh_patches_debug_19035
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-960
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Should we drop Python 3.5 support
It has been raised a few times in the Matrix chat that we should consider dropping Python 3.5 support.
There are a number of improvements in [Python 3.6](https://docs.python.org/3/whatsnew/3.6.html) which we could benefit from. This would also be a breaking change for users still on 3.5 and will require them to upgrade to continue using opsdroid.
The reason this has cropped up for me again is that there still appears to be a bug in 3.5.7 which causes problems with catching exceptions within coroutines. This problem is not present in 3.6.8. I would rather not have to work around this bug if I can help it.
We decided to support 3.5+ because that is the default version which comes pre-installed on the latest release of Debian and variations like Raspbian. This should ensure good support for many users without them having to tinker with their Python and provide a good beginner experience.
As this is an open source software project with a motivation around self hosting and privacy, it isn't possible to collect user metrics on Python versions being used. Otherwise we could use this data to assess the impact of dropping 3.5.
[Home Assistant](https://www.home-assistant.io/) (the project which inspired much of opsdroid's community practices) are moving to only supporting the two most recent minor versions of Python, which means they will also be dropping 3.5 in the near future.
My proposal would be to follow their lead. This does put a responsibility on us to keep an eye on Python releases (3.8 is [pencilled in](https://www.python.org/dev/peps/pep-0569/) for the end of the year) and remove support as versions are released.
I would love thoughts and feedback from the community.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2 import os
3 from setuptools import setup, find_packages
4 from setuptools.command.build_py import build_py
5 from setuptools.command.sdist import sdist
6 from setuptools.command.develop import develop
7 import versioneer
8
9 PACKAGE_NAME = 'opsdroid'
10 HERE = os.path.abspath(os.path.dirname(__file__))
11 README = open(os.path.join(HERE, 'README.md'), encoding="utf8").read()
12
13 PACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',
14 'modules.*', 'docs', 'docs.*'])
15
16
17 # For now we simply define the install_requires based on the contents
18 # of requirements.txt. In the future, install_requires may become much
19 # looser than the (automatically) resolved requirements.txt.
20 with open(os.path.join(HERE, 'requirements.txt'), 'r') as fh:
21 REQUIRES = [line.strip() for line in fh]
22
23
24 class Develop(develop):
25 """Custom `develop` command to always build mo files on install -e."""
26
27 def run(self):
28 self.run_command('compile_catalog')
29 develop.run(self) # old style class
30
31
32 class BuildPy(build_py):
33 """Custom `build_py` command to always build mo files for wheels."""
34
35 def run(self):
36 self.run_command('compile_catalog')
37 build_py.run(self) # old style class
38
39
40 class Sdist(sdist):
41 """Custom `sdist` command to ensure that mo files are always created."""
42
43 def run(self):
44 self.run_command('compile_catalog')
45 sdist.run(self) # old style class
46
47
48 setup(
49 name=PACKAGE_NAME,
50 version=versioneer.get_version(),
51 license='Apache License 2.0',
52 url='https://opsdroid.github.io/',
53 download_url='https://github.com/opsdroid/opsdroid/releases',
54 author='Jacob Tomlinson',
55 author_email='[email protected]',
56 description='An open source ChatOps bot framework.',
57 long_description=README,
58 long_description_content_type='text/markdown',
59 packages=PACKAGES,
60 include_package_data=True,
61 zip_safe=False,
62 platforms='any',
63 classifiers=[
64 'Development Status :: 4 - Beta',
65 'Environment :: Console',
66 'Framework :: AsyncIO',
67 'Intended Audience :: Developers',
68 'Intended Audience :: System Administrators',
69 'Intended Audience :: Information Technology',
70 'License :: OSI Approved :: Apache Software License',
71 'Programming Language :: Python',
72 'Programming Language :: Python :: 3',
73 'Programming Language :: Python :: 3 :: Only',
74 'Programming Language :: Python :: 3.5',
75 'Programming Language :: Python :: 3.6',
76 'Programming Language :: Python :: 3.7',
77 'Topic :: Communications :: Chat',
78 'Topic :: Scientific/Engineering :: Artificial Intelligence',
79 'Topic :: Software Development :: Libraries :: Python Modules'
80 ],
81 install_requires=REQUIRES,
82 test_suite='tests',
83 keywords=[
84 'bot',
85 'bot-framework',
86 'opsdroid',
87 'botkit',
88 'python3',
89 'asyncio',
90 'chatops',
91 'devops',
92 'nlu'
93 ],
94 setup_requires=['Babel'],
95 cmdclass=versioneer.get_cmdclass({'sdist': Sdist,
96 'build_py': BuildPy,
97 'develop': Develop}),
98 entry_points={
99 'console_scripts': [
100 'opsdroid = opsdroid.__main__:main'
101 ]
102 },
103 )
104
[end of setup.py]
[start of opsdroid/__main__.py]
1 """Starts opsdroid."""
2
3 import os
4 import subprocess
5 import sys
6 import logging
7 import gettext
8 import time
9 import contextlib
10
11 import click
12
13 from opsdroid import __version__
14 from opsdroid.core import OpsDroid
15 from opsdroid.loader import Loader
16 from opsdroid.const import DEFAULT_LOG_FILENAME, LOCALE_DIR, \
17 EXAMPLE_CONFIG_FILE, DEFAULT_LANGUAGE, DEFAULT_CONFIG_PATH
18
19
20 gettext.install('opsdroid')
21 _LOGGER = logging.getLogger("opsdroid")
22
23
24 def configure_lang(config):
25 """Configure app language based on user config.
26
27 Args:
28 config: Language Configuration and it uses ISO 639-1 code.
29 for more info https://en.m.wikipedia.org/wiki/List_of_ISO_639-1_codes
30
31
32 """
33 lang_code = config.get("lang", DEFAULT_LANGUAGE)
34 if lang_code != DEFAULT_LANGUAGE:
35 lang = gettext.translation(
36 'opsdroid', LOCALE_DIR, (lang_code,), fallback=True)
37 lang.install()
38
39
40 def configure_logging(config):
41 """Configure the root logger based on user config."""
42 rootlogger = logging.getLogger()
43 while rootlogger.handlers:
44 rootlogger.handlers.pop()
45
46 try:
47 if config["logging"]["path"]:
48 logfile_path = os.path.expanduser(config["logging"]["path"])
49 else:
50 logfile_path = config["logging"]["path"]
51 except KeyError:
52 logfile_path = DEFAULT_LOG_FILENAME
53
54 try:
55 log_level = get_logging_level(
56 config["logging"]["level"])
57 except KeyError:
58 log_level = logging.INFO
59
60 rootlogger.setLevel(log_level)
61 formatter = logging.Formatter('%(levelname)s %(name)s: %(message)s')
62
63 console_handler = logging.StreamHandler()
64 console_handler.setLevel(log_level)
65 console_handler.setFormatter(formatter)
66 rootlogger.addHandler(console_handler)
67
68 with contextlib.suppress(KeyError):
69 if not config["logging"]["console"]:
70 console_handler.setLevel(logging.CRITICAL)
71
72 if logfile_path:
73 logdir = os.path.dirname(os.path.realpath(logfile_path))
74 if not os.path.isdir(logdir):
75 os.makedirs(logdir)
76 file_handler = logging.FileHandler(logfile_path)
77 file_handler.setLevel(log_level)
78 file_handler.setFormatter(formatter)
79 rootlogger.addHandler(file_handler)
80 _LOGGER.info("="*40)
81 _LOGGER.info(_("Started opsdroid %s"), __version__)
82
83
84 def get_logging_level(logging_level):
85 """Get the logger level based on the user configuration.
86
87 Args:
88 logging_level: logging level from config file
89
90 Returns:
91 logging LEVEL ->
92 CRITICAL = 50
93 FATAL = CRITICAL
94 ERROR = 40
95 WARNING = 30
96 WARN = WARNING
97 INFO = 20
98 DEBUG = 10
99 NOTSET = 0
100
101 """
102 if logging_level == 'critical':
103 return logging.CRITICAL
104
105 if logging_level == 'error':
106 return logging.ERROR
107 if logging_level == 'warning':
108 return logging.WARNING
109
110 if logging_level == 'debug':
111 return logging.DEBUG
112
113 return logging.INFO
114
115
116 def check_dependencies():
117 """Check for system dependencies required by opsdroid."""
118 if sys.version_info.major < 3 or sys.version_info.minor < 5:
119 logging.critical(_("Whoops! opsdroid requires python 3.5 or above."))
120 sys.exit(1)
121
122
123 def print_version(ctx, param, value):
124 """Print out the version of opsdroid that is installed."""
125 if not value or ctx.resilient_parsing:
126 return
127 click.echo('opsdroid {version}'.format(version=__version__))
128 ctx.exit(0)
129
130
131 def print_example_config(ctx, param, value):
132 """Print out the example config."""
133 if not value or ctx.resilient_parsing:
134 return
135 with open(EXAMPLE_CONFIG_FILE, 'r') as conf:
136 click.echo(conf.read())
137 ctx.exit(0)
138
139
140 def edit_files(ctx, param, value):
141 """Open config/log file with favourite editor."""
142 if value == 'config':
143 file = DEFAULT_CONFIG_PATH
144 elif value == 'log':
145 file = DEFAULT_LOG_FILENAME
146 else:
147 return
148
149 editor = os.environ.get('EDITOR', 'vi')
150 if editor == 'vi':
151 click.echo('You are about to edit a file in vim. \n'
152 'Read the tutorial on vim at: https://bit.ly/2HRvvrB')
153 time.sleep(3)
154
155 subprocess.run([editor, file])
156 ctx.exit(0)
157
158
159 def welcome_message(config):
160 """Add welcome message if set to true in configuration.
161
162 Args:
163 config: config loaded by Loader
164
165 Raises:
166 KeyError: If 'welcome-message' key is not found in configuration file
167
168 """
169 try:
170 if config['welcome-message']:
171 _LOGGER.info("=" * 40)
172 _LOGGER.info(_("You can customise your opsdroid by modifying "
173 "your configuration.yaml"))
174 _LOGGER.info(_("Read more at: "
175 "http://opsdroid.readthedocs.io/#configuration"))
176 _LOGGER.info(_("Watch the Get Started Videos at: "
177 "http://bit.ly/2fnC0Fh"))
178 _LOGGER.info(_("Install Opsdroid Desktop at: \n"
179 "https://github.com/opsdroid/opsdroid-desktop/"
180 "releases"))
181 _LOGGER.info("=" * 40)
182 except KeyError:
183 _LOGGER.warning(_("'welcome-message: true/false' is missing in "
184 "configuration.yaml"))
185
186
187 @click.command()
188 @click.option('--gen-config', is_flag=True, callback=print_example_config,
189 expose_value=False, default=False,
190 help='Print an example config and exit.')
191 @click.option('--version', '-v', is_flag=True, callback=print_version,
192 expose_value=False, default=False, is_eager=True,
193 help='Print the version and exit.')
194 @click.option('--edit-config', '-e', is_flag=True, callback=edit_files,
195 default=False, flag_value='config', expose_value=False,
196 help='Opens configuration.yaml with your favorite editor'
197 ' and exits.')
198 @click.option('--view-log', '-l', is_flag=True, callback=edit_files,
199 default=False, flag_value='log', expose_value=False,
200 help='Opens opsdroid logs with your favorite editor'
201 ' and exits.')
202 def main():
203 """Opsdroid is a chat bot framework written in Python.
204
205 It is designed to be extendable, scalable and simple.
206 See https://opsdroid.github.io/ for more information.
207 """
208 check_dependencies()
209
210 config = Loader.load_config_file([
211 "configuration.yaml",
212 DEFAULT_CONFIG_PATH,
213 "/etc/opsdroid/configuration.yaml"
214 ])
215 configure_lang(config)
216 configure_logging(config)
217 welcome_message(config)
218
219 with OpsDroid(config=config) as opsdroid:
220 opsdroid.load()
221 opsdroid.run()
222
223
224 def init():
225 """Enter the application."""
226 if __name__ == "__main__":
227 main()
228
229
230 init()
231
[end of opsdroid/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opsdroid/__main__.py b/opsdroid/__main__.py
--- a/opsdroid/__main__.py
+++ b/opsdroid/__main__.py
@@ -115,8 +115,8 @@
def check_dependencies():
"""Check for system dependencies required by opsdroid."""
- if sys.version_info.major < 3 or sys.version_info.minor < 5:
- logging.critical(_("Whoops! opsdroid requires python 3.5 or above."))
+ if sys.version_info.major < 3 or sys.version_info.minor < 6:
+ logging.critical(_("Whoops! opsdroid requires python 3.6 or above."))
sys.exit(1)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -71,7 +71,6 @@
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3 :: Only',
- 'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: Communications :: Chat',
|
{"golden_diff": "diff --git a/opsdroid/__main__.py b/opsdroid/__main__.py\n--- a/opsdroid/__main__.py\n+++ b/opsdroid/__main__.py\n@@ -115,8 +115,8 @@\n \n def check_dependencies():\n \"\"\"Check for system dependencies required by opsdroid.\"\"\"\n- if sys.version_info.major < 3 or sys.version_info.minor < 5:\n- logging.critical(_(\"Whoops! opsdroid requires python 3.5 or above.\"))\n+ if sys.version_info.major < 3 or sys.version_info.minor < 6:\n+ logging.critical(_(\"Whoops! opsdroid requires python 3.6 or above.\"))\n sys.exit(1)\n \n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -71,7 +71,6 @@\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n- 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Communications :: Chat',\n", "issue": "Should we drop Python 3.5 support\nIt has been raised a few times in the Matrix chat that we should consider dropping Python 3.5 support.\r\n\r\nThere are a number of improvements in [Python 3.6](https://docs.python.org/3/whatsnew/3.6.html) which we could benefit from. This would also be a breaking change for users still on 3.5 and will require them to upgrade to continue using opsdroid.\r\n\r\nThe reason this has cropped up for me again is because there appears to be a bug still in 3.5.7 which causes problems with catching exceptions within coroutines. This problem is not present in 3.6.8. I would rather not have to work around this bug if I can help it.\r\n\r\nWe decided to support 3.5+ because that is the default version which comes pre-installed on the latest release of Debian and variations like Raspbian. This should ensure good support for many users without them having to tinker with their Python and provide a good beginner experience.\r\n\r\nAs this is an open source software project with a motivation around self hosting and privacy it isn't possible to collect user metrics on Python versions being used. Otherwise we could use this data to asses the impact of dropping 3.5.\r\n\r\n[Home Assistant](https://www.home-assistant.io/) (the project which inspired much of opsdroid's community practices) are moving to only supporting the two most recent minor versions of Python, which means they will also be dropping 3.5 in the near future.\r\n\r\nMy proposal would be to follow their lead. This does put a responsibility on us to keep an eye on Python releases (3.8 is [pencilled in](https://www.python.org/dev/peps/pep-0569/) for the end of the year) and remove support as versions are released.\r\n\r\nI would love thoughts and feedback from the community.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.develop import develop\nimport versioneer\n\nPACKAGE_NAME = 'opsdroid'\nHERE = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(HERE, 'README.md'), encoding=\"utf8\").read()\n\nPACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',\n 'modules.*', 'docs', 'docs.*'])\n\n\n# For now we simply define the install_requires based on the contents\n# of requirements.txt. 
In the future, install_requires may become much\n# looser than the (automatically) resolved requirements.txt.\nwith open(os.path.join(HERE, 'requirements.txt'), 'r') as fh:\n REQUIRES = [line.strip() for line in fh]\n\n\nclass Develop(develop):\n \"\"\"Custom `develop` command to always build mo files on install -e.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n develop.run(self) # old style class\n\n\nclass BuildPy(build_py):\n \"\"\"Custom `build_py` command to always build mo files for wheels.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n build_py.run(self) # old style class\n\n\nclass Sdist(sdist):\n \"\"\"Custom `sdist` command to ensure that mo files are always created.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n sdist.run(self) # old style class\n\n\nsetup(\n name=PACKAGE_NAME,\n version=versioneer.get_version(),\n license='Apache License 2.0',\n url='https://opsdroid.github.io/',\n download_url='https://github.com/opsdroid/opsdroid/releases',\n author='Jacob Tomlinson',\n author_email='[email protected]',\n description='An open source ChatOps bot framework.',\n long_description=README,\n long_description_content_type='text/markdown',\n packages=PACKAGES,\n include_package_data=True,\n zip_safe=False,\n platforms='any',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Framework :: AsyncIO',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Intended Audience :: Information Technology',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Communications :: Chat',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n ],\n install_requires=REQUIRES,\n test_suite='tests',\n keywords=[\n 'bot',\n 'bot-framework',\n 'opsdroid',\n 'botkit',\n 'python3',\n 'asyncio',\n 'chatops',\n 'devops',\n 'nlu'\n ],\n setup_requires=['Babel'],\n cmdclass=versioneer.get_cmdclass({'sdist': Sdist,\n 'build_py': BuildPy,\n 'develop': Develop}),\n entry_points={\n 'console_scripts': [\n 'opsdroid = opsdroid.__main__:main'\n ]\n },\n)\n", "path": "setup.py"}, {"content": "\"\"\"Starts opsdroid.\"\"\"\n\nimport os\nimport subprocess\nimport sys\nimport logging\nimport gettext\nimport time\nimport contextlib\n\nimport click\n\nfrom opsdroid import __version__\nfrom opsdroid.core import OpsDroid\nfrom opsdroid.loader import Loader\nfrom opsdroid.const import DEFAULT_LOG_FILENAME, LOCALE_DIR, \\\n EXAMPLE_CONFIG_FILE, DEFAULT_LANGUAGE, DEFAULT_CONFIG_PATH\n\n\ngettext.install('opsdroid')\n_LOGGER = logging.getLogger(\"opsdroid\")\n\n\ndef configure_lang(config):\n \"\"\"Configure app language based on user config.\n\n Args:\n config: Language Configuration and it uses ISO 639-1 code.\n for more info https://en.m.wikipedia.org/wiki/List_of_ISO_639-1_codes\n\n\n \"\"\"\n lang_code = config.get(\"lang\", DEFAULT_LANGUAGE)\n if lang_code != DEFAULT_LANGUAGE:\n lang = gettext.translation(\n 'opsdroid', LOCALE_DIR, (lang_code,), fallback=True)\n lang.install()\n\n\ndef configure_logging(config):\n \"\"\"Configure the root logger based on user config.\"\"\"\n rootlogger = logging.getLogger()\n while rootlogger.handlers:\n rootlogger.handlers.pop()\n\n 
try:\n if config[\"logging\"][\"path\"]:\n logfile_path = os.path.expanduser(config[\"logging\"][\"path\"])\n else:\n logfile_path = config[\"logging\"][\"path\"]\n except KeyError:\n logfile_path = DEFAULT_LOG_FILENAME\n\n try:\n log_level = get_logging_level(\n config[\"logging\"][\"level\"])\n except KeyError:\n log_level = logging.INFO\n\n rootlogger.setLevel(log_level)\n formatter = logging.Formatter('%(levelname)s %(name)s: %(message)s')\n\n console_handler = logging.StreamHandler()\n console_handler.setLevel(log_level)\n console_handler.setFormatter(formatter)\n rootlogger.addHandler(console_handler)\n\n with contextlib.suppress(KeyError):\n if not config[\"logging\"][\"console\"]:\n console_handler.setLevel(logging.CRITICAL)\n\n if logfile_path:\n logdir = os.path.dirname(os.path.realpath(logfile_path))\n if not os.path.isdir(logdir):\n os.makedirs(logdir)\n file_handler = logging.FileHandler(logfile_path)\n file_handler.setLevel(log_level)\n file_handler.setFormatter(formatter)\n rootlogger.addHandler(file_handler)\n _LOGGER.info(\"=\"*40)\n _LOGGER.info(_(\"Started opsdroid %s\"), __version__)\n\n\ndef get_logging_level(logging_level):\n \"\"\"Get the logger level based on the user configuration.\n\n Args:\n logging_level: logging level from config file\n\n Returns:\n logging LEVEL ->\n CRITICAL = 50\n FATAL = CRITICAL\n ERROR = 40\n WARNING = 30\n WARN = WARNING\n INFO = 20\n DEBUG = 10\n NOTSET = 0\n\n \"\"\"\n if logging_level == 'critical':\n return logging.CRITICAL\n\n if logging_level == 'error':\n return logging.ERROR\n if logging_level == 'warning':\n return logging.WARNING\n\n if logging_level == 'debug':\n return logging.DEBUG\n\n return logging.INFO\n\n\ndef check_dependencies():\n \"\"\"Check for system dependencies required by opsdroid.\"\"\"\n if sys.version_info.major < 3 or sys.version_info.minor < 5:\n logging.critical(_(\"Whoops! opsdroid requires python 3.5 or above.\"))\n sys.exit(1)\n\n\ndef print_version(ctx, param, value):\n \"\"\"Print out the version of opsdroid that is installed.\"\"\"\n if not value or ctx.resilient_parsing:\n return\n click.echo('opsdroid {version}'.format(version=__version__))\n ctx.exit(0)\n\n\ndef print_example_config(ctx, param, value):\n \"\"\"Print out the example config.\"\"\"\n if not value or ctx.resilient_parsing:\n return\n with open(EXAMPLE_CONFIG_FILE, 'r') as conf:\n click.echo(conf.read())\n ctx.exit(0)\n\n\ndef edit_files(ctx, param, value):\n \"\"\"Open config/log file with favourite editor.\"\"\"\n if value == 'config':\n file = DEFAULT_CONFIG_PATH\n elif value == 'log':\n file = DEFAULT_LOG_FILENAME\n else:\n return\n\n editor = os.environ.get('EDITOR', 'vi')\n if editor == 'vi':\n click.echo('You are about to edit a file in vim. 
\\n'\n 'Read the tutorial on vim at: https://bit.ly/2HRvvrB')\n time.sleep(3)\n\n subprocess.run([editor, file])\n ctx.exit(0)\n\n\ndef welcome_message(config):\n \"\"\"Add welcome message if set to true in configuration.\n\n Args:\n config: config loaded by Loader\n\n Raises:\n KeyError: If 'welcome-message' key is not found in configuration file\n\n \"\"\"\n try:\n if config['welcome-message']:\n _LOGGER.info(\"=\" * 40)\n _LOGGER.info(_(\"You can customise your opsdroid by modifying \"\n \"your configuration.yaml\"))\n _LOGGER.info(_(\"Read more at: \"\n \"http://opsdroid.readthedocs.io/#configuration\"))\n _LOGGER.info(_(\"Watch the Get Started Videos at: \"\n \"http://bit.ly/2fnC0Fh\"))\n _LOGGER.info(_(\"Install Opsdroid Desktop at: \\n\"\n \"https://github.com/opsdroid/opsdroid-desktop/\"\n \"releases\"))\n _LOGGER.info(\"=\" * 40)\n except KeyError:\n _LOGGER.warning(_(\"'welcome-message: true/false' is missing in \"\n \"configuration.yaml\"))\n\n\[email protected]()\[email protected]('--gen-config', is_flag=True, callback=print_example_config,\n expose_value=False, default=False,\n help='Print an example config and exit.')\[email protected]('--version', '-v', is_flag=True, callback=print_version,\n expose_value=False, default=False, is_eager=True,\n help='Print the version and exit.')\[email protected]('--edit-config', '-e', is_flag=True, callback=edit_files,\n default=False, flag_value='config', expose_value=False,\n help='Opens configuration.yaml with your favorite editor'\n ' and exits.')\[email protected]('--view-log', '-l', is_flag=True, callback=edit_files,\n default=False, flag_value='log', expose_value=False,\n help='Opens opsdroid logs with your favorite editor'\n ' and exits.')\ndef main():\n \"\"\"Opsdroid is a chat bot framework written in Python.\n\n It is designed to be extendable, scalable and simple.\n See https://opsdroid.github.io/ for more information.\n \"\"\"\n check_dependencies()\n\n config = Loader.load_config_file([\n \"configuration.yaml\",\n DEFAULT_CONFIG_PATH,\n \"/etc/opsdroid/configuration.yaml\"\n ])\n configure_lang(config)\n configure_logging(config)\n welcome_message(config)\n\n with OpsDroid(config=config) as opsdroid:\n opsdroid.load()\n opsdroid.run()\n\n\ndef init():\n \"\"\"Enter the application.\"\"\"\n if __name__ == \"__main__\":\n main()\n\n\ninit()\n", "path": "opsdroid/__main__.py"}]}
| 4,036 | 272 |
gh_patches_debug_36295
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-2278
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tab Sales in Admin section with relevant information
Add tab "Sales" in admin section at `https://open-event-dev.herokuapp.com/admin/`
We need to display ticket sales in a suitable way to understand the status of the system. How can this be best achieved. Useful information includes:
- [x] Sales by event
- [x] Sales by organizer, and email of organizer
- [x] Sales by location
- [x] Sale depending on date and period (maybe search option)
- [x] Fees by event
- [x] Status/list of (automatic) emails sent to organizer.
- [x] Status of fees paid (invoice to be sent, pending, late)
- [ ] What other information is useful?
</issue>
<code>
[start of app/models/event_invoice.py]
1 import uuid
2 from datetime import datetime
3 import time
4
5 from app.helpers.helpers import get_count
6 from . import db
7
8 def get_new_identifier():
9 identifier = str(uuid.uuid4())
10 count = get_count(EventInvoice.query.filter_by(identifier=identifier))
11 if count == 0:
12 return identifier
13 else:
14 return get_new_identifier()
15
16 class EventInvoice(db.Model):
17 """
18 Stripe authorization information for an event.
19 """
20 __tablename__ = 'event_invoices'
21
22 id = db.Column(db.Integer, primary_key=True)
23 identifier = db.Column(db.String, unique=True)
24 amount = db.Column(db.Float)
25 address = db.Column(db.String)
26 city = db.Column(db.String)
27 state = db.Column(db.String)
28 country = db.Column(db.String)
29 zipcode = db.Column(db.String)
30
31 user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
32 event_id = db.Column(db.Integer, db.ForeignKey('events.id', ondelete='SET NULL'))
33
34 created_at = db.Column(db.DateTime)
35 completed_at = db.Column(db.DateTime, nullable=True, default=None)
36 transaction_id = db.Column(db.String)
37 paid_via = db.Column(db.String)
38 payment_mode = db.Column(db.String)
39 brand = db.Column(db.String)
40 exp_month = db.Column(db.Integer)
41 exp_year = db.Column(db.Integer)
42 last4 = db.Column(db.String)
43 stripe_token = db.Column(db.String)
44 paypal_token = db.Column(db.String)
45 status = db.Column(db.String)
46
47 event = db.relationship('Event', backref='invoices')
48 user = db.relationship('User', backref='invoices')
49
50 def __init__(self,
51 amount=None,
52 address=None,
53 city=city,
54 state=None,
55 country=None,
56 zipcode=None,
57 transaction_id=None,
58 paid_via=None,
59 user_id=None,
60 event_id=None):
61 self.identifier = get_new_identifier()
62 self.amount = amount
63 self.address = address
64 self.state = state
65 self.country = country
66 self.zipcode = zipcode
67 self.user_id = user_id
68 self.event_id = event_id
69 self.transaction_id = transaction_id
70 self.paid_via = paid_via
71 self.created_at = datetime.utcnow()
72
73 def get_invoice_number(self):
74 return 'I' + str(int(time.mktime(self.created_at.timetuple()))) + '-' + str(self.id)
75
76 def __repr__(self):
77 return '<EventInvoice %r>' % self.stripe_user_id
78
79 def __str__(self):
80 return unicode(self).encode('utf-8')
81
82 def __unicode__(self):
83 return self.stripe_user_id
84
[end of app/models/event_invoice.py]
[start of app/views/admin/super_admin/sales.py]
1 import copy
2 from datetime import datetime
3
4 from flask import request
5 from flask import url_for
6 from flask_admin import expose
7 from werkzeug.exceptions import abort
8 from werkzeug.utils import redirect
9
10 from app import forex
11 from app.helpers.data_getter import DataGetter
12 from app.helpers.payment import get_fee
13 from app.views.admin.super_admin.super_admin_base import SuperAdminBaseView, SALES
14 from app.helpers.ticketing import TicketingManager
15
16 class SuperAdminSalesView(SuperAdminBaseView):
17 PANEL_NAME = SALES
18 display_currency = 'USD'
19
20 @expose('/')
21 def index(self):
22 return redirect(url_for('.sales_by_events_view', path='events'))
23
24 @expose('/fees/')
25 def fees_by_events_view(self):
26 from_date = request.args.get('from_date')
27 to_date = request.args.get('to_date')
28
29 if ('from_date' in request.args and not from_date) or ('to_date' in request.args and not to_date) or \
30 ('from_date' in request.args and 'to_date' not in request.args) or \
31 ('to_date' in request.args and 'from_date' not in request.args):
32
33 return redirect(url_for('.fees_by_events_view'))
34
35 if from_date and to_date:
36 orders = TicketingManager.get_orders(
37 from_date=datetime.strptime(from_date, '%d/%m/%Y'),
38 to_date=datetime.strptime(to_date, '%d/%m/%Y'),
39 status='completed'
40 )
41 else:
42 orders = TicketingManager.get_orders(status='completed')
43
44 events = DataGetter.get_all_events()
45
46 fee_summary = {}
47 for event in events:
48 fee_summary[str(event.id)] = {
49 'name': event.name,
50 'payment_currency': event.payment_currency,
51 'fee_rate': get_fee(event.payment_currency),
52 'fee_amount': 0,
53 'tickets_count': 0
54 }
55
56 fee_total = 0
57 tickets_total = 0
58
59 for order in orders:
60 for order_ticket in order.tickets:
61 fee_summary[str(order.event.id)]['tickets_count'] += order_ticket.quantity
62 tickets_total += order_ticket.quantity
63 ticket = TicketingManager.get_ticket(order_ticket.ticket_id)
64 if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:
65 fee = ticket.price * (get_fee(order.event.payment_currency)/100)
66 fee = forex(order.event.payment_currency, self.display_currency, fee)
67 fee_summary[str(order.event.id)]['fee_amount'] += fee
68 fee_total += fee
69
70 return self.render('/gentelella/admin/super_admin/sales/fees.html',
71 fee_summary=fee_summary,
72 display_currency=self.display_currency,
73 from_date=from_date,
74 to_date=to_date,
75 tickets_total=tickets_total,
76 fee_total=fee_total)
77
78 @expose('/<path>/')
79 def sales_by_events_view(self, path):
80
81 from_date = request.args.get('from_date')
82 to_date = request.args.get('to_date')
83
84 if ('from_date' in request.args and not from_date) or ('to_date' in request.args and not to_date) or \
85 ('from_date' in request.args and 'to_date' not in request.args) or \
86 ('to_date' in request.args and 'from_date' not in request.args):
87
88 return redirect(url_for('.sales_by_events_view', path=path))
89
90 if from_date and to_date:
91 orders = TicketingManager.get_orders(
92 from_date=datetime.strptime(from_date, '%d/%m/%Y'),
93 to_date=datetime.strptime(to_date, '%d/%m/%Y')
94 )
95 else:
96 orders = TicketingManager.get_orders()
97
98 events = DataGetter.get_all_events()
99
100 completed_count = 0
101 completed_amount = 0
102 tickets_count = 0
103
104 orders_summary = {
105 'completed': {
106 'class': 'success',
107 'tickets_count': 0,
108 'orders_count': 0,
109 'total_sales': 0
110 },
111 'pending': {
112 'class': 'warning',
113 'tickets_count': 0,
114 'orders_count': 0,
115 'total_sales': 0
116 },
117 'expired': {
118 'class': 'danger',
119 'tickets_count': 0,
120 'orders_count': 0,
121 'total_sales': 0
122 }
123 }
124
125 tickets_summary_event_wise = {}
126 tickets_summary_organizer_wise = {}
127 tickets_summary_location_wise = {}
128 for event in events:
129 tickets_summary_event_wise[str(event.id)] = {
130 'name': event.name,
131 'payment_currency': event.payment_currency,
132 'completed': {
133 'tickets_count': 0,
134 'sales': 0
135 },
136 'pending': {
137 'tickets_count': 0,
138 'sales': 0
139 },
140 'expired': {
141 'class': 'danger',
142 'tickets_count': 0,
143 'sales': 0
144 }
145 }
146 tickets_summary_organizer_wise[str(event.creator_id)] = \
147 copy.deepcopy(tickets_summary_event_wise[str(event.id)])
148 if event.creator:
149 tickets_summary_organizer_wise[str(event.creator_id)]['name'] = event.creator.email
150
151 tickets_summary_location_wise[unicode(event.searchable_location_name)] = \
152 copy.deepcopy(tickets_summary_event_wise[str(event.id)])
153 tickets_summary_location_wise[unicode(event.searchable_location_name)]['name'] = \
154 event.searchable_location_name
155
156 for order in orders:
157 if order.status == 'initialized':
158 order.status = 'pending'
159 orders_summary[str(order.status)]['orders_count'] += 1
160 orders_summary[str(order.status)]['total_sales'] += forex(order.event.payment_currency,
161 self.display_currency, order.amount)
162 for order_ticket in order.tickets:
163 orders_summary[str(order.status)]['tickets_count'] += order_ticket.quantity
164 ticket = TicketingManager.get_ticket(order_ticket.ticket_id)
165 tickets_summary_event_wise[str(order.event_id)][str(order.status)]['tickets_count'] \
166 += order_ticket.quantity
167 tickets_summary_organizer_wise[str(order.event.creator_id)][str(order.status)]['tickets_count'] \
168 += order_ticket.quantity
169 tickets_summary_location_wise[str(order
170 .event.searchable_location_name)][str(order
171 .status)]['tickets_count'] \
172 += order_ticket.quantity
173
174 if order.paid_via != 'free' and order.amount > 0:
175 tickets_summary_event_wise[str(order.event_id)][str(order.status)]['sales'] += \
176 order_ticket.quantity * ticket.price
177 tickets_summary_organizer_wise[str(order.event.creator_id)][str(order.status)]['sales'] += \
178 order_ticket.quantity * ticket.price
179 tickets_summary_location_wise[str(order.event.
180 searchable_location_name)][str(order.
181 status)]['sales'] += \
182 order_ticket.quantity * ticket.price
183
184 if path == 'events':
185 return self.render('/gentelella/admin/super_admin/sales/by_events.html',
186 tickets_summary=tickets_summary_event_wise,
187 display_currency=self.display_currency,
188 from_date=from_date,
189 to_date=to_date,
190 orders_summary=orders_summary)
191 elif path == 'organizers':
192 return self.render('/gentelella/admin/super_admin/sales/by_organizer.html',
193 tickets_summary=tickets_summary_organizer_wise,
194 display_currency=self.display_currency,
195 from_date=from_date,
196 to_date=to_date,
197 orders_summary=orders_summary)
198 elif path == 'locations':
199 return self.render('/gentelella/admin/super_admin/sales/by_location.html',
200 tickets_summary=tickets_summary_location_wise,
201 display_currency=self.display_currency,
202 from_date=from_date,
203 to_date=to_date,
204 orders_summary=orders_summary)
205
206 else:
207 abort(404)
208
[end of app/views/admin/super_admin/sales.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/models/event_invoice.py b/app/models/event_invoice.py
--- a/app/models/event_invoice.py
+++ b/app/models/event_invoice.py
@@ -69,6 +69,7 @@
self.transaction_id = transaction_id
self.paid_via = paid_via
self.created_at = datetime.utcnow()
+ self.status = 'pending'
def get_invoice_number(self):
return 'I' + str(int(time.mktime(self.created_at.timetuple()))) + '-' + str(self.id)
diff --git a/app/views/admin/super_admin/sales.py b/app/views/admin/super_admin/sales.py
--- a/app/views/admin/super_admin/sales.py
+++ b/app/views/admin/super_admin/sales.py
@@ -1,5 +1,5 @@
import copy
-from datetime import datetime
+from datetime import datetime, timedelta
from flask import request
from flask import url_for
@@ -12,6 +12,7 @@
from app.helpers.payment import get_fee
from app.views.admin.super_admin.super_admin_base import SuperAdminBaseView, SALES
from app.helpers.ticketing import TicketingManager
+from app.helpers.invoicing import InvoicingManager
class SuperAdminSalesView(SuperAdminBaseView):
PANEL_NAME = SALES
@@ -75,6 +76,33 @@
tickets_total=tickets_total,
fee_total=fee_total)
+ @expose('/fees/status/')
+ def fees_status_view(self):
+ from_date = request.args.get('from_date')
+ to_date = request.args.get('to_date')
+
+ if ('from_date' in request.args and not from_date) or ('to_date' in request.args and not to_date) or \
+ ('from_date' in request.args and 'to_date' not in request.args) or \
+ ('to_date' in request.args and 'from_date' not in request.args):
+
+ return redirect(url_for('.fees_status_view'))
+
+ if from_date and to_date:
+ invoices = InvoicingManager.get_invoices(
+ from_date=datetime.strptime(from_date, '%d/%m/%Y'),
+ to_date=datetime.strptime(to_date, '%d/%m/%Y'),
+ )
+ else:
+ invoices = InvoicingManager.get_invoices()
+
+ return self.render('/gentelella/admin/super_admin/sales/fees_status.html',
+ display_currency=self.display_currency,
+ from_date=from_date,
+ current_date=datetime.now(),
+ overdue_date=datetime.now() + timedelta(days=15),
+ invoices=invoices,
+ to_date=to_date)
+
@expose('/<path>/')
def sales_by_events_view(self, path):
|
{"golden_diff": "diff --git a/app/models/event_invoice.py b/app/models/event_invoice.py\n--- a/app/models/event_invoice.py\n+++ b/app/models/event_invoice.py\n@@ -69,6 +69,7 @@\n self.transaction_id = transaction_id\n self.paid_via = paid_via\n self.created_at = datetime.utcnow()\n+ self.status = 'pending'\n \n def get_invoice_number(self):\n return 'I' + str(int(time.mktime(self.created_at.timetuple()))) + '-' + str(self.id)\ndiff --git a/app/views/admin/super_admin/sales.py b/app/views/admin/super_admin/sales.py\n--- a/app/views/admin/super_admin/sales.py\n+++ b/app/views/admin/super_admin/sales.py\n@@ -1,5 +1,5 @@\n import copy\n-from datetime import datetime\n+from datetime import datetime, timedelta\n \n from flask import request\n from flask import url_for\n@@ -12,6 +12,7 @@\n from app.helpers.payment import get_fee\n from app.views.admin.super_admin.super_admin_base import SuperAdminBaseView, SALES\n from app.helpers.ticketing import TicketingManager\n+from app.helpers.invoicing import InvoicingManager\n \n class SuperAdminSalesView(SuperAdminBaseView):\n PANEL_NAME = SALES\n@@ -75,6 +76,33 @@\n tickets_total=tickets_total,\n fee_total=fee_total)\n \n+ @expose('/fees/status/')\n+ def fees_status_view(self):\n+ from_date = request.args.get('from_date')\n+ to_date = request.args.get('to_date')\n+\n+ if ('from_date' in request.args and not from_date) or ('to_date' in request.args and not to_date) or \\\n+ ('from_date' in request.args and 'to_date' not in request.args) or \\\n+ ('to_date' in request.args and 'from_date' not in request.args):\n+\n+ return redirect(url_for('.fees_status_view'))\n+\n+ if from_date and to_date:\n+ invoices = InvoicingManager.get_invoices(\n+ from_date=datetime.strptime(from_date, '%d/%m/%Y'),\n+ to_date=datetime.strptime(to_date, '%d/%m/%Y'),\n+ )\n+ else:\n+ invoices = InvoicingManager.get_invoices()\n+\n+ return self.render('/gentelella/admin/super_admin/sales/fees_status.html',\n+ display_currency=self.display_currency,\n+ from_date=from_date,\n+ current_date=datetime.now(),\n+ overdue_date=datetime.now() + timedelta(days=15),\n+ invoices=invoices,\n+ to_date=to_date)\n+\n @expose('/<path>/')\n def sales_by_events_view(self, path):\n", "issue": "Tab Sales in Admin section with relevant information\nAdd tab \"Sales\" in admin section at `https://open-event-dev.herokuapp.com/admin/`\n\nWe need to display ticket sales in a suitable way to understand the status of the system. How can this be best achieved. Useful information includes:\n- [x] Sales by event\n- [x] Sales by organizer, and email of organizer\n- [x] Sales by location\n- [x] Sale depending on date and period (maybe search option)\n- [x] Fees by event\n- [x] Status/list of (automatic) emails sent to organizer.\n- [x] Status of fees paid (invoice to be sent, pending, late)\n- [ ] What other information is useful?\n\n", "before_files": [{"content": "import uuid\nfrom datetime import datetime\nimport time\n\nfrom app.helpers.helpers import get_count\nfrom . 
import db\n\ndef get_new_identifier():\n identifier = str(uuid.uuid4())\n count = get_count(EventInvoice.query.filter_by(identifier=identifier))\n if count == 0:\n return identifier\n else:\n return get_new_identifier()\n\nclass EventInvoice(db.Model):\n \"\"\"\n Stripe authorization information for an event.\n \"\"\"\n __tablename__ = 'event_invoices'\n\n id = db.Column(db.Integer, primary_key=True)\n identifier = db.Column(db.String, unique=True)\n amount = db.Column(db.Float)\n address = db.Column(db.String)\n city = db.Column(db.String)\n state = db.Column(db.String)\n country = db.Column(db.String)\n zipcode = db.Column(db.String)\n\n user_id = db.Column(db.Integer, db.ForeignKey('user.id'))\n event_id = db.Column(db.Integer, db.ForeignKey('events.id', ondelete='SET NULL'))\n\n created_at = db.Column(db.DateTime)\n completed_at = db.Column(db.DateTime, nullable=True, default=None)\n transaction_id = db.Column(db.String)\n paid_via = db.Column(db.String)\n payment_mode = db.Column(db.String)\n brand = db.Column(db.String)\n exp_month = db.Column(db.Integer)\n exp_year = db.Column(db.Integer)\n last4 = db.Column(db.String)\n stripe_token = db.Column(db.String)\n paypal_token = db.Column(db.String)\n status = db.Column(db.String)\n\n event = db.relationship('Event', backref='invoices')\n user = db.relationship('User', backref='invoices')\n\n def __init__(self,\n amount=None,\n address=None,\n city=city,\n state=None,\n country=None,\n zipcode=None,\n transaction_id=None,\n paid_via=None,\n user_id=None,\n event_id=None):\n self.identifier = get_new_identifier()\n self.amount = amount\n self.address = address\n self.state = state\n self.country = country\n self.zipcode = zipcode\n self.user_id = user_id\n self.event_id = event_id\n self.transaction_id = transaction_id\n self.paid_via = paid_via\n self.created_at = datetime.utcnow()\n\n def get_invoice_number(self):\n return 'I' + str(int(time.mktime(self.created_at.timetuple()))) + '-' + str(self.id)\n\n def __repr__(self):\n return '<EventInvoice %r>' % self.stripe_user_id\n\n def __str__(self):\n return unicode(self).encode('utf-8')\n\n def __unicode__(self):\n return self.stripe_user_id\n", "path": "app/models/event_invoice.py"}, {"content": "import copy\nfrom datetime import datetime\n\nfrom flask import request\nfrom flask import url_for\nfrom flask_admin import expose\nfrom werkzeug.exceptions import abort\nfrom werkzeug.utils import redirect\n\nfrom app import forex\nfrom app.helpers.data_getter import DataGetter\nfrom app.helpers.payment import get_fee\nfrom app.views.admin.super_admin.super_admin_base import SuperAdminBaseView, SALES\nfrom app.helpers.ticketing import TicketingManager\n\nclass SuperAdminSalesView(SuperAdminBaseView):\n PANEL_NAME = SALES\n display_currency = 'USD'\n\n @expose('/')\n def index(self):\n return redirect(url_for('.sales_by_events_view', path='events'))\n\n @expose('/fees/')\n def fees_by_events_view(self):\n from_date = request.args.get('from_date')\n to_date = request.args.get('to_date')\n\n if ('from_date' in request.args and not from_date) or ('to_date' in request.args and not to_date) or \\\n ('from_date' in request.args and 'to_date' not in request.args) or \\\n ('to_date' in request.args and 'from_date' not in request.args):\n\n return redirect(url_for('.fees_by_events_view'))\n\n if from_date and to_date:\n orders = TicketingManager.get_orders(\n from_date=datetime.strptime(from_date, '%d/%m/%Y'),\n to_date=datetime.strptime(to_date, '%d/%m/%Y'),\n status='completed'\n )\n else:\n orders = 
TicketingManager.get_orders(status='completed')\n\n events = DataGetter.get_all_events()\n\n fee_summary = {}\n for event in events:\n fee_summary[str(event.id)] = {\n 'name': event.name,\n 'payment_currency': event.payment_currency,\n 'fee_rate': get_fee(event.payment_currency),\n 'fee_amount': 0,\n 'tickets_count': 0\n }\n\n fee_total = 0\n tickets_total = 0\n\n for order in orders:\n for order_ticket in order.tickets:\n fee_summary[str(order.event.id)]['tickets_count'] += order_ticket.quantity\n tickets_total += order_ticket.quantity\n ticket = TicketingManager.get_ticket(order_ticket.ticket_id)\n if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:\n fee = ticket.price * (get_fee(order.event.payment_currency)/100)\n fee = forex(order.event.payment_currency, self.display_currency, fee)\n fee_summary[str(order.event.id)]['fee_amount'] += fee\n fee_total += fee\n\n return self.render('/gentelella/admin/super_admin/sales/fees.html',\n fee_summary=fee_summary,\n display_currency=self.display_currency,\n from_date=from_date,\n to_date=to_date,\n tickets_total=tickets_total,\n fee_total=fee_total)\n\n @expose('/<path>/')\n def sales_by_events_view(self, path):\n\n from_date = request.args.get('from_date')\n to_date = request.args.get('to_date')\n\n if ('from_date' in request.args and not from_date) or ('to_date' in request.args and not to_date) or \\\n ('from_date' in request.args and 'to_date' not in request.args) or \\\n ('to_date' in request.args and 'from_date' not in request.args):\n\n return redirect(url_for('.sales_by_events_view', path=path))\n\n if from_date and to_date:\n orders = TicketingManager.get_orders(\n from_date=datetime.strptime(from_date, '%d/%m/%Y'),\n to_date=datetime.strptime(to_date, '%d/%m/%Y')\n )\n else:\n orders = TicketingManager.get_orders()\n\n events = DataGetter.get_all_events()\n\n completed_count = 0\n completed_amount = 0\n tickets_count = 0\n\n orders_summary = {\n 'completed': {\n 'class': 'success',\n 'tickets_count': 0,\n 'orders_count': 0,\n 'total_sales': 0\n },\n 'pending': {\n 'class': 'warning',\n 'tickets_count': 0,\n 'orders_count': 0,\n 'total_sales': 0\n },\n 'expired': {\n 'class': 'danger',\n 'tickets_count': 0,\n 'orders_count': 0,\n 'total_sales': 0\n }\n }\n\n tickets_summary_event_wise = {}\n tickets_summary_organizer_wise = {}\n tickets_summary_location_wise = {}\n for event in events:\n tickets_summary_event_wise[str(event.id)] = {\n 'name': event.name,\n 'payment_currency': event.payment_currency,\n 'completed': {\n 'tickets_count': 0,\n 'sales': 0\n },\n 'pending': {\n 'tickets_count': 0,\n 'sales': 0\n },\n 'expired': {\n 'class': 'danger',\n 'tickets_count': 0,\n 'sales': 0\n }\n }\n tickets_summary_organizer_wise[str(event.creator_id)] = \\\n copy.deepcopy(tickets_summary_event_wise[str(event.id)])\n if event.creator:\n tickets_summary_organizer_wise[str(event.creator_id)]['name'] = event.creator.email\n\n tickets_summary_location_wise[unicode(event.searchable_location_name)] = \\\n copy.deepcopy(tickets_summary_event_wise[str(event.id)])\n tickets_summary_location_wise[unicode(event.searchable_location_name)]['name'] = \\\n event.searchable_location_name\n\n for order in orders:\n if order.status == 'initialized':\n order.status = 'pending'\n orders_summary[str(order.status)]['orders_count'] += 1\n orders_summary[str(order.status)]['total_sales'] += forex(order.event.payment_currency,\n self.display_currency, order.amount)\n for order_ticket in order.tickets:\n orders_summary[str(order.status)]['tickets_count'] 
+= order_ticket.quantity\n ticket = TicketingManager.get_ticket(order_ticket.ticket_id)\n tickets_summary_event_wise[str(order.event_id)][str(order.status)]['tickets_count'] \\\n += order_ticket.quantity\n tickets_summary_organizer_wise[str(order.event.creator_id)][str(order.status)]['tickets_count'] \\\n += order_ticket.quantity\n tickets_summary_location_wise[str(order\n .event.searchable_location_name)][str(order\n .status)]['tickets_count'] \\\n += order_ticket.quantity\n\n if order.paid_via != 'free' and order.amount > 0:\n tickets_summary_event_wise[str(order.event_id)][str(order.status)]['sales'] += \\\n order_ticket.quantity * ticket.price\n tickets_summary_organizer_wise[str(order.event.creator_id)][str(order.status)]['sales'] += \\\n order_ticket.quantity * ticket.price\n tickets_summary_location_wise[str(order.event.\n searchable_location_name)][str(order.\n status)]['sales'] += \\\n order_ticket.quantity * ticket.price\n\n if path == 'events':\n return self.render('/gentelella/admin/super_admin/sales/by_events.html',\n tickets_summary=tickets_summary_event_wise,\n display_currency=self.display_currency,\n from_date=from_date,\n to_date=to_date,\n orders_summary=orders_summary)\n elif path == 'organizers':\n return self.render('/gentelella/admin/super_admin/sales/by_organizer.html',\n tickets_summary=tickets_summary_organizer_wise,\n display_currency=self.display_currency,\n from_date=from_date,\n to_date=to_date,\n orders_summary=orders_summary)\n elif path == 'locations':\n return self.render('/gentelella/admin/super_admin/sales/by_location.html',\n tickets_summary=tickets_summary_location_wise,\n display_currency=self.display_currency,\n from_date=from_date,\n to_date=to_date,\n orders_summary=orders_summary)\n\n else:\n abort(404)\n", "path": "app/views/admin/super_admin/sales.py"}]}
| 3,682 | 595 |
gh_patches_debug_7776
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-4349
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect RTCP SR + RR parsing
### Brief description
The RTCP parser fails to handle a packet that contains both Sender Report and Received Report, which is the most common data for a two-way session.
It seems that the "sender_info" info contain a payload, this should be parsed as a ReceptionReport info
Incorrect behavior demonstrated in UTS here: https://github.com/secdev/scapy/commit/0bb9db2932d91d2f6e057caea60db78a2ad54f96
### Scapy version
main
### Python version
3.10
### Operating system
Linux 5.15.146
### Additional environment information
_No response_
### How to reproduce
Run tests on provided branch:
`test/run_tests -P "load_contrib('rtcp')" -t test/contrib/rtcp.uts -F`
### Actual result
Demo test should fail.
ReceptionReport after SenderInfo should be parsed. SenderInfo should never have a payload, it's a fixed-sized struct
### Expected result
The commented asserts should pass instead
### Related resources
https://datatracker.ietf.org/doc/html/rfc3550
</issue>
<code>
[start of scapy/contrib/rtcp.py]
1 # SPDX-License-Identifier: GPL-2.0-only
2 # This file is part of Scapy
3 # See https://scapy.net/ for more information
4 # Copyright (C) Pavel Oborin <[email protected]>
5
6 # RFC 3550
7 # scapy.contrib.description = Real-Time Transport Control Protocol
8 # scapy.contrib.status = loads
9
10 """
11 RTCP (rfc 3550)
12
13 Use bind_layers(UDP, RTCP, dport=...) to start using it
14 """
15
16 import struct
17
18 from scapy.packet import Packet
19 from scapy.fields import (
20 BitField,
21 BitFieldLenField,
22 ByteEnumField,
23 ByteField,
24 ConditionalField,
25 FieldLenField,
26 IntField,
27 LenField,
28 LongField,
29 PacketField,
30 PacketListField,
31 StrLenField,
32 X3BytesField,
33 )
34
35
36 _rtcp_packet_types = {
37 200: 'Sender report',
38 201: 'Receiver report',
39 202: 'Source description',
40 203: 'BYE',
41 204: 'APP'
42 }
43
44
45 class SenderInfo(Packet):
46 name = "Sender info"
47 fields_desc = [
48 LongField('ntp_timestamp', None),
49 IntField('rtp_timestamp', None),
50 IntField('sender_packet_count', None),
51 IntField('sender_octet_count', None)
52 ]
53
54
55 class ReceptionReport(Packet):
56 name = "Reception report"
57 fields_desc = [
58 IntField('sourcesync', None),
59 ByteField('fraction_lost', None),
60 X3BytesField('cumulative_lost', None),
61 IntField('highest_seqnum_recv', None),
62 IntField('interarrival_jitter', None),
63 IntField('last_SR_timestamp', None),
64 IntField('delay_since_last_SR', None)
65 ]
66
67
68 _sdes_chunk_types = {
69 0: "END",
70 1: "CNAME",
71 2: "NAME",
72 3: "EMAIL",
73 4: "PHONE",
74 5: "LOC",
75 6: "TOOL",
76 7: "NOTE",
77 8: "PRIV"
78 }
79
80
81 class SDESItem(Packet):
82 name = "SDES item"
83 fields_desc = [
84 ByteEnumField('chunk_type', None, _sdes_chunk_types),
85 FieldLenField('length', None, fmt='!b', length_of='value'),
86 StrLenField('value', None, length_from=lambda pkt: pkt.length)
87 ]
88
89 def extract_padding(self, p):
90 return "", p
91
92
93 class SDESChunk(Packet):
94 name = "SDES chunk"
95 fields_desc = [
96 IntField('sourcesync', None),
97 PacketListField(
98 'items', None,
99 next_cls_cb=(
100 lambda x, y, p, z: None if (p and p.chunk_type == 0) else SDESItem
101 )
102 )
103 ]
104
105
106 class RTCP(Packet):
107 name = "RTCP"
108
109 fields_desc = [
110 # HEADER
111 BitField('version', 2, 2),
112 BitField('padding', 0, 1),
113 BitFieldLenField('count', 0, 5, count_of='report_blocks'),
114 ByteEnumField('packet_type', 0, _rtcp_packet_types),
115 LenField('length', None, fmt='!h'),
116 # SR/RR
117 ConditionalField(
118 IntField('sourcesync', 0),
119 lambda pkt: pkt.packet_type in (200, 201)
120 ),
121 ConditionalField(
122 PacketField('sender_info', SenderInfo(), SenderInfo),
123 lambda pkt: pkt.packet_type == 200
124 ),
125 ConditionalField(
126 PacketListField('report_blocks', None, pkt_cls=ReceptionReport,
127 count_from=lambda pkt: pkt.count),
128 lambda pkt: pkt.packet_type in (200, 201)
129 ),
130 # SDES
131 ConditionalField(
132 PacketListField('sdes_chunks', None, pkt_cls=SDESChunk,
133 count_from=lambda pkt: pkt.count),
134 lambda pkt: pkt.packet_type == 202
135 ),
136 ]
137
138 def post_build(self, pkt, pay):
139 pkt += pay
140 if self.length is None:
141 pkt = pkt[:2] + struct.pack("!h", len(pkt) // 4 - 1) + pkt[4:]
142 return pkt
143
[end of scapy/contrib/rtcp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scapy/contrib/rtcp.py b/scapy/contrib/rtcp.py
--- a/scapy/contrib/rtcp.py
+++ b/scapy/contrib/rtcp.py
@@ -51,6 +51,9 @@
IntField('sender_octet_count', None)
]
+ def extract_padding(self, p):
+ return "", p
+
class ReceptionReport(Packet):
name = "Reception report"
@@ -64,6 +67,9 @@
IntField('delay_since_last_SR', None)
]
+ def extract_padding(self, p):
+ return "", p
+
_sdes_chunk_types = {
0: "END",
|
{"golden_diff": "diff --git a/scapy/contrib/rtcp.py b/scapy/contrib/rtcp.py\n--- a/scapy/contrib/rtcp.py\n+++ b/scapy/contrib/rtcp.py\n@@ -51,6 +51,9 @@\n IntField('sender_octet_count', None)\n ]\n \n+ def extract_padding(self, p):\n+ return \"\", p\n+\n \n class ReceptionReport(Packet):\n name = \"Reception report\"\n@@ -64,6 +67,9 @@\n IntField('delay_since_last_SR', None)\n ]\n \n+ def extract_padding(self, p):\n+ return \"\", p\n+\n \n _sdes_chunk_types = {\n 0: \"END\",\n", "issue": "Incorrect RTCP SR + RR parsing\n### Brief description\n\nThe RTCP parser fails to handle a packet that contains both Sender Report and Received Report, which is is the most common data for a two-way session.\r\n\r\nIt seems that the \"sender_info\" info contain a payload, this should be parsed as a ReceptionReport info\r\n\r\nIncorrect behavior demonstrated in UTS here: https://github.com/secdev/scapy/commit/0bb9db2932d91d2f6e057caea60db78a2ad54f96\n\n### Scapy version\n\nmain\n\n### Python version\n\n3.10\n\n### Operating system\n\nLinux 5.15.146\n\n### Additional environment information\n\n_No response_\n\n### How to reproduce\n\nRun tests on provided branch:\r\n\r\n`test/run_tests -P \"load_contrib('rtcp')\" -t test/contrib/rtcp.uts -F`\r\n\r\n\n\n### Actual result\n\nDemo test should fail.\r\n\r\nReceptionReport after SenderInfo should be parsed. SenderInfo should never have a payload, it's a fixed-sized struct\n\n### Expected result\n\nThe commented asserts should pass instead\n\n### Related resources\n\nhttps://datatracker.ietf.org/doc/html/rfc3550\n", "before_files": [{"content": "# SPDX-License-Identifier: GPL-2.0-only\n# This file is part of Scapy\n# See https://scapy.net/ for more information\n# Copyright (C) Pavel Oborin <[email protected]>\n\n# RFC 3550\n# scapy.contrib.description = Real-Time Transport Control Protocol\n# scapy.contrib.status = loads\n\n\"\"\"\nRTCP (rfc 3550)\n\nUse bind_layers(UDP, RTCP, dport=...) 
to start using it\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet\nfrom scapy.fields import (\n BitField,\n BitFieldLenField,\n ByteEnumField,\n ByteField,\n ConditionalField,\n FieldLenField,\n IntField,\n LenField,\n LongField,\n PacketField,\n PacketListField,\n StrLenField,\n X3BytesField,\n)\n\n\n_rtcp_packet_types = {\n 200: 'Sender report',\n 201: 'Receiver report',\n 202: 'Source description',\n 203: 'BYE',\n 204: 'APP'\n}\n\n\nclass SenderInfo(Packet):\n name = \"Sender info\"\n fields_desc = [\n LongField('ntp_timestamp', None),\n IntField('rtp_timestamp', None),\n IntField('sender_packet_count', None),\n IntField('sender_octet_count', None)\n ]\n\n\nclass ReceptionReport(Packet):\n name = \"Reception report\"\n fields_desc = [\n IntField('sourcesync', None),\n ByteField('fraction_lost', None),\n X3BytesField('cumulative_lost', None),\n IntField('highest_seqnum_recv', None),\n IntField('interarrival_jitter', None),\n IntField('last_SR_timestamp', None),\n IntField('delay_since_last_SR', None)\n ]\n\n\n_sdes_chunk_types = {\n 0: \"END\",\n 1: \"CNAME\",\n 2: \"NAME\",\n 3: \"EMAIL\",\n 4: \"PHONE\",\n 5: \"LOC\",\n 6: \"TOOL\",\n 7: \"NOTE\",\n 8: \"PRIV\"\n}\n\n\nclass SDESItem(Packet):\n name = \"SDES item\"\n fields_desc = [\n ByteEnumField('chunk_type', None, _sdes_chunk_types),\n FieldLenField('length', None, fmt='!b', length_of='value'),\n StrLenField('value', None, length_from=lambda pkt: pkt.length)\n ]\n\n def extract_padding(self, p):\n return \"\", p\n\n\nclass SDESChunk(Packet):\n name = \"SDES chunk\"\n fields_desc = [\n IntField('sourcesync', None),\n PacketListField(\n 'items', None,\n next_cls_cb=(\n lambda x, y, p, z: None if (p and p.chunk_type == 0) else SDESItem\n )\n )\n ]\n\n\nclass RTCP(Packet):\n name = \"RTCP\"\n\n fields_desc = [\n # HEADER\n BitField('version', 2, 2),\n BitField('padding', 0, 1),\n BitFieldLenField('count', 0, 5, count_of='report_blocks'),\n ByteEnumField('packet_type', 0, _rtcp_packet_types),\n LenField('length', None, fmt='!h'),\n # SR/RR\n ConditionalField(\n IntField('sourcesync', 0),\n lambda pkt: pkt.packet_type in (200, 201)\n ),\n ConditionalField(\n PacketField('sender_info', SenderInfo(), SenderInfo),\n lambda pkt: pkt.packet_type == 200\n ),\n ConditionalField(\n PacketListField('report_blocks', None, pkt_cls=ReceptionReport,\n count_from=lambda pkt: pkt.count),\n lambda pkt: pkt.packet_type in (200, 201)\n ),\n # SDES\n ConditionalField(\n PacketListField('sdes_chunks', None, pkt_cls=SDESChunk,\n count_from=lambda pkt: pkt.count),\n lambda pkt: pkt.packet_type == 202\n ),\n ]\n\n def post_build(self, pkt, pay):\n pkt += pay\n if self.length is None:\n pkt = pkt[:2] + struct.pack(\"!h\", len(pkt) // 4 - 1) + pkt[4:]\n return pkt\n", "path": "scapy/contrib/rtcp.py"}]}
| 2,103 | 157 |
gh_patches_debug_4463
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-464
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
logging breadcrumb collection time is incorrect

I found the breadcrumbs timestamp converted to the local time zone
datetime.datetime.fromtimestamp(record.created)
> ```python
> # /sentry_sdk/integrations/logging.py:90
> def _breadcrumb_from_record(record):
> # type: (LogRecord) -> Dict[str, Any]
> return {
> "ty": "log",
> "level": _logging_to_event_level(record.levelname),
> "category": record.name,
> "message": record.message,
> "timestamp": datetime.datetime.fromtimestamp(record.created),
> "data": _extra_from_record(record),
> }
> ```
</issue>
<code>
[start of sentry_sdk/integrations/logging.py]
1 from __future__ import absolute_import
2
3 import logging
4 import datetime
5
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.utils import (
8 to_string,
9 event_from_exception,
10 current_stacktrace,
11 capture_internal_exceptions,
12 )
13 from sentry_sdk.integrations import Integration
14 from sentry_sdk._compat import iteritems
15
16 from sentry_sdk._types import MYPY
17
18 if MYPY:
19 from logging import LogRecord
20 from typing import Any
21 from typing import Dict
22 from typing import Optional
23
24 DEFAULT_LEVEL = logging.INFO
25 DEFAULT_EVENT_LEVEL = logging.ERROR
26
27 _IGNORED_LOGGERS = set(["sentry_sdk.errors"])
28
29
30 def ignore_logger(name):
31 # type: (str) -> None
32 """This disables recording (both in breadcrumbs and as events) calls to
33 a logger of a specific name. Among other uses, many of our integrations
34 use this to prevent their actions being recorded as breadcrumbs. Exposed
35 to users as a way to quiet spammy loggers.
36 """
37 _IGNORED_LOGGERS.add(name)
38
39
40 class LoggingIntegration(Integration):
41 identifier = "logging"
42
43 def __init__(self, level=DEFAULT_LEVEL, event_level=DEFAULT_EVENT_LEVEL):
44 # type: (Optional[int], Optional[int]) -> None
45 self._handler = None
46 self._breadcrumb_handler = None
47
48 if level is not None:
49 self._breadcrumb_handler = BreadcrumbHandler(level=level)
50
51 if event_level is not None:
52 self._handler = EventHandler(level=event_level)
53
54 def _handle_record(self, record):
55 # type: (LogRecord) -> None
56 if self._handler is not None and record.levelno >= self._handler.level:
57 self._handler.handle(record)
58
59 if (
60 self._breadcrumb_handler is not None
61 and record.levelno >= self._breadcrumb_handler.level
62 ):
63 self._breadcrumb_handler.handle(record)
64
65 @staticmethod
66 def setup_once():
67 # type: () -> None
68 old_callhandlers = logging.Logger.callHandlers # type: ignore
69
70 def sentry_patched_callhandlers(self, record):
71 # type: (Any, LogRecord) -> Any
72 try:
73 return old_callhandlers(self, record)
74 finally:
75 # This check is done twice, once also here before we even get
76 # the integration. Otherwise we have a high chance of getting
77 # into a recursion error when the integration is resolved
78 # (this also is slower).
79 if record.name not in _IGNORED_LOGGERS:
80 integration = Hub.current.get_integration(LoggingIntegration)
81 if integration is not None:
82 integration._handle_record(record)
83
84 logging.Logger.callHandlers = sentry_patched_callhandlers # type: ignore
85
86
87 def _can_record(record):
88 # type: (LogRecord) -> bool
89 return record.name not in _IGNORED_LOGGERS
90
91
92 def _breadcrumb_from_record(record):
93 # type: (LogRecord) -> Dict[str, Any]
94 return {
95 "ty": "log",
96 "level": _logging_to_event_level(record.levelname),
97 "category": record.name,
98 "message": record.message,
99 "timestamp": datetime.datetime.fromtimestamp(record.created),
100 "data": _extra_from_record(record),
101 }
102
103
104 def _logging_to_event_level(levelname):
105 # type: (str) -> str
106 return {"critical": "fatal"}.get(levelname.lower(), levelname.lower())
107
108
109 COMMON_RECORD_ATTRS = frozenset(
110 (
111 "args",
112 "created",
113 "exc_info",
114 "exc_text",
115 "filename",
116 "funcName",
117 "levelname",
118 "levelno",
119 "linenno",
120 "lineno",
121 "message",
122 "module",
123 "msecs",
124 "msg",
125 "name",
126 "pathname",
127 "process",
128 "processName",
129 "relativeCreated",
130 "stack",
131 "tags",
132 "thread",
133 "threadName",
134 )
135 )
136
137
138 def _extra_from_record(record):
139 # type: (LogRecord) -> Dict[str, None]
140 return {
141 k: v
142 for k, v in iteritems(vars(record))
143 if k not in COMMON_RECORD_ATTRS
144 and (not isinstance(k, str) or not k.startswith("_"))
145 }
146
147
148 class EventHandler(logging.Handler, object):
149 def emit(self, record):
150 # type: (LogRecord) -> Any
151 with capture_internal_exceptions():
152 self.format(record)
153 return self._emit(record)
154
155 def _emit(self, record):
156 # type: (LogRecord) -> None
157 if not _can_record(record):
158 return
159
160 hub = Hub.current
161 if hub.client is None:
162 return
163
164 client_options = hub.client.options
165
166 # exc_info might be None or (None, None, None)
167 if record.exc_info is not None and record.exc_info[0] is not None:
168 event, hint = event_from_exception(
169 record.exc_info,
170 client_options=client_options,
171 mechanism={"type": "logging", "handled": True},
172 )
173 elif record.exc_info and record.exc_info[0] is None:
174 event = {}
175 hint = {}
176 with capture_internal_exceptions():
177 event["threads"] = {
178 "values": [
179 {
180 "stacktrace": current_stacktrace(
181 client_options["with_locals"]
182 ),
183 "crashed": False,
184 "current": True,
185 }
186 ]
187 }
188 else:
189 event = {}
190 hint = {}
191
192 hint["log_record"] = record
193
194 event["level"] = _logging_to_event_level(record.levelname)
195 event["logger"] = record.name
196 event["logentry"] = {"message": to_string(record.msg), "params": record.args}
197 event["extra"] = _extra_from_record(record)
198
199 hub.capture_event(event, hint=hint)
200
201
202 # Legacy name
203 SentryHandler = EventHandler
204
205
206 class BreadcrumbHandler(logging.Handler, object):
207 def emit(self, record):
208 # type: (LogRecord) -> Any
209 with capture_internal_exceptions():
210 self.format(record)
211 return self._emit(record)
212
213 def _emit(self, record):
214 # type: (LogRecord) -> None
215 if not _can_record(record):
216 return
217
218 Hub.current.add_breadcrumb(
219 _breadcrumb_from_record(record), hint={"log_record": record}
220 )
221
[end of sentry_sdk/integrations/logging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/logging.py b/sentry_sdk/integrations/logging.py
--- a/sentry_sdk/integrations/logging.py
+++ b/sentry_sdk/integrations/logging.py
@@ -96,7 +96,7 @@
"level": _logging_to_event_level(record.levelname),
"category": record.name,
"message": record.message,
- "timestamp": datetime.datetime.fromtimestamp(record.created),
+ "timestamp": datetime.datetime.utcfromtimestamp(record.created),
"data": _extra_from_record(record),
}
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/logging.py b/sentry_sdk/integrations/logging.py\n--- a/sentry_sdk/integrations/logging.py\n+++ b/sentry_sdk/integrations/logging.py\n@@ -96,7 +96,7 @@\n \"level\": _logging_to_event_level(record.levelname),\n \"category\": record.name,\n \"message\": record.message,\n- \"timestamp\": datetime.datetime.fromtimestamp(record.created),\n+ \"timestamp\": datetime.datetime.utcfromtimestamp(record.created),\n \"data\": _extra_from_record(record),\n }\n", "issue": "logging breadcrumb collection time is incorrect\n\r\n\r\n\r\nI found the breadcrumbs timestamp converted to the local time zone\r\ndatetime.datetime.fromtimestamp(record.created)\r\n\r\n> ```python\r\n> # /sentry_sdk/integrations/logging.py:90\r\n> def _breadcrumb_from_record(record):\r\n> # type: (LogRecord) -> Dict[str, Any]\r\n> return {\r\n> \"ty\": \"log\",\r\n> \"level\": _logging_to_event_level(record.levelname),\r\n> \"category\": record.name,\r\n> \"message\": record.message,\r\n> \"timestamp\": datetime.datetime.fromtimestamp(record.created),\r\n> \"data\": _extra_from_record(record),\r\n> }\r\n> ```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport datetime\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import (\n to_string,\n event_from_exception,\n current_stacktrace,\n capture_internal_exceptions,\n)\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk._compat import iteritems\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from logging import LogRecord\n from typing import Any\n from typing import Dict\n from typing import Optional\n\nDEFAULT_LEVEL = logging.INFO\nDEFAULT_EVENT_LEVEL = logging.ERROR\n\n_IGNORED_LOGGERS = set([\"sentry_sdk.errors\"])\n\n\ndef ignore_logger(name):\n # type: (str) -> None\n \"\"\"This disables recording (both in breadcrumbs and as events) calls to\n a logger of a specific name. Among other uses, many of our integrations\n use this to prevent their actions being recorded as breadcrumbs. Exposed\n to users as a way to quiet spammy loggers.\n \"\"\"\n _IGNORED_LOGGERS.add(name)\n\n\nclass LoggingIntegration(Integration):\n identifier = \"logging\"\n\n def __init__(self, level=DEFAULT_LEVEL, event_level=DEFAULT_EVENT_LEVEL):\n # type: (Optional[int], Optional[int]) -> None\n self._handler = None\n self._breadcrumb_handler = None\n\n if level is not None:\n self._breadcrumb_handler = BreadcrumbHandler(level=level)\n\n if event_level is not None:\n self._handler = EventHandler(level=event_level)\n\n def _handle_record(self, record):\n # type: (LogRecord) -> None\n if self._handler is not None and record.levelno >= self._handler.level:\n self._handler.handle(record)\n\n if (\n self._breadcrumb_handler is not None\n and record.levelno >= self._breadcrumb_handler.level\n ):\n self._breadcrumb_handler.handle(record)\n\n @staticmethod\n def setup_once():\n # type: () -> None\n old_callhandlers = logging.Logger.callHandlers # type: ignore\n\n def sentry_patched_callhandlers(self, record):\n # type: (Any, LogRecord) -> Any\n try:\n return old_callhandlers(self, record)\n finally:\n # This check is done twice, once also here before we even get\n # the integration. 
Otherwise we have a high chance of getting\n # into a recursion error when the integration is resolved\n # (this also is slower).\n if record.name not in _IGNORED_LOGGERS:\n integration = Hub.current.get_integration(LoggingIntegration)\n if integration is not None:\n integration._handle_record(record)\n\n logging.Logger.callHandlers = sentry_patched_callhandlers # type: ignore\n\n\ndef _can_record(record):\n # type: (LogRecord) -> bool\n return record.name not in _IGNORED_LOGGERS\n\n\ndef _breadcrumb_from_record(record):\n # type: (LogRecord) -> Dict[str, Any]\n return {\n \"ty\": \"log\",\n \"level\": _logging_to_event_level(record.levelname),\n \"category\": record.name,\n \"message\": record.message,\n \"timestamp\": datetime.datetime.fromtimestamp(record.created),\n \"data\": _extra_from_record(record),\n }\n\n\ndef _logging_to_event_level(levelname):\n # type: (str) -> str\n return {\"critical\": \"fatal\"}.get(levelname.lower(), levelname.lower())\n\n\nCOMMON_RECORD_ATTRS = frozenset(\n (\n \"args\",\n \"created\",\n \"exc_info\",\n \"exc_text\",\n \"filename\",\n \"funcName\",\n \"levelname\",\n \"levelno\",\n \"linenno\",\n \"lineno\",\n \"message\",\n \"module\",\n \"msecs\",\n \"msg\",\n \"name\",\n \"pathname\",\n \"process\",\n \"processName\",\n \"relativeCreated\",\n \"stack\",\n \"tags\",\n \"thread\",\n \"threadName\",\n )\n)\n\n\ndef _extra_from_record(record):\n # type: (LogRecord) -> Dict[str, None]\n return {\n k: v\n for k, v in iteritems(vars(record))\n if k not in COMMON_RECORD_ATTRS\n and (not isinstance(k, str) or not k.startswith(\"_\"))\n }\n\n\nclass EventHandler(logging.Handler, object):\n def emit(self, record):\n # type: (LogRecord) -> Any\n with capture_internal_exceptions():\n self.format(record)\n return self._emit(record)\n\n def _emit(self, record):\n # type: (LogRecord) -> None\n if not _can_record(record):\n return\n\n hub = Hub.current\n if hub.client is None:\n return\n\n client_options = hub.client.options\n\n # exc_info might be None or (None, None, None)\n if record.exc_info is not None and record.exc_info[0] is not None:\n event, hint = event_from_exception(\n record.exc_info,\n client_options=client_options,\n mechanism={\"type\": \"logging\", \"handled\": True},\n )\n elif record.exc_info and record.exc_info[0] is None:\n event = {}\n hint = {}\n with capture_internal_exceptions():\n event[\"threads\"] = {\n \"values\": [\n {\n \"stacktrace\": current_stacktrace(\n client_options[\"with_locals\"]\n ),\n \"crashed\": False,\n \"current\": True,\n }\n ]\n }\n else:\n event = {}\n hint = {}\n\n hint[\"log_record\"] = record\n\n event[\"level\"] = _logging_to_event_level(record.levelname)\n event[\"logger\"] = record.name\n event[\"logentry\"] = {\"message\": to_string(record.msg), \"params\": record.args}\n event[\"extra\"] = _extra_from_record(record)\n\n hub.capture_event(event, hint=hint)\n\n\n# Legacy name\nSentryHandler = EventHandler\n\n\nclass BreadcrumbHandler(logging.Handler, object):\n def emit(self, record):\n # type: (LogRecord) -> Any\n with capture_internal_exceptions():\n self.format(record)\n return self._emit(record)\n\n def _emit(self, record):\n # type: (LogRecord) -> None\n if not _can_record(record):\n return\n\n Hub.current.add_breadcrumb(\n _breadcrumb_from_record(record), hint={\"log_record\": record}\n )\n", "path": "sentry_sdk/integrations/logging.py"}]}
| 2,700 | 120 |
gh_patches_debug_1016
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-2068
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs build failing on Pygments lexter warning
Hm. Something related to https://github.com/spatialaudio/nbsphinx/issues/24 is breaking the docs build. We're getting
```pytb
WARNING: Pygments lexer name 'ipython3' is not known
```
for all the notebooks during the docs build and we fail on warnings.
_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/2066#issuecomment-1329937208_
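
As a quick way to see this outside a full docs build, one can ask Pygments directly whether it can resolve the lexer name that nbsphinx requests for notebook cells. This is a hypothetical check, not code from the pyhf repository:

```python
from pygments.lexers import get_lexer_by_name
from pygments.util import ClassNotFound

try:
    get_lexer_by_name("ipython3")  # the lexer name recorded in notebook metadata
except ClassNotFound:
    print("Pygments cannot resolve 'ipython3' in this environment; Sphinx will warn")
else:
    print("'ipython3' resolves; the docs build should not emit this warning")
```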
</issue>
<code>
[start of setup.py]
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow>=2.7.0', # c.f. PR #1962
7 'tensorflow-probability>=0.11.0', # c.f. PR #1657
8 ],
9 'torch': ['torch>=1.10.0'], # c.f. PR #1657
10 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.61,!=0.1.68'], # c.f. PR #1962, Issue #1501
11 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567
12 'minuit': ['iminuit>=2.7.0'], # c.f. PR #1895
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
23 extras_require['test'] = sorted(
24 set(
25 extras_require['backends']
26 + extras_require['xmlio']
27 + extras_require['contrib']
28 + extras_require['shellcomplete']
29 + [
30 'scikit-hep-testdata>=0.4.11',
31 'pytest>=6.0',
32 'coverage[toml]>=6.0.0',
33 'pytest-mock',
34 'requests-mock>=1.9.0',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'papermill~=2.3.4',
40 'scrapbook~=0.5.0',
41 'jupyter',
42 'graphviz',
43 'pytest-socket>=0.2.0', # c.f. PR #1917
44 ]
45 )
46 )
47 extras_require['docs'] = sorted(
48 set(
49 extras_require['xmlio']
50 + extras_require['contrib']
51 + [
52 'sphinx>=5.1.1', # c.f. https://github.com/scikit-hep/pyhf/pull/1926
53 'sphinxcontrib-bibtex~=2.1',
54 'sphinx-click',
55 'sphinx_rtd_theme',
56 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620
57 'ipywidgets',
58 'sphinx-issues',
59 'sphinx-copybutton>=0.3.2',
60 'sphinx-togglebutton>=0.3.0',
61 ]
62 )
63 )
64 extras_require['develop'] = sorted(
65 set(
66 extras_require['docs']
67 + extras_require['test']
68 + [
69 'nbdime',
70 'tbump>=6.7.0',
71 'ipython',
72 'pre-commit',
73 'nox',
74 'check-manifest',
75 'codemetapy>=2.3.0',
76 'twine',
77 ]
78 )
79 )
80 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
81
82
83 setup(
84 extras_require=extras_require,
85 use_scm_version=lambda: {'local_scheme': lambda version: ''},
86 )
87
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -58,6 +58,7 @@
'sphinx-issues',
'sphinx-copybutton>=0.3.2',
'sphinx-togglebutton>=0.3.0',
+ 'ipython!=8.7.0', # c.f. https://github.com/scikit-hep/pyhf/pull/2068
]
)
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -58,6 +58,7 @@\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n 'sphinx-togglebutton>=0.3.0',\n+ 'ipython!=8.7.0', # c.f. https://github.com/scikit-hep/pyhf/pull/2068\n ]\n )\n )\n", "issue": "docs build failing on Pygments lexter warning\nHm. Something related to https://github.com/spatialaudio/nbsphinx/issues/24 is breaking the docs build. We're getting\r\n\r\n```pytb\r\nWARNING: Pygments lexer name 'ipython3' is not known\r\n```\r\n\r\nfor all the notebooks during the docs build and we fail on warnings.\r\n\r\n_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/2066#issuecomment-1329937208_\r\n \n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow>=2.7.0', # c.f. PR #1962\n 'tensorflow-probability>=0.11.0', # c.f. PR #1657\n ],\n 'torch': ['torch>=1.10.0'], # c.f. PR #1657\n 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.61,!=0.1.68'], # c.f. PR #1962, Issue #1501\n 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567\n 'minuit': ['iminuit>=2.7.0'], # c.f. PR #1895\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'scikit-hep-testdata>=0.4.11',\n 'pytest>=6.0',\n 'coverage[toml]>=6.0.0',\n 'pytest-mock',\n 'requests-mock>=1.9.0',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'papermill~=2.3.4',\n 'scrapbook~=0.5.0',\n 'jupyter',\n 'graphviz',\n 'pytest-socket>=0.2.0', # c.f. PR #1917\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'sphinx>=5.1.1', # c.f. https://github.com/scikit-hep/pyhf/pull/1926\n 'sphinxcontrib-bibtex~=2.1',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n 'sphinx-togglebutton>=0.3.0',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['test']\n + [\n 'nbdime',\n 'tbump>=6.7.0',\n 'ipython',\n 'pre-commit',\n 'nox',\n 'check-manifest',\n 'codemetapy>=2.3.0',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,558 | 105 |
gh_patches_debug_9948
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-481
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DatabaseError with ddtrace
```
Traceback (most recent call last):
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/signup/views/signup.py", line 199, in edu_school_reg
token = EDUVerifyToken.objects.create_token(email=email, domain=domain, name=data['name'], plan_family=data['plan_family'], action=action, upgrade_email=upgrade_email, user_id=user_id, school_data=school_data)
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/signup/models.py", line 69, in create_token
token.save()
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/signup/models.py", line 99, in save
super(EDUVerifyToken, self).save(**kwargs)
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/models/base.py", line 734, in save
force_update=force_update, update_fields=update_fields)
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/models/base.py", line 759, in save_base
with transaction.atomic(using=using, savepoint=False):
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/transaction.py", line 150, in __enter__
if not connection.get_autocommit():
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 286, in get_autocommit
self.ensure_connection()
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
self.connect()
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 121, in connect
self.init_connection_state()
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 282, in init_connection_state
with self.cursor() as cursor:
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/ddtrace/contrib/django/db.py", line 35, in cursor
return TracedCursor(tracer, conn, conn._datadog_original_cursor())
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 160, in cursor
self.validate_thread_sharing()
File "/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 421, in validate_thread_sharing
% (self.alias, self._thread_ident, thread.get_ident()))
DatabaseError: DatabaseWrapper objects created in a thread can only be used in that same thread. The object with alias 'default' was created in thread id 40345840 and this is thread id 76693744.
```
ddtrace created a `DatabaseWrapper` in a different thread, and an exception is raised when `save()` is called on the model.
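
To make the traceback above easier to follow, here is a minimal, made-up sketch of the thread-affinity check Django performs; `ConnectionWrapper` is an illustrative stand-in rather than the real `DatabaseWrapper`:

```python
import threading


class ConnectionWrapper:
    """Toy model of Django's thread-sharing validation."""

    def __init__(self, alias="default"):
        self.alias = alias
        # Remember the thread that created this wrapper.
        self._thread_ident = threading.get_ident()

    def cursor(self):
        # Refuse to hand out cursors from any other thread, as in the error above.
        if threading.get_ident() != self._thread_ident:
            raise RuntimeError(
                "%s created in thread %s cannot be used in thread %s"
                % (self.alias, self._thread_ident, threading.get_ident())
            )
        return "cursor"  # placeholder for a real cursor object
```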
Current dependencies
```
gunicorn==19.3
MySQL-python==1.2.5
ddtrace==0.11.0
datadog==0.19.0
```
</issue>
<code>
[start of ddtrace/contrib/django/conf.py]
1 """
2 Settings for Datadog tracer are all namespaced in the DATADOG_TRACE setting.
3 For example your project's `settings.py` file might look like this:
4
5 DATADOG_TRACE = {
6 'TRACER': 'myapp.tracer',
7 }
8
9 This module provides the `setting` object, that is used to access
10 Datadog settings, checking for user settings first, then falling
11 back to the defaults.
12 """
13 from __future__ import unicode_literals
14
15 import os
16 import importlib
17 import logging
18
19 from django.conf import settings as django_settings
20
21 from django.test.signals import setting_changed
22
23
24 log = logging.getLogger(__name__)
25
26 # List of available settings with their defaults
27 DEFAULTS = {
28 'AGENT_HOSTNAME': 'localhost',
29 'AGENT_PORT': 8126,
30 'AUTO_INSTRUMENT': True,
31 'INSTRUMENT_CACHE': True,
32 'INSTRUMENT_DATABASE': True,
33 'INSTRUMENT_TEMPLATE': True,
34 'DEFAULT_DATABASE_PREFIX': '',
35 'DEFAULT_SERVICE': 'django',
36 'ENABLED': True,
37 'DISTRIBUTED_TRACING': False,
38 'TAGS': {},
39 'TRACER': 'ddtrace.tracer',
40 }
41
42 # List of settings that may be in string import notation.
43 IMPORT_STRINGS = (
44 'TRACER',
45 )
46
47 # List of settings that have been removed
48 REMOVED_SETTINGS = ()
49
50
51 def import_from_string(val, setting_name):
52 """
53 Attempt to import a class from a string representation.
54 """
55 try:
56 # Nod to tastypie's use of importlib.
57 parts = val.split('.')
58 module_path, class_name = '.'.join(parts[:-1]), parts[-1]
59 module = importlib.import_module(module_path)
60 return getattr(module, class_name)
61 except (ImportError, AttributeError) as e:
62 msg = 'Could not import "{}" for setting "{}". {}: {}.'.format(
63 val, setting_name,
64 e.__class__.__name__, e
65 )
66
67 raise ImportError(msg)
68
69
70 class DatadogSettings(object):
71 """
72 A settings object, that allows Datadog settings to be accessed as properties.
73 For example:
74
75 from ddtrace.contrib.django.conf import settings
76
77 tracer = settings.TRACER
78
79 Any setting with string import paths will be automatically resolved
80 and return the class, rather than the string literal.
81 """
82 def __init__(self, user_settings=None, defaults=None, import_strings=None):
83 if user_settings:
84 self._user_settings = self.__check_user_settings(user_settings)
85
86 self.defaults = defaults or DEFAULTS
87 if os.environ.get('DATADOG_ENV'):
88 self.defaults['TAGS'].update({'env': os.environ.get('DATADOG_ENV')})
89 if os.environ.get('DATADOG_SERVICE_NAME'):
90 self.defaults['DEFAULT_SERVICE'] = os.environ.get('DATADOG_SERVICE_NAME')
91 if os.environ.get('DATADOG_TRACE_AGENT_HOSTNAME'):
92 self.defaults['AGENT_HOSTNAME'] = os.environ.get('DATADOG_TRACE_AGENT_HOSTNAME')
93 if os.environ.get('DATADOG_TRACE_AGENT_PORT'):
94 # if the agent port is a string, the underlying library that creates the socket
95 # stops working
96 try:
97 port = int(os.environ.get('DATADOG_TRACE_AGENT_PORT'))
98 except ValueError:
99 log.warning('DATADOG_TRACE_AGENT_PORT is not an integer value; default to 8126')
100 else:
101 self.defaults['AGENT_PORT'] = port
102
103 self.import_strings = import_strings or IMPORT_STRINGS
104
105 @property
106 def user_settings(self):
107 if not hasattr(self, '_user_settings'):
108 self._user_settings = getattr(django_settings, 'DATADOG_TRACE', {})
109
110 # TODO[manu]: prevents docs import errors; provide a better implementation
111 if 'ENABLED' not in self._user_settings:
112 self._user_settings['ENABLED'] = not django_settings.DEBUG
113 return self._user_settings
114
115 def __getattr__(self, attr):
116 if attr not in self.defaults:
117 raise AttributeError('Invalid setting: "{}"'.format(attr))
118
119 try:
120 # Check if present in user settings
121 val = self.user_settings[attr]
122 except KeyError:
123 # Otherwise, fall back to defaults
124 val = self.defaults[attr]
125
126 # Coerce import strings into classes
127 if attr in self.import_strings:
128 val = import_from_string(val, attr)
129
130 # Cache the result
131 setattr(self, attr, val)
132 return val
133
134 def __check_user_settings(self, user_settings):
135 SETTINGS_DOC = 'http://pypi.datadoghq.com/trace/docs/#module-ddtrace.contrib.django'
136 for setting in REMOVED_SETTINGS:
137 if setting in user_settings:
138 raise RuntimeError(
139 'The "{}" setting has been removed, check "{}".'.format(setting, SETTINGS_DOC)
140 )
141 return user_settings
142
143
144 settings = DatadogSettings(None, DEFAULTS, IMPORT_STRINGS)
145
146
147 def reload_settings(*args, **kwargs):
148 """
149 Triggers a reload when Django emits the reloading signal
150 """
151 global settings
152 setting, value = kwargs['setting'], kwargs['value']
153 if setting == 'DATADOG_TRACE':
154 settings = DatadogSettings(value, DEFAULTS, IMPORT_STRINGS)
155
156
157 setting_changed.connect(reload_settings)
158
[end of ddtrace/contrib/django/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/contrib/django/conf.py b/ddtrace/contrib/django/conf.py
--- a/ddtrace/contrib/django/conf.py
+++ b/ddtrace/contrib/django/conf.py
@@ -18,8 +18,6 @@
from django.conf import settings as django_settings
-from django.test.signals import setting_changed
-
log = logging.getLogger(__name__)
@@ -152,6 +150,3 @@
setting, value = kwargs['setting'], kwargs['value']
if setting == 'DATADOG_TRACE':
settings = DatadogSettings(value, DEFAULTS, IMPORT_STRINGS)
-
-
-setting_changed.connect(reload_settings)
|
{"golden_diff": "diff --git a/ddtrace/contrib/django/conf.py b/ddtrace/contrib/django/conf.py\n--- a/ddtrace/contrib/django/conf.py\n+++ b/ddtrace/contrib/django/conf.py\n@@ -18,8 +18,6 @@\n \n from django.conf import settings as django_settings\n \n-from django.test.signals import setting_changed\n-\n \n log = logging.getLogger(__name__)\n \n@@ -152,6 +150,3 @@\n setting, value = kwargs['setting'], kwargs['value']\n if setting == 'DATADOG_TRACE':\n settings = DatadogSettings(value, DEFAULTS, IMPORT_STRINGS)\n-\n-\n-setting_changed.connect(reload_settings)\n", "issue": "DatabaseError with ddtrace\n```\r\nTraceback (most recent call last):\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/core/handlers/base.py\", line 132, in get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/signup/views/signup.py\", line 199, in edu_school_reg\r\n token = EDUVerifyToken.objects.create_token(email=email, domain=domain, name=data['name'], plan_family=data['plan_family'], action=action, upgrade_email=upgrade_email, user_id=user_id, school_data=school_data)\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/signup/models.py\", line 69, in create_token\r\n token.save()\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/signup/models.py\", line 99, in save\r\n super(EDUVerifyToken, self).save(**kwargs)\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/models/base.py\", line 734, in save\r\n force_update=force_update, update_fields=update_fields)\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/models/base.py\", line 759, in save_base\r\n with transaction.atomic(using=using, savepoint=False):\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/transaction.py\", line 150, in __enter__\r\n if not connection.get_autocommit():\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py\", line 286, in get_autocommit\r\n self.ensure_connection()\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py\", line 130, in ensure_connection\r\n self.connect()\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py\", line 121, in connect\r\n self.init_connection_state()\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py\", line 282, in init_connection_state\r\n with self.cursor() as cursor:\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/ddtrace/contrib/django/db.py\", line 35, in cursor\r\n return TracedCursor(tracer, conn, conn._datadog_original_cursor())\r\n File 
\"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py\", line 160, in cursor\r\n self.validate_thread_sharing()\r\n File \"/opt/prezi/signup/versions/318-db1a27e562e267603c55ea13e8db46b29d715834/virtualenv/local/lib/python2.7/site-packages/django/db/backends/base/base.py\", line 421, in validate_thread_sharing\r\n % (self.alias, self._thread_ident, thread.get_ident()))\r\nDatabaseError: DatabaseWrapper objects created in a thread can only be used in that same thread. The object with alias 'default' was created in thread id 40345840 and this is thread id 76693744.\r\n```\r\n\r\nddtrace created a `DatabaseWrapper` in a different thread. And an exception is raised when the save called on the model.\r\n\r\nCurrent dependencies\r\n```\r\ngunicorn==19.3\r\nMySQL-python==1.2.5\r\nddtrace==0.11.0\r\ndatadog==0.19.0\r\n```\n", "before_files": [{"content": "\"\"\"\nSettings for Datadog tracer are all namespaced in the DATADOG_TRACE setting.\nFor example your project's `settings.py` file might look like this:\n\nDATADOG_TRACE = {\n 'TRACER': 'myapp.tracer',\n}\n\nThis module provides the `setting` object, that is used to access\nDatadog settings, checking for user settings first, then falling\nback to the defaults.\n\"\"\"\nfrom __future__ import unicode_literals\n\nimport os\nimport importlib\nimport logging\n\nfrom django.conf import settings as django_settings\n\nfrom django.test.signals import setting_changed\n\n\nlog = logging.getLogger(__name__)\n\n# List of available settings with their defaults\nDEFAULTS = {\n 'AGENT_HOSTNAME': 'localhost',\n 'AGENT_PORT': 8126,\n 'AUTO_INSTRUMENT': True,\n 'INSTRUMENT_CACHE': True,\n 'INSTRUMENT_DATABASE': True,\n 'INSTRUMENT_TEMPLATE': True,\n 'DEFAULT_DATABASE_PREFIX': '',\n 'DEFAULT_SERVICE': 'django',\n 'ENABLED': True,\n 'DISTRIBUTED_TRACING': False,\n 'TAGS': {},\n 'TRACER': 'ddtrace.tracer',\n}\n\n# List of settings that may be in string import notation.\nIMPORT_STRINGS = (\n 'TRACER',\n)\n\n# List of settings that have been removed\nREMOVED_SETTINGS = ()\n\n\ndef import_from_string(val, setting_name):\n \"\"\"\n Attempt to import a class from a string representation.\n \"\"\"\n try:\n # Nod to tastypie's use of importlib.\n parts = val.split('.')\n module_path, class_name = '.'.join(parts[:-1]), parts[-1]\n module = importlib.import_module(module_path)\n return getattr(module, class_name)\n except (ImportError, AttributeError) as e:\n msg = 'Could not import \"{}\" for setting \"{}\". 
{}: {}.'.format(\n val, setting_name,\n e.__class__.__name__, e\n )\n\n raise ImportError(msg)\n\n\nclass DatadogSettings(object):\n \"\"\"\n A settings object, that allows Datadog settings to be accessed as properties.\n For example:\n\n from ddtrace.contrib.django.conf import settings\n\n tracer = settings.TRACER\n\n Any setting with string import paths will be automatically resolved\n and return the class, rather than the string literal.\n \"\"\"\n def __init__(self, user_settings=None, defaults=None, import_strings=None):\n if user_settings:\n self._user_settings = self.__check_user_settings(user_settings)\n\n self.defaults = defaults or DEFAULTS\n if os.environ.get('DATADOG_ENV'):\n self.defaults['TAGS'].update({'env': os.environ.get('DATADOG_ENV')})\n if os.environ.get('DATADOG_SERVICE_NAME'):\n self.defaults['DEFAULT_SERVICE'] = os.environ.get('DATADOG_SERVICE_NAME')\n if os.environ.get('DATADOG_TRACE_AGENT_HOSTNAME'):\n self.defaults['AGENT_HOSTNAME'] = os.environ.get('DATADOG_TRACE_AGENT_HOSTNAME')\n if os.environ.get('DATADOG_TRACE_AGENT_PORT'):\n # if the agent port is a string, the underlying library that creates the socket\n # stops working\n try:\n port = int(os.environ.get('DATADOG_TRACE_AGENT_PORT'))\n except ValueError:\n log.warning('DATADOG_TRACE_AGENT_PORT is not an integer value; default to 8126')\n else:\n self.defaults['AGENT_PORT'] = port\n\n self.import_strings = import_strings or IMPORT_STRINGS\n\n @property\n def user_settings(self):\n if not hasattr(self, '_user_settings'):\n self._user_settings = getattr(django_settings, 'DATADOG_TRACE', {})\n\n # TODO[manu]: prevents docs import errors; provide a better implementation\n if 'ENABLED' not in self._user_settings:\n self._user_settings['ENABLED'] = not django_settings.DEBUG\n return self._user_settings\n\n def __getattr__(self, attr):\n if attr not in self.defaults:\n raise AttributeError('Invalid setting: \"{}\"'.format(attr))\n\n try:\n # Check if present in user settings\n val = self.user_settings[attr]\n except KeyError:\n # Otherwise, fall back to defaults\n val = self.defaults[attr]\n\n # Coerce import strings into classes\n if attr in self.import_strings:\n val = import_from_string(val, attr)\n\n # Cache the result\n setattr(self, attr, val)\n return val\n\n def __check_user_settings(self, user_settings):\n SETTINGS_DOC = 'http://pypi.datadoghq.com/trace/docs/#module-ddtrace.contrib.django'\n for setting in REMOVED_SETTINGS:\n if setting in user_settings:\n raise RuntimeError(\n 'The \"{}\" setting has been removed, check \"{}\".'.format(setting, SETTINGS_DOC)\n )\n return user_settings\n\n\nsettings = DatadogSettings(None, DEFAULTS, IMPORT_STRINGS)\n\n\ndef reload_settings(*args, **kwargs):\n \"\"\"\n Triggers a reload when Django emits the reloading signal\n \"\"\"\n global settings\n setting, value = kwargs['setting'], kwargs['value']\n if setting == 'DATADOG_TRACE':\n settings = DatadogSettings(value, DEFAULTS, IMPORT_STRINGS)\n\n\nsetting_changed.connect(reload_settings)\n", "path": "ddtrace/contrib/django/conf.py"}]}
| 3,455 | 146 |
gh_patches_debug_22693
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-1440
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pubsub.list_topics fails when there are no topics
[Offending line](https://github.com/GoogleCloudPlatform/gcloud-python/blob/0910f9979a45af8cc2826dd4c6ff38d9efa5ccec/gcloud/pubsub/client.py#L80). Reproduce via:
``` python
client = pubsub.Client()
>>> client.list_topics()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gcloud/pubsub/client.py", line 80, in list_topics
for resource in resp['topics']]
KeyError: 'topics'
```
@tseaver ISTM we should locate all instances where we assume a key is present and just protect against this. The time between releases behooves us to be "protective" of users. (I realize that we've usually done it this way based on documented outputs.)
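
A minimal sketch of that defensive pattern, written as a hypothetical helper rather than a change to the client itself — a missing `'topics'` key is treated as an empty result instead of being indexed directly:

```python
def topics_from_response(resp):
    # resp.get(...) avoids the KeyError when the API omits the key entirely.
    return list(resp.get('topics', ()))


topics_from_response({})                                       # -> []
topics_from_response({'topics': [{'name': 'projects/p/topics/t1'}]})  # -> one topic dict
```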
</issue>
<code>
[start of gcloud/pubsub/client.py]
1 # Copyright 2015 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Client for interacting with the Google Cloud Pub/Sub API."""
16
17
18 from gcloud.client import JSONClient
19 from gcloud.pubsub.connection import Connection
20 from gcloud.pubsub.subscription import Subscription
21 from gcloud.pubsub.topic import Topic
22
23
24 class Client(JSONClient):
25 """Client to bundle configuration needed for API requests.
26
27 :type project: string
28 :param project: the project which the client acts on behalf of. Will be
29 passed when creating a topic. If not passed,
30 falls back to the default inferred from the environment.
31
32 :type credentials: :class:`oauth2client.client.OAuth2Credentials` or
33 :class:`NoneType`
34 :param credentials: The OAuth2 Credentials to use for the connection
35 owned by this client. If not passed (and if no ``http``
36 object is passed), falls back to the default inferred
37 from the environment.
38
39 :type http: :class:`httplib2.Http` or class that defines ``request()``.
40 :param http: An optional HTTP object to make requests. If not passed, an
41 ``http`` object is created that is bound to the
42 ``credentials`` for the current object.
43 """
44
45 _connection_class = Connection
46
47 def list_topics(self, page_size=None, page_token=None):
48 """List topics for the project associated with this client.
49
50 See:
51 https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list
52
53 :type page_size: int
54 :param page_size: maximum number of topics to return, If not passed,
55 defaults to a value set by the API.
56
57 :type page_token: string
58 :param page_token: opaque marker for the next "page" of topics. If not
59 passed, the API will return the first page of
60 topics.
61
62 :rtype: tuple, (list, str)
63 :returns: list of :class:`gcloud.pubsub.topic.Topic`, plus a
64 "next page token" string: if not None, indicates that
65 more topics can be retrieved with another call (pass that
66 value as ``page_token``).
67 """
68 params = {}
69
70 if page_size is not None:
71 params['pageSize'] = page_size
72
73 if page_token is not None:
74 params['pageToken'] = page_token
75
76 path = '/projects/%s/topics' % (self.project,)
77 resp = self.connection.api_request(method='GET', path=path,
78 query_params=params)
79 topics = [Topic.from_api_repr(resource, self)
80 for resource in resp['topics']]
81 return topics, resp.get('nextPageToken')
82
83 def list_subscriptions(self, page_size=None, page_token=None,
84 topic_name=None):
85 """List subscriptions for the project associated with this client.
86
87 See:
88 https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list
89
90 and (where ``topic_name`` is passed):
91 https://cloud.google.com/pubsub/reference/rest/v1/projects.topics.subscriptions/list
92
93 :type page_size: int
94 :param page_size: maximum number of topics to return, If not passed,
95 defaults to a value set by the API.
96
97 :type page_token: string
98 :param page_token: opaque marker for the next "page" of topics. If not
99 passed, the API will return the first page of
100 topics.
101
102 :type topic_name: string
103 :param topic_name: limit results to subscriptions bound to the given
104 topic.
105
106 :rtype: tuple, (list, str)
107 :returns: list of :class:`gcloud.pubsub.subscription.Subscription`,
108 plus a "next page token" string: if not None, indicates that
109 more topics can be retrieved with another call (pass that
110 value as ``page_token``).
111 """
112 params = {}
113
114 if page_size is not None:
115 params['pageSize'] = page_size
116
117 if page_token is not None:
118 params['pageToken'] = page_token
119
120 if topic_name is None:
121 path = '/projects/%s/subscriptions' % (self.project,)
122 else:
123 path = '/projects/%s/topics/%s/subscriptions' % (self.project,
124 topic_name)
125
126 resp = self.connection.api_request(method='GET', path=path,
127 query_params=params)
128 topics = {}
129 subscriptions = [Subscription.from_api_repr(resource, self,
130 topics=topics)
131 for resource in resp['subscriptions']]
132 return subscriptions, resp.get('nextPageToken')
133
134 def topic(self, name, timestamp_messages=False):
135 """Creates a topic bound to the current client.
136
137 :type name: string
138 :param name: the name of the topic to be constructed.
139
140 :type timestamp_messages: boolean
141 :param timestamp_messages: To be passed to ``Topic`` constructor.
142
143 :rtype: :class:`gcloud.pubsub.topic.Topic`
144 :returns: Topic created with the current client.
145 """
146 return Topic(name, client=self, timestamp_messages=timestamp_messages)
147
[end of gcloud/pubsub/client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gcloud/pubsub/client.py b/gcloud/pubsub/client.py
--- a/gcloud/pubsub/client.py
+++ b/gcloud/pubsub/client.py
@@ -77,7 +77,7 @@
resp = self.connection.api_request(method='GET', path=path,
query_params=params)
topics = [Topic.from_api_repr(resource, self)
- for resource in resp['topics']]
+ for resource in resp.get('topics', ())]
return topics, resp.get('nextPageToken')
def list_subscriptions(self, page_size=None, page_token=None,
@@ -128,7 +128,7 @@
topics = {}
subscriptions = [Subscription.from_api_repr(resource, self,
topics=topics)
- for resource in resp['subscriptions']]
+ for resource in resp.get('subscriptions', ())]
return subscriptions, resp.get('nextPageToken')
def topic(self, name, timestamp_messages=False):
|
{"golden_diff": "diff --git a/gcloud/pubsub/client.py b/gcloud/pubsub/client.py\n--- a/gcloud/pubsub/client.py\n+++ b/gcloud/pubsub/client.py\n@@ -77,7 +77,7 @@\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = [Topic.from_api_repr(resource, self)\n- for resource in resp['topics']]\n+ for resource in resp.get('topics', ())]\n return topics, resp.get('nextPageToken')\n \n def list_subscriptions(self, page_size=None, page_token=None,\n@@ -128,7 +128,7 @@\n topics = {}\n subscriptions = [Subscription.from_api_repr(resource, self,\n topics=topics)\n- for resource in resp['subscriptions']]\n+ for resource in resp.get('subscriptions', ())]\n return subscriptions, resp.get('nextPageToken')\n \n def topic(self, name, timestamp_messages=False):\n", "issue": "Pubsub.list_topics fails when there are no topics\n[Offending line](https://github.com/GoogleCloudPlatform/gcloud-python/blob/0910f9979a45af8cc2826dd4c6ff38d9efa5ccec/gcloud/pubsub/client.py#L80). Reproduce via:\n\n``` python\nclient = pubsub.Client()\n>>> client.list_topics()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"gcloud/pubsub/client.py\", line 80, in list_topics\n for resource in resp['topics']]\nKeyError: 'topics'\n```\n\n@tseaver ISTM we should locate all instances where we assume a key is present and just protect against this. The time between releases behooves us to be \"protective\" of users. (I realize that we've usually done it this way based on documented outputs.)\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Client for interacting with the Google Cloud Pub/Sub API.\"\"\"\n\n\nfrom gcloud.client import JSONClient\nfrom gcloud.pubsub.connection import Connection\nfrom gcloud.pubsub.subscription import Subscription\nfrom gcloud.pubsub.topic import Topic\n\n\nclass Client(JSONClient):\n \"\"\"Client to bundle configuration needed for API requests.\n\n :type project: string\n :param project: the project which the client acts on behalf of. Will be\n passed when creating a topic. If not passed,\n falls back to the default inferred from the environment.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. 
If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n _connection_class = Connection\n\n def list_topics(self, page_size=None, page_token=None):\n \"\"\"List topics for the project associated with this client.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list\n\n :type page_size: int\n :param page_size: maximum number of topics to return, If not passed,\n defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of topics. If not\n passed, the API will return the first page of\n topics.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.pubsub.topic.Topic`, plus a\n \"next page token\" string: if not None, indicates that\n more topics can be retrieved with another call (pass that\n value as ``page_token``).\n \"\"\"\n params = {}\n\n if page_size is not None:\n params['pageSize'] = page_size\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n path = '/projects/%s/topics' % (self.project,)\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = [Topic.from_api_repr(resource, self)\n for resource in resp['topics']]\n return topics, resp.get('nextPageToken')\n\n def list_subscriptions(self, page_size=None, page_token=None,\n topic_name=None):\n \"\"\"List subscriptions for the project associated with this client.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list\n\n and (where ``topic_name`` is passed):\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics.subscriptions/list\n\n :type page_size: int\n :param page_size: maximum number of topics to return, If not passed,\n defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of topics. If not\n passed, the API will return the first page of\n topics.\n\n :type topic_name: string\n :param topic_name: limit results to subscriptions bound to the given\n topic.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.pubsub.subscription.Subscription`,\n plus a \"next page token\" string: if not None, indicates that\n more topics can be retrieved with another call (pass that\n value as ``page_token``).\n \"\"\"\n params = {}\n\n if page_size is not None:\n params['pageSize'] = page_size\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n if topic_name is None:\n path = '/projects/%s/subscriptions' % (self.project,)\n else:\n path = '/projects/%s/topics/%s/subscriptions' % (self.project,\n topic_name)\n\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = {}\n subscriptions = [Subscription.from_api_repr(resource, self,\n topics=topics)\n for resource in resp['subscriptions']]\n return subscriptions, resp.get('nextPageToken')\n\n def topic(self, name, timestamp_messages=False):\n \"\"\"Creates a topic bound to the current client.\n\n :type name: string\n :param name: the name of the topic to be constructed.\n\n :type timestamp_messages: boolean\n :param timestamp_messages: To be passed to ``Topic`` constructor.\n\n :rtype: :class:`gcloud.pubsub.topic.Topic`\n :returns: Topic created with the current client.\n \"\"\"\n return Topic(name, client=self, timestamp_messages=timestamp_messages)\n", "path": "gcloud/pubsub/client.py"}]}
| 2,299 | 206 |
gh_patches_debug_22083
|
rasdani/github-patches
|
git_diff
|
crytic__slither-414
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spelling mistake in detector output
The `uninitialized-local` and `uninitialized-storage` detectors each have a minor spelling mistake in their output. The word *initiali~a~zed* is misspelled.
**Current output**
```(tests/uninitialized_local_variable.sol#4) is a local variable never initialiazed```
```(tests/uninitialized_storage_pointer.sol#10) is a storage variable never initialiazed```
**Expected output**
```(tests/uninitialized_local_variable.sol#4) is a local variable never initialized```
```(tests/uninitialized_storage_pointer.sol#10) is a storage variable never initialized```
</issue>
<code>
[start of slither/detectors/variables/uninitialized_storage_variables.py]
1 """
2 Module detecting uninitialized storage variables
3
4 Recursively explore the CFG to only report uninitialized storage variables that are
5 written before being read
6 """
7
8 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
9
10
11 class UninitializedStorageVars(AbstractDetector):
12 """
13 """
14
15 ARGUMENT = 'uninitialized-storage'
16 HELP = 'Uninitialized storage variables'
17 IMPACT = DetectorClassification.HIGH
18 CONFIDENCE = DetectorClassification.HIGH
19
20 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#uninitialized-storage-variables'
21
22 WIKI_TITLE = 'Uninitialized storage variables'
23 WIKI_DESCRIPTION = 'An uinitialized storage variable will act as a reference to the first state variable, and can override a critical variable.'
24 WIKI_EXPLOIT_SCENARIO = '''
25 ```solidity
26 contract Uninitialized{
27 address owner = msg.sender;
28
29 struct St{
30 uint a;
31 }
32
33 function func() {
34 St st;
35 st.a = 0x0;
36 }
37 }
38 ```
39 Bob calls `func`. As a result, `owner` is override to 0.
40 '''
41
42 WIKI_RECOMMENDATION = 'Initialize all the storage variables.'
43
44 # node.context[self.key] contains the uninitialized storage variables
45 key = "UNINITIALIZEDSTORAGE"
46
47 def _detect_uninitialized(self, function, node, visited):
48 if node in visited:
49 return
50
51 visited = visited + [node]
52
53 fathers_context = []
54
55 for father in node.fathers:
56 if self.key in father.context:
57 fathers_context += father.context[self.key]
58
59 # Exclude paths that dont bring further information
60 if node in self.visited_all_paths:
61 if all(f_c in self.visited_all_paths[node] for f_c in fathers_context):
62 return
63 else:
64 self.visited_all_paths[node] = []
65
66 self.visited_all_paths[node] = list(set(self.visited_all_paths[node] + fathers_context))
67
68 if self.key in node.context:
69 fathers_context += node.context[self.key]
70
71 variables_read = node.variables_read
72 for uninitialized_storage_variable in fathers_context:
73 if uninitialized_storage_variable in variables_read:
74 self.results.append((function, uninitialized_storage_variable))
75
76 # Only save the storage variables that are not yet written
77 uninitialized_storage_variables = list(set(fathers_context) - set(node.variables_written))
78 node.context[self.key] = uninitialized_storage_variables
79
80 for son in node.sons:
81 self._detect_uninitialized(function, son, visited)
82
83
84 def _detect(self):
85 """ Detect uninitialized storage variables
86
87 Recursively visit the calls
88 Returns:
89 dict: [contract name] = set(storage variable uninitialized)
90 """
91 results = []
92
93 self.results = []
94 self.visited_all_paths = {}
95
96 for contract in self.slither.contracts:
97 for function in contract.functions:
98 if function.is_implemented:
99 uninitialized_storage_variables = [v for v in function.local_variables if v.is_storage and v.uninitialized]
100 function.entry_point.context[self.key] = uninitialized_storage_variables
101 self._detect_uninitialized(function, function.entry_point, [])
102
103 for(function, uninitialized_storage_variable) in self.results:
104 info = [uninitialized_storage_variable, " is a storage variable never initialiazed\n"]
105 json = self.generate_result(info)
106 results.append(json)
107
108 return results
109
[end of slither/detectors/variables/uninitialized_storage_variables.py]
[start of slither/detectors/variables/uninitialized_local_variables.py]
1 """
2 Module detecting uninitialized local variables
3
4 Recursively explore the CFG to only report uninitialized local variables that are
5 read before being written
6 """
7
8 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
9
10
11 class UninitializedLocalVars(AbstractDetector):
12 """
13 """
14
15 ARGUMENT = 'uninitialized-local'
16 HELP = 'Uninitialized local variables'
17 IMPACT = DetectorClassification.MEDIUM
18 CONFIDENCE = DetectorClassification.MEDIUM
19
20 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#uninitialized-local-variables'
21
22
23 WIKI_TITLE = 'Uninitialized local variables'
24 WIKI_DESCRIPTION = 'Uninitialized local variables.'
25 WIKI_EXPLOIT_SCENARIO = '''
26 ```solidity
27 contract Uninitialized is Owner{
28 function withdraw() payable public onlyOwner{
29 address to;
30 to.transfer(this.balance)
31 }
32 }
33 ```
34 Bob calls `transfer`. As a result, the ethers are sent to the address 0x0 and are lost.'''
35
36 WIKI_RECOMMENDATION = 'Initialize all the variables. If a variable is meant to be initialized to zero, explicitly set it to zero.'
37
38 key = "UNINITIALIZEDLOCAL"
39
40 def _detect_uninitialized(self, function, node, visited):
41 if node in visited:
42 return
43
44 visited = visited + [node]
45
46 fathers_context = []
47
48 for father in node.fathers:
49 if self.key in father.context:
50 fathers_context += father.context[self.key]
51
52 # Exclude path that dont bring further information
53 if node in self.visited_all_paths:
54 if all(f_c in self.visited_all_paths[node] for f_c in fathers_context):
55 return
56 else:
57 self.visited_all_paths[node] = []
58
59 self.visited_all_paths[node] = list(set(self.visited_all_paths[node] + fathers_context))
60
61 if self.key in node.context:
62 fathers_context += node.context[self.key]
63
64 variables_read = node.variables_read
65 for uninitialized_local_variable in fathers_context:
66 if uninitialized_local_variable in variables_read:
67 self.results.append((function, uninitialized_local_variable))
68
69 # Only save the local variables that are not yet written
70 uninitialized_local_variables = list(set(fathers_context) - set(node.variables_written))
71 node.context[self.key] = uninitialized_local_variables
72
73 for son in node.sons:
74 self._detect_uninitialized(function, son, visited)
75
76
77 def _detect(self):
78 """ Detect uninitialized local variables
79
80 Recursively visit the calls
81 Returns:
82 dict: [contract name] = set(local variable uninitialized)
83 """
84 results = []
85
86 self.results = []
87 self.visited_all_paths = {}
88
89 for contract in self.slither.contracts:
90 for function in contract.functions:
91 if function.is_implemented and function.contract_declarer == contract:
92 if function.contains_assembly:
93 continue
94 # dont consider storage variable, as they are detected by another detector
95 uninitialized_local_variables = [v for v in function.local_variables if not v.is_storage and v.uninitialized]
96 function.entry_point.context[self.key] = uninitialized_local_variables
97 self._detect_uninitialized(function, function.entry_point, [])
98 all_results = list(set(self.results))
99 for(function, uninitialized_local_variable) in all_results:
100
101 info = [uninitialized_local_variable, " is a local variable never initialiazed\n"]
102 json = self.generate_result(info)
103 results.append(json)
104
105 return results
106
[end of slither/detectors/variables/uninitialized_local_variables.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/slither/detectors/variables/uninitialized_local_variables.py b/slither/detectors/variables/uninitialized_local_variables.py
--- a/slither/detectors/variables/uninitialized_local_variables.py
+++ b/slither/detectors/variables/uninitialized_local_variables.py
@@ -98,7 +98,7 @@
all_results = list(set(self.results))
for(function, uninitialized_local_variable) in all_results:
- info = [uninitialized_local_variable, " is a local variable never initialiazed\n"]
+ info = [uninitialized_local_variable, " is a local variable never initialized\n"]
json = self.generate_result(info)
results.append(json)
diff --git a/slither/detectors/variables/uninitialized_storage_variables.py b/slither/detectors/variables/uninitialized_storage_variables.py
--- a/slither/detectors/variables/uninitialized_storage_variables.py
+++ b/slither/detectors/variables/uninitialized_storage_variables.py
@@ -101,7 +101,7 @@
self._detect_uninitialized(function, function.entry_point, [])
for(function, uninitialized_storage_variable) in self.results:
- info = [uninitialized_storage_variable, " is a storage variable never initialiazed\n"]
+ info = [uninitialized_storage_variable, " is a storage variable never initialized\n"]
json = self.generate_result(info)
results.append(json)
|
{"golden_diff": "diff --git a/slither/detectors/variables/uninitialized_local_variables.py b/slither/detectors/variables/uninitialized_local_variables.py\n--- a/slither/detectors/variables/uninitialized_local_variables.py\n+++ b/slither/detectors/variables/uninitialized_local_variables.py\n@@ -98,7 +98,7 @@\n all_results = list(set(self.results))\n for(function, uninitialized_local_variable) in all_results:\n \n- info = [uninitialized_local_variable, \" is a local variable never initialiazed\\n\"]\n+ info = [uninitialized_local_variable, \" is a local variable never initialized\\n\"]\n json = self.generate_result(info)\n results.append(json)\n \ndiff --git a/slither/detectors/variables/uninitialized_storage_variables.py b/slither/detectors/variables/uninitialized_storage_variables.py\n--- a/slither/detectors/variables/uninitialized_storage_variables.py\n+++ b/slither/detectors/variables/uninitialized_storage_variables.py\n@@ -101,7 +101,7 @@\n self._detect_uninitialized(function, function.entry_point, [])\n \n for(function, uninitialized_storage_variable) in self.results:\n- info = [uninitialized_storage_variable, \" is a storage variable never initialiazed\\n\"]\n+ info = [uninitialized_storage_variable, \" is a storage variable never initialized\\n\"]\n json = self.generate_result(info)\n results.append(json)\n", "issue": "Spelling mistake in detector output\nThe `uninitialized-local` and `uninitialized-storage` detectors each have a minor spelling mistake in their output. The word *initiali~a~zed* is misspelled.\r\n\r\n**Current output**\r\n\r\n```(tests/uninitialized_local_variable.sol#4) is a local variable never initialiazed```\r\n```(tests/uninitialized_storage_pointer.sol#10) is a storage variable never initialiazed```\r\n\r\n**Expected output**\r\n\r\n```(tests/uninitialized_local_variable.sol#4) is a local variable never initialized```\r\n```(tests/uninitialized_storage_pointer.sol#10) is a storage variable never initialized```\n", "before_files": [{"content": "\"\"\"\n Module detecting uninitialized storage variables\n\n Recursively explore the CFG to only report uninitialized storage variables that are\n written before being read\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass UninitializedStorageVars(AbstractDetector):\n \"\"\"\n \"\"\"\n\n ARGUMENT = 'uninitialized-storage'\n HELP = 'Uninitialized storage variables'\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#uninitialized-storage-variables'\n\n WIKI_TITLE = 'Uninitialized storage variables'\n WIKI_DESCRIPTION = 'An uinitialized storage variable will act as a reference to the first state variable, and can override a critical variable.'\n WIKI_EXPLOIT_SCENARIO = '''\n```solidity\ncontract Uninitialized{\n address owner = msg.sender;\n\n struct St{\n uint a;\n }\n\n function func() {\n St st;\n st.a = 0x0;\n }\n}\n```\nBob calls `func`. 
As a result, `owner` is override to 0.\n'''\n\n WIKI_RECOMMENDATION = 'Initialize all the storage variables.'\n\n # node.context[self.key] contains the uninitialized storage variables\n key = \"UNINITIALIZEDSTORAGE\"\n\n def _detect_uninitialized(self, function, node, visited):\n if node in visited:\n return\n\n visited = visited + [node]\n\n fathers_context = []\n\n for father in node.fathers:\n if self.key in father.context:\n fathers_context += father.context[self.key]\n\n # Exclude paths that dont bring further information\n if node in self.visited_all_paths:\n if all(f_c in self.visited_all_paths[node] for f_c in fathers_context):\n return\n else:\n self.visited_all_paths[node] = []\n\n self.visited_all_paths[node] = list(set(self.visited_all_paths[node] + fathers_context))\n\n if self.key in node.context:\n fathers_context += node.context[self.key]\n\n variables_read = node.variables_read\n for uninitialized_storage_variable in fathers_context:\n if uninitialized_storage_variable in variables_read:\n self.results.append((function, uninitialized_storage_variable))\n\n # Only save the storage variables that are not yet written\n uninitialized_storage_variables = list(set(fathers_context) - set(node.variables_written))\n node.context[self.key] = uninitialized_storage_variables\n\n for son in node.sons:\n self._detect_uninitialized(function, son, visited)\n\n\n def _detect(self):\n \"\"\" Detect uninitialized storage variables\n\n Recursively visit the calls\n Returns:\n dict: [contract name] = set(storage variable uninitialized)\n \"\"\"\n results = []\n\n self.results = []\n self.visited_all_paths = {}\n\n for contract in self.slither.contracts:\n for function in contract.functions:\n if function.is_implemented:\n uninitialized_storage_variables = [v for v in function.local_variables if v.is_storage and v.uninitialized]\n function.entry_point.context[self.key] = uninitialized_storage_variables\n self._detect_uninitialized(function, function.entry_point, [])\n\n for(function, uninitialized_storage_variable) in self.results:\n info = [uninitialized_storage_variable, \" is a storage variable never initialiazed\\n\"]\n json = self.generate_result(info)\n results.append(json)\n\n return results\n", "path": "slither/detectors/variables/uninitialized_storage_variables.py"}, {"content": "\"\"\"\n Module detecting uninitialized local variables\n\n Recursively explore the CFG to only report uninitialized local variables that are\n read before being written\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass UninitializedLocalVars(AbstractDetector):\n \"\"\"\n \"\"\"\n\n ARGUMENT = 'uninitialized-local'\n HELP = 'Uninitialized local variables'\n IMPACT = DetectorClassification.MEDIUM\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#uninitialized-local-variables'\n\n\n WIKI_TITLE = 'Uninitialized local variables'\n WIKI_DESCRIPTION = 'Uninitialized local variables.'\n WIKI_EXPLOIT_SCENARIO = '''\n```solidity\ncontract Uninitialized is Owner{\n function withdraw() payable public onlyOwner{\n address to;\n to.transfer(this.balance)\n }\n}\n```\nBob calls `transfer`. As a result, the ethers are sent to the address 0x0 and are lost.'''\n\n WIKI_RECOMMENDATION = 'Initialize all the variables. 
If a variable is meant to be initialized to zero, explicitly set it to zero.'\n\n key = \"UNINITIALIZEDLOCAL\"\n\n def _detect_uninitialized(self, function, node, visited):\n if node in visited:\n return\n\n visited = visited + [node]\n\n fathers_context = []\n\n for father in node.fathers:\n if self.key in father.context:\n fathers_context += father.context[self.key]\n\n # Exclude path that dont bring further information\n if node in self.visited_all_paths:\n if all(f_c in self.visited_all_paths[node] for f_c in fathers_context):\n return\n else:\n self.visited_all_paths[node] = []\n\n self.visited_all_paths[node] = list(set(self.visited_all_paths[node] + fathers_context))\n\n if self.key in node.context:\n fathers_context += node.context[self.key]\n\n variables_read = node.variables_read\n for uninitialized_local_variable in fathers_context:\n if uninitialized_local_variable in variables_read:\n self.results.append((function, uninitialized_local_variable))\n\n # Only save the local variables that are not yet written\n uninitialized_local_variables = list(set(fathers_context) - set(node.variables_written))\n node.context[self.key] = uninitialized_local_variables\n\n for son in node.sons:\n self._detect_uninitialized(function, son, visited)\n\n\n def _detect(self):\n \"\"\" Detect uninitialized local variables\n\n Recursively visit the calls\n Returns:\n dict: [contract name] = set(local variable uninitialized)\n \"\"\"\n results = []\n\n self.results = []\n self.visited_all_paths = {}\n\n for contract in self.slither.contracts:\n for function in contract.functions:\n if function.is_implemented and function.contract_declarer == contract:\n if function.contains_assembly:\n continue\n # dont consider storage variable, as they are detected by another detector\n uninitialized_local_variables = [v for v in function.local_variables if not v.is_storage and v.uninitialized]\n function.entry_point.context[self.key] = uninitialized_local_variables\n self._detect_uninitialized(function, function.entry_point, [])\n all_results = list(set(self.results))\n for(function, uninitialized_local_variable) in all_results:\n\n info = [uninitialized_local_variable, \" is a local variable never initialiazed\\n\"]\n json = self.generate_result(info)\n results.append(json)\n\n return results\n", "path": "slither/detectors/variables/uninitialized_local_variables.py"}]}
| 2,637 | 301 |
gh_patches_debug_17421 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1094 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Is there a way to seed experiments?
## ❓ Questions and Help
#### What is your question?
Is there a way to seed experiments? Attempts at using `seed_everything` from pytorch lightning do not appear to work (also with the the workers argument set to True).
#### What's your environment?
- OS: Linux
- Packaging: pip
- Version: 0.5.2
</issue>
<code>
[start of flash/image/segmentation/input.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15 from typing import Any, Dict, List, Optional, Tuple, TYPE_CHECKING, Union
16
17 import torch
18 from pytorch_lightning.utilities import rank_zero_warn
19
20 from flash.core.data.io.input import DataKeys, ImageLabelsMap, Input
21 from flash.core.data.utilities.paths import filter_valid_files, PATH_TYPE
22 from flash.core.data.utilities.samples import to_samples
23 from flash.core.data.utils import image_default_loader
24 from flash.core.integrations.fiftyone.utils import FiftyOneLabelUtilities
25 from flash.core.utilities.imports import _FIFTYONE_AVAILABLE, _TORCHVISION_AVAILABLE, lazy_import
26 from flash.image.data import ImageDeserializer, IMG_EXTENSIONS
27 from flash.image.segmentation.output import SegmentationLabelsOutput
28
29 SampleCollection = None
30 if _FIFTYONE_AVAILABLE:
31 fo = lazy_import("fiftyone")
32 if TYPE_CHECKING:
33 from fiftyone.core.collections import SampleCollection
34 else:
35 fo = None
36
37 if _TORCHVISION_AVAILABLE:
38 import torchvision
39 import torchvision.transforms.functional as FT
40
41
42 class SemanticSegmentationInput(Input):
43 def load_labels_map(
44 self, num_classes: Optional[int] = None, labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None
45 ) -> None:
46 if num_classes is not None:
47 self.num_classes = num_classes
48 labels_map = labels_map or SegmentationLabelsOutput.create_random_labels_map(num_classes)
49
50 if labels_map is not None:
51 self.set_state(ImageLabelsMap(labels_map))
52 self.labels_map = labels_map
53
54 def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
55 sample[DataKeys.INPUT] = sample[DataKeys.INPUT].float()
56 if DataKeys.TARGET in sample:
57 sample[DataKeys.TARGET] = sample[DataKeys.TARGET].float()
58 sample[DataKeys.METADATA] = {"size": sample[DataKeys.INPUT].shape[-2:]}
59 return sample
60
61
62 class SemanticSegmentationTensorInput(SemanticSegmentationInput):
63 def load_data(
64 self,
65 tensor: Any,
66 masks: Any = None,
67 num_classes: Optional[int] = None,
68 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
69 ) -> List[Dict[str, Any]]:
70 self.load_labels_map(num_classes, labels_map)
71 return to_samples(tensor, masks)
72
73
74 class SemanticSegmentationNumpyInput(SemanticSegmentationInput):
75 def load_data(
76 self,
77 array: Any,
78 masks: Any = None,
79 num_classes: Optional[int] = None,
80 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
81 ) -> List[Dict[str, Any]]:
82 self.load_labels_map(num_classes, labels_map)
83 return to_samples(array, masks)
84
85 def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
86 sample[DataKeys.INPUT] = torch.from_numpy(sample[DataKeys.INPUT])
87 if DataKeys.TARGET in sample:
88 sample[DataKeys.TARGET] = torch.from_numpy(sample[DataKeys.TARGET])
89 return super().load_sample(sample)
90
91
92 class SemanticSegmentationFilesInput(SemanticSegmentationInput):
93 def load_data(
94 self,
95 files: Union[PATH_TYPE, List[PATH_TYPE]],
96 mask_files: Optional[Union[PATH_TYPE, List[PATH_TYPE]]] = None,
97 num_classes: Optional[int] = None,
98 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
99 ) -> List[Dict[str, Any]]:
100 self.load_labels_map(num_classes, labels_map)
101 if mask_files is None:
102 files = filter_valid_files(files, valid_extensions=IMG_EXTENSIONS)
103 else:
104 files, masks = filter_valid_files(files, mask_files, valid_extensions=IMG_EXTENSIONS)
105 return to_samples(files, mask_files)
106
107 def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
108 filepath = sample[DataKeys.INPUT]
109 sample[DataKeys.INPUT] = FT.to_tensor(image_default_loader(filepath))
110 if DataKeys.TARGET in sample:
111 sample[DataKeys.TARGET] = torchvision.io.read_image(sample[DataKeys.TARGET])[0]
112 sample = super().load_sample(sample)
113 sample[DataKeys.METADATA]["filepath"] = filepath
114 return sample
115
116
117 class SemanticSegmentationFolderInput(SemanticSegmentationFilesInput):
118 def load_data(
119 self,
120 folder: PATH_TYPE,
121 mask_folder: Optional[PATH_TYPE] = None,
122 num_classes: Optional[int] = None,
123 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
124 ) -> List[Dict[str, Any]]:
125 self.load_labels_map(num_classes, labels_map)
126 files = os.listdir(folder)
127 if mask_folder is not None:
128 mask_files = os.listdir(mask_folder)
129
130 all_files = set(files).intersection(set(mask_files))
131 if len(all_files) != len(files) or len(all_files) != len(mask_files):
132 rank_zero_warn(
133 f"Found inconsistent files in input folder: {folder} and mask folder: {mask_folder}. Some files"
134 " have been dropped.",
135 UserWarning,
136 )
137
138 files = [os.path.join(folder, file) for file in all_files]
139 mask_files = [os.path.join(mask_folder, file) for file in all_files]
140 return super().load_data(files, mask_files)
141 return super().load_data(files)
142
143
144 class SemanticSegmentationFiftyOneInput(SemanticSegmentationFilesInput):
145 def load_data(
146 self,
147 sample_collection: SampleCollection,
148 label_field: str = "ground_truth",
149 num_classes: Optional[int] = None,
150 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
151 ) -> List[Dict[str, Any]]:
152 self.load_labels_map(num_classes, labels_map)
153
154 self.label_field = label_field
155 label_utilities = FiftyOneLabelUtilities(label_field, fo.Segmentation)
156 label_utilities.validate(sample_collection)
157
158 self._fo_dataset_name = sample_collection.name
159 return to_samples(sample_collection.values("filepath"))
160
161 def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
162 filepath = sample[DataKeys.INPUT]
163 sample = super().load_sample(sample)
164 if not self.predicting:
165 fo_dataset = fo.load_dataset(self._fo_dataset_name)
166 fo_sample = fo_dataset[filepath]
167 sample[DataKeys.TARGET] = torch.from_numpy(fo_sample[self.label_field].mask).float() # H x W
168 return sample
169
170
171 class SemanticSegmentationDeserializer(ImageDeserializer):
172 def serve_load_sample(self, data: str) -> Dict[str, Any]:
173 result = super().serve_load_sample(data)
174 result[DataKeys.INPUT] = FT.to_tensor(result[DataKeys.INPUT])
175 result[DataKeys.METADATA] = {"size": result[DataKeys.INPUT].shape[-2:]}
176 return result
177
[end of flash/image/segmentation/input.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flash/image/segmentation/input.py b/flash/image/segmentation/input.py
--- a/flash/image/segmentation/input.py
+++ b/flash/image/segmentation/input.py
@@ -124,6 +124,7 @@
) -> List[Dict[str, Any]]:
self.load_labels_map(num_classes, labels_map)
files = os.listdir(folder)
+ files.sort()
if mask_folder is not None:
mask_files = os.listdir(mask_folder)
@@ -137,6 +138,8 @@
files = [os.path.join(folder, file) for file in all_files]
mask_files = [os.path.join(mask_folder, file) for file in all_files]
+ files.sort()
+ mask_files.sort()
return super().load_data(files, mask_files)
return super().load_data(files)
|
{"golden_diff": "diff --git a/flash/image/segmentation/input.py b/flash/image/segmentation/input.py\n--- a/flash/image/segmentation/input.py\n+++ b/flash/image/segmentation/input.py\n@@ -124,6 +124,7 @@\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n files = os.listdir(folder)\n+ files.sort()\n if mask_folder is not None:\n mask_files = os.listdir(mask_folder)\n \n@@ -137,6 +138,8 @@\n \n files = [os.path.join(folder, file) for file in all_files]\n mask_files = [os.path.join(mask_folder, file) for file in all_files]\n+ files.sort()\n+ mask_files.sort()\n return super().load_data(files, mask_files)\n return super().load_data(files)\n", "issue": "Is there a way to seed experiments?\n## \u2753 Questions and Help\r\n\r\n#### What is your question?\r\nIs there a way to seed experiments? Attempts at using `seed_everything` from pytorch lightning do not appear to work (also with the the workers argument set to True).\r\n\r\n#### What's your environment?\r\n\r\n - OS: Linux\r\n - Packaging: pip\r\n - Version: 0.5.2\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nfrom typing import Any, Dict, List, Optional, Tuple, TYPE_CHECKING, Union\n\nimport torch\nfrom pytorch_lightning.utilities import rank_zero_warn\n\nfrom flash.core.data.io.input import DataKeys, ImageLabelsMap, Input\nfrom flash.core.data.utilities.paths import filter_valid_files, PATH_TYPE\nfrom flash.core.data.utilities.samples import to_samples\nfrom flash.core.data.utils import image_default_loader\nfrom flash.core.integrations.fiftyone.utils import FiftyOneLabelUtilities\nfrom flash.core.utilities.imports import _FIFTYONE_AVAILABLE, _TORCHVISION_AVAILABLE, lazy_import\nfrom flash.image.data import ImageDeserializer, IMG_EXTENSIONS\nfrom flash.image.segmentation.output import SegmentationLabelsOutput\n\nSampleCollection = None\nif _FIFTYONE_AVAILABLE:\n fo = lazy_import(\"fiftyone\")\n if TYPE_CHECKING:\n from fiftyone.core.collections import SampleCollection\nelse:\n fo = None\n\nif _TORCHVISION_AVAILABLE:\n import torchvision\n import torchvision.transforms.functional as FT\n\n\nclass SemanticSegmentationInput(Input):\n def load_labels_map(\n self, num_classes: Optional[int] = None, labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None\n ) -> None:\n if num_classes is not None:\n self.num_classes = num_classes\n labels_map = labels_map or SegmentationLabelsOutput.create_random_labels_map(num_classes)\n\n if labels_map is not None:\n self.set_state(ImageLabelsMap(labels_map))\n self.labels_map = labels_map\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n sample[DataKeys.INPUT] = sample[DataKeys.INPUT].float()\n if DataKeys.TARGET in sample:\n sample[DataKeys.TARGET] = sample[DataKeys.TARGET].float()\n sample[DataKeys.METADATA] = {\"size\": sample[DataKeys.INPUT].shape[-2:]}\n return sample\n\n\nclass SemanticSegmentationTensorInput(SemanticSegmentationInput):\n def load_data(\n self,\n 
tensor: Any,\n masks: Any = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n return to_samples(tensor, masks)\n\n\nclass SemanticSegmentationNumpyInput(SemanticSegmentationInput):\n def load_data(\n self,\n array: Any,\n masks: Any = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n return to_samples(array, masks)\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n sample[DataKeys.INPUT] = torch.from_numpy(sample[DataKeys.INPUT])\n if DataKeys.TARGET in sample:\n sample[DataKeys.TARGET] = torch.from_numpy(sample[DataKeys.TARGET])\n return super().load_sample(sample)\n\n\nclass SemanticSegmentationFilesInput(SemanticSegmentationInput):\n def load_data(\n self,\n files: Union[PATH_TYPE, List[PATH_TYPE]],\n mask_files: Optional[Union[PATH_TYPE, List[PATH_TYPE]]] = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n if mask_files is None:\n files = filter_valid_files(files, valid_extensions=IMG_EXTENSIONS)\n else:\n files, masks = filter_valid_files(files, mask_files, valid_extensions=IMG_EXTENSIONS)\n return to_samples(files, mask_files)\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n filepath = sample[DataKeys.INPUT]\n sample[DataKeys.INPUT] = FT.to_tensor(image_default_loader(filepath))\n if DataKeys.TARGET in sample:\n sample[DataKeys.TARGET] = torchvision.io.read_image(sample[DataKeys.TARGET])[0]\n sample = super().load_sample(sample)\n sample[DataKeys.METADATA][\"filepath\"] = filepath\n return sample\n\n\nclass SemanticSegmentationFolderInput(SemanticSegmentationFilesInput):\n def load_data(\n self,\n folder: PATH_TYPE,\n mask_folder: Optional[PATH_TYPE] = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n files = os.listdir(folder)\n if mask_folder is not None:\n mask_files = os.listdir(mask_folder)\n\n all_files = set(files).intersection(set(mask_files))\n if len(all_files) != len(files) or len(all_files) != len(mask_files):\n rank_zero_warn(\n f\"Found inconsistent files in input folder: {folder} and mask folder: {mask_folder}. 
Some files\"\n \" have been dropped.\",\n UserWarning,\n )\n\n files = [os.path.join(folder, file) for file in all_files]\n mask_files = [os.path.join(mask_folder, file) for file in all_files]\n return super().load_data(files, mask_files)\n return super().load_data(files)\n\n\nclass SemanticSegmentationFiftyOneInput(SemanticSegmentationFilesInput):\n def load_data(\n self,\n sample_collection: SampleCollection,\n label_field: str = \"ground_truth\",\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n\n self.label_field = label_field\n label_utilities = FiftyOneLabelUtilities(label_field, fo.Segmentation)\n label_utilities.validate(sample_collection)\n\n self._fo_dataset_name = sample_collection.name\n return to_samples(sample_collection.values(\"filepath\"))\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n filepath = sample[DataKeys.INPUT]\n sample = super().load_sample(sample)\n if not self.predicting:\n fo_dataset = fo.load_dataset(self._fo_dataset_name)\n fo_sample = fo_dataset[filepath]\n sample[DataKeys.TARGET] = torch.from_numpy(fo_sample[self.label_field].mask).float() # H x W\n return sample\n\n\nclass SemanticSegmentationDeserializer(ImageDeserializer):\n def serve_load_sample(self, data: str) -> Dict[str, Any]:\n result = super().serve_load_sample(data)\n result[DataKeys.INPUT] = FT.to_tensor(result[DataKeys.INPUT])\n result[DataKeys.METADATA] = {\"size\": result[DataKeys.INPUT].shape[-2:]}\n return result\n", "path": "flash/image/segmentation/input.py"}]}
| 2,698 | 190 |