problem_id (string, length 18–22) | source (string, 1 unique value) | task_type (string, 1 unique value) | in_source_id (string, length 13–58) | prompt (string, length 1.71k–18.9k) | golden_diff (string, length 145–5.13k) | verification_info (string, length 465–23.6k) | num_tokens_prompt (int64, 556–4.1k) | num_tokens_diff (int64, 47–1.02k) |
---|---|---|---|---|---|---|---|---|
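The rows below are a preview of individual records; each cell can span many lines, and the `verification_info` column holds a JSON string containing the golden diff, the issue statement, and the pre-patch file contents. A minimal loading sketch follows, assuming the data is published on the Hugging Face Hub under the `rasdani/github-patches` ID shown in the `source` column (the exact repository ID and split name are assumptions, not confirmed by this preview):

```python
import json

from datasets import load_dataset

# Assumed Hub ID and split; adjust to wherever this dataset is actually hosted.
ds = load_dataset("rasdani/github-patches", split="train")

record = ds[0]
print(record["problem_id"], record["in_source_id"])

# verification_info is stored as a JSON string: parse it to reach the
# golden diff, the issue text, and the pre-patch file contents.
info = json.loads(record["verification_info"])
print(info["issue"][:200])
for f in info["before_files"]:
    print(f["path"], len(f["content"]), "chars")
```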
gh_patches_debug_38642 | rasdani/github-patches | git_diff | horovod__horovod-275 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DistributedOptimizer cannot be used when underlying optimizer overwrites compute_gradients and apply_gradients
In our use case we have a TensorFlow optimizer that explicitly defines `compute_gradients` and `apply_gradients` functions. The current version of `DistributedOptimizer` cannot be used in our case because it does not overwrite `apply_gradients` to use` self._optimizer.apply_gradients` and it calls (lines 178-179 of `horovod/tensorflow/__init__.py`)
```python
gradients = super(DistributedOptimizer, self).compute_gradients(*args, **kwargs)
```
instead of `gradients = self._optimizer.compute_gradients(*args, **kwargs)`.
Is it possible to update `DistributedOptimizer` to work in our use case and similar situations when `compute_gradients` and `apply_gradients` are redefined? As far as I can see simply changing lines 178-179 of `horovod/tensorflow/__init__.py` to
```python
gradients = self._optimizer.compute_gradients(*args, **kwargs)
```
and adding additional method:
```python
def apply_gradients(self, grads_and_vars, global_step=None, name=None):
"""Calls this same method on the underlying optimizer."""
return self._optimizer.apply_gradients(grads_and_vars, global_step, name)
```
should make it work.
</issue>
<code>
[start of horovod/__init__.py]
1 __version__ = '0.13.0'
2
[end of horovod/__init__.py]
[start of horovod/tensorflow/__init__.py]
1 # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
2 # Modifications copyright (C) 2017 Uber Technologies, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 # ==============================================================================
16 # pylint: disable=g-short-docstring-punctuation
17 """## Communicating Between Processes with MPI
18
19 TensorFlow natively provides inter-device communication through send and
20 receive ops and inter-node communication through Distributed TensorFlow, based
21 on the same send and receive abstractions. On HPC clusters where Infiniband or
22 other high-speed node interconnects are available, these can end up being
23 insufficient for synchronous data-parallel training (without asynchronous
24 gradient descent). This module implements a variety of MPI ops which can take
25 advantage of hardware-specific MPI libraries for efficient communication.
26 """
27
28 from __future__ import absolute_import
29 from __future__ import division
30 from __future__ import print_function
31
32 from horovod.common import init
33 from horovod.common import size
34 from horovod.common import local_size
35 from horovod.common import rank
36 from horovod.common import local_rank
37 from horovod.common import mpi_threads_supported
38 from horovod.common import check_extension
39
40 check_extension('horovod.tensorflow', 'HOROVOD_WITH_TENSORFLOW', __file__, 'mpi_lib')
41
42 from horovod.tensorflow.mpi_ops import allgather
43 from horovod.tensorflow.mpi_ops import broadcast
44 from horovod.tensorflow.mpi_ops import _allreduce
45
46 import tensorflow as tf
47
48
49 def allreduce(tensor, average=True, device_dense='', device_sparse=''):
50 """Perform an allreduce on a tf.Tensor or tf.IndexedSlices.
51
52 Arguments:
53 tensor: tf.Tensor, tf.Variable, or tf.IndexedSlices to reduce.
54 The shape of the input must be identical across all ranks.
55 average: If True, computes the average over all ranks.
56 Otherwise, computes the sum over all ranks.
57 device_dense: Device to be used for dense tensors. Uses GPU by default
58 if Horovod was build with HOROVOD_GPU_ALLREDUCE.
59 device_sparse: Device to be used for sparse tensors. Uses GPU by default
60 if Horovod was build with HOROVOD_GPU_ALLGATHER.
61
62 This function performs a bandwidth-optimal ring allreduce on the input
63 tensor. If the input is an tf.IndexedSlices, the function instead does an
64 allgather on the values and the indices, effectively doing an allreduce on
65 the represented tensor.
66 """
67 if isinstance(tensor, tf.IndexedSlices):
68 with tf.device(device_sparse):
69 # For IndexedSlices, do two allgathers intead of an allreduce.
70 horovod_size = tf.cast(size(), tensor.values.dtype)
71 values = allgather(tensor.values)
72 indices = allgather(tensor.indices)
73
74 # To make this operation into an average, divide all gathered values by
75 # the Horovod size.
76 new_values = tf.div(values, horovod_size) if average else values
77 return tf.IndexedSlices(new_values, indices,
78 dense_shape=tensor.dense_shape)
79 else:
80 with tf.device(device_dense):
81 horovod_size = tf.cast(size(), tensor.dtype)
82 summed_tensor = _allreduce(tensor)
83 new_tensor = (tf.div(summed_tensor, horovod_size)
84 if average else summed_tensor)
85 return new_tensor
86
87
88 def broadcast_global_variables(root_rank):
89 """Broadcasts all global variables from root rank to all other processes.
90
91 Arguments:
92 root_rank: rank of the process from which global variables will be broadcasted
93 to all other processes.
94 """
95 return tf.group(*[tf.assign(var, broadcast(var, root_rank))
96 for var in tf.global_variables()])
97
98
99 class BroadcastGlobalVariablesHook(tf.train.SessionRunHook):
100 """
101 SessionRunHook that will broadcast all global variables from root rank
102 to all other processes during initialization.
103
104 This is necessary to ensure consistent initialization of all workers when
105 training is started with random weights or restored from a checkpoint.
106 """
107
108 def __init__(self, root_rank, device=''):
109 """Construct a new BroadcastGlobalVariablesHook that will broadcast all
110 global variables from root rank to all other processes during initialization.
111
112 Args:
113 root_rank:
114 Rank that will send data, other ranks will receive data.
115 device:
116 Device to be used for broadcasting. Uses GPU by default
117 if Horovod was build with HOROVOD_GPU_BROADCAST.
118 """
119 super(BroadcastGlobalVariablesHook, self).__init__()
120 self.root_rank = root_rank
121 self.bcast_op = None
122 self.device = device
123
124 def begin(self):
125 if not self.bcast_op or self.bcast_op.graph != tf.get_default_graph():
126 with tf.device(self.device):
127 self.bcast_op = broadcast_global_variables(self.root_rank)
128
129 def after_create_session(self, session, coord):
130 session.run(self.bcast_op)
131
132
133 class DistributedOptimizer(tf.train.Optimizer):
134 """An optimizer that wraps another tf.Optimizer, using an allreduce to
135 average gradient values before applying gradients to model weights."""
136
137 def __init__(self, optimizer, name=None, use_locking=False, device_dense='',
138 device_sparse=''):
139 """Construct a new DistributedOptimizer, which uses another optimizer
140 under the hood for computing single-process gradient values and
141 applying gradient updates after the gradient values have been averaged
142 across all the Horovod ranks.
143
144 Args:
145 optimizer:
146 Optimizer to use for computing gradients and applying updates.
147 name:
148 Optional name prefix for the operations created when applying
149 gradients. Defaults to "Distributed" followed by the provided
150 optimizer type.
151 use_locking:
152 Whether to use locking when updating variables.
153 See Optimizer.__init__ for more info.
154 device_dense:
155 Device to be used for dense tensors. Uses GPU by default
156 if Horovod was build with HOROVOD_GPU_ALLREDUCE.
157 device_sparse:
158 Device to be used for sparse tensors. Uses GPU by default
159 if Horovod was build with HOROVOD_GPU_ALLGATHER.
160 """
161 if name is None:
162 name = "Distributed{}".format(type(optimizer).__name__)
163
164 self._optimizer = optimizer
165 self._device_dense = device_dense
166 self._device_sparse = device_sparse
167 super(DistributedOptimizer, self).__init__(
168 name=name, use_locking=use_locking)
169
170 def compute_gradients(self, *args, **kwargs):
171 """Compute gradients of all trainable variables.
172
173 See Optimizer.compute_gradients() for more info.
174
175 In DistributedOptimizer, compute_gradients() is overriden to also
176 allreduce the gradients before returning them.
177 """
178 gradients = (super(DistributedOptimizer, self)
179 .compute_gradients(*args, **kwargs))
180 if size() > 1:
181 averaged_gradients = []
182 with tf.name_scope(self._name + "_Allreduce"):
183 for grad, var in gradients:
184 if grad is not None:
185 avg_grad = allreduce(grad, device_dense=self._device_dense,
186 device_sparse=self._device_sparse)
187 averaged_gradients.append((avg_grad, var))
188 else:
189 averaged_gradients.append((None, var))
190 return averaged_gradients
191 else:
192 return gradients
193
194 def _apply_dense(self, *args, **kwargs):
195 """Calls this same method on the underlying optimizer."""
196 return self._optimizer._apply_dense(*args, **kwargs)
197
198 def _resource_apply_dense(self, *args, **kwargs):
199 """Calls this same method on the underlying optimizer."""
200 return self._optimizer._resource_apply_dense(*args, **kwargs)
201
202 def _resource_apply_sparse_duplicate_indices(self, *args, **kwargs):
203 """Calls this same method on the underlying optimizer."""
204 return self._optimizer._resource_apply_sparse_duplicate_indices(*args, **kwargs)
205
206 def _resource_apply_sparse(self, *args, **kwargs):
207 """Calls this same method on the underlying optimizer."""
208 return self._optimizer._resource_apply_sparse(*args, **kwargs)
209
210 def _apply_sparse_duplicate_indices(self, *args, **kwargs):
211 """Calls this same method on the underlying optimizer."""
212 return self._optimizer._apply_sparse_duplicate_indices(*args, **kwargs)
213
214 def _apply_sparse(self, *args, **kwargs):
215 """Calls this same method on the underlying optimizer."""
216 return self._optimizer._apply_sparse(*args, **kwargs)
217
218 def _prepare(self, *args, **kwargs):
219 """Calls this same method on the underlying optimizer."""
220 return self._optimizer._prepare(*args, **kwargs)
221
222 def _create_slots(self, *args, **kwargs):
223 """Calls this same method on the underlying optimizer."""
224 return self._optimizer._create_slots(*args, **kwargs)
225
226 def _valid_dtypes(self, *args, **kwargs):
227 """Calls this same method on the underlying optimizer."""
228 return self._optimizer._valid_dtypes(*args, **kwargs)
229
230 def _finish(self, *args, **kwargs):
231 """Calls this same method on the underlying optimizer."""
232 return self._optimizer._finish(*args, **kwargs)
233
[end of horovod/tensorflow/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/horovod/__init__.py b/horovod/__init__.py
--- a/horovod/__init__.py
+++ b/horovod/__init__.py
@@ -1 +1 @@
-__version__ = '0.13.0'
+__version__ = '0.13.1'
diff --git a/horovod/tensorflow/__init__.py b/horovod/tensorflow/__init__.py
--- a/horovod/tensorflow/__init__.py
+++ b/horovod/tensorflow/__init__.py
@@ -175,8 +175,7 @@
In DistributedOptimizer, compute_gradients() is overriden to also
allreduce the gradients before returning them.
"""
- gradients = (super(DistributedOptimizer, self)
- .compute_gradients(*args, **kwargs))
+ gradients = self._optimizer.compute_gradients(*args, **kwargs)
if size() > 1:
averaged_gradients = []
with tf.name_scope(self._name + "_Allreduce"):
@@ -191,42 +190,18 @@
else:
return gradients
- def _apply_dense(self, *args, **kwargs):
+ def apply_gradients(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
- return self._optimizer._apply_dense(*args, **kwargs)
+ return self._optimizer.apply_gradients(*args, **kwargs)
- def _resource_apply_dense(self, *args, **kwargs):
+ def get_slot(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
- return self._optimizer._resource_apply_dense(*args, **kwargs)
+ return self._optimizer.get_slot(*args, **kwargs)
- def _resource_apply_sparse_duplicate_indices(self, *args, **kwargs):
+ def get_slot_names(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
- return self._optimizer._resource_apply_sparse_duplicate_indices(*args, **kwargs)
+ return self._optimizer.get_slot_names(*args, **kwargs)
- def _resource_apply_sparse(self, *args, **kwargs):
+ def variables(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
- return self._optimizer._resource_apply_sparse(*args, **kwargs)
-
- def _apply_sparse_duplicate_indices(self, *args, **kwargs):
- """Calls this same method on the underlying optimizer."""
- return self._optimizer._apply_sparse_duplicate_indices(*args, **kwargs)
-
- def _apply_sparse(self, *args, **kwargs):
- """Calls this same method on the underlying optimizer."""
- return self._optimizer._apply_sparse(*args, **kwargs)
-
- def _prepare(self, *args, **kwargs):
- """Calls this same method on the underlying optimizer."""
- return self._optimizer._prepare(*args, **kwargs)
-
- def _create_slots(self, *args, **kwargs):
- """Calls this same method on the underlying optimizer."""
- return self._optimizer._create_slots(*args, **kwargs)
-
- def _valid_dtypes(self, *args, **kwargs):
- """Calls this same method on the underlying optimizer."""
- return self._optimizer._valid_dtypes(*args, **kwargs)
-
- def _finish(self, *args, **kwargs):
- """Calls this same method on the underlying optimizer."""
- return self._optimizer._finish(*args, **kwargs)
+ return self._optimizer.variables(*args, **kwargs)
|
{"golden_diff": "diff --git a/horovod/__init__.py b/horovod/__init__.py\n--- a/horovod/__init__.py\n+++ b/horovod/__init__.py\n@@ -1 +1 @@\n-__version__ = '0.13.0'\n+__version__ = '0.13.1'\ndiff --git a/horovod/tensorflow/__init__.py b/horovod/tensorflow/__init__.py\n--- a/horovod/tensorflow/__init__.py\n+++ b/horovod/tensorflow/__init__.py\n@@ -175,8 +175,7 @@\n In DistributedOptimizer, compute_gradients() is overriden to also\n allreduce the gradients before returning them.\n \"\"\"\n- gradients = (super(DistributedOptimizer, self)\n- .compute_gradients(*args, **kwargs))\n+ gradients = self._optimizer.compute_gradients(*args, **kwargs)\n if size() > 1:\n averaged_gradients = []\n with tf.name_scope(self._name + \"_Allreduce\"):\n@@ -191,42 +190,18 @@\n else:\n return gradients\n \n- def _apply_dense(self, *args, **kwargs):\n+ def apply_gradients(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._apply_dense(*args, **kwargs)\n+ return self._optimizer.apply_gradients(*args, **kwargs)\n \n- def _resource_apply_dense(self, *args, **kwargs):\n+ def get_slot(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._resource_apply_dense(*args, **kwargs)\n+ return self._optimizer.get_slot(*args, **kwargs)\n \n- def _resource_apply_sparse_duplicate_indices(self, *args, **kwargs):\n+ def get_slot_names(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._resource_apply_sparse_duplicate_indices(*args, **kwargs)\n+ return self._optimizer.get_slot_names(*args, **kwargs)\n \n- def _resource_apply_sparse(self, *args, **kwargs):\n+ def variables(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._resource_apply_sparse(*args, **kwargs)\n-\n- def _apply_sparse_duplicate_indices(self, *args, **kwargs):\n- \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._apply_sparse_duplicate_indices(*args, **kwargs)\n-\n- def _apply_sparse(self, *args, **kwargs):\n- \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._apply_sparse(*args, **kwargs)\n-\n- def _prepare(self, *args, **kwargs):\n- \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._prepare(*args, **kwargs)\n-\n- def _create_slots(self, *args, **kwargs):\n- \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._create_slots(*args, **kwargs)\n-\n- def _valid_dtypes(self, *args, **kwargs):\n- \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._valid_dtypes(*args, **kwargs)\n-\n- def _finish(self, *args, **kwargs):\n- \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n- return self._optimizer._finish(*args, **kwargs)\n+ return self._optimizer.variables(*args, **kwargs)\n", "issue": "DistributedOptimizer cannot be used when underlying optimizer overwrites compute_gradients and apply_gradients\nIn our use case we have a TensorFlow optimizer that explicitly defines `compute_gradients` and `apply_gradients` functions. 
The current version of `DistributedOptimizer` cannot be used in our case because it does not overwrite `apply_gradients` to use` self._optimizer.apply_gradients` and it calls (lines 178-179 of `horovod/tensorflow/__init__.py`)\r\n```python\r\ngradients = super(DistributedOptimizer, self).compute_gradients(*args, **kwargs)\r\n```\r\ninstead of `gradients = self._optimizer.compute_gradients(*args, **kwargs)`.\r\n\r\nIs it possible to update `DistributedOptimizer` to work in our use case and similar situations when `compute_gradients` and `apply_gradients` are redefined? As far as I can see simply changing lines 178-179 of `horovod/tensorflow/__init__.py` to \r\n```python\r\ngradients = self._optimizer.compute_gradients(*args, **kwargs)\r\n```\r\nand adding additional method:\r\n```python\r\n def apply_gradients(self, grads_and_vars, global_step=None, name=None):\r\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\r\n return self._optimizer.apply_gradients(grads_and_vars, global_step, name)\r\n```\r\nshould make it work.\n", "before_files": [{"content": "__version__ = '0.13.0'\n", "path": "horovod/__init__.py"}, {"content": "# Copyright 2016 The TensorFlow Authors. All Rights Reserved.\n# Modifications copyright (C) 2017 Uber Technologies, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n# pylint: disable=g-short-docstring-punctuation\n\"\"\"## Communicating Between Processes with MPI\n\nTensorFlow natively provides inter-device communication through send and\nreceive ops and inter-node communication through Distributed TensorFlow, based\non the same send and receive abstractions. On HPC clusters where Infiniband or\nother high-speed node interconnects are available, these can end up being\ninsufficient for synchronous data-parallel training (without asynchronous\ngradient descent). 
This module implements a variety of MPI ops which can take\nadvantage of hardware-specific MPI libraries for efficient communication.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom horovod.common import init\nfrom horovod.common import size\nfrom horovod.common import local_size\nfrom horovod.common import rank\nfrom horovod.common import local_rank\nfrom horovod.common import mpi_threads_supported\nfrom horovod.common import check_extension\n\ncheck_extension('horovod.tensorflow', 'HOROVOD_WITH_TENSORFLOW', __file__, 'mpi_lib')\n\nfrom horovod.tensorflow.mpi_ops import allgather\nfrom horovod.tensorflow.mpi_ops import broadcast\nfrom horovod.tensorflow.mpi_ops import _allreduce\n\nimport tensorflow as tf\n\n\ndef allreduce(tensor, average=True, device_dense='', device_sparse=''):\n \"\"\"Perform an allreduce on a tf.Tensor or tf.IndexedSlices.\n\n Arguments:\n tensor: tf.Tensor, tf.Variable, or tf.IndexedSlices to reduce.\n The shape of the input must be identical across all ranks.\n average: If True, computes the average over all ranks.\n Otherwise, computes the sum over all ranks.\n device_dense: Device to be used for dense tensors. Uses GPU by default\n if Horovod was build with HOROVOD_GPU_ALLREDUCE.\n device_sparse: Device to be used for sparse tensors. Uses GPU by default\n if Horovod was build with HOROVOD_GPU_ALLGATHER.\n\n This function performs a bandwidth-optimal ring allreduce on the input\n tensor. If the input is an tf.IndexedSlices, the function instead does an\n allgather on the values and the indices, effectively doing an allreduce on\n the represented tensor.\n \"\"\"\n if isinstance(tensor, tf.IndexedSlices):\n with tf.device(device_sparse):\n # For IndexedSlices, do two allgathers intead of an allreduce.\n horovod_size = tf.cast(size(), tensor.values.dtype)\n values = allgather(tensor.values)\n indices = allgather(tensor.indices)\n\n # To make this operation into an average, divide all gathered values by\n # the Horovod size.\n new_values = tf.div(values, horovod_size) if average else values\n return tf.IndexedSlices(new_values, indices,\n dense_shape=tensor.dense_shape)\n else:\n with tf.device(device_dense):\n horovod_size = tf.cast(size(), tensor.dtype)\n summed_tensor = _allreduce(tensor)\n new_tensor = (tf.div(summed_tensor, horovod_size)\n if average else summed_tensor)\n return new_tensor\n\n\ndef broadcast_global_variables(root_rank):\n \"\"\"Broadcasts all global variables from root rank to all other processes.\n\n Arguments:\n root_rank: rank of the process from which global variables will be broadcasted\n to all other processes.\n \"\"\"\n return tf.group(*[tf.assign(var, broadcast(var, root_rank))\n for var in tf.global_variables()])\n\n\nclass BroadcastGlobalVariablesHook(tf.train.SessionRunHook):\n \"\"\"\n SessionRunHook that will broadcast all global variables from root rank\n to all other processes during initialization.\n\n This is necessary to ensure consistent initialization of all workers when\n training is started with random weights or restored from a checkpoint.\n \"\"\"\n\n def __init__(self, root_rank, device=''):\n \"\"\"Construct a new BroadcastGlobalVariablesHook that will broadcast all\n global variables from root rank to all other processes during initialization.\n\n Args:\n root_rank:\n Rank that will send data, other ranks will receive data.\n device:\n Device to be used for broadcasting. 
Uses GPU by default\n if Horovod was build with HOROVOD_GPU_BROADCAST.\n \"\"\"\n super(BroadcastGlobalVariablesHook, self).__init__()\n self.root_rank = root_rank\n self.bcast_op = None\n self.device = device\n\n def begin(self):\n if not self.bcast_op or self.bcast_op.graph != tf.get_default_graph():\n with tf.device(self.device):\n self.bcast_op = broadcast_global_variables(self.root_rank)\n\n def after_create_session(self, session, coord):\n session.run(self.bcast_op)\n\n\nclass DistributedOptimizer(tf.train.Optimizer):\n \"\"\"An optimizer that wraps another tf.Optimizer, using an allreduce to\n average gradient values before applying gradients to model weights.\"\"\"\n\n def __init__(self, optimizer, name=None, use_locking=False, device_dense='',\n device_sparse=''):\n \"\"\"Construct a new DistributedOptimizer, which uses another optimizer\n under the hood for computing single-process gradient values and\n applying gradient updates after the gradient values have been averaged\n across all the Horovod ranks.\n\n Args:\n optimizer:\n Optimizer to use for computing gradients and applying updates.\n name:\n Optional name prefix for the operations created when applying\n gradients. Defaults to \"Distributed\" followed by the provided\n optimizer type.\n use_locking:\n Whether to use locking when updating variables.\n See Optimizer.__init__ for more info.\n device_dense:\n Device to be used for dense tensors. Uses GPU by default\n if Horovod was build with HOROVOD_GPU_ALLREDUCE.\n device_sparse:\n Device to be used for sparse tensors. Uses GPU by default\n if Horovod was build with HOROVOD_GPU_ALLGATHER.\n \"\"\"\n if name is None:\n name = \"Distributed{}\".format(type(optimizer).__name__)\n\n self._optimizer = optimizer\n self._device_dense = device_dense\n self._device_sparse = device_sparse\n super(DistributedOptimizer, self).__init__(\n name=name, use_locking=use_locking)\n\n def compute_gradients(self, *args, **kwargs):\n \"\"\"Compute gradients of all trainable variables.\n\n See Optimizer.compute_gradients() for more info.\n\n In DistributedOptimizer, compute_gradients() is overriden to also\n allreduce the gradients before returning them.\n \"\"\"\n gradients = (super(DistributedOptimizer, self)\n .compute_gradients(*args, **kwargs))\n if size() > 1:\n averaged_gradients = []\n with tf.name_scope(self._name + \"_Allreduce\"):\n for grad, var in gradients:\n if grad is not None:\n avg_grad = allreduce(grad, device_dense=self._device_dense,\n device_sparse=self._device_sparse)\n averaged_gradients.append((avg_grad, var))\n else:\n averaged_gradients.append((None, var))\n return averaged_gradients\n else:\n return gradients\n\n def _apply_dense(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._apply_dense(*args, **kwargs)\n\n def _resource_apply_dense(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._resource_apply_dense(*args, **kwargs)\n\n def _resource_apply_sparse_duplicate_indices(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._resource_apply_sparse_duplicate_indices(*args, **kwargs)\n\n def _resource_apply_sparse(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._resource_apply_sparse(*args, **kwargs)\n\n def _apply_sparse_duplicate_indices(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying 
optimizer.\"\"\"\n return self._optimizer._apply_sparse_duplicate_indices(*args, **kwargs)\n\n def _apply_sparse(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._apply_sparse(*args, **kwargs)\n\n def _prepare(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._prepare(*args, **kwargs)\n\n def _create_slots(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._create_slots(*args, **kwargs)\n\n def _valid_dtypes(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._valid_dtypes(*args, **kwargs)\n\n def _finish(self, *args, **kwargs):\n \"\"\"Calls this same method on the underlying optimizer.\"\"\"\n return self._optimizer._finish(*args, **kwargs)\n", "path": "horovod/tensorflow/__init__.py"}]}
| 3,535 | 794 |
gh_patches_debug_30935 | rasdani/github-patches | git_diff | ansible-collections__community.vmware-1958 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature: vmware_guest_tools_upgrade to be applied without forcing reboot
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
vmware_guest_tools_upgrade should support shutdown as an option
##### ISSUE TYPE
- Feature Idea
- Users may wish to apply the latest vmtools without rebooting immediately.
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
- https://github.com/ansible-collections/community.vmware/blob/main/plugins/module_utils/vmware.py#L968
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
- This allows the flexibility to set the desired state and then schedule the reboot in a production environment.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Upgrade VMtools
hosts: "{{ ansible_hosts }}"
gather_facts: false
become: true
tasks:
- name: Upgrade VMtools
community.vmware.vmware_guest_tools_upgrade:
hostname: "{{ vcenter_url }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
name: "{{ inventory_hostname.split('_')[0] }}"
validate_certs: "{{ vcenter_cert_validation }}"
allow_reboot: false
become: false
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
</issue>
<code>
[start of plugins/modules/vmware_guest_tools_upgrade.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: (c) 2018, Mike Klebolt <[email protected]>
5 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
6 # SPDX-License-Identifier: GPL-3.0-or-later
7
8 from __future__ import (absolute_import, division, print_function)
9 __metaclass__ = type
10
11
12 DOCUMENTATION = r'''
13 ---
14 module: vmware_guest_tools_upgrade
15 short_description: Module to upgrade VMTools
16 description:
17 - This module upgrades the VMware Tools on Windows and Linux guests and reboots them.
18 notes:
19 - "In order to upgrade VMTools, please power on virtual machine before hand - either 'manually' or
20 using module M(community.vmware.vmware_guest_powerstate)."
21 options:
22 name:
23 description:
24 - Name of the virtual machine to work with.
25 - 'This is required if O(uuid) or O(moid) is not supplied.'
26 type: str
27 name_match:
28 description:
29 - If multiple virtual machines matching the name, use the first or last found.
30 default: 'first'
31 choices: ['first', 'last']
32 type: str
33 uuid:
34 description:
35 - "UUID of the instance to manage if known, this is VMware's unique identifier."
36 - This is required if O(name) or O(moid) is not supplied.
37 type: str
38 moid:
39 description:
40 - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
41 - This is required if O(name) or O(uuid) is not supplied.
42 type: str
43 folder:
44 description:
45 - Destination folder, absolute or relative path to find an existing guest.
46 - This is required, if O(name) is supplied.
47 - "The folder should include the datacenter. ESX's datacenter is ha-datacenter"
48 - 'Examples:'
49 - ' folder: /ha-datacenter/vm'
50 - ' folder: ha-datacenter/vm'
51 - ' folder: /datacenter1/vm'
52 - ' folder: datacenter1/vm'
53 - ' folder: /datacenter1/vm/folder1'
54 - ' folder: datacenter1/vm/folder1'
55 - ' folder: /folder1/datacenter1/vm'
56 - ' folder: folder1/datacenter1/vm'
57 - ' folder: /folder1/datacenter1/vm/folder2'
58 type: str
59 datacenter:
60 description:
61 - Destination datacenter where the virtual machine exists.
62 required: true
63 type: str
64 force_upgrade:
65 description:
66 - This flag overrides the guest operating system detection and forcibly upgrade VMware tools or open-vm-tools.
67 - "This is useful when VMware tools is too old and unable to detect the 'guestFamily' value."
68 - 'Using this flag may sometime give unexpected results since module will override the default'
69 - "behaviour of 'guestFamily' detection."
70 default: false
71 type: bool
72 required: false
73 extends_documentation_fragment:
74 - community.vmware.vmware.documentation
75
76 author:
77 - Mike Klebolt (@MikeKlebolt) <[email protected]>
78 '''
79
80 EXAMPLES = r'''
81 - name: Get VM UUID
82 vmware_guest_facts:
83 hostname: "{{ vcenter_hostname }}"
84 username: "{{ vcenter_username }}"
85 password: "{{ vcenter_password }}"
86 datacenter: "{{ datacenter }}"
87 folder: "/{{datacenter}}/vm"
88 name: "{{ vm_name }}"
89 delegate_to: localhost
90 register: vm_facts
91
92 - name: Upgrade VMware Tools using uuid
93 community.vmware.vmware_guest_tools_upgrade:
94 hostname: "{{ vcenter_hostname }}"
95 username: "{{ vcenter_username }}"
96 password: "{{ vcenter_password }}"
97 datacenter: "{{ datacenter_name }}"
98 uuid: "{{ vm_facts.instance.hw_product_uuid }}"
99 delegate_to: localhost
100
101 - name: Upgrade VMware Tools using MoID
102 community.vmware.vmware_guest_tools_upgrade:
103 hostname: "{{ vcenter_hostname }}"
104 username: "{{ vcenter_username }}"
105 password: "{{ vcenter_password }}"
106 datacenter: "{{ datacenter_name }}"
107 moid: vm-42
108 delegate_to: localhost
109 '''
110
111 RETURN = r''' # '''
112
113
114 from ansible.module_utils.basic import AnsibleModule
115 from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task
116 from ansible.module_utils._text import to_native
117
118
119 class PyVmomiHelper(PyVmomi):
120 def __init__(self, module):
121 super(PyVmomiHelper, self).__init__(module)
122
123 def upgrade_tools(self, vm):
124 result = {'failed': False, 'changed': False, 'msg': ''}
125 # Exit if VMware tools is already up to date
126 if vm.guest.toolsStatus == "toolsOk":
127 result.update(
128 changed=False,
129 msg="VMware tools is already up to date",
130 )
131 return result
132
133 # Fail if VM is not powered on
134 elif vm.summary.runtime.powerState != "poweredOn":
135 result.update(
136 failed=True,
137 msg="VM must be powered on to upgrade tools",
138 )
139 return result
140
141 # Fail if VMware tools is either not running or not installed
142 elif vm.guest.toolsStatus in ["toolsNotRunning", "toolsNotInstalled"]:
143 result.update(
144 failed=True,
145 msg="VMware tools is either not running or not installed",
146 )
147 return result
148
149 # If vmware tools is out of date, check major OS family
150 # Upgrade tools on Linux and Windows guests
151 elif vm.guest.toolsStatus == "toolsOld":
152 try:
153 force = self.module.params.get('force_upgrade')
154 if force or vm.guest.guestFamily in ["linuxGuest", "windowsGuest"]:
155 task = vm.UpgradeTools()
156 changed, err_msg = wait_for_task(task)
157 result.update(changed=changed, msg=to_native(err_msg))
158 else:
159 result.update(msg='Guest Operating System is other than Linux and Windows.')
160 return result
161 except Exception as exc:
162 result.update(
163 failed=True,
164 msg='Error while upgrading VMware tools %s' % to_native(exc),
165 )
166 return result
167 else:
168 result.update(
169 failed=True,
170 msg="VMware tools could not be upgraded",
171 )
172 return result
173
174
175 def main():
176 argument_spec = vmware_argument_spec()
177 argument_spec.update(
178 name=dict(type='str'),
179 name_match=dict(type='str', choices=['first', 'last'], default='first'),
180 uuid=dict(type='str'),
181 moid=dict(type='str'),
182 folder=dict(type='str'),
183 datacenter=dict(type='str', required=True),
184 force_upgrade=dict(type='bool', default=False),
185 )
186 module = AnsibleModule(
187 argument_spec=argument_spec,
188 required_one_of=[
189 ['name', 'uuid', 'moid']
190 ]
191 )
192
193 if module.params['folder']:
194 # FindByInventoryPath() does not require an absolute path
195 # so we should leave the input folder path unmodified
196 module.params['folder'] = module.params['folder'].rstrip('/')
197
198 pyv = PyVmomiHelper(module)
199 # Check if the VM exists before continuing
200 vm = pyv.get_vm()
201
202 # VM already exists
203 if vm:
204 try:
205 result = pyv.upgrade_tools(vm)
206 if result['changed']:
207 module.exit_json(changed=result['changed'])
208 elif result['failed']:
209 module.fail_json(msg=result['msg'])
210 else:
211 module.exit_json(msg=result['msg'], changed=result['changed'])
212 except Exception as exc:
213 module.fail_json(msg='Unknown error: %s' % to_native(exc))
214 else:
215 vm_id = module.params.get('uuid') or module.params.get('name') or module.params.get('moid')
216 module.fail_json(msg='Unable to find VM %s' % vm_id)
217
218
219 if __name__ == '__main__':
220 main()
221
[end of plugins/modules/vmware_guest_tools_upgrade.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/modules/vmware_guest_tools_upgrade.py b/plugins/modules/vmware_guest_tools_upgrade.py
--- a/plugins/modules/vmware_guest_tools_upgrade.py
+++ b/plugins/modules/vmware_guest_tools_upgrade.py
@@ -70,6 +70,12 @@
default: false
type: bool
required: false
+ installer_options:
+ version_added: '4.1.0'
+ description:
+ - Command line options passed to the installer to modify the installation procedure for tools.
+ type: str
+ required: false
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -151,8 +157,12 @@
elif vm.guest.toolsStatus == "toolsOld":
try:
force = self.module.params.get('force_upgrade')
+ installer_options = self.module.params.get('installer_options')
if force or vm.guest.guestFamily in ["linuxGuest", "windowsGuest"]:
- task = vm.UpgradeTools()
+ if installer_options is not None:
+ task = vm.UpgradeTools(installer_options)
+ else:
+ task = vm.UpgradeTools()
changed, err_msg = wait_for_task(task)
result.update(changed=changed, msg=to_native(err_msg))
else:
@@ -182,6 +192,7 @@
folder=dict(type='str'),
datacenter=dict(type='str', required=True),
force_upgrade=dict(type='bool', default=False),
+ installer_options=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
|
{"golden_diff": "diff --git a/plugins/modules/vmware_guest_tools_upgrade.py b/plugins/modules/vmware_guest_tools_upgrade.py\n--- a/plugins/modules/vmware_guest_tools_upgrade.py\n+++ b/plugins/modules/vmware_guest_tools_upgrade.py\n@@ -70,6 +70,12 @@\n default: false\n type: bool\n required: false\n+ installer_options:\n+ version_added: '4.1.0'\n+ description:\n+ - Command line options passed to the installer to modify the installation procedure for tools.\n+ type: str\n+ required: false\n extends_documentation_fragment:\n - community.vmware.vmware.documentation\n \n@@ -151,8 +157,12 @@\n elif vm.guest.toolsStatus == \"toolsOld\":\n try:\n force = self.module.params.get('force_upgrade')\n+ installer_options = self.module.params.get('installer_options')\n if force or vm.guest.guestFamily in [\"linuxGuest\", \"windowsGuest\"]:\n- task = vm.UpgradeTools()\n+ if installer_options is not None:\n+ task = vm.UpgradeTools(installer_options)\n+ else:\n+ task = vm.UpgradeTools()\n changed, err_msg = wait_for_task(task)\n result.update(changed=changed, msg=to_native(err_msg))\n else:\n@@ -182,6 +192,7 @@\n folder=dict(type='str'),\n datacenter=dict(type='str', required=True),\n force_upgrade=dict(type='bool', default=False),\n+ installer_options=dict(type='str'),\n )\n module = AnsibleModule(\n argument_spec=argument_spec,\n", "issue": "Feature: vmware_guest_tools_upgrade to be applied without forcing reboot\n<!--- Verify first that your feature was not already discussed on GitHub -->\r\n<!--- Complete *all* sections as described, this form is processed automatically -->\r\n\r\n##### SUMMARY\r\n<!--- Describe the new feature/improvement briefly below -->\r\nvmware_guest_tools_upgrade should support shutdown as an option\r\n\r\n##### ISSUE TYPE\r\n- Feature Idea\r\n- Users may wish to apply the latest vmtools without rebooting immediately.\r\n\r\n##### COMPONENT NAME\r\n<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->\r\n- https://github.com/ansible-collections/community.vmware/blob/main/plugins/module_utils/vmware.py#L968\r\n\r\n##### ADDITIONAL INFORMATION\r\n<!--- Describe how the feature would be used, why it is needed and what it would solve -->\r\n- This allows the flexibility to set the desired state and then schedule the reboot in a production environment.\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n---\r\n- name: Upgrade VMtools\r\n hosts: \"{{ ansible_hosts }}\"\r\n gather_facts: false\r\n become: true\r\n tasks:\r\n\r\n - name: Upgrade VMtools\r\n community.vmware.vmware_guest_tools_upgrade:\r\n hostname: \"{{ vcenter_url }}\"\r\n username: \"{{ vcenter_username }}\"\r\n password: \"{{ vcenter_password }}\"\r\n datacenter: \"{{ datacenter_name }}\"\r\n name: \"{{ inventory_hostname.split('_')[0] }}\"\r\n validate_certs: \"{{ vcenter_cert_validation }}\"\r\n allow_reboot: false\r\n become: false\r\n delegate_to: localhost\r\n```\r\n\r\n<!--- HINT: You can also paste gist.github.com links for larger files -->\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2018, Mike Klebolt <[email protected]>\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: vmware_guest_tools_upgrade\nshort_description: Module to upgrade 
VMTools\ndescription:\n - This module upgrades the VMware Tools on Windows and Linux guests and reboots them.\nnotes:\n - \"In order to upgrade VMTools, please power on virtual machine before hand - either 'manually' or\n using module M(community.vmware.vmware_guest_powerstate).\"\noptions:\n name:\n description:\n - Name of the virtual machine to work with.\n - 'This is required if O(uuid) or O(moid) is not supplied.'\n type: str\n name_match:\n description:\n - If multiple virtual machines matching the name, use the first or last found.\n default: 'first'\n choices: ['first', 'last']\n type: str\n uuid:\n description:\n - \"UUID of the instance to manage if known, this is VMware's unique identifier.\"\n - This is required if O(name) or O(moid) is not supplied.\n type: str\n moid:\n description:\n - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.\n - This is required if O(name) or O(uuid) is not supplied.\n type: str\n folder:\n description:\n - Destination folder, absolute or relative path to find an existing guest.\n - This is required, if O(name) is supplied.\n - \"The folder should include the datacenter. ESX's datacenter is ha-datacenter\"\n - 'Examples:'\n - ' folder: /ha-datacenter/vm'\n - ' folder: ha-datacenter/vm'\n - ' folder: /datacenter1/vm'\n - ' folder: datacenter1/vm'\n - ' folder: /datacenter1/vm/folder1'\n - ' folder: datacenter1/vm/folder1'\n - ' folder: /folder1/datacenter1/vm'\n - ' folder: folder1/datacenter1/vm'\n - ' folder: /folder1/datacenter1/vm/folder2'\n type: str\n datacenter:\n description:\n - Destination datacenter where the virtual machine exists.\n required: true\n type: str\n force_upgrade:\n description:\n - This flag overrides the guest operating system detection and forcibly upgrade VMware tools or open-vm-tools.\n - \"This is useful when VMware tools is too old and unable to detect the 'guestFamily' value.\"\n - 'Using this flag may sometime give unexpected results since module will override the default'\n - \"behaviour of 'guestFamily' detection.\"\n default: false\n type: bool\n required: false\nextends_documentation_fragment:\n- community.vmware.vmware.documentation\n\nauthor:\n - Mike Klebolt (@MikeKlebolt) <[email protected]>\n'''\n\nEXAMPLES = r'''\n- name: Get VM UUID\n vmware_guest_facts:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n datacenter: \"{{ datacenter }}\"\n folder: \"/{{datacenter}}/vm\"\n name: \"{{ vm_name }}\"\n delegate_to: localhost\n register: vm_facts\n\n- name: Upgrade VMware Tools using uuid\n community.vmware.vmware_guest_tools_upgrade:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n datacenter: \"{{ datacenter_name }}\"\n uuid: \"{{ vm_facts.instance.hw_product_uuid }}\"\n delegate_to: localhost\n\n- name: Upgrade VMware Tools using MoID\n community.vmware.vmware_guest_tools_upgrade:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n datacenter: \"{{ datacenter_name }}\"\n moid: vm-42\n delegate_to: localhost\n'''\n\nRETURN = r''' # '''\n\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task\nfrom ansible.module_utils._text import to_native\n\n\nclass PyVmomiHelper(PyVmomi):\n def __init__(self, module):\n 
super(PyVmomiHelper, self).__init__(module)\n\n def upgrade_tools(self, vm):\n result = {'failed': False, 'changed': False, 'msg': ''}\n # Exit if VMware tools is already up to date\n if vm.guest.toolsStatus == \"toolsOk\":\n result.update(\n changed=False,\n msg=\"VMware tools is already up to date\",\n )\n return result\n\n # Fail if VM is not powered on\n elif vm.summary.runtime.powerState != \"poweredOn\":\n result.update(\n failed=True,\n msg=\"VM must be powered on to upgrade tools\",\n )\n return result\n\n # Fail if VMware tools is either not running or not installed\n elif vm.guest.toolsStatus in [\"toolsNotRunning\", \"toolsNotInstalled\"]:\n result.update(\n failed=True,\n msg=\"VMware tools is either not running or not installed\",\n )\n return result\n\n # If vmware tools is out of date, check major OS family\n # Upgrade tools on Linux and Windows guests\n elif vm.guest.toolsStatus == \"toolsOld\":\n try:\n force = self.module.params.get('force_upgrade')\n if force or vm.guest.guestFamily in [\"linuxGuest\", \"windowsGuest\"]:\n task = vm.UpgradeTools()\n changed, err_msg = wait_for_task(task)\n result.update(changed=changed, msg=to_native(err_msg))\n else:\n result.update(msg='Guest Operating System is other than Linux and Windows.')\n return result\n except Exception as exc:\n result.update(\n failed=True,\n msg='Error while upgrading VMware tools %s' % to_native(exc),\n )\n return result\n else:\n result.update(\n failed=True,\n msg=\"VMware tools could not be upgraded\",\n )\n return result\n\n\ndef main():\n argument_spec = vmware_argument_spec()\n argument_spec.update(\n name=dict(type='str'),\n name_match=dict(type='str', choices=['first', 'last'], default='first'),\n uuid=dict(type='str'),\n moid=dict(type='str'),\n folder=dict(type='str'),\n datacenter=dict(type='str', required=True),\n force_upgrade=dict(type='bool', default=False),\n )\n module = AnsibleModule(\n argument_spec=argument_spec,\n required_one_of=[\n ['name', 'uuid', 'moid']\n ]\n )\n\n if module.params['folder']:\n # FindByInventoryPath() does not require an absolute path\n # so we should leave the input folder path unmodified\n module.params['folder'] = module.params['folder'].rstrip('/')\n\n pyv = PyVmomiHelper(module)\n # Check if the VM exists before continuing\n vm = pyv.get_vm()\n\n # VM already exists\n if vm:\n try:\n result = pyv.upgrade_tools(vm)\n if result['changed']:\n module.exit_json(changed=result['changed'])\n elif result['failed']:\n module.fail_json(msg=result['msg'])\n else:\n module.exit_json(msg=result['msg'], changed=result['changed'])\n except Exception as exc:\n module.fail_json(msg='Unknown error: %s' % to_native(exc))\n else:\n vm_id = module.params.get('uuid') or module.params.get('name') or module.params.get('moid')\n module.fail_json(msg='Unable to find VM %s' % vm_id)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/vmware_guest_tools_upgrade.py"}]}
| 3,251 | 353 |
gh_patches_debug_8220 | rasdani/github-patches | git_diff | SciTools__cartopy-157 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Path <-> LineString vs Polygon conversion
Consider the following:
```
p = Path([[0, 0], [0, 2], [0, 0]])
print cpatch.path_to_geos(p)
```
In this case, the result should be a LineString, but the following should be a polygon:
```
p = Path([[0, 0], [0, 2], [2, 2], [2, 0], [0, 0]])
print cpatch.path_to_geos(p)
```
Update `cartopy.mpl_integration.patch` (possibly renaming it) to handle these cases in the best possible way (which way is, as yet, unclear).
Add tests for these cases.
</issue>
<code>
[start of lib/cartopy/mpl_integration/patch.py]
1 # (C) British Crown Copyright 2011 - 2012, Met Office
2 #
3 # This file is part of cartopy.
4 #
5 # cartopy is free software: you can redistribute it and/or modify it under
6 # the terms of the GNU Lesser General Public License as published by the
7 # Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # cartopy is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU Lesser General Public License for more details.
14 #
15 # You should have received a copy of the GNU Lesser General Public License
16 # along with cartopy. If not, see <http://www.gnu.org/licenses/>.
17 """
18 Provides shapely geometry <-> matplotlib path support.
19
20 See also `Shapely Geometric Objects <http://toblerity.github.com/shapely/manual.html#geometric-objects>`_
21 and `Matplotlib Path API <http://matplotlib.org/api/path_api.html>`_.
22
23 """
24
25 import numpy as np
26 import matplotlib.path
27 from matplotlib.path import Path
28 from shapely.geometry.collection import GeometryCollection
29 from shapely.geometry.linestring import LineString
30 from shapely.geometry.point import Point
31 from shapely.geometry.polygon import Polygon
32 from shapely.geometry.multilinestring import MultiLineString
33 from shapely.geometry.multipoint import MultiPoint
34 from shapely.geometry.multipolygon import MultiPolygon
35
36
37 def geos_to_path(shape):
38 """
39 Creates a list of :class:`matplotlib.path.Path` objects that describe
40 a shape.
41
42 Args:
43
44 * shape
45 A list, tuple or single instance of any of the following
46 types: :class:`shapely.geometry.point.Point`,
47 :class:`shapely.geometry.linestring.LineString`,
48 :class:`shapely.geometry.polygon.Polygon`,
49 :class:`shapely.geometry.multipoint.MultiPoint`,
50 :class:`shapely.geometry.multipolygon.MultiPolygon`,
51 :class:`shapely.geometry.multilinestring.MultiLineString`,
52 :class:`shapely.geometry.collection.GeometryCollection`,
53 or any type with a _as_mpl_path() method.
54
55 Returns:
56 A list of :class:`matplotlib.path.Path` objects.
57
58 """
59 if isinstance(shape, (list, tuple)):
60 paths = []
61 for shp in shape:
62 paths.extend(geos_to_path(shp))
63 return paths
64
65 if isinstance(shape, (LineString, Point)):
66 return [Path(np.vstack(shape.xy).T)]
67 elif isinstance(shape, Polygon):
68 def poly_codes(poly):
69 codes = np.ones(len(poly.xy[0])) * Path.LINETO
70 codes[0] = Path.MOVETO
71 return codes
72
73 vertices = np.concatenate([np.array(shape.exterior.xy)] +
74 [np.array(ring.xy) for ring in
75 shape.interiors], 1).T
76 codes = np.concatenate([poly_codes(shape.exterior)] +
77 [poly_codes(ring) for ring in shape.interiors])
78 return [Path(vertices, codes)]
79 elif isinstance(shape, (MultiPolygon, GeometryCollection, MultiLineString,
80 MultiPoint)):
81 paths = []
82 for geom in shape.geoms:
83 paths.extend(geos_to_path(geom))
84 return paths
85 elif hasattr(shape, '_as_mpl_path'):
86 vertices, codes = shape._as_mpl_path()
87 return [Path(vertices, codes)]
88 else:
89 raise ValueError('Unsupported shape type {}.'.format(type(shape)))
90
91
92 def path_segments(path, transform=None, remove_nans=False, clip=None,
93 quantize=False, simplify=False, curves=False,
94 stroke_width=1.0, snap=False):
95 """
96 Creates an array of vertices and a corresponding array of codes from a
97 :class:`matplotlib.path.Path`.
98
99 Args:
100
101 * path
102 A :class:`matplotlib.path.Path` instance.
103
104 Kwargs:
105 See :func:`matplotlib.path.iter_segments` for details of the keyword
106 arguments.
107
108 Returns:
109 A (vertices, codes) tuple, where vertices is a numpy array of
110 coordinates, and codes is a numpy array of matplotlib path codes.
111 See :class:`matplotlib.path.Path` for information on the types of
112 codes and their meanings.
113
114 """
115 # XXX assigned to avoid a ValueError inside the mpl C code...
116 a = transform, remove_nans, clip, quantize, simplify, curves
117
118 # Series of cleanups and conversions to the path e.g. it
119 # can convert curved segments to line segments.
120 vertices, codes = matplotlib.path.cleanup_path(path, transform,
121 remove_nans, clip,
122 snap, stroke_width,
123 simplify, curves)
124
125 # Remove the final vertex (with code 0)
126 return vertices[:-1, :], codes[:-1]
127
128
129 def path_to_geos(path):
130 """
131 Creates a list of Shapely geometric objects from a
132 :class:`matplotlib.path.Path`.
133
134 Args:
135
136 * path
137 A :class:`matplotlib.path.Path` instance.
138
139 Returns:
140 A list of :class:`shapely.geometry.polygon.Polygon`,
141 :class:`shapely.geometry.linestring.LineString` and/or
142 :class:`shapely.geometry.multilinestring.MultiLineString` instances.
143
144 """
145 # Convert path into numpy array of vertices (and associated codes)
146 path_verts, path_codes = path_segments(path, curves=False)
147
148 # Split into subarrays such that each subarray consists of connected
149 # line segments based on the start of each one being marked by a
150 # matplotlib MOVETO code.
151 verts_split_inds = np.where(path_codes == Path.MOVETO)[0]
152 verts_split = np.split(path_verts, verts_split_inds)
153 codes_split = np.split(path_codes, verts_split_inds)
154
155 # Iterate through the vertices generating a list of
156 # (external_geom, [internal_polygons]) tuples.
157 collection = []
158 for path_verts, path_codes in zip(verts_split, codes_split):
159 if len(path_verts) == 0:
160 continue
161
162 # XXX A path can be given which does not end with close poly, in that
163 # situation, we have to guess?
164 # XXX Implement a point
165
166 if path_verts.shape[0] > 2 and (path_codes[-1] == Path.CLOSEPOLY or
167 all(path_verts[0, :] == path_verts[-1, :])):
168 if path_codes[-1] == Path.CLOSEPOLY:
169 geom = Polygon(path_verts[:-1, :])
170 else:
171 geom = Polygon(path_verts)
172 else:
173 geom = LineString(path_verts)
174
175 # If geom is a Polygon and is contained within the last geom in collection,
176         # add it to its list of internal polygons, otherwise simply append it as a
177 # new external geom.
178 if (len(collection) > 0 and
179 isinstance(collection[-1][0], Polygon) and
180 isinstance(geom, Polygon) and
181 collection[-1][0].contains(geom.exterior)):
182 collection[-1][1].append(geom.exterior)
183 else:
184 collection.append((geom, []))
185
186     # Convert each (external_geom, [internal_polygons]) pair into a
187     # shapely Polygon that encapsulates the internal polygons; if the
188     # external geom is a LineString, leave it alone.
189 geom_collection = []
190 for external_geom, internal_polys in collection:
191 if internal_polys:
192 # XXX worry about islands within lakes
193 geom = Polygon(external_geom.exterior, internal_polys)
194 else:
195 geom = external_geom
196 geom_collection.append(geom)
197
198 # If the geom_collection only contains LineStrings combine them
199 # into a single MultiLinestring.
200 if geom_collection and all(isinstance(geom, LineString) for
201 geom in geom_collection):
202 geom_collection = [MultiLineString(geom_collection)]
203
204 # Remove any zero area Polygons
205 result = filter(lambda geom: (isinstance(geom, Polygon) and geom.area != 0) or
206 not isinstance(geom, Polygon), geom_collection)
207
208 return result
209
[end of lib/cartopy/mpl_integration/patch.py]
</code>
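For orientation, here is a small, hedged usage sketch of the two public helpers in the listing above — converting a Shapely polygon with a hole into a Matplotlib path and back. It assumes the older Matplotlib/Shapely versions this module targets (e.g. `matplotlib.path.cleanup_path`, Python 2-style `filter`) and the import path shown in the listing; it is illustrative only and not part of the repository.

```python
from shapely.geometry import Polygon

# Import path taken from the listing above; adjust if the module moves.
from cartopy.mpl_integration.patch import geos_to_path, path_to_geos

# A 4x4 square with a 2x2 hole punched out of the middle.
square_with_hole = Polygon(
    [(0, 0), (4, 0), (4, 4), (0, 4)],
    [[(1, 1), (1, 3), (3, 3), (3, 1)]],
)

paths = geos_to_path(square_with_hole)   # -> [matplotlib.path.Path]
geoms = list(path_to_geos(paths[0]))     # -> [shapely Polygon] (round trip)

print(len(paths), len(geoms))            # expect: 1 1
print(geoms[0].area)                     # expect: 12.0 (16 minus the 4.0 hole)
```

The hole survives the round trip because `path_to_geos` folds any ring contained in the preceding exterior back into that polygon's interiors before building the final geometry list.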
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/cartopy/mpl_integration/patch.py b/lib/cartopy/mpl_integration/patch.py
--- a/lib/cartopy/mpl_integration/patch.py
+++ b/lib/cartopy/mpl_integration/patch.py
@@ -202,7 +202,9 @@
geom_collection = [MultiLineString(geom_collection)]
# Remove any zero area Polygons
- result = filter(lambda geom: (isinstance(geom, Polygon) and geom.area != 0) or
- not isinstance(geom, Polygon), geom_collection)
-
+ not_zero_poly = lambda geom: ((isinstance(geom, Polygon) and
+ not geom._is_empty and geom.area != 0) or
+ not isinstance(geom, Polygon))
+ result = filter(not_zero_poly, geom_collection)
+
return result
|
{"golden_diff": "diff --git a/lib/cartopy/mpl_integration/patch.py b/lib/cartopy/mpl_integration/patch.py\n--- a/lib/cartopy/mpl_integration/patch.py\n+++ b/lib/cartopy/mpl_integration/patch.py\n@@ -202,7 +202,9 @@\n geom_collection = [MultiLineString(geom_collection)]\n \n # Remove any zero area Polygons\n- result = filter(lambda geom: (isinstance(geom, Polygon) and geom.area != 0) or\n- not isinstance(geom, Polygon), geom_collection)\n- \n+ not_zero_poly = lambda geom: ((isinstance(geom, Polygon) and\n+ not geom._is_empty and geom.area != 0) or\n+ not isinstance(geom, Polygon))\n+ result = filter(not_zero_poly, geom_collection)\n+\n return result\n", "issue": "Path <-> LineString vs Polygon conversion\nConsider the following:\n\n```\np = Path([[0, 0], [0, 2], [0, 0]])\nprint cpatch.path_to_geos(p)\n```\n\nIn this case, the result should be a LineString, but the following should be a polygon:\n\n```\np = Path([[0, 0], [0, 2], [2, 2], [2, 0], [0, 0]])\nprint cpatch.path_to_geos(p)\n```\n\nUpdate `cartopy.mpl_integration.patch` (possibly renaming it) to handle these cases in the best possible way (which way is, as yet, unclear). \n\nAdd tests for these cases.\n\n", "before_files": [{"content": "# (C) British Crown Copyright 2011 - 2012, Met Office\n#\n# This file is part of cartopy.\n#\n# cartopy is free software: you can redistribute it and/or modify it under\n# the terms of the GNU Lesser General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cartopy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public License\n# along with cartopy. 
If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\nProvides shapely geometry <-> matplotlib path support.\n\nSee also `Shapely Geometric Objects <http://toblerity.github.com/shapely/manual.html#geometric-objects>`_\nand `Matplotlib Path API <http://matplotlib.org/api/path_api.html>`_.\n\n\"\"\"\n\nimport numpy as np\nimport matplotlib.path\nfrom matplotlib.path import Path\nfrom shapely.geometry.collection import GeometryCollection\nfrom shapely.geometry.linestring import LineString\nfrom shapely.geometry.point import Point\nfrom shapely.geometry.polygon import Polygon\nfrom shapely.geometry.multilinestring import MultiLineString\nfrom shapely.geometry.multipoint import MultiPoint\nfrom shapely.geometry.multipolygon import MultiPolygon\n\n\ndef geos_to_path(shape):\n \"\"\"\n Creates a list of :class:`matplotlib.path.Path` objects that describe\n a shape.\n\n Args:\n\n * shape\n A list, tuple or single instance of any of the following\n types: :class:`shapely.geometry.point.Point`,\n :class:`shapely.geometry.linestring.LineString`,\n :class:`shapely.geometry.polygon.Polygon`,\n :class:`shapely.geometry.multipoint.MultiPoint`,\n :class:`shapely.geometry.multipolygon.MultiPolygon`,\n :class:`shapely.geometry.multilinestring.MultiLineString`,\n :class:`shapely.geometry.collection.GeometryCollection`,\n or any type with a _as_mpl_path() method.\n\n Returns:\n A list of :class:`matplotlib.path.Path` objects.\n\n \"\"\"\n if isinstance(shape, (list, tuple)):\n paths = []\n for shp in shape:\n paths.extend(geos_to_path(shp))\n return paths\n\n if isinstance(shape, (LineString, Point)):\n return [Path(np.vstack(shape.xy).T)]\n elif isinstance(shape, Polygon):\n def poly_codes(poly):\n codes = np.ones(len(poly.xy[0])) * Path.LINETO\n codes[0] = Path.MOVETO\n return codes\n \n vertices = np.concatenate([np.array(shape.exterior.xy)] +\n [np.array(ring.xy) for ring in\n shape.interiors], 1).T\n codes = np.concatenate([poly_codes(shape.exterior)] +\n [poly_codes(ring) for ring in shape.interiors])\n return [Path(vertices, codes)]\n elif isinstance(shape, (MultiPolygon, GeometryCollection, MultiLineString,\n MultiPoint)):\n paths = []\n for geom in shape.geoms:\n paths.extend(geos_to_path(geom))\n return paths\n elif hasattr(shape, '_as_mpl_path'):\n vertices, codes = shape._as_mpl_path()\n return [Path(vertices, codes)]\n else:\n raise ValueError('Unsupported shape type {}.'.format(type(shape)))\n\n\ndef path_segments(path, transform=None, remove_nans=False, clip=None,\n quantize=False, simplify=False, curves=False,\n stroke_width=1.0, snap=False):\n \"\"\"\n Creates an array of vertices and a corresponding array of codes from a\n :class:`matplotlib.path.Path`.\n\n Args:\n\n * path\n A :class:`matplotlib.path.Path` instance.\n\n Kwargs:\n See :func:`matplotlib.path.iter_segments` for details of the keyword\n arguments.\n\n Returns:\n A (vertices, codes) tuple, where vertices is a numpy array of\n coordinates, and codes is a numpy array of matplotlib path codes.\n See :class:`matplotlib.path.Path` for information on the types of\n codes and their meanings.\n\n \"\"\"\n # XXX assigned to avoid a ValueError inside the mpl C code...\n a = transform, remove_nans, clip, quantize, simplify, curves\n\n # Series of cleanups and conversions to the path e.g. 
it\n # can convert curved segments to line segments.\n vertices, codes = matplotlib.path.cleanup_path(path, transform,\n remove_nans, clip,\n snap, stroke_width,\n simplify, curves)\n \n # Remove the final vertex (with code 0)\n return vertices[:-1, :], codes[:-1]\n\n\ndef path_to_geos(path):\n \"\"\"\n Creates a list of Shapely geometric objects from a\n :class:`matplotlib.path.Path`.\n\n Args:\n\n * path\n A :class:`matplotlib.path.Path` instance.\n\n Returns:\n A list of :class:`shapely.geometry.polygon.Polygon`,\n :class:`shapely.geometry.linestring.LineString` and/or\n :class:`shapely.geometry.multilinestring.MultiLineString` instances.\n\n \"\"\"\n # Convert path into numpy array of vertices (and associated codes)\n path_verts, path_codes = path_segments(path, curves=False)\n \n # Split into subarrays such that each subarray consists of connected\n # line segments based on the start of each one being marked by a\n # matplotlib MOVETO code.\n verts_split_inds = np.where(path_codes == Path.MOVETO)[0]\n verts_split = np.split(path_verts, verts_split_inds)\n codes_split = np.split(path_codes, verts_split_inds)\n \n # Iterate through the vertices generating a list of\n # (external_geom, [internal_polygons]) tuples.\n collection = []\n for path_verts, path_codes in zip(verts_split, codes_split):\n if len(path_verts) == 0:\n continue\n\n # XXX A path can be given which does not end with close poly, in that\n # situation, we have to guess?\n # XXX Implement a point\n \n if path_verts.shape[0] > 2 and (path_codes[-1] == Path.CLOSEPOLY or\n all(path_verts[0, :] == path_verts[-1, :])):\n if path_codes[-1] == Path.CLOSEPOLY:\n geom = Polygon(path_verts[:-1, :])\n else:\n geom = Polygon(path_verts)\n else:\n geom = LineString(path_verts)\n\n # If geom is a Polygon and is contained within the last geom in collection,\n # add it to its list of internal polygons, otherwise simple append it as a\n # new external geom.\n if (len(collection) > 0 and \n isinstance(collection[-1][0], Polygon) and\n isinstance(geom, Polygon) and\n collection[-1][0].contains(geom.exterior)):\n collection[-1][1].append(geom.exterior)\n else:\n collection.append((geom, []))\n \n # Convert each (external_geom, [internal_polygons]) pair into a\n # a shapely Polygon that encapsulates the internal polygons, if the\n # external geom is a LineSting leave it alone.\n geom_collection = []\n for external_geom, internal_polys in collection:\n if internal_polys:\n # XXX worry about islands within lakes\n geom = Polygon(external_geom.exterior, internal_polys)\n else:\n geom = external_geom\n geom_collection.append(geom)\n \n # If the geom_collection only contains LineStrings combine them\n # into a single MultiLinestring.\n if geom_collection and all(isinstance(geom, LineString) for\n geom in geom_collection):\n geom_collection = [MultiLineString(geom_collection)]\n\n # Remove any zero area Polygons\n result = filter(lambda geom: (isinstance(geom, Polygon) and geom.area != 0) or\n not isinstance(geom, Polygon), geom_collection)\n \n return result\n", "path": "lib/cartopy/mpl_integration/patch.py"}]}
| 3,010 | 184 |
gh_patches_debug_6593
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-540
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ScatterInspector does not deselect all on index == None
I've been playing around with the scattertoggle.py example. Maybe I'm missing something, but the ScatterInspector seems to fail to deselect all points when clicking on whitespace on the plot (that is, index == None). Something along the lines of this seems to fix it.
``` python
class PatchedScatterInspector(ScatterInspector):
def _deselect(self, index=None):
plot = self.component
if index:
super(PatchedScatterInspector, self)._deselect(index)
else:
for name in ('index', 'value'):
if not hasattr(plot, name):
continue
md = getattr(plot, name).metadata
md[self.selection_metadata_name] = []
getattr(plot, name).metadata_changed = True
return
```
Cheers,
-A
</issue>
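For readers who have not used the tool, below is a minimal, hypothetical sketch of how a `ScatterInspector` is attached and where the selection state discussed in the issue ends up; the plot construction is illustrative boilerplate, not code from scattertoggle.py.

```python
from chaco.api import ArrayPlotData, Plot
from chaco.tools.api import ScatterInspector

data = ArrayPlotData(x=[0.0, 1.0, 2.0], y=[1.0, 3.0, 2.0])
plot = Plot(data)
scatter = plot.plot(("x", "y"), type="scatter")[0]

# Attach the inspector; clicks now write point indices into the metadata of
# the renderer's index/value data sources under the "selections" key.
scatter.tools.append(ScatterInspector(scatter))

# After clicking point 1:      scatter.index.metadata["selections"] == [1]
# After clicking empty space:  one would expect [], but with the unpatched
#                              _deselect below the old selection is left in place.
```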
<code>
[start of chaco/tools/scatter_inspector.py]
1 """ Defines the ScatterInspector tool class.
2 """
3
4 # Enthought library imports
5 from traits.api import Any, Bool, Enum, Event, HasStrictTraits, Str
6
7 # Local, relative imports
8 from .select_tool import SelectTool
9
10 HOVER_EVENT = "hover"
11
12 SELECT_EVENT = "select"
13
14 DESELECT_EVENT = "deselect"
15
16
17 class ScatterInspectorEvent(HasStrictTraits):
18 #: Is it a hover event or a selection event?
19 event_type = Enum([HOVER_EVENT, SELECT_EVENT, DESELECT_EVENT])
20
21 #: What index was involved?
22 event_index = Any
23
24
25 class ScatterInspector(SelectTool):
26 """ A tool for inspecting scatter plots.
27
28 It writes the index of the point under the cursor to the metadata of the
29 index and value data sources, and allows clicking to select the point.
30 Other components can listen for metadata updates on the data sources.
31
32 By default, it writes the index of the point under the cursor to the
33 "hover" key in metadata, and the index of a clicked point to "selection".
34 """
35
36 #: If persistent_hover is False, then a point will be de-hovered as soon as
37 #: the mouse leaves its hit-testing area. If persistent_hover is True, then
38     #: a point does not de-hover until another point gets hover focus.
39 persistent_hover = Bool(False)
40
41 #: The names of the data source metadata for hover and selection events.
42 hover_metadata_name = Str('hover')
43 selection_metadata_name = Str('selections')
44
45 #: This tool emits events when hover or selection changes
46 inspector_event = Event(ScatterInspectorEvent)
47
48 # -------------------------------------------------------------------------
49 # Override/configure inherited traits
50 # -------------------------------------------------------------------------
51
52 #: This tool is not visible
53 visible = False
54
55 #: This tool does not have a visual representation
56 draw_mode = "none"
57
58 def normal_mouse_move(self, event):
59 """ Handles the mouse moving when the tool is in the 'normal' state.
60
61 If the cursor is within **threshold** of a data point, the method
62 writes the index to the plot's data sources' "hover" metadata.
63
64 This method emits a ScatterInspectorEvent when a new scatter point is
65 hovered over and when the mouse leaves that point.
66 """
67 plot = self.component
68 index = plot.map_index((event.x, event.y), threshold=self.threshold)
69 insp_event = ScatterInspectorEvent(event_type=HOVER_EVENT,
70 event_index=index)
71 if index is not None:
72 old = plot.index.metadata.get(self.hover_metadata_name, None)
73 plot.index.metadata[self.hover_metadata_name] = [index]
74 if old != [index]:
75 self.inspector_event = insp_event
76 if hasattr(plot, "value"):
77 plot.value.metadata[self.hover_metadata_name] = [index]
78 elif not self.persistent_hover:
79 old = plot.index.metadata.pop(self.hover_metadata_name, None)
80 if old:
81 self.inspector_event = insp_event
82 if hasattr(plot, "value"):
83 plot.value.metadata.pop(self.hover_metadata_name, None)
84
85 return
86
87 def _get_selection_state(self, event):
88 plot = self.component
89 index = plot.map_index((event.x, event.y), threshold=self.threshold)
90
91 already_selected = False
92 for name in ('index', 'value'):
93 if not hasattr(plot, name):
94 continue
95 md = getattr(plot, name).metadata
96 if md is None or self.selection_metadata_name not in md:
97 continue
98 if index in md[self.selection_metadata_name]:
99 already_selected = True
100 break
101 return already_selected, (index is not None)
102
103 def _get_selection_token(self, event):
104 plot = self.component
105 index = plot.map_index((event.x, event.y), threshold=self.threshold)
106 return index
107
108 def _deselect(self, index=None):
109 """ Deselects a particular index. If no index is given, then
110 deselects all points.
111 """
112 plot = self.component
113 insp_event = ScatterInspectorEvent(event_type=DESELECT_EVENT,
114 event_index=index)
115 for name in ('index', 'value'):
116 if not hasattr(plot, name):
117 continue
118 md = getattr(plot, name).metadata
119 if self.selection_metadata_name not in md:
120 pass
121 elif index in md[self.selection_metadata_name]:
122 new_list = md[self.selection_metadata_name][:]
123 new_list.remove(index)
124 md[self.selection_metadata_name] = new_list
125 # Only issue 1 event:
126 if name == 'index':
127 self.inspector_event = insp_event
128 return
129
130 def _select(self, index, append=True):
131 plot = self.component
132 insp_event = ScatterInspectorEvent(event_type=SELECT_EVENT,
133 event_index=index)
134 for name in ('index', 'value'):
135 if not hasattr(plot, name):
136 continue
137 md = getattr(plot, name).metadata
138 selection = md.get(self.selection_metadata_name, None)
139
140 # If no existing selection
141 if selection is None:
142 md[self.selection_metadata_name] = [index]
143 # check for list-like object supporting append
144 else:
145 if append:
146 if index not in md[self.selection_metadata_name]:
147 new_list = md[self.selection_metadata_name] + [index]
148 md[self.selection_metadata_name] = new_list
149 # Manually trigger the metadata_changed event on
150 # the datasource. Datasources only automatically
151 # fire notifications when the values inside the
152 # metadata dict change, but they do not listen
153 # for further changes on those values.
154 # DEPRECATED: use self.inspector_event instead:
155 getattr(plot, name).metadata_changed = True
156 else:
157 md[self.selection_metadata_name] = [index]
158
159 # Test to only issue 1 event per selection, not 1 per axis:
160 if name == 'index':
161 self.inspector_event = insp_event
162
163 return
164
165
166 # EOF
167
[end of chaco/tools/scatter_inspector.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chaco/tools/scatter_inspector.py b/chaco/tools/scatter_inspector.py
--- a/chaco/tools/scatter_inspector.py
+++ b/chaco/tools/scatter_inspector.py
@@ -125,6 +125,11 @@
# Only issue 1 event:
if name == 'index':
self.inspector_event = insp_event
+ elif index is None:
+ md[self.selection_metadata_name] = []
+ # Only issue 1 event:
+ if name == 'index':
+ self.inspector_event = insp_event
return
def _select(self, index, append=True):
|
{"golden_diff": "diff --git a/chaco/tools/scatter_inspector.py b/chaco/tools/scatter_inspector.py\n--- a/chaco/tools/scatter_inspector.py\n+++ b/chaco/tools/scatter_inspector.py\n@@ -125,6 +125,11 @@\n # Only issue 1 event:\n if name == 'index':\n self.inspector_event = insp_event\n+ elif index is None:\n+ md[self.selection_metadata_name] = []\n+ # Only issue 1 event:\n+ if name == 'index':\n+ self.inspector_event = insp_event\n return\n \n def _select(self, index, append=True):\n", "issue": "ScatterInspector does not deselect all on index == None\nI've been playing around with the scattertoggle.py example. Maybe I'm missing something, but the ScatterInspector seems to fail to deselect all points when clicking on whitespace on the plot (that is, index == None). Something along the lines of this seems to fix it.\n\n``` python\nclass PatchedScatterInspector(ScatterInspector):\n def _deselect(self, index=None):\n plot = self.component\n if index:\n super(PatchedScatterInspector, self)._deselect(index)\n else:\n for name in ('index', 'value'):\n if not hasattr(plot, name):\n continue\n md = getattr(plot, name).metadata\n md[self.selection_metadata_name] = []\n getattr(plot, name).metadata_changed = True\n return\n```\n\nCheers,\n\n-A\n\n", "before_files": [{"content": "\"\"\" Defines the ScatterInspector tool class.\n\"\"\"\n\n# Enthought library imports\nfrom traits.api import Any, Bool, Enum, Event, HasStrictTraits, Str\n\n# Local, relative imports\nfrom .select_tool import SelectTool\n\nHOVER_EVENT = \"hover\"\n\nSELECT_EVENT = \"select\"\n\nDESELECT_EVENT = \"deselect\"\n\n\nclass ScatterInspectorEvent(HasStrictTraits):\n #: Is it a hover event or a selection event?\n event_type = Enum([HOVER_EVENT, SELECT_EVENT, DESELECT_EVENT])\n\n #: What index was involved?\n event_index = Any\n\n\nclass ScatterInspector(SelectTool):\n \"\"\" A tool for inspecting scatter plots.\n\n It writes the index of the point under the cursor to the metadata of the\n index and value data sources, and allows clicking to select the point.\n Other components can listen for metadata updates on the data sources.\n\n By default, it writes the index of the point under the cursor to the\n \"hover\" key in metadata, and the index of a clicked point to \"selection\".\n \"\"\"\n\n #: If persistent_hover is False, then a point will be de-hovered as soon as\n #: the mouse leaves its hit-testing area. 
If persistent_hover is True, then\n #: a point does no de-hover until another point get hover focus.\n persistent_hover = Bool(False)\n\n #: The names of the data source metadata for hover and selection events.\n hover_metadata_name = Str('hover')\n selection_metadata_name = Str('selections')\n\n #: This tool emits events when hover or selection changes\n inspector_event = Event(ScatterInspectorEvent)\n\n # -------------------------------------------------------------------------\n # Override/configure inherited traits\n # -------------------------------------------------------------------------\n\n #: This tool is not visible\n visible = False\n\n #: This tool does not have a visual representation\n draw_mode = \"none\"\n\n def normal_mouse_move(self, event):\n \"\"\" Handles the mouse moving when the tool is in the 'normal' state.\n\n If the cursor is within **threshold** of a data point, the method\n writes the index to the plot's data sources' \"hover\" metadata.\n\n This method emits a ScatterInspectorEvent when a new scatter point is\n hovered over and when the mouse leaves that point.\n \"\"\"\n plot = self.component\n index = plot.map_index((event.x, event.y), threshold=self.threshold)\n insp_event = ScatterInspectorEvent(event_type=HOVER_EVENT,\n event_index=index)\n if index is not None:\n old = plot.index.metadata.get(self.hover_metadata_name, None)\n plot.index.metadata[self.hover_metadata_name] = [index]\n if old != [index]:\n self.inspector_event = insp_event\n if hasattr(plot, \"value\"):\n plot.value.metadata[self.hover_metadata_name] = [index]\n elif not self.persistent_hover:\n old = plot.index.metadata.pop(self.hover_metadata_name, None)\n if old:\n self.inspector_event = insp_event\n if hasattr(plot, \"value\"):\n plot.value.metadata.pop(self.hover_metadata_name, None)\n\n return\n\n def _get_selection_state(self, event):\n plot = self.component\n index = plot.map_index((event.x, event.y), threshold=self.threshold)\n\n already_selected = False\n for name in ('index', 'value'):\n if not hasattr(plot, name):\n continue\n md = getattr(plot, name).metadata\n if md is None or self.selection_metadata_name not in md:\n continue\n if index in md[self.selection_metadata_name]:\n already_selected = True\n break\n return already_selected, (index is not None)\n\n def _get_selection_token(self, event):\n plot = self.component\n index = plot.map_index((event.x, event.y), threshold=self.threshold)\n return index\n\n def _deselect(self, index=None):\n \"\"\" Deselects a particular index. 
If no index is given, then\n deselects all points.\n \"\"\"\n plot = self.component\n insp_event = ScatterInspectorEvent(event_type=DESELECT_EVENT,\n event_index=index)\n for name in ('index', 'value'):\n if not hasattr(plot, name):\n continue\n md = getattr(plot, name).metadata\n if self.selection_metadata_name not in md:\n pass\n elif index in md[self.selection_metadata_name]:\n new_list = md[self.selection_metadata_name][:]\n new_list.remove(index)\n md[self.selection_metadata_name] = new_list\n # Only issue 1 event:\n if name == 'index':\n self.inspector_event = insp_event\n return\n\n def _select(self, index, append=True):\n plot = self.component\n insp_event = ScatterInspectorEvent(event_type=SELECT_EVENT,\n event_index=index)\n for name in ('index', 'value'):\n if not hasattr(plot, name):\n continue\n md = getattr(plot, name).metadata\n selection = md.get(self.selection_metadata_name, None)\n\n # If no existing selection\n if selection is None:\n md[self.selection_metadata_name] = [index]\n # check for list-like object supporting append\n else:\n if append:\n if index not in md[self.selection_metadata_name]:\n new_list = md[self.selection_metadata_name] + [index]\n md[self.selection_metadata_name] = new_list\n # Manually trigger the metadata_changed event on\n # the datasource. Datasources only automatically\n # fire notifications when the values inside the\n # metadata dict change, but they do not listen\n # for further changes on those values.\n # DEPRECATED: use self.inspector_event instead:\n getattr(plot, name).metadata_changed = True\n else:\n md[self.selection_metadata_name] = [index]\n\n # Test to only issue 1 event per selection, not 1 per axis:\n if name == 'index':\n self.inspector_event = insp_event\n\n return\n\n\n# EOF\n", "path": "chaco/tools/scatter_inspector.py"}]}
| 2,398 | 143 |
gh_patches_debug_58004
|
rasdani/github-patches
|
git_diff
|
CiviWiki__OpenCiviWiki-980
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restore SessionAuthenticationMiddleware
We aim to move away from having a heavy JavaScript front-end, preferring instead to use Django templates (and sprinkles of JS where needed). This means we can use SessionAuthenticationMiddleware.
This will also require restoring the default authentication classes in `settings.py`
</issue>
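As a rough, hypothetical sketch of what the issue is asking for — not the project's actual patch — session-based authentication generally comes down to the two settings shown below. Note that `django.contrib.auth.middleware.SessionAuthenticationMiddleware` was deprecated and later removed from Django (its behaviour was folded into `AuthenticationMiddleware`), so on modern Django only the REST framework side needs to change.

```python
# settings.py -- illustrative fragment only

MIDDLEWARE = [
    # ...
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    # No separate SessionAuthenticationMiddleware is needed on current Django.
    # ...
]

REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        # Use Django's session machinery for the REST API as well.
        "rest_framework.authentication.SessionAuthentication",
    ),
    # ...
}
```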
<code>
[start of project/core/settings.py]
1 """
2 Django settings for civiwiki project.
3 Darius Calliet May 12, 2016
4
5 Production settings file to select proper environment variables.
6 """
7 import os
8
9 # False if not in os.environ
10 DEBUG = os.getenv("DEBUG", False)
11
12 # defaults to second value if not found in os.environ
13 DJANGO_HOST = os.getenv("DJANGO_HOST", "LOCALHOST")
14
15 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
16 SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", "TEST_KEY_FOR_DEVELOPMENT")
17 ALLOWED_HOSTS = [".herokuapp.com", ".civiwiki.org", "127.0.0.1", "localhost", "0.0.0.0"]
18
19 INSTALLED_APPS = (
20 "django.contrib.admin",
21 "django.contrib.auth",
22 "django.contrib.contenttypes",
23 "django.contrib.sessions",
24 "django.contrib.messages",
25 "django.contrib.staticfiles",
26 "django_extensions",
27 "storages",
28 "core", # TODO: consider removing this, if we can move the decorators, etc. to an actual app
29 "api",
30 "rest_framework",
31 "accounts",
32 "threads",
33 "frontend_views",
34 "notifications",
35 "corsheaders",
36 "taggit",
37 )
38
39 MIDDLEWARE = [
40 "corsheaders.middleware.CorsMiddleware",
41 "django.middleware.security.SecurityMiddleware",
42 "whitenoise.middleware.WhiteNoiseMiddleware",
43 "django.contrib.sessions.middleware.SessionMiddleware",
44 "django.middleware.common.CommonMiddleware",
45 "django.middleware.csrf.CsrfViewMiddleware",
46 "django.contrib.auth.middleware.AuthenticationMiddleware",
47 # 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
48 "django.contrib.messages.middleware.MessageMiddleware",
49 "django.middleware.clickjacking.XFrameOptionsMiddleware",
50 ]
51
52 CSRF_USE_SESSIONS = (
53 True # Store the CSRF token in the users session instead of in a cookie
54 )
55
56 CORS_ORIGIN_ALLOW_ALL = True
57 ROOT_URLCONF = "core.urls"
58 LOGIN_URL = "/login"
59
60 # SSL Setup
61 if DJANGO_HOST != "LOCALHOST":
62 SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
63 SECURE_SSL_REDIRECT = True
64 SESSION_COOKIE_SECURE = True
65 CSRF_COOKIE_SECURE = True
66
67 # Internationalization & Localization
68 LANGUAGE_CODE = "en-us"
69 TIME_ZONE = "UTC"
70 USE_I18N = True
71 USE_L10N = True
72 USE_TZ = True
73
74 TEMPLATES = [
75 {
76 "BACKEND": "django.template.backends.django.DjangoTemplates",
77 "DIRS": [
78 os.path.join(BASE_DIR, "threads/templates/threads"), os.path.join(BASE_DIR, "accounts/templates/accounts")
79 ], # TODO: Add non-webapp template directory
80 "APP_DIRS": True,
81 "OPTIONS": {
82 "context_processors": [
83 "django.template.context_processors.debug",
84 "django.template.context_processors.request",
85 "django.contrib.auth.context_processors.auth",
86 "django.contrib.messages.context_processors.messages",
87 ],
88 },
89 },
90 ]
91
92 WSGI_APPLICATION = "core.wsgi.application"
93
94 # Apex Contact for Production Errors
95 ADMINS = [("Development Team", "[email protected]")]
96
97 # AWS S3 Setup
98 if "AWS_STORAGE_BUCKET_NAME" not in os.environ:
99 MEDIA_URL = "/media/"
100 MEDIA_ROOT = os.path.join(BASE_DIR, "media")
101 else:
102 AWS_STORAGE_BUCKET_NAME = os.getenv("AWS_STORAGE_BUCKET_NAME")
103 AWS_S3_ACCESS_KEY_ID = os.getenv("AWS_S3_ACCESS_KEY_ID")
104 AWS_S3_SECRET_ACCESS_KEY = os.getenv("AWS_S3_SECRET_ACCESS_KEY")
105 DEFAULT_FILE_STORAGE = "storages.backends.s3boto.S3BotoStorage"
106 AWS_S3_SECURE_URLS = False
107 AWS_QUERYSTRING_AUTH = False
108
109 STATIC_URL = "/static/"
110 STATICFILES_DIRS = (os.path.join(BASE_DIR, "threads/templates/static"),)
111 STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
112
113 # TODO: re-organize and simplify staticfiles settings
114 if "CIVIWIKI_LOCAL_NAME" not in os.environ:
115 STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
116
117 # Use DATABASE_URL in production
118 DATABASE_URL = os.getenv("DATABASE_URL")
119
120 if DATABASE_URL is not None:
121 DATABASES = {"default": DATABASE_URL}
122 else:
123 # Default to sqlite for simplicity in development
124 DATABASES = {
125 "default": {
126 "ENGINE": "django.db.backends.sqlite3",
127 "NAME": BASE_DIR + "/" + "db.sqlite3",
128 }
129 }
130
131 # Email Backend Setup
132 if "EMAIL_HOST" not in os.environ:
133 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
134 EMAIL_HOST_USER = "[email protected]"
135 else:
136 EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
137 EMAIL_HOST = os.getenv("EMAIL_HOST")
138 EMAIL_PORT = os.getenv("EMAIL_PORT")
139 EMAIL_HOST_USER = os.getenv("EMAIL_HOST_USER")
140 EMAIL_HOST_PASSWORD = os.getenv("EMAIL_HOST_PASSWORD")
141 EMAIL_USE_SSL = True
142 DEFAULT_FROM_EMAIL = EMAIL_HOST
143
144 # Notification API Settings
145 NOTIFICATIONS_SOFT_DELETE = True
146 NOTIFICATIONS_USE_JSONFIELD = True
147
148 # Django REST API Settings
149 DEFAULT_RENDERER_CLASSES = ("rest_framework.renderers.JSONRenderer",)
150
151 DEFAULT_AUTHENTICATION_CLASSES = ("rest_framework.authentication.BasicAuthentication",)
152
153 if DEBUG:
154 # Browsable HTML - Enabled only in Debug mode (dev)
155 DEFAULT_RENDERER_CLASSES = DEFAULT_RENDERER_CLASSES + (
156 "rest_framework.renderers.BrowsableAPIRenderer",
157 )
158
159 DEFAULT_AUTHENTICATION_CLASSES = (
160 "api.authentication.CsrfExemptSessionAuthentication",
161 ) + DEFAULT_AUTHENTICATION_CLASSES
162
163 REST_FRAMEWORK = {
164 "DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
165 "DEFAULT_RENDERER_CLASSES": DEFAULT_RENDERER_CLASSES,
166 "DEFAULT_AUTHENTICATION_CLASSES": DEFAULT_AUTHENTICATION_CLASSES,
167 }
168
169 # CORS Settings
170 CORS_ORIGIN_ALLOW_ALL = True
171
172 # Custom User model
173 AUTH_USER_MODEL = 'accounts.User'
174
175 APPEND_SLASH = False
176
177 DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'
178
179 LOGIN_REDIRECT_URL = '/'
180
181 AUTH_PASSWORD_VALIDATORS = [
182 {
183 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
184 },
185 {
186 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
187 'OPTIONS': {
188 'min_length': 8,
189 }
190 },
191 {
192 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
193 },
194 {
195 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
196 },
197 ]
198
[end of project/core/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/project/core/settings.py b/project/core/settings.py
--- a/project/core/settings.py
+++ b/project/core/settings.py
@@ -148,7 +148,7 @@
# Django REST API Settings
DEFAULT_RENDERER_CLASSES = ("rest_framework.renderers.JSONRenderer",)
-DEFAULT_AUTHENTICATION_CLASSES = ("rest_framework.authentication.BasicAuthentication",)
+DEFAULT_AUTHENTICATION_CLASSES = ("rest_framework.authentication.SessionAuthentication",)
if DEBUG:
# Browsable HTML - Enabled only in Debug mode (dev)
|
{"golden_diff": "diff --git a/project/core/settings.py b/project/core/settings.py\n--- a/project/core/settings.py\n+++ b/project/core/settings.py\n@@ -148,7 +148,7 @@\n # Django REST API Settings\n DEFAULT_RENDERER_CLASSES = (\"rest_framework.renderers.JSONRenderer\",)\n \n-DEFAULT_AUTHENTICATION_CLASSES = (\"rest_framework.authentication.BasicAuthentication\",)\n+DEFAULT_AUTHENTICATION_CLASSES = (\"rest_framework.authentication.SessionAuthentication\",)\n \n if DEBUG:\n # Browsable HTML - Enabled only in Debug mode (dev)\n", "issue": "Restore SessionAuthenticationMiddleware\nWe aim to move away from having a heavy JavaScript front-end, preferring instead to use Django templates (and sprinkles of JS where needed). This means we can use SessionAuthenticationMiddleware.\r\n\r\nThis will also require restoring the default authentication classes in `settings.py`\n", "before_files": [{"content": "\"\"\"\nDjango settings for civiwiki project.\nDarius Calliet May 12, 2016\n\nProduction settings file to select proper environment variables.\n\"\"\"\nimport os\n\n# False if not in os.environ\nDEBUG = os.getenv(\"DEBUG\", False)\n\n# defaults to second value if not found in os.environ\nDJANGO_HOST = os.getenv(\"DJANGO_HOST\", \"LOCALHOST\")\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nSECRET_KEY = os.getenv(\"DJANGO_SECRET_KEY\", \"TEST_KEY_FOR_DEVELOPMENT\")\nALLOWED_HOSTS = [\".herokuapp.com\", \".civiwiki.org\", \"127.0.0.1\", \"localhost\", \"0.0.0.0\"]\n\nINSTALLED_APPS = (\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django_extensions\",\n \"storages\",\n \"core\", # TODO: consider removing this, if we can move the decorators, etc. 
to an actual app\n \"api\",\n \"rest_framework\",\n \"accounts\",\n \"threads\",\n \"frontend_views\",\n \"notifications\",\n \"corsheaders\",\n \"taggit\",\n)\n\nMIDDLEWARE = [\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n # 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCSRF_USE_SESSIONS = (\n True # Store the CSRF token in the users session instead of in a cookie\n)\n\nCORS_ORIGIN_ALLOW_ALL = True\nROOT_URLCONF = \"core.urls\"\nLOGIN_URL = \"/login\"\n\n# SSL Setup\nif DJANGO_HOST != \"LOCALHOST\":\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n SECURE_SSL_REDIRECT = True\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n\n# Internationalization & Localization\nLANGUAGE_CODE = \"en-us\"\nTIME_ZONE = \"UTC\"\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\n os.path.join(BASE_DIR, \"threads/templates/threads\"), os.path.join(BASE_DIR, \"accounts/templates/accounts\")\n ], # TODO: Add non-webapp template directory\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"core.wsgi.application\"\n\n# Apex Contact for Production Errors\nADMINS = [(\"Development Team\", \"[email protected]\")]\n\n# AWS S3 Setup\nif \"AWS_STORAGE_BUCKET_NAME\" not in os.environ:\n MEDIA_URL = \"/media/\"\n MEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\nelse:\n AWS_STORAGE_BUCKET_NAME = os.getenv(\"AWS_STORAGE_BUCKET_NAME\")\n AWS_S3_ACCESS_KEY_ID = os.getenv(\"AWS_S3_ACCESS_KEY_ID\")\n AWS_S3_SECRET_ACCESS_KEY = os.getenv(\"AWS_S3_SECRET_ACCESS_KEY\")\n DEFAULT_FILE_STORAGE = \"storages.backends.s3boto.S3BotoStorage\"\n AWS_S3_SECURE_URLS = False\n AWS_QUERYSTRING_AUTH = False\n\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = (os.path.join(BASE_DIR, \"threads/templates/static\"),)\nSTATIC_ROOT = os.path.join(BASE_DIR, \"staticfiles\")\n\n# TODO: re-organize and simplify staticfiles settings\nif \"CIVIWIKI_LOCAL_NAME\" not in os.environ:\n STATICFILES_STORAGE = \"whitenoise.storage.CompressedManifestStaticFilesStorage\"\n\n# Use DATABASE_URL in production\nDATABASE_URL = os.getenv(\"DATABASE_URL\")\n\nif DATABASE_URL is not None:\n DATABASES = {\"default\": DATABASE_URL}\nelse:\n # Default to sqlite for simplicity in development\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": BASE_DIR + \"/\" + \"db.sqlite3\",\n }\n }\n\n# Email Backend Setup\nif \"EMAIL_HOST\" not in os.environ:\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n EMAIL_HOST_USER = \"[email protected]\"\nelse:\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\n EMAIL_HOST = os.getenv(\"EMAIL_HOST\")\n EMAIL_PORT = os.getenv(\"EMAIL_PORT\")\n EMAIL_HOST_USER = os.getenv(\"EMAIL_HOST_USER\")\n EMAIL_HOST_PASSWORD = 
os.getenv(\"EMAIL_HOST_PASSWORD\")\n EMAIL_USE_SSL = True\n DEFAULT_FROM_EMAIL = EMAIL_HOST\n\n# Notification API Settings\nNOTIFICATIONS_SOFT_DELETE = True\nNOTIFICATIONS_USE_JSONFIELD = True\n\n# Django REST API Settings\nDEFAULT_RENDERER_CLASSES = (\"rest_framework.renderers.JSONRenderer\",)\n\nDEFAULT_AUTHENTICATION_CLASSES = (\"rest_framework.authentication.BasicAuthentication\",)\n\nif DEBUG:\n # Browsable HTML - Enabled only in Debug mode (dev)\n DEFAULT_RENDERER_CLASSES = DEFAULT_RENDERER_CLASSES + (\n \"rest_framework.renderers.BrowsableAPIRenderer\",\n )\n\n DEFAULT_AUTHENTICATION_CLASSES = (\n \"api.authentication.CsrfExemptSessionAuthentication\",\n ) + DEFAULT_AUTHENTICATION_CLASSES\n\nREST_FRAMEWORK = {\n \"DEFAULT_PERMISSION_CLASSES\": (\"rest_framework.permissions.IsAuthenticated\",),\n \"DEFAULT_RENDERER_CLASSES\": DEFAULT_RENDERER_CLASSES,\n \"DEFAULT_AUTHENTICATION_CLASSES\": DEFAULT_AUTHENTICATION_CLASSES,\n}\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# Custom User model\nAUTH_USER_MODEL = 'accounts.User'\n\nAPPEND_SLASH = False\n\nDEFAULT_AUTO_FIELD = 'django.db.models.AutoField'\n\nLOGIN_REDIRECT_URL = '/'\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n 'OPTIONS': {\n 'min_length': 8,\n }\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n", "path": "project/core/settings.py"}]}
| 2,488 | 108 |
gh_patches_debug_38223
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-432
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement X-PyPI-Last-Serial for the File View
The view that serves files doesn't implement the `X-PyPI-Last-Serial` header, however bandersnatch needs that header, so we need to implement it.
</issue>
<code>
[start of warehouse/packaging/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import fs.errors
14
15 from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
16 from pyramid.response import FileIter, Response
17 from pyramid.view import view_config
18 from sqlalchemy.orm.exc import NoResultFound
19
20 from warehouse.accounts.models import User
21 from warehouse.cache.http import cache_control
22 from warehouse.cache.origin import origin_cache
23 from warehouse.packaging.interfaces import IDownloadStatService
24 from warehouse.packaging.models import Release, File, Role
25
26
27 @view_config(
28 route_name="packaging.project",
29 renderer="packaging/detail.html",
30 decorator=[
31 cache_control(1 * 24 * 60 * 60), # 1 day
32 origin_cache(7 * 24 * 60 * 60), # 7 days
33 ],
34 )
35 def project_detail(project, request):
36 if project.name != request.matchdict.get("name", project.name):
37 return HTTPMovedPermanently(
38 request.current_route_url(name=project.name),
39 )
40
41 try:
42 release = project.releases.order_by(
43 Release._pypi_ordering.desc()
44 ).limit(1).one()
45 except NoResultFound:
46 raise HTTPNotFound from None
47
48 return release_detail(release, request)
49
50
51 @view_config(
52 route_name="packaging.release",
53 renderer="packaging/detail.html",
54 decorator=[
55 cache_control(7 * 24 * 60 * 60), # 7 days
56 origin_cache(30 * 24 * 60 * 60), # 30 days
57 ],
58 )
59 def release_detail(release, request):
60 project = release.project
61
62 if project.name != request.matchdict.get("name", project.name):
63 return HTTPMovedPermanently(
64 request.current_route_url(name=project.name),
65 )
66
67 # Get all of the registered versions for this Project, in order of newest
68 # to oldest.
69 all_releases = (
70 project.releases
71 .with_entities(Release.version, Release.created)
72 .order_by(Release._pypi_ordering.desc())
73 .all()
74 )
75
76 # Get all of the maintainers for this project.
77 maintainers = [
78 r.user
79 for r in (
80 request.db.query(Role)
81 .join(User)
82 .filter(Role.project == project)
83 .distinct(User.username)
84 .order_by(User.username)
85 .all()
86 )
87 ]
88
89 stats_svc = request.find_service(IDownloadStatService)
90
91 return {
92 "project": project,
93 "release": release,
94 "files": release.files.all(),
95 "all_releases": all_releases,
96 "maintainers": maintainers,
97 "download_stats": {
98 "daily": stats_svc.get_daily_stats(project.name),
99 "weekly": stats_svc.get_weekly_stats(project.name),
100 "monthly": stats_svc.get_monthly_stats(project.name),
101 },
102 }
103
104
105 @view_config(
106 route_name="packaging.file",
107 decorator=[
108 cache_control(365 * 24 * 60 * 60), # 1 year
109 ],
110 )
111 def packages(request):
112 # The amount of logic that we can do in this view is very limited, this
113 # view needs to be able to be handled by Fastly directly hitting S3 instead
114 # of actually hitting this view. This more or less means that we're limited
115 # to just setting headers and serving the actual file. In addition the
116 # headers that we can set, have to be able to be determined at file upload
117 # time instead of dynamically.
118
119 # Grab the path of the file that we're attempting to serve
120 path = request.matchdict["path"]
121
122 # We need to look up the File that is associated with this path, either the
123 # package path or the pgp path. If that doesn't exist then we'll bail out
124 # early with a 404.
125 try:
126 file_ = (
127 request.db.query(File)
128 .filter((File.path == path) | (File.pgp_path == path))
129 .one()
130 )
131 except NoResultFound:
132 raise HTTPNotFound from None
133
134 # If this request is for a PGP signature, and the file doesn't have a PGP
135 # signature, then we can go ahead and 404 now before hitting the file
136 # storage.
137 if path == file_.pgp_path and not file_.has_pgp_signature:
138 raise HTTPNotFound
139
140 # We also need to get the X-PyPI-Last-Serial for the project associated
141 # with this file. Bandersnatch (and other mirroring clients) will use this
142 # to determine what kind of action to take if the MD5 hash does not match
143 # what it expected.
144 # TODO: Get the X-PyPI-Last-Serial number for this.
145
146 # Try to open the file, streaming if possible, and if this file doesn't
147 # exist then we'll return a 404 error. However we'll log an error because
148 # if the database thinks we have a file, then a file should exist here.
149 try:
150 # TODO: We need to use mode="rb" here because this is a binary file
151 # and we don't want Python to attempt to decode it. However S3FS
152 # checks explicitly for mode="r-" to support streaming access.
153 # We need to get S3FS so that it support rb- as well as r-.
154 f = request.registry["filesystems"]["packages"].open(path, mode="rb")
155 except fs.errors.ResourceNotFoundError:
156 # TODO: Log an error here, this file doesn't exists for some reason,
157 # but it should because the database thinks it should.
158 raise HTTPNotFound from None
159
160 # If the path we're accessing is the path for the package itself, as
161 # opposed to the path for the signature, then we can include a
162 # Content-Length header.
163 content_length = None
164 if path == file_.path:
165 content_length = file_.size
166
167 return Response(
168 # If we have a wsgi.file_wrapper, we'll want to use that so that, if
169 # possible, this will use an optimized method of sending. Otherwise
170 # we'll just use Pyramid's FileIter as a fallback.
171 app_iter=request.environ.get("wsgi.file_wrapper", FileIter)(f),
172 # We use application/octet-stream instead of something nicer because
173 # different HTTP libraries will treat different combinations of
174 # Content-Type and Content-Encoding differently. The only thing that
175 # works sanely across all things without having something in the middle
176 # decide it can decompress the result to "help" the end user is with
177         # Content-Type: application/octet-stream and no Content-Encoding.
178 content_type="application/octet-stream",
179 content_encoding=None,
180 # We need to specify an ETag for this response. Since ETags compared
181 # between URLs have no meaning (and thus, is safe for two URLs to share
182 # the same ETag) we will just use the MD5 hash of the package as our
183 # ETag.
184 etag=file_.md5_digest,
185 # Similarly to the ETag header, we'll just use the date that the file
186 # was uploaded as the Last-Modified header.
187 last_modified=file_.upload_time,
188 # If we have a Content-Length, we'll go ahead and use it here to
189 # hopefully enable the server and clients alike to be smarter about how
190 # they handle downloading this response.
191 content_length=content_length,
192 )
193
[end of warehouse/packaging/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py
--- a/warehouse/packaging/views.py
+++ b/warehouse/packaging/views.py
@@ -15,13 +15,14 @@
from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
from pyramid.response import FileIter, Response
from pyramid.view import view_config
+from sqlalchemy import func
from sqlalchemy.orm.exc import NoResultFound
from warehouse.accounts.models import User
from warehouse.cache.http import cache_control
from warehouse.cache.origin import origin_cache
from warehouse.packaging.interfaces import IDownloadStatService
-from warehouse.packaging.models import Release, File, Role
+from warehouse.packaging.models import Release, File, Role, JournalEntry
@view_config(
@@ -137,12 +138,6 @@
if path == file_.pgp_path and not file_.has_pgp_signature:
raise HTTPNotFound
- # We also need to get the X-PyPI-Last-Serial for the project associated
- # with this file. Bandersnatch (and other mirroring clients) will use this
- # to determine what kind of action to take if the MD5 hash does not match
- # what it expected.
- # TODO: Get the X-PyPI-Last-Serial number for this.
-
# Try to open the file, streaming if possible, and if this file doesn't
# exist then we'll return a 404 error. However we'll log an error because
# if the database thinks we have a file, then a file should exist here.
@@ -164,7 +159,7 @@
if path == file_.path:
content_length = file_.size
- return Response(
+ resp = Response(
# If we have a wsgi.file_wrapper, we'll want to use that so that, if
# possible, this will use an optimized method of sending. Otherwise
# we'll just use Pyramid's FileIter as a fallback.
@@ -190,3 +185,16 @@
# they handle downloading this response.
content_length=content_length,
)
+
+ # We also need to get the X-PyPI-Last-Serial for the project associated
+ # with this file. Bandersnatch (and other mirroring clients) will use this
+ # to determine what kind of action to take if the MD5 hash does not match
+ # what it expected.
+ serial = (
+ request.db.query(func.max(JournalEntry.id))
+ .filter(JournalEntry.name == file_.name)
+ .scalar()
+ )
+ resp.headers["X-PyPI-Last-Serial"] = serial or 0
+
+ return resp
|
{"golden_diff": "diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py\n--- a/warehouse/packaging/views.py\n+++ b/warehouse/packaging/views.py\n@@ -15,13 +15,14 @@\n from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\n from pyramid.response import FileIter, Response\n from pyramid.view import view_config\n+from sqlalchemy import func\n from sqlalchemy.orm.exc import NoResultFound\n \n from warehouse.accounts.models import User\n from warehouse.cache.http import cache_control\n from warehouse.cache.origin import origin_cache\n from warehouse.packaging.interfaces import IDownloadStatService\n-from warehouse.packaging.models import Release, File, Role\n+from warehouse.packaging.models import Release, File, Role, JournalEntry\n \n \n @view_config(\n@@ -137,12 +138,6 @@\n if path == file_.pgp_path and not file_.has_pgp_signature:\n raise HTTPNotFound\n \n- # We also need to get the X-PyPI-Last-Serial for the project associated\n- # with this file. Bandersnatch (and other mirroring clients) will use this\n- # to determine what kind of action to take if the MD5 hash does not match\n- # what it expected.\n- # TODO: Get the X-PyPI-Last-Serial number for this.\n-\n # Try to open the file, streaming if possible, and if this file doesn't\n # exist then we'll return a 404 error. However we'll log an error because\n # if the database thinks we have a file, then a file should exist here.\n@@ -164,7 +159,7 @@\n if path == file_.path:\n content_length = file_.size\n \n- return Response(\n+ resp = Response(\n # If we have a wsgi.file_wrapper, we'll want to use that so that, if\n # possible, this will use an optimized method of sending. Otherwise\n # we'll just use Pyramid's FileIter as a fallback.\n@@ -190,3 +185,16 @@\n # they handle downloading this response.\n content_length=content_length,\n )\n+\n+ # We also need to get the X-PyPI-Last-Serial for the project associated\n+ # with this file. 
Bandersnatch (and other mirroring clients) will use this\n+ # to determine what kind of action to take if the MD5 hash does not match\n+ # what it expected.\n+ serial = (\n+ request.db.query(func.max(JournalEntry.id))\n+ .filter(JournalEntry.name == file_.name)\n+ .scalar()\n+ )\n+ resp.headers[\"X-PyPI-Last-Serial\"] = serial or 0\n+\n+ return resp\n", "issue": "Implement X-PyPI-Last-Serial for the File View\nThe view that serves files doesn't implement the `X-PyPI-Last-Serial` header, however bandersnatch needs that header, so we need to implement it.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport fs.errors\n\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.response import FileIter, Response\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.interfaces import IDownloadStatService\nfrom warehouse.packaging.models import Release, File, Role\n\n\n@view_config(\n route_name=\"packaging.project\",\n renderer=\"packaging/detail.html\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(7 * 24 * 60 * 60), # 7 days\n ],\n)\ndef project_detail(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_url(name=project.name),\n )\n\n try:\n release = project.releases.order_by(\n Release._pypi_ordering.desc()\n ).limit(1).one()\n except NoResultFound:\n raise HTTPNotFound from None\n\n return release_detail(release, request)\n\n\n@view_config(\n route_name=\"packaging.release\",\n renderer=\"packaging/detail.html\",\n decorator=[\n cache_control(7 * 24 * 60 * 60), # 7 days\n origin_cache(30 * 24 * 60 * 60), # 30 days\n ],\n)\ndef release_detail(release, request):\n project = release.project\n\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_url(name=project.name),\n )\n\n # Get all of the registered versions for this Project, in order of newest\n # to oldest.\n all_releases = (\n project.releases\n .with_entities(Release.version, Release.created)\n .order_by(Release._pypi_ordering.desc())\n .all()\n )\n\n # Get all of the maintainers for this project.\n maintainers = [\n r.user\n for r in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .order_by(User.username)\n .all()\n )\n ]\n\n stats_svc = request.find_service(IDownloadStatService)\n\n return {\n \"project\": project,\n \"release\": release,\n \"files\": release.files.all(),\n \"all_releases\": all_releases,\n \"maintainers\": maintainers,\n \"download_stats\": {\n \"daily\": stats_svc.get_daily_stats(project.name),\n \"weekly\": stats_svc.get_weekly_stats(project.name),\n \"monthly\": stats_svc.get_monthly_stats(project.name),\n },\n 
}\n\n\n@view_config(\n route_name=\"packaging.file\",\n decorator=[\n cache_control(365 * 24 * 60 * 60), # 1 year\n ],\n)\ndef packages(request):\n # The amount of logic that we can do in this view is very limited, this\n # view needs to be able to be handled by Fastly directly hitting S3 instead\n # of actually hitting this view. This more or less means that we're limited\n # to just setting headers and serving the actual file. In addition the\n # headers that we can set, have to be able to be determined at file upload\n # time instead of dynamically.\n\n # Grab the path of the file that we're attempting to serve\n path = request.matchdict[\"path\"]\n\n # We need to look up the File that is associated with this path, either the\n # package path or the pgp path. If that doesn't exist then we'll bail out\n # early with a 404.\n try:\n file_ = (\n request.db.query(File)\n .filter((File.path == path) | (File.pgp_path == path))\n .one()\n )\n except NoResultFound:\n raise HTTPNotFound from None\n\n # If this request is for a PGP signature, and the file doesn't have a PGP\n # signature, then we can go ahead and 404 now before hitting the file\n # storage.\n if path == file_.pgp_path and not file_.has_pgp_signature:\n raise HTTPNotFound\n\n # We also need to get the X-PyPI-Last-Serial for the project associated\n # with this file. Bandersnatch (and other mirroring clients) will use this\n # to determine what kind of action to take if the MD5 hash does not match\n # what it expected.\n # TODO: Get the X-PyPI-Last-Serial number for this.\n\n # Try to open the file, streaming if possible, and if this file doesn't\n # exist then we'll return a 404 error. However we'll log an error because\n # if the database thinks we have a file, then a file should exist here.\n try:\n # TODO: We need to use mode=\"rb\" here because this is a binary file\n # and we don't want Python to attempt to decode it. However S3FS\n # checks explicitly for mode=\"r-\" to support streaming access.\n # We need to get S3FS so that it support rb- as well as r-.\n f = request.registry[\"filesystems\"][\"packages\"].open(path, mode=\"rb\")\n except fs.errors.ResourceNotFoundError:\n # TODO: Log an error here, this file doesn't exists for some reason,\n # but it should because the database thinks it should.\n raise HTTPNotFound from None\n\n # If the path we're accessing is the path for the package itself, as\n # opposed to the path for the signature, then we can include a\n # Content-Length header.\n content_length = None\n if path == file_.path:\n content_length = file_.size\n\n return Response(\n # If we have a wsgi.file_wrapper, we'll want to use that so that, if\n # possible, this will use an optimized method of sending. Otherwise\n # we'll just use Pyramid's FileIter as a fallback.\n app_iter=request.environ.get(\"wsgi.file_wrapper\", FileIter)(f),\n # We use application/octet-stream instead of something nicer because\n # different HTTP libraries will treat different combinations of\n # Content-Type and Content-Encoding differently. The only thing that\n # works sanely across all things without having something in the middle\n # decide it can decompress the result to \"help\" the end user is with\n # Content-Type: applicaton/octet-stream and no Content-Encoding.\n content_type=\"application/octet-stream\",\n content_encoding=None,\n # We need to specify an ETag for this response. 
Since ETags compared\n # between URLs have no meaning (and thus, is safe for two URLs to share\n # the same ETag) we will just use the MD5 hash of the package as our\n # ETag.\n etag=file_.md5_digest,\n # Similarly to the ETag header, we'll just use the date that the file\n # was uploaded as the Last-Modified header.\n last_modified=file_.upload_time,\n # If we have a Content-Length, we'll go ahead and use it here to\n # hopefully enable the server and clients alike to be smarter about how\n # they handle downloading this response.\n content_length=content_length,\n )\n", "path": "warehouse/packaging/views.py"}]}
| 2,832 | 614 |
gh_patches_debug_27545
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-227
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error integrating with Channels if 'lifespan' is not specified in router
I'm not entirely sure if I should be posting this here or on `channels`.
I'm using v0.3.12, which I believe has already introduced the new `lifespan` protocol defined in asgiref. But this causes an error with `channels`' router:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/uvicorn/lifespan.py", line 29, in run
await self.asgi(self.receive, self.send)
File "/usr/local/lib/python3.6/site-packages/uvicorn/middleware/message_logger.py", line 51, in __call__
inner = self.app(self.scope)
File "/usr/local/lib/python3.6/site-packages/channels/routing.py", line 58, in __call__
raise ValueError("No application configured for scope type %r" % scope["type"])
ValueError: No application configured for scope type 'lifespan'
```
My `routing.py` file looks like this:
```python
application = ProtocolTypeRouter({
# Empty for now (http->django views is added by default)
'websocket': JWTWebsocketMiddleware(
URLRouter(urlpatterns)
)
})
```
**EDIT**: Sorry, my workaround wasn't actually working, as you'll need at least one `path` in the `URLRouter`, so I've removed it.
To temporarily get around this, I had to downgrade to `v0.3.9`.
</issue>
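
For reference, one way to sidestep the error until the router understands `lifespan` scopes is to wrap the application so those scopes are answered before they ever reach Channels. The sketch below is an illustration only: the class name is invented, it assumes the double-callable ASGI style of that era, and the message types follow the lifespan spec (`lifespan.startup` / `lifespan.shutdown`), which older drafts may name differently.

```python
class LifespanShim:
    """Hypothetical wrapper: answers 'lifespan' scopes itself, delegates the rest."""

    def __init__(self, app):
        self.app = app  # e.g. the ProtocolTypeRouter application above

    def __call__(self, scope):
        if scope["type"] == "lifespan":
            return self._lifespan
        return self.app(scope)

    async def _lifespan(self, receive, send):
        # Acknowledge startup/shutdown so the server's lifespan task completes.
        while True:
            message = await receive()
            if message["type"] == "lifespan.startup":
                await send({"type": "lifespan.startup.complete"})
            elif message["type"] == "lifespan.shutdown":
                await send({"type": "lifespan.shutdown.complete"})
                return
```

Wrapping the routing entry point as `application = LifespanShim(ProtocolTypeRouter({...}))` would keep the `ValueError` from being raised; downgrading, as noted in the issue, remains the simpler stopgap.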
<code>
[start of uvicorn/middleware/message_logger.py]
1 import logging
2
3 PLACEHOLDER_FORMAT = {
4 'body': '<{length} bytes>',
5 'bytes': '<{length} bytes>',
6 'text': '<{length} chars>',
7 'headers': '<...>',
8 }
9
10
11 def message_with_placeholders(message):
12 """
13 Return an ASGI message, with any body-type content omitted and replaced
14 with a placeholder.
15 """
16 new_message = message.copy()
17 for attr in PLACEHOLDER_FORMAT.keys():
18 if message.get(attr) is not None:
19 content = message[attr]
20 placeholder = PLACEHOLDER_FORMAT[attr].format(length=len(content))
21 new_message[attr] = placeholder
22 return new_message
23
24
25 class MessageLoggerMiddleware:
26 def __init__(self, app):
27 self.task_counter = 0
28 self.app = app
29 self.logger = logging.getLogger("uvicorn")
30
31 def __call__(self, scope):
32 self.task_counter += 1
33 return MessageLoggerResponder(scope, self.app, self.logger, self.task_counter)
34
35
36 class MessageLoggerResponder:
37 def __init__(self, scope, app, logger, task_counter):
38 self.scope = scope
39 self.app = app
40 self.logger = logger
41 self.task_counter = task_counter
42 self.client_addr = scope.get('client')
43
44 async def __call__(self, receive, send):
45 self._receive = receive
46 self._send = send
47 logged_scope = message_with_placeholders(self.scope)
48 log_text = '%s - ASGI [%d] Started %s'
49 self.logger.debug(log_text, self.client_addr, self.task_counter, logged_scope)
50 try:
51 inner = self.app(self.scope)
52 await inner(self.receive, self.send)
53 except:
54 log_text = '%s - ASGI [%d] Raised exception'
55 self.logger.debug(log_text, self.client_addr, self.task_counter)
56 raise
57 else:
58 log_text = '%s - ASGI [%d] Completed'
59 self.logger.debug(log_text, self.client_addr, self.task_counter)
60
61 async def receive(self):
62 message = await self._receive()
63 logged_message = message_with_placeholders(message)
64 log_text = '%s - ASGI [%d] Sent %s'
65 self.logger.debug(log_text, self.client_addr, self.task_counter, logged_message)
66 return message
67
68 async def send(self, message):
69 logged_message = message_with_placeholders(message)
70 log_text = '%s - ASGI [%d] Received %s'
71 self.logger.debug(log_text, self.client_addr, self.task_counter, logged_message)
72 await self._send(message)
73
[end of uvicorn/middleware/message_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/uvicorn/middleware/message_logger.py b/uvicorn/middleware/message_logger.py
--- a/uvicorn/middleware/message_logger.py
+++ b/uvicorn/middleware/message_logger.py
@@ -36,20 +36,27 @@
class MessageLoggerResponder:
def __init__(self, scope, app, logger, task_counter):
self.scope = scope
- self.app = app
self.logger = logger
self.task_counter = task_counter
self.client_addr = scope.get('client')
+ logged_scope = message_with_placeholders(scope)
+ log_text = '%s - ASGI [%d] Initialized %s'
+ self.logger.debug(log_text, self.client_addr, self.task_counter, logged_scope)
+ try:
+ self.inner = app(scope)
+ except:
+ log_text = '%s - ASGI [%d] Raised exception'
+ self.logger.debug(log_text, self.client_addr, self.task_counter)
+ raise
+
async def __call__(self, receive, send):
self._receive = receive
self._send = send
- logged_scope = message_with_placeholders(self.scope)
- log_text = '%s - ASGI [%d] Started %s'
- self.logger.debug(log_text, self.client_addr, self.task_counter, logged_scope)
+ log_text = '%s - ASGI [%d] Started task'
+ self.logger.debug(log_text, self.client_addr, self.task_counter)
try:
- inner = self.app(self.scope)
- await inner(self.receive, self.send)
+ await self.inner(self.receive, self.send)
except:
log_text = '%s - ASGI [%d] Raised exception'
self.logger.debug(log_text, self.client_addr, self.task_counter)
|
{"golden_diff": "diff --git a/uvicorn/middleware/message_logger.py b/uvicorn/middleware/message_logger.py\n--- a/uvicorn/middleware/message_logger.py\n+++ b/uvicorn/middleware/message_logger.py\n@@ -36,20 +36,27 @@\n class MessageLoggerResponder:\n def __init__(self, scope, app, logger, task_counter):\n self.scope = scope\n- self.app = app\n self.logger = logger\n self.task_counter = task_counter\n self.client_addr = scope.get('client')\n \n+ logged_scope = message_with_placeholders(scope)\n+ log_text = '%s - ASGI [%d] Initialized %s'\n+ self.logger.debug(log_text, self.client_addr, self.task_counter, logged_scope)\n+ try:\n+ self.inner = app(scope)\n+ except:\n+ log_text = '%s - ASGI [%d] Raised exception'\n+ self.logger.debug(log_text, self.client_addr, self.task_counter)\n+ raise\n+\n async def __call__(self, receive, send):\n self._receive = receive\n self._send = send\n- logged_scope = message_with_placeholders(self.scope)\n- log_text = '%s - ASGI [%d] Started %s'\n- self.logger.debug(log_text, self.client_addr, self.task_counter, logged_scope)\n+ log_text = '%s - ASGI [%d] Started task'\n+ self.logger.debug(log_text, self.client_addr, self.task_counter)\n try:\n- inner = self.app(self.scope)\n- await inner(self.receive, self.send)\n+ await self.inner(self.receive, self.send)\n except:\n log_text = '%s - ASGI [%d] Raised exception'\n self.logger.debug(log_text, self.client_addr, self.task_counter)\n", "issue": "Error integrating with Channels if 'lifespan' is not specified in router\nI'm not entirely sure if I should be posting this here or on `channels`.\r\n\r\nI'm using v0.3.12 which I believe has already introduced the new `lifespan` protocol defined in asgiref. But this causes an error with `channels`' router\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/uvicorn/lifespan.py\", line 29, in run\r\n await self.asgi(self.receive, self.send)\r\n File \"/usr/local/lib/python3.6/site-packages/uvicorn/middleware/message_logger.py\", line 51, in __call__\r\n inner = self.app(self.scope)\r\n File \"/usr/local/lib/python3.6/site-packages/channels/routing.py\", line 58, in __call__\r\n raise ValueError(\"No application configured for scope type %r\" % scope[\"type\"])\r\nValueError: No application configured for scope type 'lifespan'\r\n```\r\n\r\nMy `routing.py` file looks like this:\r\n\r\n```python\r\napplication = ProtocolTypeRouter({\r\n # Empty for now (http->django views is added by default)\r\n 'websocket': JWTWebsocketMiddleware(\r\n URLRouter(urlpatterns)\r\n )\r\n})\r\n```\r\n\r\n**EDIT**: Sorry my workaround wasn't actually working as you'll need at least one `path` in the `URLRouter`, so I've removed it.\r\n\r\nTo temporarily get around this, I had to downgrade to `v0.3.9`.\n", "before_files": [{"content": "import logging\n\nPLACEHOLDER_FORMAT = {\n 'body': '<{length} bytes>',\n 'bytes': '<{length} bytes>',\n 'text': '<{length} chars>',\n 'headers': '<...>',\n}\n\n\ndef message_with_placeholders(message):\n \"\"\"\n Return an ASGI message, with any body-type content omitted and replaced\n with a placeholder.\n \"\"\"\n new_message = message.copy()\n for attr in PLACEHOLDER_FORMAT.keys():\n if message.get(attr) is not None:\n content = message[attr]\n placeholder = PLACEHOLDER_FORMAT[attr].format(length=len(content))\n new_message[attr] = placeholder\n return new_message\n\n\nclass MessageLoggerMiddleware:\n def __init__(self, app):\n self.task_counter = 0\n self.app = app\n self.logger = logging.getLogger(\"uvicorn\")\n\n 
def __call__(self, scope):\n self.task_counter += 1\n return MessageLoggerResponder(scope, self.app, self.logger, self.task_counter)\n\n\nclass MessageLoggerResponder:\n def __init__(self, scope, app, logger, task_counter):\n self.scope = scope\n self.app = app\n self.logger = logger\n self.task_counter = task_counter\n self.client_addr = scope.get('client')\n\n async def __call__(self, receive, send):\n self._receive = receive\n self._send = send\n logged_scope = message_with_placeholders(self.scope)\n log_text = '%s - ASGI [%d] Started %s'\n self.logger.debug(log_text, self.client_addr, self.task_counter, logged_scope)\n try:\n inner = self.app(self.scope)\n await inner(self.receive, self.send)\n except:\n log_text = '%s - ASGI [%d] Raised exception'\n self.logger.debug(log_text, self.client_addr, self.task_counter)\n raise\n else:\n log_text = '%s - ASGI [%d] Completed'\n self.logger.debug(log_text, self.client_addr, self.task_counter)\n\n async def receive(self):\n message = await self._receive()\n logged_message = message_with_placeholders(message)\n log_text = '%s - ASGI [%d] Sent %s'\n self.logger.debug(log_text, self.client_addr, self.task_counter, logged_message)\n return message\n\n async def send(self, message):\n logged_message = message_with_placeholders(message)\n log_text = '%s - ASGI [%d] Received %s'\n self.logger.debug(log_text, self.client_addr, self.task_counter, logged_message)\n await self._send(message)\n", "path": "uvicorn/middleware/message_logger.py"}]}
| 1,563 | 389 |
gh_patches_debug_25857
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-1534
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement --manual-public-ip-logging-ok
Not having such flags slows down my development considerably.
https://github.com/letsencrypt/letsencrypt/pull/1125#issuecomment-150897419
</issue>
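
For context, the requested flag lets the manual plugin skip its interactive IP-logging prompt. A hypothetical invocation once the flag exists might look like the following; only the flag name comes from the issue title, and the rest of the command shape is an assumption:

```bash
letsencrypt certonly --authenticator manual \
    --manual-public-ip-logging-ok --manual-test-mode -d example.com
```

The `--manual-` prefix on the other option is likewise inferred from the pattern in the issue title, not confirmed by the issue itself.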
<code>
[start of letsencrypt/plugins/manual.py]
1 """Manual plugin."""
2 import os
3 import logging
4 import pipes
5 import shutil
6 import signal
7 import socket
8 import subprocess
9 import sys
10 import tempfile
11 import time
12
13 import zope.component
14 import zope.interface
15
16 from acme import challenges
17
18 from letsencrypt import errors
19 from letsencrypt import interfaces
20 from letsencrypt.plugins import common
21
22
23 logger = logging.getLogger(__name__)
24
25
26 class Authenticator(common.Plugin):
27 """Manual Authenticator.
28
29 This plugin requires user's manual intervention in setting up a HTTP
30 server for solving http-01 challenges and thus does not need to be
31 run as a privileged process. Alternatively shows instructions on how
32 to use Python's built-in HTTP server.
33
34 .. todo:: Support for `~.challenges.TLSSNI01`.
35
36 """
37 zope.interface.implements(interfaces.IAuthenticator)
38 zope.interface.classProvides(interfaces.IPluginFactory)
39 hidden = True
40
41 description = "Manually configure an HTTP server"
42
43 MESSAGE_TEMPLATE = """\
44 Make sure your web server displays the following content at
45 {uri} before continuing:
46
47 {validation}
48
49 Content-Type header MUST be set to {ct}.
50
51 If you don't have HTTP server configured, you can run the following
52 command on the target server (as root):
53
54 {command}
55 """
56
57 # a disclaimer about your current IP being transmitted to Let's Encrypt's servers.
58 IP_DISCLAIMER = """\
59 NOTE: The IP of this machine will be publicly logged as having requested this certificate. \
60 If you're running letsencrypt in manual mode on a machine that is not your server, \
61 please ensure you're okay with that.
62
63 Are you OK with your IP being logged?
64 """
65
66 # "cd /tmp/letsencrypt" makes sure user doesn't serve /root,
67 # separate "public_html" ensures that cert.pem/key.pem are not
68 # served and makes it more obvious that Python command will serve
69 # anything recursively under the cwd
70
71 CMD_TEMPLATE = """\
72 mkdir -p {root}/public_html/{achall.URI_ROOT_PATH}
73 cd {root}/public_html
74 printf "%s" {validation} > {achall.URI_ROOT_PATH}/{encoded_token}
75 # run only once per server:
76 $(command -v python2 || command -v python2.7 || command -v python2.6) -c \\
77 "import BaseHTTPServer, SimpleHTTPServer; \\
78 SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\
79 s = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\
80 s.serve_forever()" """
81 """Command template."""
82
83 def __init__(self, *args, **kwargs):
84 super(Authenticator, self).__init__(*args, **kwargs)
85 self._root = (tempfile.mkdtemp() if self.conf("test-mode")
86 else "/tmp/letsencrypt")
87 self._httpd = None
88
89 @classmethod
90 def add_parser_arguments(cls, add):
91 add("test-mode", action="store_true",
92 help="Test mode. Executes the manual command in subprocess.")
93
94 def prepare(self): # pylint: disable=missing-docstring,no-self-use
95 pass # pragma: no cover
96
97 def more_info(self): # pylint: disable=missing-docstring,no-self-use
98 return ("This plugin requires user's manual intervention in setting "
99 "up an HTTP server for solving http-01 challenges and thus "
100 "does not need to be run as a privileged process. "
101 "Alternatively shows instructions on how to use Python's "
102 "built-in HTTP server.")
103
104 def get_chall_pref(self, domain):
105 # pylint: disable=missing-docstring,no-self-use,unused-argument
106 return [challenges.HTTP01]
107
108 def perform(self, achalls): # pylint: disable=missing-docstring
109 responses = []
110 # TODO: group achalls by the same socket.gethostbyname(_ex)
111 # and prompt only once per server (one "echo -n" per domain)
112 for achall in achalls:
113 responses.append(self._perform_single(achall))
114 return responses
115
116 @classmethod
117 def _test_mode_busy_wait(cls, port):
118 while True:
119 time.sleep(1)
120 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
121 try:
122 sock.connect(("localhost", port))
123 except socket.error: # pragma: no cover
124 pass
125 else:
126 break
127 finally:
128 sock.close()
129
130 def _perform_single(self, achall):
131 # same path for each challenge response would be easier for
132 # users, but will not work if multiple domains point at the
133 # same server: default command doesn't support virtual hosts
134 response, validation = achall.response_and_validation()
135
136 port = (response.port if self.config.http01_port is None
137 else int(self.config.http01_port))
138 command = self.CMD_TEMPLATE.format(
139 root=self._root, achall=achall, response=response,
140 # TODO(kuba): pipes still necessary?
141 validation=pipes.quote(validation),
142 encoded_token=achall.chall.encode("token"),
143 ct=achall.CONTENT_TYPE, port=port)
144 if self.conf("test-mode"):
145 logger.debug("Test mode. Executing the manual command: %s", command)
146             # sh shipped with OS X doesn't support echo -n, but supports printf
147 try:
148 self._httpd = subprocess.Popen(
149 command,
150 # don't care about setting stdout and stderr,
151 # we're in test mode anyway
152 shell=True,
153 executable=None,
154 # "preexec_fn" is UNIX specific, but so is "command"
155 preexec_fn=os.setsid)
156 except OSError as error: # ValueError should not happen!
157 logger.debug(
158 "Couldn't execute manual command: %s", error, exc_info=True)
159 return False
160 logger.debug("Manual command running as PID %s.", self._httpd.pid)
161 # give it some time to bootstrap, before we try to verify
162 # (cert generation in case of simpleHttpS might take time)
163 self._test_mode_busy_wait(port)
164 if self._httpd.poll() is not None:
165 raise errors.Error("Couldn't execute manual command")
166 else:
167 if not zope.component.getUtility(interfaces.IDisplay).yesno(
168 self.IP_DISCLAIMER, "Yes", "No"):
169 raise errors.PluginError("Must agree to IP logging to proceed")
170
171 self._notify_and_wait(self.MESSAGE_TEMPLATE.format(
172 validation=validation, response=response,
173 uri=achall.chall.uri(achall.domain),
174 ct=achall.CONTENT_TYPE, command=command))
175
176 if response.simple_verify(
177 achall.chall, achall.domain,
178 achall.account_key.public_key(), self.config.http01_port):
179 return response
180 else:
181 logger.error(
182 "Self-verify of challenge failed, authorization abandoned.")
183 if self.conf("test-mode") and self._httpd.poll() is not None:
184 # simply verify cause command failure...
185 return False
186 return None
187
188 def _notify_and_wait(self, message): # pylint: disable=no-self-use
189 # TODO: IDisplay wraps messages, breaking the command
190 #answer = zope.component.getUtility(interfaces.IDisplay).notification(
191 # message=message, height=25, pause=True)
192 sys.stdout.write(message)
193 raw_input("Press ENTER to continue")
194
195 def cleanup(self, achalls):
196 # pylint: disable=missing-docstring,no-self-use,unused-argument
197 if self.conf("test-mode"):
198 assert self._httpd is not None, (
199 "cleanup() must be called after perform()")
200 if self._httpd.poll() is None:
201 logger.debug("Terminating manual command process")
202 os.killpg(self._httpd.pid, signal.SIGTERM)
203 else:
204 logger.debug("Manual command process already terminated "
205 "with %s code", self._httpd.returncode)
206 shutil.rmtree(self._root)
207
[end of letsencrypt/plugins/manual.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/letsencrypt/plugins/manual.py b/letsencrypt/plugins/manual.py
--- a/letsencrypt/plugins/manual.py
+++ b/letsencrypt/plugins/manual.py
@@ -90,6 +90,8 @@
def add_parser_arguments(cls, add):
add("test-mode", action="store_true",
help="Test mode. Executes the manual command in subprocess.")
+ add("public-ip-logging-ok", action="store_true",
+ help="Automatically allows public IP logging.")
def prepare(self): # pylint: disable=missing-docstring,no-self-use
pass # pragma: no cover
@@ -164,9 +166,10 @@
if self._httpd.poll() is not None:
raise errors.Error("Couldn't execute manual command")
else:
- if not zope.component.getUtility(interfaces.IDisplay).yesno(
- self.IP_DISCLAIMER, "Yes", "No"):
- raise errors.PluginError("Must agree to IP logging to proceed")
+ if not self.conf("public-ip-logging-ok"):
+ if not zope.component.getUtility(interfaces.IDisplay).yesno(
+ self.IP_DISCLAIMER, "Yes", "No"):
+ raise errors.PluginError("Must agree to IP logging to proceed")
self._notify_and_wait(self.MESSAGE_TEMPLATE.format(
validation=validation, response=response,
|
{"golden_diff": "diff --git a/letsencrypt/plugins/manual.py b/letsencrypt/plugins/manual.py\n--- a/letsencrypt/plugins/manual.py\n+++ b/letsencrypt/plugins/manual.py\n@@ -90,6 +90,8 @@\n def add_parser_arguments(cls, add):\n add(\"test-mode\", action=\"store_true\",\n help=\"Test mode. Executes the manual command in subprocess.\")\n+ add(\"public-ip-logging-ok\", action=\"store_true\",\n+ help=\"Automatically allows public IP logging.\")\n \n def prepare(self): # pylint: disable=missing-docstring,no-self-use\n pass # pragma: no cover\n@@ -164,9 +166,10 @@\n if self._httpd.poll() is not None:\n raise errors.Error(\"Couldn't execute manual command\")\n else:\n- if not zope.component.getUtility(interfaces.IDisplay).yesno(\n- self.IP_DISCLAIMER, \"Yes\", \"No\"):\n- raise errors.PluginError(\"Must agree to IP logging to proceed\")\n+ if not self.conf(\"public-ip-logging-ok\"):\n+ if not zope.component.getUtility(interfaces.IDisplay).yesno(\n+ self.IP_DISCLAIMER, \"Yes\", \"No\"):\n+ raise errors.PluginError(\"Must agree to IP logging to proceed\")\n \n self._notify_and_wait(self.MESSAGE_TEMPLATE.format(\n validation=validation, response=response,\n", "issue": "Implement --manual-public-ip-logging-ok\nNot having such flags slows down my development considerably.\n\nhttps://github.com/letsencrypt/letsencrypt/pull/1125#issuecomment-150897419\n\n", "before_files": [{"content": "\"\"\"Manual plugin.\"\"\"\nimport os\nimport logging\nimport pipes\nimport shutil\nimport signal\nimport socket\nimport subprocess\nimport sys\nimport tempfile\nimport time\n\nimport zope.component\nimport zope.interface\n\nfrom acme import challenges\n\nfrom letsencrypt import errors\nfrom letsencrypt import interfaces\nfrom letsencrypt.plugins import common\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Authenticator(common.Plugin):\n \"\"\"Manual Authenticator.\n\n This plugin requires user's manual intervention in setting up a HTTP\n server for solving http-01 challenges and thus does not need to be\n run as a privileged process. Alternatively shows instructions on how\n to use Python's built-in HTTP server.\n\n .. todo:: Support for `~.challenges.TLSSNI01`.\n\n \"\"\"\n zope.interface.implements(interfaces.IAuthenticator)\n zope.interface.classProvides(interfaces.IPluginFactory)\n hidden = True\n\n description = \"Manually configure an HTTP server\"\n\n MESSAGE_TEMPLATE = \"\"\"\\\nMake sure your web server displays the following content at\n{uri} before continuing:\n\n{validation}\n\nContent-Type header MUST be set to {ct}.\n\nIf you don't have HTTP server configured, you can run the following\ncommand on the target server (as root):\n\n{command}\n\"\"\"\n\n # a disclaimer about your current IP being transmitted to Let's Encrypt's servers.\n IP_DISCLAIMER = \"\"\"\\\nNOTE: The IP of this machine will be publicly logged as having requested this certificate. 
\\\nIf you're running letsencrypt in manual mode on a machine that is not your server, \\\nplease ensure you're okay with that.\n\nAre you OK with your IP being logged?\n\"\"\"\n\n # \"cd /tmp/letsencrypt\" makes sure user doesn't serve /root,\n # separate \"public_html\" ensures that cert.pem/key.pem are not\n # served and makes it more obvious that Python command will serve\n # anything recursively under the cwd\n\n CMD_TEMPLATE = \"\"\"\\\nmkdir -p {root}/public_html/{achall.URI_ROOT_PATH}\ncd {root}/public_html\nprintf \"%s\" {validation} > {achall.URI_ROOT_PATH}/{encoded_token}\n# run only once per server:\n$(command -v python2 || command -v python2.7 || command -v python2.6) -c \\\\\n\"import BaseHTTPServer, SimpleHTTPServer; \\\\\nSimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\\\\ns = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\\\\ns.serve_forever()\" \"\"\"\n \"\"\"Command template.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super(Authenticator, self).__init__(*args, **kwargs)\n self._root = (tempfile.mkdtemp() if self.conf(\"test-mode\")\n else \"/tmp/letsencrypt\")\n self._httpd = None\n\n @classmethod\n def add_parser_arguments(cls, add):\n add(\"test-mode\", action=\"store_true\",\n help=\"Test mode. Executes the manual command in subprocess.\")\n\n def prepare(self): # pylint: disable=missing-docstring,no-self-use\n pass # pragma: no cover\n\n def more_info(self): # pylint: disable=missing-docstring,no-self-use\n return (\"This plugin requires user's manual intervention in setting \"\n \"up an HTTP server for solving http-01 challenges and thus \"\n \"does not need to be run as a privileged process. \"\n \"Alternatively shows instructions on how to use Python's \"\n \"built-in HTTP server.\")\n\n def get_chall_pref(self, domain):\n # pylint: disable=missing-docstring,no-self-use,unused-argument\n return [challenges.HTTP01]\n\n def perform(self, achalls): # pylint: disable=missing-docstring\n responses = []\n # TODO: group achalls by the same socket.gethostbyname(_ex)\n # and prompt only once per server (one \"echo -n\" per domain)\n for achall in achalls:\n responses.append(self._perform_single(achall))\n return responses\n\n @classmethod\n def _test_mode_busy_wait(cls, port):\n while True:\n time.sleep(1)\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n sock.connect((\"localhost\", port))\n except socket.error: # pragma: no cover\n pass\n else:\n break\n finally:\n sock.close()\n\n def _perform_single(self, achall):\n # same path for each challenge response would be easier for\n # users, but will not work if multiple domains point at the\n # same server: default command doesn't support virtual hosts\n response, validation = achall.response_and_validation()\n\n port = (response.port if self.config.http01_port is None\n else int(self.config.http01_port))\n command = self.CMD_TEMPLATE.format(\n root=self._root, achall=achall, response=response,\n # TODO(kuba): pipes still necessary?\n validation=pipes.quote(validation),\n encoded_token=achall.chall.encode(\"token\"),\n ct=achall.CONTENT_TYPE, port=port)\n if self.conf(\"test-mode\"):\n logger.debug(\"Test mode. 
Executing the manual command: %s\", command)\n # sh shipped with OS X does't support echo -n, but supports printf\n try:\n self._httpd = subprocess.Popen(\n command,\n # don't care about setting stdout and stderr,\n # we're in test mode anyway\n shell=True,\n executable=None,\n # \"preexec_fn\" is UNIX specific, but so is \"command\"\n preexec_fn=os.setsid)\n except OSError as error: # ValueError should not happen!\n logger.debug(\n \"Couldn't execute manual command: %s\", error, exc_info=True)\n return False\n logger.debug(\"Manual command running as PID %s.\", self._httpd.pid)\n # give it some time to bootstrap, before we try to verify\n # (cert generation in case of simpleHttpS might take time)\n self._test_mode_busy_wait(port)\n if self._httpd.poll() is not None:\n raise errors.Error(\"Couldn't execute manual command\")\n else:\n if not zope.component.getUtility(interfaces.IDisplay).yesno(\n self.IP_DISCLAIMER, \"Yes\", \"No\"):\n raise errors.PluginError(\"Must agree to IP logging to proceed\")\n\n self._notify_and_wait(self.MESSAGE_TEMPLATE.format(\n validation=validation, response=response,\n uri=achall.chall.uri(achall.domain),\n ct=achall.CONTENT_TYPE, command=command))\n\n if response.simple_verify(\n achall.chall, achall.domain,\n achall.account_key.public_key(), self.config.http01_port):\n return response\n else:\n logger.error(\n \"Self-verify of challenge failed, authorization abandoned.\")\n if self.conf(\"test-mode\") and self._httpd.poll() is not None:\n # simply verify cause command failure...\n return False\n return None\n\n def _notify_and_wait(self, message): # pylint: disable=no-self-use\n # TODO: IDisplay wraps messages, breaking the command\n #answer = zope.component.getUtility(interfaces.IDisplay).notification(\n # message=message, height=25, pause=True)\n sys.stdout.write(message)\n raw_input(\"Press ENTER to continue\")\n\n def cleanup(self, achalls):\n # pylint: disable=missing-docstring,no-self-use,unused-argument\n if self.conf(\"test-mode\"):\n assert self._httpd is not None, (\n \"cleanup() must be called after perform()\")\n if self._httpd.poll() is None:\n logger.debug(\"Terminating manual command process\")\n os.killpg(self._httpd.pid, signal.SIGTERM)\n else:\n logger.debug(\"Manual command process already terminated \"\n \"with %s code\", self._httpd.returncode)\n shutil.rmtree(self._root)\n", "path": "letsencrypt/plugins/manual.py"}]}
| 2,836 | 301 |
gh_patches_debug_25872
|
rasdani/github-patches
|
git_diff
|
google__clusterfuzz-3140
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make Centipede handle custom ASAN options that libFuzzer supports
See https://bugs.chromium.org/p/chromium/issues/detail?id=1451080#c4
</issue>
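
For context, the "custom ASAN options" are the per-target overrides that libFuzzer-based targets already pick up from their `.options` files. The snippet below is only an illustrative sketch; the file name, sections, and values are assumptions rather than anything taken from the issue or the linked bug:

```ini
# example_fuzzer.options, a hypothetical per-target overrides file
[libfuzzer]
max_len = 1024

[asan]
detect_leaks = 0
allocator_may_return_null = 1
```

The request is for the Centipede engine to honor sanitizer sections like `[asan]` the same way the libFuzzer engine already does.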
<code>
[start of src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py]
1 # Copyright 2022 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Centipede engine interface."""
15
16 from collections import namedtuple
17 import os
18 import pathlib
19 import re
20 import shutil
21
22 from clusterfuzz._internal.bot.fuzzers import dictionary_manager
23 from clusterfuzz._internal.bot.fuzzers import engine_common
24 from clusterfuzz._internal.bot.fuzzers import utils as fuzzer_utils
25 from clusterfuzz._internal.metrics import logs
26 from clusterfuzz._internal.system import environment
27 from clusterfuzz._internal.system import new_process
28 from clusterfuzz.fuzz import engine
29
30 _CLEAN_EXIT_SECS = 10
31 _SERVER_COUNT = 1
32 _RSS_LIMIT = 4096
33 _ADDRESS_SPACE_LIMIT = 4096
34 _TIMEOUT_PER_INPUT_FUZZ = 25
35 _TIMEOUT_PER_INPUT_REPR = 60
36 _DEFAULT_ARGUMENTS = [
37 '--exit_on_crash=1',
38 f'--fork_server={_SERVER_COUNT}',
39 f'--rss_limit_mb={_RSS_LIMIT}',
40 f'--address_space_limit_mb={_ADDRESS_SPACE_LIMIT}',
41 ]
42
43 CRASH_REGEX = re.compile(r'[sS]aving input to:?\s*(.*)')
44 _CRASH_LOG_PREFIX = 'CRASH LOG: '
45 TargetBinaries = namedtuple('TargetBinaries', ['unsanitized', 'sanitized'])
46
47
48 class CentipedeError(Exception):
49 """Base exception class."""
50
51
52 def _get_runner(target_path):
53 """Gets the Centipede runner."""
54 centipede_path = pathlib.Path(target_path).parent / 'centipede'
55 if not centipede_path.exists():
56 raise CentipedeError('Centipede not found in build')
57
58 centipede_path = str(centipede_path)
59 if environment.get_value('USE_UNSHARE'):
60 return new_process.UnicodeModifierRunner(centipede_path)
61 return new_process.UnicodeProcessRunner(centipede_path)
62
63
64 def _get_reproducer_path(log, reproducers_dir):
65 """Gets the reproducer path, if any."""
66 crash_match = CRASH_REGEX.search(log)
67 if not crash_match:
68 return None
69 tmp_crash_path = pathlib.Path(crash_match.group(1))
70 crash_path = pathlib.Path(reproducers_dir) / tmp_crash_path.name
71 shutil.copy(tmp_crash_path, crash_path)
72 return crash_path
73
74
75 class Engine(engine.Engine):
76 """Centipede engine implementation."""
77
78 @property
79 def name(self):
80 return 'centipede'
81
82 # pylint: disable=unused-argument
83 def prepare(self, corpus_dir, target_path, build_dir):
84 """Prepares for a fuzzing session, by generating options.
85
86 Args:
87 corpus_dir: The main corpus directory.
88 target_path: Path to the target.
89 build_dir: Path to the build directory.
90
91 Returns:
92 A FuzzOptions object.
93 """
94 arguments = []
95 dict_path = pathlib.Path(
96 dictionary_manager.get_default_dictionary_path(target_path))
97 if dict_path.exists():
98 arguments.append(f'--dictionary={dict_path}')
99
100 # Directory workdir saves:
101 # 1. Centipede-readable corpus file;
102 # 2. Centipede-readable feature file;
103 # 3. Crash reproducing inputs.
104 workdir = self._create_temp_dir('workdir')
105 arguments.append(f'--workdir={workdir}')
106
107 # Directory corpus_dir saves the corpus files required by ClusterFuzz.
108 arguments.append(f'--corpus_dir={corpus_dir}')
109
110 target_binaries = self._get_binary_paths(target_path)
111 if target_binaries.unsanitized is None:
112 # Assuming the only binary is always sanitized (e.g., from Chrome).
113 arguments.append(f'--binary={target_binaries.sanitized}')
114 logs.log_warn('Unable to find unsanitized target binary.')
115 else:
116 arguments.append(f'--binary={target_binaries.unsanitized}')
117 arguments.append(f'--extra_binaries={target_binaries.sanitized}')
118
119 arguments.append(f'--timeout_per_input={_TIMEOUT_PER_INPUT_FUZZ}')
120
121 arguments.extend(_DEFAULT_ARGUMENTS)
122
123 return engine.FuzzOptions(corpus_dir, arguments, {})
124
125 def _get_binary_paths(self, target_path):
126 """Gets the paths to the main and auxiliary binaries based on |target_path|
127 Args:
128 target_path: Path to the main target in a string.
129
130 Returns:
131 A named tuple containing paths to both target binaries as pathlib.Path.
132 """
133 # Centipede expects one or two target binaries:
134 # |-------------------------------------------------------|
135 # | | main target path | auxiliary target path |
136 # |-------------------------------------------------------|
137 # | 1 binary | sanitized | - |
138 # |-------------------------------------------------------|
139 # | 2 binaries | unsanitized | sanitized |
140 # |-------------------------------------------------------|
141
142 main_target_path = pathlib.Path(target_path)
143 auxiliary_target_path = self._get_auxiliary_target_path(target_path)
144
145 if main_target_path.exists() and auxiliary_target_path.exists():
146 # 2 binaries were provided.
147 target_binaries = TargetBinaries(main_target_path, auxiliary_target_path)
148 elif main_target_path.exists():
149 # 1 binary was provided.
150 target_binaries = TargetBinaries(None, main_target_path)
151 else:
152 assert not auxiliary_target_path.exists()
153 raise RuntimeError('No fuzz target: Centipede cannot find main target '
154 f'{main_target_path}, or auxiliary target '
155 f'{auxiliary_target_path}.')
156
157 return target_binaries
158
159 def _get_auxiliary_target_path(self, target_path):
160 """Gets the auxiliary target path based on the main |target_path|.
161 When exists, it points to the sanitized binary, which is required by fuzzing
162 (as an auxiliary) and crash reproduction.
163
164 Args:
165 target_path: Path to the main target in a string.
166
167 Returns:
168 Path to the auxiliary binary as a pathlib.Path.
169 """
170 # Assuming they will be in child dirs named by fuzzer_utils.EXTRA_BUILD_DIR.
171 build_dir = environment.get_value('BUILD_DIR')
172 auxiliary_target_name = pathlib.Path(target_path).name
173 auxiliary_target_path = pathlib.Path(
174 build_dir, fuzzer_utils.EXTRA_BUILD_DIR, auxiliary_target_name)
175 return auxiliary_target_path
176
177 def fuzz(self, target_path, options, reproducers_dir, max_time): # pylint: disable=unused-argument
178 """Runs a fuzz session.
179
180 Args:
181 target_path: Path to the target.
182 options: The FuzzOptions object returned by prepare().
183 reproducers_dir: The directory to put reproducers in when crashes
184 are found.
185 max_time: Maximum allowed time for the fuzzing to run.
186
187 Returns:
188 A FuzzResult object.
189 """
190 runner = _get_runner(target_path)
191 timeout = max_time + _CLEAN_EXIT_SECS
192 fuzz_result = runner.run_and_wait(
193 additional_args=options.arguments, timeout=timeout)
194 fuzz_result.output = Engine.trim_logs(fuzz_result.output)
195
196 reproducer_path = _get_reproducer_path(fuzz_result.output, reproducers_dir)
197 crashes = []
198 if reproducer_path:
199 crashes.append(
200 engine.Crash(
201 str(reproducer_path), fuzz_result.output, [],
202 int(fuzz_result.time_executed)))
203
204 # Stats report is not available in Centipede yet.
205 stats = None
206 return engine.FuzzResult(fuzz_result.output, fuzz_result.command, crashes,
207 stats, fuzz_result.time_executed)
208
209 @staticmethod
210 def trim_logs(fuzz_log):
211 """ Strips the 'CRASH LOG:' prefix that breaks stacktrace parsing.
212
213 Args:
214       fuzz_log: The log output returned by running the fuzzer binary.
215 """
216 trimmed_log_lines = [
217 line[len(_CRASH_LOG_PREFIX):]
218 if line.startswith(_CRASH_LOG_PREFIX) else line
219 for line in fuzz_log.splitlines()
220 ]
221 return '\n'.join(trimmed_log_lines)
222
223 def reproduce(self, target_path, input_path, arguments, max_time): # pylint: disable=unused-argument
224 """Reproduces a crash given an input.
225
226 Args:
227 target_path: Path to the target.
228 input_path: Path to the reproducer input.
229 arguments: Additional arguments needed for reproduction.
230 max_time: Maximum allowed time for the reproduction.
231
232 Returns:
233 A ReproduceResult.
234 """
235 target_binaries = self._get_binary_paths(target_path)
236 sanitized_target = str(target_binaries.sanitized)
237
238 existing_runner_flags = os.environ.get('CENTIPEDE_RUNNER_FLAGS')
239 if not existing_runner_flags:
240 os.environ['CENTIPEDE_RUNNER_FLAGS'] = (
241 f':rss_limit_mb={_RSS_LIMIT}'
242 f':timeout_per_input={_TIMEOUT_PER_INPUT_REPR}:')
243
244 runner = new_process.UnicodeProcessRunner(sanitized_target, [input_path])
245 result = runner.run_and_wait(timeout=max_time)
246
247 if existing_runner_flags:
248 os.environ['CENTIPEDE_RUNNER_FLAGS'] = existing_runner_flags
249 else:
250 os.unsetenv('CENTIPEDE_RUNNER_FLAGS')
251 result.output = Engine.trim_logs(result.output)
252
253 return engine.ReproduceResult(result.command, result.return_code,
254 result.time_executed, result.output)
255
256 def _create_temp_dir(self, name):
257 """Creates temporary directory for fuzzing."""
258 new_directory = pathlib.Path(fuzzer_utils.get_temp_dir(), name)
259 engine_common.recreate_directory(new_directory)
260 return new_directory
261
262 def minimize_corpus(self, target_path, arguments, input_dirs, output_dir,
263 reproducers_dir, max_time):
264 """Runs corpus minimization.
265 Args:
266 target_path: Path to the target.
267 arguments: Additional arguments needed for corpus minimization.
268 input_dirs: Input corpora.
269 output_dir: Output directory to place minimized corpus.
270 reproducers_dir: The directory to put reproducers in when crashes are
271 found.
272 max_time: Maximum allowed time for the minimization.
273
274 Returns:
275 A FuzzResult object.
276 """
277 raise NotImplementedError
278
279 def minimize_testcase(self, target_path, arguments, input_path, output_path,
280 max_time):
281 """Minimizes a testcase.
282 Args:
283 target_path: Path to the target.
284 arguments: Additional arguments needed for testcase minimization.
285 input_path: Path to the reproducer input.
286 output_path: Path to the minimized output.
287 max_time: Maximum allowed time for the minimization.
288 Returns:
289 A ReproduceResult.
290 Raises:
291 TimeoutError: If the testcase minimization exceeds max_time.
292 """
293 raise NotImplementedError
294
295 def cleanse(self, target_path, arguments, input_path, output_path, max_time):
296 """Cleanses a testcase.
297 Args:
298 target_path: Path to the target.
299 arguments: Additional arguments needed for testcase cleanse.
300 input_path: Path to the reproducer input.
301 output_path: Path to the cleansed output.
302 max_time: Maximum allowed time for the cleanse.
303 Returns:
304 A ReproduceResult.
305 Raises:
306 TimeoutError: If the cleanse exceeds max_time.
307 """
308 raise NotImplementedError
309
[end of src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py b/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py
--- a/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py
+++ b/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py
@@ -72,6 +72,15 @@
return crash_path
+def _set_sanitizer_options(fuzzer_path):
+ """Sets sanitizer options based on .options file overrides."""
+ engine_common.process_sanitizer_options_overrides(fuzzer_path)
+ sanitizer_options_var = environment.get_current_memory_tool_var()
+ sanitizer_options = environment.get_memory_tool_options(
+ sanitizer_options_var, {})
+ environment.set_memory_tool_options(sanitizer_options_var, sanitizer_options)
+
+
class Engine(engine.Engine):
"""Centipede engine implementation."""
@@ -188,6 +197,7 @@
A FuzzResult object.
"""
runner = _get_runner(target_path)
+ _set_sanitizer_options(target_path)
timeout = max_time + _CLEAN_EXIT_SECS
fuzz_result = runner.run_and_wait(
additional_args=options.arguments, timeout=timeout)
@@ -232,6 +242,7 @@
Returns:
A ReproduceResult.
"""
+ _set_sanitizer_options(target_path)
target_binaries = self._get_binary_paths(target_path)
sanitized_target = str(target_binaries.sanitized)
|
{"golden_diff": "diff --git a/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py b/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py\n--- a/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py\n+++ b/src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py\n@@ -72,6 +72,15 @@\n return crash_path\n \n \n+def _set_sanitizer_options(fuzzer_path):\n+ \"\"\"Sets sanitizer options based on .options file overrides.\"\"\"\n+ engine_common.process_sanitizer_options_overrides(fuzzer_path)\n+ sanitizer_options_var = environment.get_current_memory_tool_var()\n+ sanitizer_options = environment.get_memory_tool_options(\n+ sanitizer_options_var, {})\n+ environment.set_memory_tool_options(sanitizer_options_var, sanitizer_options)\n+\n+\n class Engine(engine.Engine):\n \"\"\"Centipede engine implementation.\"\"\"\n \n@@ -188,6 +197,7 @@\n A FuzzResult object.\n \"\"\"\n runner = _get_runner(target_path)\n+ _set_sanitizer_options(target_path)\n timeout = max_time + _CLEAN_EXIT_SECS\n fuzz_result = runner.run_and_wait(\n additional_args=options.arguments, timeout=timeout)\n@@ -232,6 +242,7 @@\n Returns:\n A ReproduceResult.\n \"\"\"\n+ _set_sanitizer_options(target_path)\n target_binaries = self._get_binary_paths(target_path)\n sanitized_target = str(target_binaries.sanitized)\n", "issue": "Make Centipede handle custom ASAN options that libFuzzer supports\nSee https://bugs.chromium.org/p/chromium/issues/detail?id=1451080#c4\n", "before_files": [{"content": "# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Centipede engine interface.\"\"\"\n\nfrom collections import namedtuple\nimport os\nimport pathlib\nimport re\nimport shutil\n\nfrom clusterfuzz._internal.bot.fuzzers import dictionary_manager\nfrom clusterfuzz._internal.bot.fuzzers import engine_common\nfrom clusterfuzz._internal.bot.fuzzers import utils as fuzzer_utils\nfrom clusterfuzz._internal.metrics import logs\nfrom clusterfuzz._internal.system import environment\nfrom clusterfuzz._internal.system import new_process\nfrom clusterfuzz.fuzz import engine\n\n_CLEAN_EXIT_SECS = 10\n_SERVER_COUNT = 1\n_RSS_LIMIT = 4096\n_ADDRESS_SPACE_LIMIT = 4096\n_TIMEOUT_PER_INPUT_FUZZ = 25\n_TIMEOUT_PER_INPUT_REPR = 60\n_DEFAULT_ARGUMENTS = [\n '--exit_on_crash=1',\n f'--fork_server={_SERVER_COUNT}',\n f'--rss_limit_mb={_RSS_LIMIT}',\n f'--address_space_limit_mb={_ADDRESS_SPACE_LIMIT}',\n]\n\nCRASH_REGEX = re.compile(r'[sS]aving input to:?\\s*(.*)')\n_CRASH_LOG_PREFIX = 'CRASH LOG: '\nTargetBinaries = namedtuple('TargetBinaries', ['unsanitized', 'sanitized'])\n\n\nclass CentipedeError(Exception):\n \"\"\"Base exception class.\"\"\"\n\n\ndef _get_runner(target_path):\n \"\"\"Gets the Centipede runner.\"\"\"\n centipede_path = pathlib.Path(target_path).parent / 'centipede'\n if not centipede_path.exists():\n raise CentipedeError('Centipede not found in build')\n\n centipede_path = str(centipede_path)\n if environment.get_value('USE_UNSHARE'):\n return new_process.UnicodeModifierRunner(centipede_path)\n return 
new_process.UnicodeProcessRunner(centipede_path)\n\n\ndef _get_reproducer_path(log, reproducers_dir):\n \"\"\"Gets the reproducer path, if any.\"\"\"\n crash_match = CRASH_REGEX.search(log)\n if not crash_match:\n return None\n tmp_crash_path = pathlib.Path(crash_match.group(1))\n crash_path = pathlib.Path(reproducers_dir) / tmp_crash_path.name\n shutil.copy(tmp_crash_path, crash_path)\n return crash_path\n\n\nclass Engine(engine.Engine):\n \"\"\"Centipede engine implementation.\"\"\"\n\n @property\n def name(self):\n return 'centipede'\n\n # pylint: disable=unused-argument\n def prepare(self, corpus_dir, target_path, build_dir):\n \"\"\"Prepares for a fuzzing session, by generating options.\n\n Args:\n corpus_dir: The main corpus directory.\n target_path: Path to the target.\n build_dir: Path to the build directory.\n\n Returns:\n A FuzzOptions object.\n \"\"\"\n arguments = []\n dict_path = pathlib.Path(\n dictionary_manager.get_default_dictionary_path(target_path))\n if dict_path.exists():\n arguments.append(f'--dictionary={dict_path}')\n\n # Directory workdir saves:\n # 1. Centipede-readable corpus file;\n # 2. Centipede-readable feature file;\n # 3. Crash reproducing inputs.\n workdir = self._create_temp_dir('workdir')\n arguments.append(f'--workdir={workdir}')\n\n # Directory corpus_dir saves the corpus files required by ClusterFuzz.\n arguments.append(f'--corpus_dir={corpus_dir}')\n\n target_binaries = self._get_binary_paths(target_path)\n if target_binaries.unsanitized is None:\n # Assuming the only binary is always sanitized (e.g., from Chrome).\n arguments.append(f'--binary={target_binaries.sanitized}')\n logs.log_warn('Unable to find unsanitized target binary.')\n else:\n arguments.append(f'--binary={target_binaries.unsanitized}')\n arguments.append(f'--extra_binaries={target_binaries.sanitized}')\n\n arguments.append(f'--timeout_per_input={_TIMEOUT_PER_INPUT_FUZZ}')\n\n arguments.extend(_DEFAULT_ARGUMENTS)\n\n return engine.FuzzOptions(corpus_dir, arguments, {})\n\n def _get_binary_paths(self, target_path):\n \"\"\"Gets the paths to the main and auxiliary binaries based on |target_path|\n Args:\n target_path: Path to the main target in a string.\n\n Returns:\n A named tuple containing paths to both target binaries as pathlib.Path.\n \"\"\"\n # Centipede expects one or two target binaries:\n # |-------------------------------------------------------|\n # | | main target path | auxiliary target path |\n # |-------------------------------------------------------|\n # | 1 binary | sanitized | - |\n # |-------------------------------------------------------|\n # | 2 binaries | unsanitized | sanitized |\n # |-------------------------------------------------------|\n\n main_target_path = pathlib.Path(target_path)\n auxiliary_target_path = self._get_auxiliary_target_path(target_path)\n\n if main_target_path.exists() and auxiliary_target_path.exists():\n # 2 binaries were provided.\n target_binaries = TargetBinaries(main_target_path, auxiliary_target_path)\n elif main_target_path.exists():\n # 1 binary was provided.\n target_binaries = TargetBinaries(None, main_target_path)\n else:\n assert not auxiliary_target_path.exists()\n raise RuntimeError('No fuzz target: Centipede cannot find main target '\n f'{main_target_path}, or auxiliary target '\n f'{auxiliary_target_path}.')\n\n return target_binaries\n\n def _get_auxiliary_target_path(self, target_path):\n \"\"\"Gets the auxiliary target path based on the main |target_path|.\n When exists, it points to the sanitized binary, which is 
required by fuzzing\n (as an auxiliary) and crash reproduction.\n\n Args:\n target_path: Path to the main target in a string.\n\n Returns:\n Path to the auxiliary binary as a pathlib.Path.\n \"\"\"\n # Assuming they will be in child dirs named by fuzzer_utils.EXTRA_BUILD_DIR.\n build_dir = environment.get_value('BUILD_DIR')\n auxiliary_target_name = pathlib.Path(target_path).name\n auxiliary_target_path = pathlib.Path(\n build_dir, fuzzer_utils.EXTRA_BUILD_DIR, auxiliary_target_name)\n return auxiliary_target_path\n\n def fuzz(self, target_path, options, reproducers_dir, max_time): # pylint: disable=unused-argument\n \"\"\"Runs a fuzz session.\n\n Args:\n target_path: Path to the target.\n options: The FuzzOptions object returned by prepare().\n reproducers_dir: The directory to put reproducers in when crashes\n are found.\n max_time: Maximum allowed time for the fuzzing to run.\n\n Returns:\n A FuzzResult object.\n \"\"\"\n runner = _get_runner(target_path)\n timeout = max_time + _CLEAN_EXIT_SECS\n fuzz_result = runner.run_and_wait(\n additional_args=options.arguments, timeout=timeout)\n fuzz_result.output = Engine.trim_logs(fuzz_result.output)\n\n reproducer_path = _get_reproducer_path(fuzz_result.output, reproducers_dir)\n crashes = []\n if reproducer_path:\n crashes.append(\n engine.Crash(\n str(reproducer_path), fuzz_result.output, [],\n int(fuzz_result.time_executed)))\n\n # Stats report is not available in Centipede yet.\n stats = None\n return engine.FuzzResult(fuzz_result.output, fuzz_result.command, crashes,\n stats, fuzz_result.time_executed)\n\n @staticmethod\n def trim_logs(fuzz_log):\n \"\"\" Strips the 'CRASH LOG:' prefix that breaks stacktrace parsing.\n\n Args:\n fuzz_result: The ProcessResult returned by running fuzzer binary.\n \"\"\"\n trimmed_log_lines = [\n line[len(_CRASH_LOG_PREFIX):]\n if line.startswith(_CRASH_LOG_PREFIX) else line\n for line in fuzz_log.splitlines()\n ]\n return '\\n'.join(trimmed_log_lines)\n\n def reproduce(self, target_path, input_path, arguments, max_time): # pylint: disable=unused-argument\n \"\"\"Reproduces a crash given an input.\n\n Args:\n target_path: Path to the target.\n input_path: Path to the reproducer input.\n arguments: Additional arguments needed for reproduction.\n max_time: Maximum allowed time for the reproduction.\n\n Returns:\n A ReproduceResult.\n \"\"\"\n target_binaries = self._get_binary_paths(target_path)\n sanitized_target = str(target_binaries.sanitized)\n\n existing_runner_flags = os.environ.get('CENTIPEDE_RUNNER_FLAGS')\n if not existing_runner_flags:\n os.environ['CENTIPEDE_RUNNER_FLAGS'] = (\n f':rss_limit_mb={_RSS_LIMIT}'\n f':timeout_per_input={_TIMEOUT_PER_INPUT_REPR}:')\n\n runner = new_process.UnicodeProcessRunner(sanitized_target, [input_path])\n result = runner.run_and_wait(timeout=max_time)\n\n if existing_runner_flags:\n os.environ['CENTIPEDE_RUNNER_FLAGS'] = existing_runner_flags\n else:\n os.unsetenv('CENTIPEDE_RUNNER_FLAGS')\n result.output = Engine.trim_logs(result.output)\n\n return engine.ReproduceResult(result.command, result.return_code,\n result.time_executed, result.output)\n\n def _create_temp_dir(self, name):\n \"\"\"Creates temporary directory for fuzzing.\"\"\"\n new_directory = pathlib.Path(fuzzer_utils.get_temp_dir(), name)\n engine_common.recreate_directory(new_directory)\n return new_directory\n\n def minimize_corpus(self, target_path, arguments, input_dirs, output_dir,\n reproducers_dir, max_time):\n \"\"\"Runs corpus minimization.\n Args:\n target_path: Path to the target.\n 
arguments: Additional arguments needed for corpus minimization.\n input_dirs: Input corpora.\n output_dir: Output directory to place minimized corpus.\n reproducers_dir: The directory to put reproducers in when crashes are\n found.\n max_time: Maximum allowed time for the minimization.\n\n Returns:\n A FuzzResult object.\n \"\"\"\n raise NotImplementedError\n\n def minimize_testcase(self, target_path, arguments, input_path, output_path,\n max_time):\n \"\"\"Minimizes a testcase.\n Args:\n target_path: Path to the target.\n arguments: Additional arguments needed for testcase minimization.\n input_path: Path to the reproducer input.\n output_path: Path to the minimized output.\n max_time: Maximum allowed time for the minimization.\n Returns:\n A ReproduceResult.\n Raises:\n TimeoutError: If the testcase minimization exceeds max_time.\n \"\"\"\n raise NotImplementedError\n\n def cleanse(self, target_path, arguments, input_path, output_path, max_time):\n \"\"\"Cleanses a testcase.\n Args:\n target_path: Path to the target.\n arguments: Additional arguments needed for testcase cleanse.\n input_path: Path to the reproducer input.\n output_path: Path to the cleansed output.\n max_time: Maximum allowed time for the cleanse.\n Returns:\n A ReproduceResult.\n Raises:\n TimeoutError: If the cleanse exceeds max_time.\n \"\"\"\n raise NotImplementedError\n", "path": "src/clusterfuzz/_internal/bot/fuzzers/centipede/engine.py"}]}
| 3,956 | 338 |
gh_patches_debug_2522
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-189
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Packaging: prep for 1.0.0 release of `google-resumable-media-python`.
See: https://github.com/googleapis/google-resumable-media-python/issues/138
</issue>
<code>
[start of setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25 version = "1.26.0"
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 'enum34; python_version < "3.4"',
33 "google-api-core >= 1.21.0, < 2.0dev",
34 "google-cloud-core >= 1.1.0, < 2.0dev",
35 "google-resumable-media >= 0.5.0, < 0.6dev",
36 "six >=1.13.0,< 2.0.0dev",
37 ]
38 extras = {
39 "bqstorage": [
40 "google-cloud-bigquery-storage >= 1.0.0, <2.0.0dev",
41 # Due to an issue in pip's dependency resolver, the `grpc` extra is not
42 # installed, even though `google-cloud-bigquery-storage` specifies it
43 # as `google-api-core[grpc]`. We thus need to explicitly specify it here.
44 # See: https://github.com/googleapis/python-bigquery/issues/83
45 "grpcio >= 1.8.2, < 2.0dev",
46 "pyarrow>=0.16.0, < 2.0dev",
47 ],
48 "pandas": ["pandas>=0.17.1"],
49 # Exclude PyArrow dependency from Windows Python 2.7.
50 'pyarrow: platform_system != "Windows" or python_version >= "3.4"': [
51 # Bad Linux release for 0.14.0.
52 # https://issues.apache.org/jira/browse/ARROW-5868
53 "pyarrow>=0.4.1, != 0.14.0"
54 ],
55 "tqdm": ["tqdm >= 4.0.0, <5.0.0dev"],
56 "fastparquet": [
57 "fastparquet",
58 "python-snappy",
59 # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below
60 # (building the wheel fails), thus needs to be restricted.
61 # See: https://github.com/googleapis/python-bigquery/issues/78
62 "llvmlite <= 0.31.0",
63 ],
64 }
65
66 all_extras = []
67
68 for extra in extras:
69 if extra == "fastparquet":
70 # Skip fastparquet from "all" because it is redundant with pyarrow and
71 # creates a dependency on pre-release versions of numpy. See:
72 # https://github.com/googleapis/google-cloud-python/issues/8549
73 continue
74 all_extras.extend(extras[extra])
75
76 extras["all"] = all_extras
77
78 # Setup boilerplate below this line.
79
80 package_root = os.path.abspath(os.path.dirname(__file__))
81
82 readme_filename = os.path.join(package_root, "README.rst")
83 with io.open(readme_filename, encoding="utf-8") as readme_file:
84 readme = readme_file.read()
85
86 # Only include packages under the 'google' namespace. Do not include tests,
87 # benchmarks, etc.
88 packages = [
89 package for package in setuptools.find_packages() if package.startswith("google")
90 ]
91
92 # Determine which namespaces are needed.
93 namespaces = ["google"]
94 if "google.cloud" in packages:
95 namespaces.append("google.cloud")
96
97
98 setuptools.setup(
99 name=name,
100 version=version,
101 description=description,
102 long_description=readme,
103 author="Google LLC",
104 author_email="[email protected]",
105 license="Apache 2.0",
106 url="https://github.com/googleapis/python-bigquery",
107 classifiers=[
108 release_status,
109 "Intended Audience :: Developers",
110 "License :: OSI Approved :: Apache Software License",
111 "Programming Language :: Python",
112 "Programming Language :: Python :: 2",
113 "Programming Language :: Python :: 2.7",
114 "Programming Language :: Python :: 3",
115 "Programming Language :: Python :: 3.5",
116 "Programming Language :: Python :: 3.6",
117 "Programming Language :: Python :: 3.7",
118 "Programming Language :: Python :: 3.8",
119 "Operating System :: OS Independent",
120 "Topic :: Internet",
121 ],
122 platforms="Posix; MacOS X; Windows",
123 packages=packages,
124 namespace_packages=namespaces,
125 install_requires=dependencies,
126 extras_require=extras,
127 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
128 include_package_data=True,
129 zip_safe=False,
130 )
131
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -32,7 +32,7 @@
'enum34; python_version < "3.4"',
"google-api-core >= 1.21.0, < 2.0dev",
"google-cloud-core >= 1.1.0, < 2.0dev",
- "google-resumable-media >= 0.5.0, < 0.6dev",
+ "google-resumable-media >= 0.5.0, < 2.0dev",
"six >=1.13.0,< 2.0.0dev",
]
extras = {
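The widened pin above is what admits the 1.0.0 release. As a quick sanity check of the new range, a small sketch (it assumes the third-party `packaging` library, which is not part of this record, and uses the normalized form of the `< 2.0dev` bound):
```python
# Sketch only: check candidate versions against the widened specifier.
# Assumes the third-party `packaging` library; "2.0.dev0" is the normalized
# form of the "< 2.0dev" bound used in the diff.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=0.5.0, <2.0.dev0")
for candidate in ("0.5.1", "1.0.0", "2.0.0"):
    print(candidate, Version(candidate) in spec)
# 0.5.1 True, 1.0.0 True, 2.0.0 False
```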
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -32,7 +32,7 @@\n 'enum34; python_version < \"3.4\"',\n \"google-api-core >= 1.21.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n- \"google-resumable-media >= 0.5.0, < 0.6dev\",\n+ \"google-resumable-media >= 0.5.0, < 2.0dev\",\n \"six >=1.13.0,< 2.0.0dev\",\n ]\n extras = {\n", "issue": "Packaging: prep for 1.0.0 release of `google-resumable-media-python`.\nSee: https://github.com/googleapis/google-resumable-media-python/issues/138\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.26.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-api-core >= 1.21.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 0.6dev\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 1.0.0, <2.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83\n \"grpcio >= 1.8.2, < 2.0dev\",\n \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n 'pyarrow: platform_system != \"Windows\" or python_version >= \"3.4\"': [\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.4.1, != 0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\n \"fastparquet\",\n \"python-snappy\",\n # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below\n # (building the wheel fails), thus needs to be restricted.\n # See: https://github.com/googleapis/python-bigquery/issues/78\n \"llvmlite <= 0.31.0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra == \"fastparquet\":\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]}
| 2,053 | 156 |
gh_patches_debug_2916
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-1554
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redis integration tests have side effects
### How do you use Sentry?
Self-hosted/on-premise
### Version
1.9.2
### Steps to Reproduce
While working on https://github.com/getsentry/sentry-python/pull/1543, I noticed the following:
1. Checked out `sentry-sdk` for development.
2. Installed redis:
```
fakeredis==1.9.0
redis==3.5.3
redis-py-cluster==2.1.3
````
3. Run redis integration tests twice, in different order:
```bash
# first rediscluster, then redis
pytest 'tests/integrations/rediscluster/test_rediscluster.py::test_rediscluster_basic[RedisCluster]' tests/integrations/redis/test_redis.py::test_basic
# first redis, then rediscluster
pytest tests/integrations/redis/test_redis.py::test_basic 'tests/integrations/rediscluster/test_rediscluster.py::test_rediscluster_basic[RedisCluster]'
```
### Expected Result
Both test runs pass.
### Actual Result
The second test run
```bash
pytest tests/integrations/redis/test_redis.py::test_basic 'tests/integrations/rediscluster/test_rediscluster.py::test_rediscluster_basic[RedisCluster]'
```
fails with
```pytest
tests/integrations/redis/test_redis.py . [ 50%]
tests/integrations/rediscluster/test_rediscluster.py F [100%]
============================================================================================================================================ FAILURES =============================================================================================================================================
______________________________________________________________________________________________________________________________ test_rediscluster_basic[RedisCluster] ______________________________________________________________________________________________________________________________
tests/integrations/rediscluster/test_rediscluster.py:29: in test_rediscluster_basic
(crumb,) = event["breadcrumbs"]["values"]
E ValueError: not enough values to unpack (expected 1, got 0)
```
</issue>
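The order dependence comes from the fact that both integrations patch shared client classes in place, so whichever test module runs first leaves its patches installed for the next one. A minimal illustration of that mechanism (the `FakeRedis` class and the labels are hypothetical stand-ins, not Sentry or redis-py code):
```python
# Hypothetical stand-ins: class-level patches are process-global and stack up
# across test modules, so test order changes what gets recorded.
class FakeRedis:
    def execute_command(self, name, *args):
        return ("executed", name)

def patch_client(cls, label, log):
    old = cls.execute_command
    def patched(self, name, *args):
        log.append(label)            # a breadcrumb/span would be recorded here
        return old(self, name, *args)
    cls.execute_command = patched

log = []
patch_client(FakeRedis, "redis tests", log)          # first module to run
patch_client(FakeRedis, "rediscluster tests", log)   # second module patches again
FakeRedis().execute_command("SET")
print(log)   # ['rediscluster tests', 'redis tests']: both layers fire
```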
<code>
[start of sentry_sdk/integrations/redis.py]
1 from __future__ import absolute_import
2
3 from sentry_sdk import Hub
4 from sentry_sdk.utils import capture_internal_exceptions, logger
5 from sentry_sdk.integrations import Integration, DidNotEnable
6
7 from sentry_sdk._types import MYPY
8
9 if MYPY:
10 from typing import Any, Sequence
11
12 _SINGLE_KEY_COMMANDS = frozenset(
13 ["decr", "decrby", "get", "incr", "incrby", "pttl", "set", "setex", "setnx", "ttl"]
14 )
15 _MULTI_KEY_COMMANDS = frozenset(["del", "touch", "unlink"])
16
17 #: Trim argument lists to this many values
18 _MAX_NUM_ARGS = 10
19
20
21 def patch_redis_pipeline(pipeline_cls, is_cluster, get_command_args_fn):
22 # type: (Any, bool, Any) -> None
23 old_execute = pipeline_cls.execute
24
25 def sentry_patched_execute(self, *args, **kwargs):
26 # type: (Any, *Any, **Any) -> Any
27 hub = Hub.current
28
29 if hub.get_integration(RedisIntegration) is None:
30 return old_execute(self, *args, **kwargs)
31
32 with hub.start_span(op="redis", description="redis.pipeline.execute") as span:
33 with capture_internal_exceptions():
34 span.set_tag("redis.is_cluster", is_cluster)
35 transaction = self.transaction if not is_cluster else False
36 span.set_tag("redis.transaction", transaction)
37
38 commands = []
39 for i, arg in enumerate(self.command_stack):
40 if i > _MAX_NUM_ARGS:
41 break
42 command_args = []
43 for j, command_arg in enumerate(get_command_args_fn(arg)):
44 if j > 0:
45 command_arg = repr(command_arg)
46 command_args.append(command_arg)
47 commands.append(" ".join(command_args))
48
49 span.set_data(
50 "redis.commands",
51 {"count": len(self.command_stack), "first_ten": commands},
52 )
53
54 return old_execute(self, *args, **kwargs)
55
56 pipeline_cls.execute = sentry_patched_execute
57
58
59 def _get_redis_command_args(command):
60 # type: (Any) -> Sequence[Any]
61 return command[0]
62
63
64 def _parse_rediscluster_command(command):
65 # type: (Any) -> Sequence[Any]
66 return command.args
67
68
69 def _patch_rediscluster():
70 # type: () -> None
71 try:
72 import rediscluster # type: ignore
73 except ImportError:
74 return
75
76 patch_redis_client(rediscluster.RedisCluster, is_cluster=True)
77
78 # up to v1.3.6, __version__ attribute is a tuple
79 # from v2.0.0, __version__ is a string and VERSION a tuple
80 version = getattr(rediscluster, "VERSION", rediscluster.__version__)
81
82 # StrictRedisCluster was introduced in v0.2.0 and removed in v2.0.0
83 # https://github.com/Grokzen/redis-py-cluster/blob/master/docs/release-notes.rst
84 if (0, 2, 0) < version < (2, 0, 0):
85 pipeline_cls = rediscluster.StrictClusterPipeline
86 patch_redis_client(rediscluster.StrictRedisCluster, is_cluster=True)
87 else:
88 pipeline_cls = rediscluster.ClusterPipeline
89
90 patch_redis_pipeline(pipeline_cls, True, _parse_rediscluster_command)
91
92
93 class RedisIntegration(Integration):
94 identifier = "redis"
95
96 @staticmethod
97 def setup_once():
98 # type: () -> None
99 try:
100 import redis
101 except ImportError:
102 raise DidNotEnable("Redis client not installed")
103
104 patch_redis_client(redis.StrictRedis, is_cluster=False)
105 patch_redis_pipeline(redis.client.Pipeline, False, _get_redis_command_args)
106 try:
107 strict_pipeline = redis.client.StrictPipeline # type: ignore
108 except AttributeError:
109 pass
110 else:
111 patch_redis_pipeline(strict_pipeline, False, _get_redis_command_args)
112
113 try:
114 import rb.clients # type: ignore
115 except ImportError:
116 pass
117 else:
118 patch_redis_client(rb.clients.FanoutClient, is_cluster=False)
119 patch_redis_client(rb.clients.MappingClient, is_cluster=False)
120 patch_redis_client(rb.clients.RoutingClient, is_cluster=False)
121
122 try:
123 _patch_rediscluster()
124 except Exception:
125 logger.exception("Error occurred while patching `rediscluster` library")
126
127
128 def patch_redis_client(cls, is_cluster):
129 # type: (Any, bool) -> None
130 """
131 This function can be used to instrument custom redis client classes or
132 subclasses.
133 """
134
135 old_execute_command = cls.execute_command
136
137 def sentry_patched_execute_command(self, name, *args, **kwargs):
138 # type: (Any, str, *Any, **Any) -> Any
139 hub = Hub.current
140
141 if hub.get_integration(RedisIntegration) is None:
142 return old_execute_command(self, name, *args, **kwargs)
143
144 description = name
145
146 with capture_internal_exceptions():
147 description_parts = [name]
148 for i, arg in enumerate(args):
149 if i > _MAX_NUM_ARGS:
150 break
151
152 description_parts.append(repr(arg))
153
154 description = " ".join(description_parts)
155
156 with hub.start_span(op="redis", description=description) as span:
157 span.set_tag("redis.is_cluster", is_cluster)
158 if name:
159 span.set_tag("redis.command", name)
160
161 if name and args:
162 name_low = name.lower()
163 if (name_low in _SINGLE_KEY_COMMANDS) or (
164 name_low in _MULTI_KEY_COMMANDS and len(args) == 1
165 ):
166 span.set_tag("redis.key", args[0])
167
168 return old_execute_command(self, name, *args, **kwargs)
169
170 cls.execute_command = sentry_patched_execute_command
171
[end of sentry_sdk/integrations/redis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/redis.py b/sentry_sdk/integrations/redis.py
--- a/sentry_sdk/integrations/redis.py
+++ b/sentry_sdk/integrations/redis.py
@@ -131,7 +131,6 @@
This function can be used to instrument custom redis client classes or
subclasses.
"""
-
old_execute_command = cls.execute_command
def sentry_patched_execute_command(self, name, *args, **kwargs):
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/redis.py b/sentry_sdk/integrations/redis.py\n--- a/sentry_sdk/integrations/redis.py\n+++ b/sentry_sdk/integrations/redis.py\n@@ -131,7 +131,6 @@\n This function can be used to instrument custom redis client classes or\n subclasses.\n \"\"\"\n-\n old_execute_command = cls.execute_command\n \n def sentry_patched_execute_command(self, name, *args, **kwargs):\n", "issue": "Redis integration tests have side effects\n### How do you use Sentry?\n\nSelf-hosted/on-premise\n\n### Version\n\n1.9.2\n\n### Steps to Reproduce\n\nWhile working on https://github.com/getsentry/sentry-python/pull/1543, I noticed the following:\r\n\r\n1. Checked out `sentry-sdk` for development.\r\n2. Installed redis:\r\n ```\r\n fakeredis==1.9.0\r\n redis==3.5.3\r\n redis-py-cluster==2.1.3\r\n ````\r\n3. Run redis integration tests twice, in different order:\r\n ```bash\r\n # first rediscluster, then redis\r\n pytest 'tests/integrations/rediscluster/test_rediscluster.py::test_rediscluster_basic[RedisCluster]' tests/integrations/redis/test_redis.py::test_basic\r\n # first redis, then rediscluster\r\n pytest tests/integrations/redis/test_redis.py::test_basic 'tests/integrations/rediscluster/test_rediscluster.py::test_rediscluster_basic[RedisCluster]'\n\n### Expected Result\n\nBoth test runs pass.\n\n### Actual Result\n\nThe second test run\r\n\r\n```bash\r\npytest tests/integrations/redis/test_redis.py::test_basic 'tests/integrations/rediscluster/test_rediscluster.py::test_rediscluster_basic[RedisCluster]' \r\n```\r\n\r\nfails with \r\n\r\n```pytest\r\ntests/integrations/redis/test_redis.py . [ 50%]\r\ntests/integrations/rediscluster/test_rediscluster.py F [100%]\r\n\r\n============================================================================================================================================ FAILURES =============================================================================================================================================\r\n______________________________________________________________________________________________________________________________ test_rediscluster_basic[RedisCluster] ______________________________________________________________________________________________________________________________\r\ntests/integrations/rediscluster/test_rediscluster.py:29: in test_rediscluster_basic\r\n (crumb,) = event[\"breadcrumbs\"][\"values\"]\r\nE ValueError: not enough values to unpack (expected 1, got 0)\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, logger\nfrom sentry_sdk.integrations import Integration, DidNotEnable\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any, Sequence\n\n_SINGLE_KEY_COMMANDS = frozenset(\n [\"decr\", \"decrby\", \"get\", \"incr\", \"incrby\", \"pttl\", \"set\", \"setex\", \"setnx\", \"ttl\"]\n)\n_MULTI_KEY_COMMANDS = frozenset([\"del\", \"touch\", \"unlink\"])\n\n#: Trim argument lists to this many values\n_MAX_NUM_ARGS = 10\n\n\ndef patch_redis_pipeline(pipeline_cls, is_cluster, get_command_args_fn):\n # type: (Any, bool, Any) -> None\n old_execute = pipeline_cls.execute\n\n def sentry_patched_execute(self, *args, **kwargs):\n # type: (Any, *Any, **Any) -> Any\n hub = Hub.current\n\n if hub.get_integration(RedisIntegration) is None:\n return old_execute(self, *args, **kwargs)\n\n with hub.start_span(op=\"redis\", description=\"redis.pipeline.execute\") as 
span:\n with capture_internal_exceptions():\n span.set_tag(\"redis.is_cluster\", is_cluster)\n transaction = self.transaction if not is_cluster else False\n span.set_tag(\"redis.transaction\", transaction)\n\n commands = []\n for i, arg in enumerate(self.command_stack):\n if i > _MAX_NUM_ARGS:\n break\n command_args = []\n for j, command_arg in enumerate(get_command_args_fn(arg)):\n if j > 0:\n command_arg = repr(command_arg)\n command_args.append(command_arg)\n commands.append(\" \".join(command_args))\n\n span.set_data(\n \"redis.commands\",\n {\"count\": len(self.command_stack), \"first_ten\": commands},\n )\n\n return old_execute(self, *args, **kwargs)\n\n pipeline_cls.execute = sentry_patched_execute\n\n\ndef _get_redis_command_args(command):\n # type: (Any) -> Sequence[Any]\n return command[0]\n\n\ndef _parse_rediscluster_command(command):\n # type: (Any) -> Sequence[Any]\n return command.args\n\n\ndef _patch_rediscluster():\n # type: () -> None\n try:\n import rediscluster # type: ignore\n except ImportError:\n return\n\n patch_redis_client(rediscluster.RedisCluster, is_cluster=True)\n\n # up to v1.3.6, __version__ attribute is a tuple\n # from v2.0.0, __version__ is a string and VERSION a tuple\n version = getattr(rediscluster, \"VERSION\", rediscluster.__version__)\n\n # StrictRedisCluster was introduced in v0.2.0 and removed in v2.0.0\n # https://github.com/Grokzen/redis-py-cluster/blob/master/docs/release-notes.rst\n if (0, 2, 0) < version < (2, 0, 0):\n pipeline_cls = rediscluster.StrictClusterPipeline\n patch_redis_client(rediscluster.StrictRedisCluster, is_cluster=True)\n else:\n pipeline_cls = rediscluster.ClusterPipeline\n\n patch_redis_pipeline(pipeline_cls, True, _parse_rediscluster_command)\n\n\nclass RedisIntegration(Integration):\n identifier = \"redis\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n try:\n import redis\n except ImportError:\n raise DidNotEnable(\"Redis client not installed\")\n\n patch_redis_client(redis.StrictRedis, is_cluster=False)\n patch_redis_pipeline(redis.client.Pipeline, False, _get_redis_command_args)\n try:\n strict_pipeline = redis.client.StrictPipeline # type: ignore\n except AttributeError:\n pass\n else:\n patch_redis_pipeline(strict_pipeline, False, _get_redis_command_args)\n\n try:\n import rb.clients # type: ignore\n except ImportError:\n pass\n else:\n patch_redis_client(rb.clients.FanoutClient, is_cluster=False)\n patch_redis_client(rb.clients.MappingClient, is_cluster=False)\n patch_redis_client(rb.clients.RoutingClient, is_cluster=False)\n\n try:\n _patch_rediscluster()\n except Exception:\n logger.exception(\"Error occurred while patching `rediscluster` library\")\n\n\ndef patch_redis_client(cls, is_cluster):\n # type: (Any, bool) -> None\n \"\"\"\n This function can be used to instrument custom redis client classes or\n subclasses.\n \"\"\"\n\n old_execute_command = cls.execute_command\n\n def sentry_patched_execute_command(self, name, *args, **kwargs):\n # type: (Any, str, *Any, **Any) -> Any\n hub = Hub.current\n\n if hub.get_integration(RedisIntegration) is None:\n return old_execute_command(self, name, *args, **kwargs)\n\n description = name\n\n with capture_internal_exceptions():\n description_parts = [name]\n for i, arg in enumerate(args):\n if i > _MAX_NUM_ARGS:\n break\n\n description_parts.append(repr(arg))\n\n description = \" \".join(description_parts)\n\n with hub.start_span(op=\"redis\", description=description) as span:\n span.set_tag(\"redis.is_cluster\", is_cluster)\n if name:\n 
span.set_tag(\"redis.command\", name)\n\n if name and args:\n name_low = name.lower()\n if (name_low in _SINGLE_KEY_COMMANDS) or (\n name_low in _MULTI_KEY_COMMANDS and len(args) == 1\n ):\n span.set_tag(\"redis.key\", args[0])\n\n return old_execute_command(self, name, *args, **kwargs)\n\n cls.execute_command = sentry_patched_execute_command\n", "path": "sentry_sdk/integrations/redis.py"}]}
| 2,627 | 109 |
gh_patches_debug_6181
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-2816
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DNSCACHE_ENABLED=False not working
Originally reported by @softwarevamp on [StackOverflow](https://stackoverflow.com/questions/44877296/scrapy-with-dnscache-enabled-false-not-working):
> When i run scrapy shell with `DNSCACHE_ENABLED=False` got
```
KeyError: 'dictionary is empty'
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: www.mydomain.com.
```
```
2017-07-03 03:09:12 [twisted] CRITICAL: while looking up www.mydomain.com with <scrapy.resolver.CachingThreadedResolver object at 0x3fd0050>
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/lib64/python2.7/site-packages/scrapy/resolver.py", line 29, in _cache_result
dnscache[name] = result
File "/usr/lib64/python2.7/site-packages/scrapy/utils/datatypes.py", line 305, in __setitem__
self.popitem(last=False)
File "/usr/lib64/python2.7/collections.py", line 159, in popitem
raise KeyError('dictionary is empty')
KeyError: 'dictionary is empty'
2017-07-03 03:09:12 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET //www.mydomain.com/> (failed 3 times): DNS lookup failed: no results for hostname lookup: www.mydomain.com.
Traceback (most recent call last):
File "/usr/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/usr/lib64/python2.7/site-packages/scrapy/cmdline.py", line 149, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/lib64/python2.7/site-packages/scrapy/cmdline.py", line 89, in _run_print_help
func(*a, **kw)
File "/usr/lib64/python2.7/site-packages/scrapy/cmdline.py", line 156, in _run_command
cmd.run(args, opts)
File "/usr/lib64/python2.7/site-packages/scrapy/commands/shell.py", line 73, in run
shell.start(url=url, redirect=not opts.no_redirect)
File "/usr/lib64/python2.7/site-packages/scrapy/shell.py", line 48, in start
self.fetch(url, spider, redirect=redirect)
File "/usr/lib64/python2.7/site-packages/scrapy/shell.py", line 115, in fetch
reactor, self._schedule, request, spider)
File "/usr/lib64/python2.7/site-packages/twisted/internet/threads.py", line 122, in blockingCallFromThread
result.raiseException()
File "<string>", line 2, in raiseException
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: www.mydomain.com.
```
> Any thoughts welcome
</issue>
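The traceback bottoms out in `LocalCache.__setitem__`, which evicts while `len(self) >= self.limit`; with the DNS cache disabled the resolver ends up with a limit of 0, so the eviction loop runs even on an empty dict and `popitem()` raises. A minimal reproduction (a simplified copy of `scrapy.utils.datatypes.LocalCache`, shown only to illustrate the failure mode):
```python
# Simplified mirror of scrapy.utils.datatypes.LocalCache, for illustration only.
from collections import OrderedDict

class LocalCache(OrderedDict):
    def __init__(self, limit=None):
        super(LocalCache, self).__init__()
        self.limit = limit

    def __setitem__(self, key, value):
        while len(self) >= self.limit:   # with limit == 0 this holds even when empty
            self.popitem(last=False)     # -> KeyError: 'dictionary is empty'
        super(LocalCache, self).__setitem__(key, value)

cache = LocalCache(0)                    # cache size 0 when DNSCACHE_ENABLED=False
try:
    cache["www.mydomain.com"] = "93.184.216.34"
except KeyError as exc:
    print("KeyError:", exc)
```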
<code>
[start of scrapy/resolver.py]
1 from twisted.internet import defer
2 from twisted.internet.base import ThreadedResolver
3
4 from scrapy.utils.datatypes import LocalCache
5
6 # TODO: cache misses
7
8 dnscache = LocalCache(10000)
9
10 class CachingThreadedResolver(ThreadedResolver):
11 def __init__(self, reactor, cache_size, timeout):
12 super(CachingThreadedResolver, self).__init__(reactor)
13 dnscache.limit = cache_size
14 self.timeout = timeout
15
16 def getHostByName(self, name, timeout=None):
17 if name in dnscache:
18 return defer.succeed(dnscache[name])
19 # in Twisted<=16.6, getHostByName() is always called with
20 # a default timeout of 60s (actually passed as (1, 3, 11, 45) tuple),
21 # so the input argument above is simply overridden
22 # to enforce Scrapy's DNS_TIMEOUT setting's value
23 timeout = (self.timeout,)
24 d = super(CachingThreadedResolver, self).getHostByName(name, timeout)
25 d.addCallback(self._cache_result, name)
26 return d
27
28 def _cache_result(self, result, name):
29 dnscache[name] = result
30 return result
31
[end of scrapy/resolver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/resolver.py b/scrapy/resolver.py
--- a/scrapy/resolver.py
+++ b/scrapy/resolver.py
@@ -22,7 +22,8 @@
# to enforce Scrapy's DNS_TIMEOUT setting's value
timeout = (self.timeout,)
d = super(CachingThreadedResolver, self).getHostByName(name, timeout)
- d.addCallback(self._cache_result, name)
+ if dnscache.limit:
+ d.addCallback(self._cache_result, name)
return d
def _cache_result(self, result, name):
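The guard simply skips caching when the cache has no capacity, so a disabled cache never reaches the eviction path. A control-flow sketch (plain Python stand-ins for the Twisted deferred and for `LocalCache`, not Scrapy code):
```python
# Control-flow sketch only; a dict with a `limit` attribute stands in for
# LocalCache, and a plain function stands in for the deferred callback chain.
class DummyCache(dict):
    limit = 0                         # cache disabled

dnscache = DummyCache()

def get_host_by_name(name):
    result = "93.184.216.34"          # pretend the threaded resolver returned this
    if dnscache.limit:                # the guard added by the patch
        dnscache[name] = result
    return result

get_host_by_name("www.mydomain.com")
print(dict(dnscache))                 # {} -> lookup works, nothing is cached
```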
|
{"golden_diff": "diff --git a/scrapy/resolver.py b/scrapy/resolver.py\n--- a/scrapy/resolver.py\n+++ b/scrapy/resolver.py\n@@ -22,7 +22,8 @@\n # to enforce Scrapy's DNS_TIMEOUT setting's value\n timeout = (self.timeout,)\n d = super(CachingThreadedResolver, self).getHostByName(name, timeout)\n- d.addCallback(self._cache_result, name)\n+ if dnscache.limit:\n+ d.addCallback(self._cache_result, name)\n return d\n \n def _cache_result(self, result, name):\n", "issue": "DNSCACHE_ENABLED=False not working\nOriginally reported by @softwarevamp on [StackOverflow](https://stackoverflow.com/questions/44877296/scrapy-with-dnscache-enabled-false-not-working):\r\n\r\n> When i run scrapy shell with `DNSCACHE_ENABLED=False` got\r\n```\r\nKeyError: 'dictionary is empty'\r\ntwisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: www.mydomain.com.\r\n```\r\n\r\n```\r\n 2017-07-03 03:09:12 [twisted] CRITICAL: while looking up www.mydomain.com with <scrapy.resolver.CachingThreadedResolver object at 0x3fd0050>\r\n Traceback (most recent call last):\r\n File \"/usr/lib64/python2.7/site-packages/twisted/internet/defer.py\", line 653, in _runCallbacks\r\n current.result = callback(current.result, *args, **kw)\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/resolver.py\", line 29, in _cache_result\r\n dnscache[name] = result\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/utils/datatypes.py\", line 305, in __setitem__\r\n self.popitem(last=False)\r\n File \"/usr/lib64/python2.7/collections.py\", line 159, in popitem\r\n raise KeyError('dictionary is empty')\r\n KeyError: 'dictionary is empty'\r\n 2017-07-03 03:09:12 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET //www.mydomain.com/> (failed 3 times): DNS lookup failed: no results for hostname lookup: www.mydomain.com.\r\n Traceback (most recent call last):\r\n File \"/usr/bin/scrapy\", line 11, in <module>\r\n sys.exit(execute())\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/cmdline.py\", line 149, in execute\r\n _run_print_help(parser, _run_command, cmd, args, opts)\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/cmdline.py\", line 89, in _run_print_help\r\n func(*a, **kw)\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/cmdline.py\", line 156, in _run_command\r\n cmd.run(args, opts)\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/commands/shell.py\", line 73, in run\r\n shell.start(url=url, redirect=not opts.no_redirect)\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/shell.py\", line 48, in start\r\n self.fetch(url, spider, redirect=redirect)\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/shell.py\", line 115, in fetch\r\n reactor, self._schedule, request, spider)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/internet/threads.py\", line 122, in blockingCallFromThread\r\n result.raiseException()\r\n File \"<string>\", line 2, in raiseException\r\n twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: www.mydomain.com.\r\n```\r\n\r\n> Any thoughts welcome\n", "before_files": [{"content": "from twisted.internet import defer\nfrom twisted.internet.base import ThreadedResolver\n\nfrom scrapy.utils.datatypes import LocalCache\n\n# TODO: cache misses\n\ndnscache = LocalCache(10000)\n\nclass CachingThreadedResolver(ThreadedResolver):\n def __init__(self, reactor, cache_size, timeout):\n super(CachingThreadedResolver, self).__init__(reactor)\n dnscache.limit = cache_size\n self.timeout = timeout\n\n def getHostByName(self, name, 
timeout=None):\n if name in dnscache:\n return defer.succeed(dnscache[name])\n # in Twisted<=16.6, getHostByName() is always called with\n # a default timeout of 60s (actually passed as (1, 3, 11, 45) tuple),\n # so the input argument above is simply overridden\n # to enforce Scrapy's DNS_TIMEOUT setting's value\n timeout = (self.timeout,)\n d = super(CachingThreadedResolver, self).getHostByName(name, timeout)\n d.addCallback(self._cache_result, name)\n return d\n\n def _cache_result(self, result, name):\n dnscache[name] = result\n return result\n", "path": "scrapy/resolver.py"}]}
| 1,608 | 130 |
gh_patches_debug_24499
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-797
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
python_venv language fails to use python3 interpreter and is using python2.7 instead
Apparently pre-commit failed to use python3 interpreter when I tried to add a hook and thus failed because venv module was not installed on default python2.7!
```
$ pre-commit try-repo ../python-license-check [19:55:27]
[INFO] Initializing environment for ../python-license-check.
===============================================================================
Using config:
===============================================================================
repos:
- repo: ../python-license-check
rev: 4048cf3844dbbf45690c153a7da7f532585ec87c
hooks:
- id: liccheck
===============================================================================
[INFO] Installing environment for ../python-license-check.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: Command: ('/Users/ssbarnea/.pyenv/versions/2.7.14/bin/python2.7', '-mvenv', '/var/folders/br/99tfdvcs3vvfwdk69z7f0xmc0000gn/T/tmpayl0P5/repoHa7_qe/py_venv-python2.7')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
/Users/ssbarnea/.pyenv/versions/2.7.14/bin/python2.7: No module named venv
Check the log at /Users/ssbarnea/.cache/pre-commit/pre-commit.log
FAIL: 1
ssbarnea@smac: ~/os/jira master ⚡ $ cat ../python-license-check/.pre-commit-hooks.yaml [19:55:34]
- id: liccheck
name: Validates dependency licenses for Python packages
description: This validator validates a pre-commit hooks manifest file
entry: liccheck -s setup.cfg -r requirements.txt
language: python_venv
```
Based on the documentation I was expecting to see pre-commit using the `python3` executable for calling venv module.
</issue>
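The immediate cause is that the `venv` module ships with Python 3 only, so invoking `-mvenv` through a 2.7 interpreter cannot work. A small probe that makes this visible (it assumes both interpreter names are on `PATH`, which may not hold on every machine):
```python
# Probe sketch: `import venv` succeeds only under Python 3 interpreters.
import subprocess

for exe in ("python2.7", "python3"):
    try:
        rc = subprocess.run([exe, "-c", "import venv"],
                            capture_output=True).returncode
        print(exe, "has venv:", rc == 0)
    except FileNotFoundError:
        print(exe, "not installed")
```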
<code>
[start of pre_commit/languages/python_venv.py]
1 from __future__ import unicode_literals
2
3 import os.path
4
5 from pre_commit.languages import python
6 from pre_commit.util import CalledProcessError
7 from pre_commit.util import cmd_output
8
9
10 ENVIRONMENT_DIR = 'py_venv'
11
12
13 def orig_py_exe(exe): # pragma: no cover (platform specific)
14 """A -mvenv virtualenv made from a -mvirtualenv virtualenv installs
15 packages to the incorrect location. Attempt to find the _original_ exe
16 and invoke `-mvenv` from there.
17
18 See:
19 - https://github.com/pre-commit/pre-commit/issues/755
20 - https://github.com/pypa/virtualenv/issues/1095
21 - https://bugs.python.org/issue30811
22 """
23 try:
24 prefix_script = 'import sys; print(sys.real_prefix)'
25 _, prefix, _ = cmd_output(exe, '-c', prefix_script)
26 prefix = prefix.strip()
27 except CalledProcessError:
28 # not created from -mvirtualenv
29 return exe
30
31 if os.name == 'nt':
32 expected = os.path.join(prefix, 'python.exe')
33 else:
34 expected = os.path.join(prefix, 'bin', os.path.basename(exe))
35
36 if os.path.exists(expected):
37 return expected
38 else:
39 return exe
40
41
42 def make_venv(envdir, python):
43 cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')
44
45
46 get_default_version = python.get_default_version
47 _interface = python.py_interface(ENVIRONMENT_DIR, make_venv)
48 in_env, healthy, run_hook, install_environment = _interface
49
[end of pre_commit/languages/python_venv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/python_venv.py b/pre_commit/languages/python_venv.py
--- a/pre_commit/languages/python_venv.py
+++ b/pre_commit/languages/python_venv.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
import os.path
+import sys
from pre_commit.languages import python
from pre_commit.util import CalledProcessError
@@ -10,6 +11,13 @@
ENVIRONMENT_DIR = 'py_venv'
+def get_default_version(): # pragma: no cover (version specific)
+ if sys.version_info < (3,):
+ return 'python3'
+ else:
+ return python.get_default_version()
+
+
def orig_py_exe(exe): # pragma: no cover (platform specific)
"""A -mvenv virtualenv made from a -mvirtualenv virtualenv installs
packages to the incorrect location. Attempt to find the _original_ exe
@@ -43,6 +51,5 @@
cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')
-get_default_version = python.get_default_version
_interface = python.py_interface(ENVIRONMENT_DIR, make_venv)
in_env, healthy, run_hook, install_environment = _interface
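The net effect of the added `get_default_version` is that a pre-commit process running under Python 2 asks for a `python3` interpreter instead of reusing its own. A standalone restatement of that rule (here `sys.executable` is only a stand-in for pre-commit's `python.get_default_version()`):
```python
# Standalone restatement; sys.executable stands in for python.get_default_version().
import sys

def default_version_for_python_venv():
    if sys.version_info < (3,):
        return "python3"     # never drive -mvenv through a Python 2 interpreter
    return sys.executable

print(default_version_for_python_venv())
```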
|
{"golden_diff": "diff --git a/pre_commit/languages/python_venv.py b/pre_commit/languages/python_venv.py\n--- a/pre_commit/languages/python_venv.py\n+++ b/pre_commit/languages/python_venv.py\n@@ -1,6 +1,7 @@\n from __future__ import unicode_literals\n \n import os.path\n+import sys\n \n from pre_commit.languages import python\n from pre_commit.util import CalledProcessError\n@@ -10,6 +11,13 @@\n ENVIRONMENT_DIR = 'py_venv'\n \n \n+def get_default_version(): # pragma: no cover (version specific)\n+ if sys.version_info < (3,):\n+ return 'python3'\n+ else:\n+ return python.get_default_version()\n+\n+\n def orig_py_exe(exe): # pragma: no cover (platform specific)\n \"\"\"A -mvenv virtualenv made from a -mvirtualenv virtualenv installs\n packages to the incorrect location. Attempt to find the _original_ exe\n@@ -43,6 +51,5 @@\n cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')\n \n \n-get_default_version = python.get_default_version\n _interface = python.py_interface(ENVIRONMENT_DIR, make_venv)\n in_env, healthy, run_hook, install_environment = _interface\n", "issue": "python_venv language fails to use python3 interpreter and is using python2.7 instead\nApparently pre-commit failed to use python3 interpreter when I tried to add a hook and thus failed because venv module was not installed on default python2.7!\r\n\r\n```\r\n$ pre-commit try-repo ../python-license-check [19:55:27]\r\n[INFO] Initializing environment for ../python-license-check.\r\n===============================================================================\r\nUsing config:\r\n===============================================================================\r\nrepos:\r\n- repo: ../python-license-check\r\n rev: 4048cf3844dbbf45690c153a7da7f532585ec87c\r\n hooks:\r\n - id: liccheck\r\n===============================================================================\r\n[INFO] Installing environment for ../python-license-check.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\nAn unexpected error has occurred: CalledProcessError: Command: ('/Users/ssbarnea/.pyenv/versions/2.7.14/bin/python2.7', '-mvenv', '/var/folders/br/99tfdvcs3vvfwdk69z7f0xmc0000gn/T/tmpayl0P5/repoHa7_qe/py_venv-python2.7')\r\nReturn code: 1\r\nExpected return code: 0\r\nOutput: (none)\r\nErrors:\r\n /Users/ssbarnea/.pyenv/versions/2.7.14/bin/python2.7: No module named venv\r\n\r\n\r\nCheck the log at /Users/ssbarnea/.cache/pre-commit/pre-commit.log\r\nFAIL: 1\r\nssbarnea@smac: ~/os/jira master \u26a1 $ cat ../python-license-check/.pre-commit-hooks.yaml [19:55:34]\r\n- id: liccheck\r\n name: Validates dependency licenses for Python packages\r\n description: This validator validates a pre-commit hooks manifest file\r\n entry: liccheck -s setup.cfg -r requirements.txt\r\n language: python_venv\r\n```\r\n\r\nBased on the documentation I was expecting to see pre-commit using the `python3` executable for calling venv module. \n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os.path\n\nfrom pre_commit.languages import python\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'py_venv'\n\n\ndef orig_py_exe(exe): # pragma: no cover (platform specific)\n \"\"\"A -mvenv virtualenv made from a -mvirtualenv virtualenv installs\n packages to the incorrect location. 
Attempt to find the _original_ exe\n and invoke `-mvenv` from there.\n\n See:\n - https://github.com/pre-commit/pre-commit/issues/755\n - https://github.com/pypa/virtualenv/issues/1095\n - https://bugs.python.org/issue30811\n \"\"\"\n try:\n prefix_script = 'import sys; print(sys.real_prefix)'\n _, prefix, _ = cmd_output(exe, '-c', prefix_script)\n prefix = prefix.strip()\n except CalledProcessError:\n # not created from -mvirtualenv\n return exe\n\n if os.name == 'nt':\n expected = os.path.join(prefix, 'python.exe')\n else:\n expected = os.path.join(prefix, 'bin', os.path.basename(exe))\n\n if os.path.exists(expected):\n return expected\n else:\n return exe\n\n\ndef make_venv(envdir, python):\n cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')\n\n\nget_default_version = python.get_default_version\n_interface = python.py_interface(ENVIRONMENT_DIR, make_venv)\nin_env, healthy, run_hook, install_environment = _interface\n", "path": "pre_commit/languages/python_venv.py"}]}
| 1,475 | 288 |
gh_patches_debug_56429
|
rasdani/github-patches
|
git_diff
|
pwr-Solaar__Solaar-1085
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Either documentation or behavior with a second Unifying Receiver - How to specify?
Arch Linux
solaar-git 1.0.5rc2.r0.g8d01e28-1
2021 Feb 20
Suppose you have, for instance, a second Unifying Receiver that you want to pair with a particular device, perhaps before moving the second receiver and the paired device to another machine.
To specify and pair that particular device with a specific Unifying Receiver, when there is more than one receiver, and consulting the embedded documentation, compare:
```
$ solaar pair --help
usage: solaar pair [-h] [receiver]
positional arguments:
receiver select a certain receiver when more than one is present
optional arguments:
-h, --help show this help message and exit
The Logitech Unifying Receiver supports up to 6 paired devices at the same time.
```
versus:
```
$ solaar --help
usage: solaar [-h] [-d] [-D PATH] [--restart-on-wake-up] [-w {show,hide,only}] [-b {regular,symbolic,solaar}] [-V] [--help-actions] ...
positional arguments:
{show,probe,config,pair,unpair}
optional actions to perform
optional arguments:
-h, --help show this help message and exit
-d, --debug print logging messages, for debugging purposes (may be repeated for extra verbosity)
-D PATH, --hidraw PATH
unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2
--restart-on-wake-up restart Solaar on sleep wake-up (experimental)
-w {show,hide,only}, --window {show,hide,only}
start with window showing / hidden / only (no tray icon)
-b {regular,symbolic,solaar}, --battery-icons {regular,symbolic,solaar}
prefer regular battery / symbolic battery / solaar icons
-V, --version show program's version number and exit
--help-actions print help for the optional actions
```
Note that, currently, the first approach fails - assuming that the receiver is selected using its Serial number, found with `solaar show` - giving:
```
$ solaar pair xxxxxxxx
solaar: error: Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/solaar/cli/__init__.py", line 202, in run
m.run(c, args, _find_receiver, _find_device)
File "/usr/lib/python3.9/site-packages/solaar/cli/pair.py", line 35, in run
receiver = find_receiver(receiver_name)
TypeError: _find_receiver() missing 1 required positional argument: 'name'
```
Well, no, the "1 required positional argument" was not "missing", it was just not recognized. Using "Device path" as the name, instead of "Serial", produces the same failed result.
Instead, the second approach does work:
```
$ solaar -D /dev/hidraw0 pair
Pairing: turn your new device on (timing out in 20 seconds).
Paired device 1: Wireless Mouse M525 (M525) [4013:xxxxxxxx]
```
So, what is the preferred behavior? If the device path switch approach is preferred, then the `solaar pair --help` usage message should be revised. If, instead, the original `solaar pair [-h] [receiver]` approach is preferred, then that would suggest that the device path switch approach should be reverted. I expect that there was a good reason to implement the device path switch approach, which maybe implies that the original `solaar pair [-h] [receiver]` approach is now deprecated.
It just seems that `/usr/lib/python3.9/site-packages/solaar/cli/pair.py` is still expecting `find_receiver(receiver_name)`, so, a little confusing. I haven't tried to grok the code, but I do notice there is still:
```
/usr/lib/python3.9/site-packages/solaar/cli/__init__.py
def _find_receiver(receivers, name):
assert receivers
assert name
for r in receivers:
if name in r.name.lower() or (r.serial is not None and name == r.serial.lower()):
return r
```
Maybe someone can take a look at all this and see what is going on?
</issue>
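The reported traceback is an arity mismatch: `_find_receiver` takes `(receivers, name)`, but `pair.py` passed only the name. A reduced reproduction with hypothetical receiver records (plain dicts here, not Solaar objects):
```python
# Reduced reproduction; dicts stand in for Solaar receiver objects.
def _find_receiver(receivers, name):
    for r in receivers:
        if name in r["name"].lower() or name == r["serial"].lower():
            return r

receivers = [{"name": "Unifying Receiver", "serial": "xxxxxxxx"}]

try:
    _find_receiver("xxxxxxxx")                 # old call in pair.py: one argument
except TypeError as exc:
    print(exc)            # _find_receiver() missing 1 required positional argument: 'name'

print(_find_receiver(receivers, "xxxxxxxx"))   # patched call selects the receiver
```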
<code>
[start of lib/solaar/cli/pair.py]
1 # -*- python-mode -*-
2 # -*- coding: UTF-8 -*-
3
4 ## Copyright (C) 2012-2013 Daniel Pavel
5 ##
6 ## This program is free software; you can redistribute it and/or modify
7 ## it under the terms of the GNU General Public License as published by
8 ## the Free Software Foundation; either version 2 of the License, or
9 ## (at your option) any later version.
10 ##
11 ## This program is distributed in the hope that it will be useful,
12 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
13 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 ## GNU General Public License for more details.
15 ##
16 ## You should have received a copy of the GNU General Public License along
17 ## with this program; if not, write to the Free Software Foundation, Inc.,
18 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
19
20 from __future__ import absolute_import, division, print_function, unicode_literals
21
22 from time import time as _timestamp
23
24 from logitech_receiver import base as _base
25 from logitech_receiver import hidpp10 as _hidpp10
26 from logitech_receiver import notifications as _notifications
27 from logitech_receiver import status as _status
28
29
30 def run(receivers, args, find_receiver, _ignore):
31 assert receivers
32
33 if args.receiver:
34 receiver_name = args.receiver.lower()
35 receiver = find_receiver(receiver_name)
36 if not receiver:
37 raise Exception("no receiver found matching '%s'" % receiver_name)
38 else:
39 receiver = receivers[0]
40
41 assert receiver
42 receiver.status = _status.ReceiverStatus(receiver, lambda *args, **kwargs: None)
43
44 # check if it's necessary to set the notification flags
45 old_notification_flags = _hidpp10.get_notification_flags(receiver) or 0
46 if not (old_notification_flags & _hidpp10.NOTIFICATION_FLAG.wireless):
47 _hidpp10.set_notification_flags(receiver, old_notification_flags | _hidpp10.NOTIFICATION_FLAG.wireless)
48
49 # get all current devices
50 known_devices = [dev.number for dev in receiver]
51
52 class _HandleWithNotificationHook(int):
53 def notifications_hook(self, n):
54 nonlocal known_devices
55 assert n
56 if n.devnumber == 0xFF:
57 _notifications.process(receiver, n)
58 elif n.sub_id == 0x41 and len(n.data) == _base._SHORT_MESSAGE_SIZE - 4:
59 kd, known_devices = known_devices, None # only process one connection notification
60 if kd is not None:
61 if n.devnumber not in kd:
62 receiver.status.new_device = receiver.register_new_device(n.devnumber, n)
63 elif receiver.re_pairs:
64 del receiver[n.devnumber] # get rid of information on device re-paired away
65 receiver.status.new_device = receiver.register_new_device(n.devnumber, n)
66
67 timeout = 20 # seconds
68 receiver.handle = _HandleWithNotificationHook(receiver.handle)
69
70 receiver.set_lock(False, timeout=timeout)
71 print('Pairing: turn your new device on (timing out in', timeout, 'seconds).')
72
73 # the lock-open notification may come slightly later, wait for it a bit
74 pairing_start = _timestamp()
75 patience = 5 # seconds
76
77 while receiver.status.lock_open or _timestamp() - pairing_start < patience:
78 n = _base.read(receiver.handle)
79 if n:
80 n = _base.make_notification(*n)
81 if n:
82 receiver.handle.notifications_hook(n)
83
84 if not (old_notification_flags & _hidpp10.NOTIFICATION_FLAG.wireless):
85 # only clear the flags if they weren't set before, otherwise a
86 # concurrently running Solaar app might stop working properly
87 _hidpp10.set_notification_flags(receiver, old_notification_flags)
88
89 if receiver.status.new_device:
90 dev = receiver.status.new_device
91 print('Paired device %d: %s (%s) [%s:%s]' % (dev.number, dev.name, dev.codename, dev.wpid, dev.serial))
92 else:
93 error = receiver.status.get(_status.KEYS.ERROR)
94 if error:
95 raise Exception('pairing failed: %s' % error)
96 else:
97 print('Paired a device') # this is better than an error
98
[end of lib/solaar/cli/pair.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/solaar/cli/pair.py b/lib/solaar/cli/pair.py
--- a/lib/solaar/cli/pair.py
+++ b/lib/solaar/cli/pair.py
@@ -32,7 +32,7 @@
if args.receiver:
receiver_name = args.receiver.lower()
- receiver = find_receiver(receiver_name)
+ receiver = find_receiver(receivers, receiver_name)
if not receiver:
raise Exception("no receiver found matching '%s'" % receiver_name)
else:
|
{"golden_diff": "diff --git a/lib/solaar/cli/pair.py b/lib/solaar/cli/pair.py\n--- a/lib/solaar/cli/pair.py\n+++ b/lib/solaar/cli/pair.py\n@@ -32,7 +32,7 @@\n \n if args.receiver:\n receiver_name = args.receiver.lower()\n- receiver = find_receiver(receiver_name)\n+ receiver = find_receiver(receivers, receiver_name)\n if not receiver:\n raise Exception(\"no receiver found matching '%s'\" % receiver_name)\n else:\n", "issue": "Either documentation or behavior with a second Unifying Receiver - How to specify?\nArch Linux\r\nsolaar-git 1.0.5rc2.r0.g8d01e28-1\r\n2021 Feb 20\r\n\r\nSuppose you have, for instance, a second Unifying Receiver that you want to pair with a particular device, perhaps before moving the second receiver and the paired device to another machine.\r\n\r\nTo specify and pair that particular device with a specific Unifying Receiver, when there is more than one receiver, and consulting the embedded documentation, compare:\r\n```\r\n$ solaar pair --help\r\nusage: solaar pair [-h] [receiver]\r\n\r\npositional arguments:\r\n receiver select a certain receiver when more than one is present\r\n\r\noptional arguments:\r\n -h, --help show this help message and exit\r\n\r\nThe Logitech Unifying Receiver supports up to 6 paired devices at the same time.1 Feb 20\r\n```\r\nversus:\r\n```\r\n$ solaar --help\r\nusage: solaar [-h] [-d] [-D PATH] [--restart-on-wake-up] [-w {show,hide,only}] [-b {regular,symbolic,solaar}] [-V] [--help-actions] ...\r\n\r\npositional arguments:\r\n {show,probe,config,pair,unpair}\r\n optional actions to perform\r\n\r\noptional arguments:\r\n -h, --help show this help message and exit\r\n -d, --debug print logging messages, for debugging purposes (may be repeated for extra verbosity)\r\n -D PATH, --hidraw PATH\r\n unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2\r\n --restart-on-wake-up restart Solaar on sleep wake-up (experimental)\r\n -w {show,hide,only}, --window {show,hide,only}\r\n start with window showing / hidden / only (no tray icon)\r\n -b {regular,symbolic,solaar}, --battery-icons {regular,symbolic,solaar}\r\n prefer regular battery / symbolic battery / solaar icons\r\n -V, --version show program's version number and exit\r\n --help-actions print help for the optional actions\r\n```\r\n\r\nNote that, currently, the first approach fails - assuming that the receiver is selected using its Serial number, found with `solaar show` - giving:\r\n```\r\n$ solaar pair xxxxxxxx\r\nsolaar: error: Traceback (most recent call last):\r\n File \"/usr/lib/python3.9/site-packages/solaar/cli/__init__.py\", line 202, in run\r\n m.run(c, args, _find_receiver, _find_device)\r\n File \"/usr/lib/python3.9/site-packages/solaar/cli/pair.py\", line 35, in run\r\n receiver = find_receiver(receiver_name)\r\nTypeError: _find_receiver() missing 1 required positional argument: 'name'\r\n```\r\nWell, no, the \"1 required positional argument\" was not \"missing\", it was just not recognized. Using \"Device path\" as the name, instead of \"Serial\", produces the same failed result.\r\n\r\nInstead, the second approach does work:\r\n```\r\n$ solaar -D /dev/hidraw0 pair\r\nPairing: turn your new device on (timing out in 20 seconds).\r\nPaired device 1: Wireless Mouse M525 (M525) [4013:xxxxxxxx]\r\n```\r\n\r\nSo, what is the preferred behavior? If the device path switch approach is preferred, then the `solaar pair --help` usage message should be revised. 
If, instead, the original `solaar pair [-h] [receiver]` approach is preferred, then that would suggest that the device path switch approach should be reverted. I expect that there was a good reason to implement the device path switch approach, which maybe implies that the original `solaar pair [-h] [receiver]` approach is now deprecated.\r\n\r\nIt just seems that `/usr/lib/python3.9/site-packages/solaar/cli/pair.py` is still expecting `find_receiver(receiver_name)`, so, a little confusing. I haven't tried to grok the code, but I do notice there is still:\r\n```\r\n/usr/lib/python3.9/site-packages/solaar/cli/__init__.py\r\ndef _find_receiver(receivers, name):\r\n assert receivers\r\n assert name\r\n\r\n for r in receivers:\r\n if name in r.name.lower() or (r.serial is not None and name == r.serial.lower()):\r\n return r\r\n```\r\n\r\nMaybe someone can take a look at all this and see what is going on?\n", "before_files": [{"content": "# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom time import time as _timestamp\n\nfrom logitech_receiver import base as _base\nfrom logitech_receiver import hidpp10 as _hidpp10\nfrom logitech_receiver import notifications as _notifications\nfrom logitech_receiver import status as _status\n\n\ndef run(receivers, args, find_receiver, _ignore):\n assert receivers\n\n if args.receiver:\n receiver_name = args.receiver.lower()\n receiver = find_receiver(receiver_name)\n if not receiver:\n raise Exception(\"no receiver found matching '%s'\" % receiver_name)\n else:\n receiver = receivers[0]\n\n assert receiver\n receiver.status = _status.ReceiverStatus(receiver, lambda *args, **kwargs: None)\n\n # check if it's necessary to set the notification flags\n old_notification_flags = _hidpp10.get_notification_flags(receiver) or 0\n if not (old_notification_flags & _hidpp10.NOTIFICATION_FLAG.wireless):\n _hidpp10.set_notification_flags(receiver, old_notification_flags | _hidpp10.NOTIFICATION_FLAG.wireless)\n\n # get all current devices\n known_devices = [dev.number for dev in receiver]\n\n class _HandleWithNotificationHook(int):\n def notifications_hook(self, n):\n nonlocal known_devices\n assert n\n if n.devnumber == 0xFF:\n _notifications.process(receiver, n)\n elif n.sub_id == 0x41 and len(n.data) == _base._SHORT_MESSAGE_SIZE - 4:\n kd, known_devices = known_devices, None # only process one connection notification\n if kd is not None:\n if n.devnumber not in kd:\n receiver.status.new_device = receiver.register_new_device(n.devnumber, n)\n elif receiver.re_pairs:\n del receiver[n.devnumber] # get rid of information on device re-paired away\n receiver.status.new_device = receiver.register_new_device(n.devnumber, n)\n\n 
timeout = 20 # seconds\n receiver.handle = _HandleWithNotificationHook(receiver.handle)\n\n receiver.set_lock(False, timeout=timeout)\n print('Pairing: turn your new device on (timing out in', timeout, 'seconds).')\n\n # the lock-open notification may come slightly later, wait for it a bit\n pairing_start = _timestamp()\n patience = 5 # seconds\n\n while receiver.status.lock_open or _timestamp() - pairing_start < patience:\n n = _base.read(receiver.handle)\n if n:\n n = _base.make_notification(*n)\n if n:\n receiver.handle.notifications_hook(n)\n\n if not (old_notification_flags & _hidpp10.NOTIFICATION_FLAG.wireless):\n # only clear the flags if they weren't set before, otherwise a\n # concurrently running Solaar app might stop working properly\n _hidpp10.set_notification_flags(receiver, old_notification_flags)\n\n if receiver.status.new_device:\n dev = receiver.status.new_device\n print('Paired device %d: %s (%s) [%s:%s]' % (dev.number, dev.name, dev.codename, dev.wpid, dev.serial))\n else:\n error = receiver.status.get(_status.KEYS.ERROR)\n if error:\n raise Exception('pairing failed: %s' % error)\n else:\n print('Paired a device') # this is better than an error\n", "path": "lib/solaar/cli/pair.py"}]}
| 2,667 | 115 |
gh_patches_debug_18504
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-2296
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImportError: cannot import name 'CARAFENAIVE' from 'mmdet.ops.carafe'
The module name 'CARAFENAIVE' in the file 'mmdet/ops/carafe/grad_check.py' should be 'CARAFENaive'. When I run the command 'python mmdet/ops/carafe/grad_check.py', the following error is reported: ImportError: cannot import name 'CARAFENAIVE' from 'mmdet.ops.carafe'.
</issue>
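For context, a minimal sketch of the naming mismatch the report describes. This is a hypothetical check that assumes an `mmdet` build with the CARAFE ops installed; `CARAFENaive` is the spelling the package actually exports, as the fix further down in this entry confirms.

```python
# Illustration only: class names are case-sensitive, so the wrong spelling fails to import.
try:
    from mmdet.ops.carafe import CARAFENAIVE  # spelling used in grad_check.py
except ImportError as exc:
    print("fails as reported:", exc)

from mmdet.ops.carafe import CARAFENaive  # exported class name
print(CARAFENaive)
```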
<code>
[start of mmdet/ops/carafe/grad_check.py]
1 import os.path as osp
2 import sys
3
4 import mmcv
5 import torch
6 from torch.autograd import gradcheck
7
8 sys.path.append(osp.abspath(osp.join(__file__, '../../')))
9 from mmdet.ops.carafe import CARAFENAIVE # noqa: E402, isort:skip
10 from mmdet.ops.carafe import carafe_naive # noqa: E402, isort:skip
11 from mmdet.ops.carafe import carafe, CARAFE # noqa: E402, isort:skip
12
13 feat = torch.randn(2, 64, 3, 3, requires_grad=True, device='cuda:0').double()
14 mask = torch.randn(
15 2, 100, 6, 6, requires_grad=True, device='cuda:0').sigmoid().double()
16
17 print('Gradcheck for carafe...')
18 test = gradcheck(CARAFE(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)
19 print(test)
20
21 print('Gradcheck for carafe naive...')
22 test = gradcheck(CARAFENAIVE(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)
23 print(test)
24
25 feat = torch.randn(
26 2, 1024, 100, 100, requires_grad=True, device='cuda:0').float()
27 mask = torch.randn(
28 2, 25, 200, 200, requires_grad=True, device='cuda:0').sigmoid().float()
29 loop_num = 500
30
31 time_forward = 0
32 time_backward = 0
33 bar = mmcv.ProgressBar(loop_num)
34 timer = mmcv.Timer()
35 for i in range(loop_num):
36 x = carafe(feat.clone(), mask.clone(), 5, 1, 2)
37 torch.cuda.synchronize()
38 time_forward += timer.since_last_check()
39 x.sum().backward(retain_graph=True)
40 torch.cuda.synchronize()
41 time_backward += timer.since_last_check()
42 bar.update()
43 print('\nCARAFE time forward: {} ms/iter | time backward: {} ms/iter'.format(
44 (time_forward + 1e-3) * 1e3 / loop_num,
45 (time_backward + 1e-3) * 1e3 / loop_num))
46
47 time_naive_forward = 0
48 time_naive_backward = 0
49 bar = mmcv.ProgressBar(loop_num)
50 timer = mmcv.Timer()
51 for i in range(loop_num):
52 x = carafe_naive(feat.clone(), mask.clone(), 5, 1, 2)
53 torch.cuda.synchronize()
54 time_naive_forward += timer.since_last_check()
55 x.sum().backward(retain_graph=True)
56 torch.cuda.synchronize()
57 time_naive_backward += timer.since_last_check()
58 bar.update()
59 print('\nCARAFE naive time forward: {} ms/iter | time backward: {} ms/iter'.
60 format((time_naive_forward + 1e-3) * 1e3 / loop_num,
61 (time_naive_backward + 1e-3) * 1e3 / loop_num))
62
[end of mmdet/ops/carafe/grad_check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmdet/ops/carafe/grad_check.py b/mmdet/ops/carafe/grad_check.py
--- a/mmdet/ops/carafe/grad_check.py
+++ b/mmdet/ops/carafe/grad_check.py
@@ -6,9 +6,8 @@
from torch.autograd import gradcheck
sys.path.append(osp.abspath(osp.join(__file__, '../../')))
-from mmdet.ops.carafe import CARAFENAIVE # noqa: E402, isort:skip
-from mmdet.ops.carafe import carafe_naive # noqa: E402, isort:skip
-from mmdet.ops.carafe import carafe, CARAFE # noqa: E402, isort:skip
+from mmdet.ops.carafe import CARAFE, CARAFENaive # noqa: E402, isort:skip
+from mmdet.ops.carafe import carafe, carafe_naive # noqa: E402, isort:skip
feat = torch.randn(2, 64, 3, 3, requires_grad=True, device='cuda:0').double()
mask = torch.randn(
@@ -19,7 +18,7 @@
print(test)
print('Gradcheck for carafe naive...')
-test = gradcheck(CARAFENAIVE(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)
+test = gradcheck(CARAFENaive(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)
print(test)
feat = torch.randn(
|
{"golden_diff": "diff --git a/mmdet/ops/carafe/grad_check.py b/mmdet/ops/carafe/grad_check.py\n--- a/mmdet/ops/carafe/grad_check.py\n+++ b/mmdet/ops/carafe/grad_check.py\n@@ -6,9 +6,8 @@\n from torch.autograd import gradcheck\n \n sys.path.append(osp.abspath(osp.join(__file__, '../../')))\n-from mmdet.ops.carafe import CARAFENAIVE # noqa: E402, isort:skip\n-from mmdet.ops.carafe import carafe_naive # noqa: E402, isort:skip\n-from mmdet.ops.carafe import carafe, CARAFE # noqa: E402, isort:skip\n+from mmdet.ops.carafe import CARAFE, CARAFENaive # noqa: E402, isort:skip\n+from mmdet.ops.carafe import carafe, carafe_naive # noqa: E402, isort:skip\n \n feat = torch.randn(2, 64, 3, 3, requires_grad=True, device='cuda:0').double()\n mask = torch.randn(\n@@ -19,7 +18,7 @@\n print(test)\n \n print('Gradcheck for carafe naive...')\n-test = gradcheck(CARAFENAIVE(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)\n+test = gradcheck(CARAFENaive(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)\n print(test)\n \n feat = torch.randn(\n", "issue": "ImportError: cannot import name 'CARAFENAIVE' from 'mmdet.ops.carafe'\nThe module name 'CARAFENAIVE' in file 'mmdet.ops.carafe.grad_check.py' shoud be 'CARAFENaive'. When I run this command 'python mmdet/ops/carafe/grad_check.py', the following error is reported: ImportError: cannot import name 'CARAFENAIVE' from 'mmdet.ops.carafe'.\n", "before_files": [{"content": "import os.path as osp\nimport sys\n\nimport mmcv\nimport torch\nfrom torch.autograd import gradcheck\n\nsys.path.append(osp.abspath(osp.join(__file__, '../../')))\nfrom mmdet.ops.carafe import CARAFENAIVE # noqa: E402, isort:skip\nfrom mmdet.ops.carafe import carafe_naive # noqa: E402, isort:skip\nfrom mmdet.ops.carafe import carafe, CARAFE # noqa: E402, isort:skip\n\nfeat = torch.randn(2, 64, 3, 3, requires_grad=True, device='cuda:0').double()\nmask = torch.randn(\n 2, 100, 6, 6, requires_grad=True, device='cuda:0').sigmoid().double()\n\nprint('Gradcheck for carafe...')\ntest = gradcheck(CARAFE(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)\nprint(test)\n\nprint('Gradcheck for carafe naive...')\ntest = gradcheck(CARAFENAIVE(5, 4, 2), (feat, mask), atol=1e-4, eps=1e-4)\nprint(test)\n\nfeat = torch.randn(\n 2, 1024, 100, 100, requires_grad=True, device='cuda:0').float()\nmask = torch.randn(\n 2, 25, 200, 200, requires_grad=True, device='cuda:0').sigmoid().float()\nloop_num = 500\n\ntime_forward = 0\ntime_backward = 0\nbar = mmcv.ProgressBar(loop_num)\ntimer = mmcv.Timer()\nfor i in range(loop_num):\n x = carafe(feat.clone(), mask.clone(), 5, 1, 2)\n torch.cuda.synchronize()\n time_forward += timer.since_last_check()\n x.sum().backward(retain_graph=True)\n torch.cuda.synchronize()\n time_backward += timer.since_last_check()\n bar.update()\nprint('\\nCARAFE time forward: {} ms/iter | time backward: {} ms/iter'.format(\n (time_forward + 1e-3) * 1e3 / loop_num,\n (time_backward + 1e-3) * 1e3 / loop_num))\n\ntime_naive_forward = 0\ntime_naive_backward = 0\nbar = mmcv.ProgressBar(loop_num)\ntimer = mmcv.Timer()\nfor i in range(loop_num):\n x = carafe_naive(feat.clone(), mask.clone(), 5, 1, 2)\n torch.cuda.synchronize()\n time_naive_forward += timer.since_last_check()\n x.sum().backward(retain_graph=True)\n torch.cuda.synchronize()\n time_naive_backward += timer.since_last_check()\n bar.update()\nprint('\\nCARAFE naive time forward: {} ms/iter | time backward: {} ms/iter'.\n format((time_naive_forward + 1e-3) * 1e3 / loop_num,\n (time_naive_backward + 1e-3) * 1e3 / loop_num))\n", "path": 
"mmdet/ops/carafe/grad_check.py"}]}
| 1,474 | 381 |
gh_patches_debug_19435
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-3457
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pyramid.exceptions.ConfigurationConflictError: <exception str() failed>
**Describe the bug**
While building an app I caused an error whose traceback ended with the following line:
```
pyramid.exceptions.ConfigurationConflictError: <exception str() failed>
```
What caused the error in the first place was that I copied a python package containing my views to a new package called controllers and forgot to delete the original views package. I assume that the configurator failed while trying to commit the config.scan(). Since I couldn't find any information online about the above traceback message I assume that this is probably just some internal problem with Pyramid.
**To Reproduce**
download from: https://github.com/benkawecki/pypi/tree/error
after setting up, run
```
pserve development.ini
```
**Expected behavior**
I expect there to be an error message.
**Screenshots**
Screenshot of the error message:
<img width="489" alt="screen shot 2019-01-15 at 10 02 44 pm" src="https://user-images.githubusercontent.com/39999125/51224413-c57eb800-1913-11e9-9e0f-b25878a479f5.png">
Screenshot of installed packages:
<img width="488" alt="screen shot 2019-01-15 at 10 24 42 pm" src="https://user-images.githubusercontent.com/39999125/51224563-8b61e600-1914-11e9-9b04-42936f94d4bd.png">
**Additional context**
I'm looking to help out in open-source more this year so if this is an easy fix I would love to see if I can do it!
</issue>
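One plausible mechanism for the unreadable message, sketched in plain Python rather than Pyramid's own code: if the conflicting discriminators are not mutually orderable (say, a tuple next to `None`), the `sorted()` call inside `__str__` raises `TypeError`, and the interpreter can only report `<exception str() failed>`. The class and values below are made up for illustration; the accompanying fix simply drops the `sorted()` call.

```python
# Self-contained sketch of __str__ blowing up while formatting conflicts.
class ConflictError(Exception):
    def __init__(self, conflicts):
        self._conflicts = conflicts

    def __str__(self):
        # Sorting mixed-type discriminators (a tuple next to None) raises TypeError.
        items = sorted(self._conflicts.items())
        return "\n".join("  For: %s" % (d,) for d, _ in items)


err = ConflictError({("view", "home"): ["first registration"], None: ["second registration"]})
try:
    str(err)
except TypeError as exc:
    print("str() raised:", exc)  # this is what leaves the traceback unreadable
```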
<code>
[start of src/pyramid/exceptions.py]
1 from pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPForbidden
2
3 NotFound = HTTPNotFound # bw compat
4 Forbidden = HTTPForbidden # bw compat
5
6 CR = '\n'
7
8
9 class BadCSRFOrigin(HTTPBadRequest):
10 """
11 This exception indicates the request has failed cross-site request forgery
12 origin validation.
13 """
14
15 title = "Bad CSRF Origin"
16 explanation = (
17 "Access is denied. This server can not verify that the origin or "
18 "referrer of your request matches the current site. Either your "
19 "browser supplied the wrong Origin or Referrer or it did not supply "
20 "one at all."
21 )
22
23
24 class BadCSRFToken(HTTPBadRequest):
25 """
26 This exception indicates the request has failed cross-site request
27 forgery token validation.
28 """
29
30 title = 'Bad CSRF Token'
31 explanation = (
32 'Access is denied. This server can not verify that your cross-site '
33 'request forgery token belongs to your login session. Either you '
34 'supplied the wrong cross-site request forgery token or your session '
35 'no longer exists. This may be due to session timeout or because '
36 'browser is not supplying the credentials required, as can happen '
37 'when the browser has cookies turned off.'
38 )
39
40
41 class PredicateMismatch(HTTPNotFound):
42 """
43 This exception is raised by multiviews when no view matches
44 all given predicates.
45
46 This exception subclasses the :class:`HTTPNotFound` exception for a
47 specific reason: if it reaches the main exception handler, it should
48 be treated as :class:`HTTPNotFound`` by any exception view
49 registrations. Thus, typically, this exception will not be seen
50 publicly.
51
52 However, this exception will be raised if the predicates of all
53 views configured to handle another exception context cannot be
54 successfully matched. For instance, if a view is configured to
55 handle a context of ``HTTPForbidden`` and the configured with
56 additional predicates, then :class:`PredicateMismatch` will be
57 raised if:
58
59 * An original view callable has raised :class:`HTTPForbidden` (thus
60 invoking an exception view); and
61 * The given request fails to match all predicates for said
62 exception view associated with :class:`HTTPForbidden`.
63
64 The same applies to any type of exception being handled by an
65 exception view.
66 """
67
68
69 class URLDecodeError(UnicodeDecodeError):
70 """
71 This exception is raised when :app:`Pyramid` cannot
72 successfully decode a URL or a URL path segment. This exception
73 behaves just like the Python builtin
74 :exc:`UnicodeDecodeError`. It is a subclass of the builtin
75 :exc:`UnicodeDecodeError` exception only for identity purposes,
76 mostly so an exception view can be registered when a URL cannot be
77 decoded.
78 """
79
80
81 class ConfigurationError(Exception):
82 """ Raised when inappropriate input values are supplied to an API
83 method of a :term:`Configurator`"""
84
85
86 class ConfigurationConflictError(ConfigurationError):
87 """ Raised when a configuration conflict is detected during action
88 processing"""
89
90 def __init__(self, conflicts):
91 self._conflicts = conflicts
92
93 def __str__(self):
94 r = ["Conflicting configuration actions"]
95 items = sorted(self._conflicts.items())
96 for discriminator, infos in items:
97 r.append(" For: %s" % (discriminator,))
98 for info in infos:
99 for line in str(info).rstrip().split(CR):
100 r.append(" " + line)
101
102 return CR.join(r)
103
104
105 class ConfigurationExecutionError(ConfigurationError):
106 """An error occurred during execution of a configuration action
107 """
108
109 def __init__(self, etype, evalue, info):
110 self.etype, self.evalue, self.info = etype, evalue, info
111
112 def __str__(self):
113 return "%s: %s\n in:\n %s" % (self.etype, self.evalue, self.info)
114
115
116 class CyclicDependencyError(Exception):
117 """ The exception raised when the Pyramid topological sorter detects a
118 cyclic dependency."""
119
120 def __init__(self, cycles):
121 self.cycles = cycles
122
123 def __str__(self):
124 L = []
125 cycles = self.cycles
126 for cycle in cycles:
127 dependent = cycle
128 dependees = cycles[cycle]
129 L.append('%r sorts before %r' % (dependent, dependees))
130 msg = 'Implicit ordering cycle:' + '; '.join(L)
131 return msg
132
[end of src/pyramid/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pyramid/exceptions.py b/src/pyramid/exceptions.py
--- a/src/pyramid/exceptions.py
+++ b/src/pyramid/exceptions.py
@@ -3,8 +3,6 @@
NotFound = HTTPNotFound # bw compat
Forbidden = HTTPForbidden # bw compat
-CR = '\n'
-
class BadCSRFOrigin(HTTPBadRequest):
"""
@@ -92,14 +90,13 @@
def __str__(self):
r = ["Conflicting configuration actions"]
- items = sorted(self._conflicts.items())
- for discriminator, infos in items:
+ for discriminator, infos in self._conflicts.items():
r.append(" For: %s" % (discriminator,))
for info in infos:
- for line in str(info).rstrip().split(CR):
+ for line in str(info).rstrip().split('\n'):
r.append(" " + line)
- return CR.join(r)
+ return '\n'.join(r)
class ConfigurationExecutionError(ConfigurationError):
|
{"golden_diff": "diff --git a/src/pyramid/exceptions.py b/src/pyramid/exceptions.py\n--- a/src/pyramid/exceptions.py\n+++ b/src/pyramid/exceptions.py\n@@ -3,8 +3,6 @@\n NotFound = HTTPNotFound # bw compat\n Forbidden = HTTPForbidden # bw compat\n \n-CR = '\\n'\n-\n \n class BadCSRFOrigin(HTTPBadRequest):\n \"\"\"\n@@ -92,14 +90,13 @@\n \n def __str__(self):\n r = [\"Conflicting configuration actions\"]\n- items = sorted(self._conflicts.items())\n- for discriminator, infos in items:\n+ for discriminator, infos in self._conflicts.items():\n r.append(\" For: %s\" % (discriminator,))\n for info in infos:\n- for line in str(info).rstrip().split(CR):\n+ for line in str(info).rstrip().split('\\n'):\n r.append(\" \" + line)\n \n- return CR.join(r)\n+ return '\\n'.join(r)\n \n \n class ConfigurationExecutionError(ConfigurationError):\n", "issue": "pyramid.exceptions.ConfigurationConflictError: <exception str() failed>\n**Describe the bug**\r\nWhile building an app I caused an error who's traceback ending the the following line:\r\n```\r\npyramid.exceptions.ConfigurationConflictError: <exception str() failed>\r\n```\r\nWhat caused the error in the first place was that I copied a python package containing my views to a new package called controllers and forgot to delete the original views package. I assume that the configurator failed while trying to commit the config.scan(). Since I couldn't find any information online about the above traceback message I assume that this is probably just some internal problem with Pyramid. \r\n**To Reproduce**\r\ndownload from: https://github.com/benkawecki/pypi/tree/error\r\nafter setting up run \r\n``` \r\npserve development.ini\r\n```\r\n\r\n**Expected behavior**\r\nI expect there to be an error message. \r\n\r\n**Screenshots**\r\nScreenshot of the error message:\r\n<img width=\"489\" alt=\"screen shot 2019-01-15 at 10 02 44 pm\" src=\"https://user-images.githubusercontent.com/39999125/51224413-c57eb800-1913-11e9-9e0f-b25878a479f5.png\">\r\nScreenshot of installed packages:\r\n<img width=\"488\" alt=\"screen shot 2019-01-15 at 10 24 42 pm\" src=\"https://user-images.githubusercontent.com/39999125/51224563-8b61e600-1914-11e9-9b04-42936f94d4bd.png\">\r\n\r\n\r\n\r\n**Additional context**\r\nI'm looking to help out in open-source more this year so if this is an easy fix I would love to see if I can do it!\r\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPForbidden\n\nNotFound = HTTPNotFound # bw compat\nForbidden = HTTPForbidden # bw compat\n\nCR = '\\n'\n\n\nclass BadCSRFOrigin(HTTPBadRequest):\n \"\"\"\n This exception indicates the request has failed cross-site request forgery\n origin validation.\n \"\"\"\n\n title = \"Bad CSRF Origin\"\n explanation = (\n \"Access is denied. This server can not verify that the origin or \"\n \"referrer of your request matches the current site. Either your \"\n \"browser supplied the wrong Origin or Referrer or it did not supply \"\n \"one at all.\"\n )\n\n\nclass BadCSRFToken(HTTPBadRequest):\n \"\"\"\n This exception indicates the request has failed cross-site request\n forgery token validation.\n \"\"\"\n\n title = 'Bad CSRF Token'\n explanation = (\n 'Access is denied. This server can not verify that your cross-site '\n 'request forgery token belongs to your login session. Either you '\n 'supplied the wrong cross-site request forgery token or your session '\n 'no longer exists. 
This may be due to session timeout or because '\n 'browser is not supplying the credentials required, as can happen '\n 'when the browser has cookies turned off.'\n )\n\n\nclass PredicateMismatch(HTTPNotFound):\n \"\"\"\n This exception is raised by multiviews when no view matches\n all given predicates.\n\n This exception subclasses the :class:`HTTPNotFound` exception for a\n specific reason: if it reaches the main exception handler, it should\n be treated as :class:`HTTPNotFound`` by any exception view\n registrations. Thus, typically, this exception will not be seen\n publicly.\n\n However, this exception will be raised if the predicates of all\n views configured to handle another exception context cannot be\n successfully matched. For instance, if a view is configured to\n handle a context of ``HTTPForbidden`` and the configured with\n additional predicates, then :class:`PredicateMismatch` will be\n raised if:\n\n * An original view callable has raised :class:`HTTPForbidden` (thus\n invoking an exception view); and\n * The given request fails to match all predicates for said\n exception view associated with :class:`HTTPForbidden`.\n\n The same applies to any type of exception being handled by an\n exception view.\n \"\"\"\n\n\nclass URLDecodeError(UnicodeDecodeError):\n \"\"\"\n This exception is raised when :app:`Pyramid` cannot\n successfully decode a URL or a URL path segment. This exception\n behaves just like the Python builtin\n :exc:`UnicodeDecodeError`. It is a subclass of the builtin\n :exc:`UnicodeDecodeError` exception only for identity purposes,\n mostly so an exception view can be registered when a URL cannot be\n decoded.\n \"\"\"\n\n\nclass ConfigurationError(Exception):\n \"\"\" Raised when inappropriate input values are supplied to an API\n method of a :term:`Configurator`\"\"\"\n\n\nclass ConfigurationConflictError(ConfigurationError):\n \"\"\" Raised when a configuration conflict is detected during action\n processing\"\"\"\n\n def __init__(self, conflicts):\n self._conflicts = conflicts\n\n def __str__(self):\n r = [\"Conflicting configuration actions\"]\n items = sorted(self._conflicts.items())\n for discriminator, infos in items:\n r.append(\" For: %s\" % (discriminator,))\n for info in infos:\n for line in str(info).rstrip().split(CR):\n r.append(\" \" + line)\n\n return CR.join(r)\n\n\nclass ConfigurationExecutionError(ConfigurationError):\n \"\"\"An error occurred during execution of a configuration action\n \"\"\"\n\n def __init__(self, etype, evalue, info):\n self.etype, self.evalue, self.info = etype, evalue, info\n\n def __str__(self):\n return \"%s: %s\\n in:\\n %s\" % (self.etype, self.evalue, self.info)\n\n\nclass CyclicDependencyError(Exception):\n \"\"\" The exception raised when the Pyramid topological sorter detects a\n cyclic dependency.\"\"\"\n\n def __init__(self, cycles):\n self.cycles = cycles\n\n def __str__(self):\n L = []\n cycles = self.cycles\n for cycle in cycles:\n dependent = cycle\n dependees = cycles[cycle]\n L.append('%r sorts before %r' % (dependent, dependees))\n msg = 'Implicit ordering cycle:' + '; '.join(L)\n return msg\n", "path": "src/pyramid/exceptions.py"}]}
| 2,230 | 229 |
gh_patches_debug_14167
|
rasdani/github-patches
|
git_diff
|
fidals__shopelectro-917
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
models.py:85: Explore and doc what is vendor_code.
The puzzle `910-f547fee5` from #910 has to be resolved:
https://github.com/fidals/shopelectro/blob/e7de650f01b749f584f374923d7863083cc83b3c/shopelectro/models.py#L85-L85
The puzzle was created by duker33 on 28-Jun-19.
Estimate: 30 minutes, role: DEV.
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).
</issue>
<code>
[start of shopelectro/models.py]
1 import enum
2 import random
3 import string
4 import typing
5 from uuid import uuid4
6
7 from django.conf import settings
8 from django.db import models
9 from django.urls import reverse
10 from django.utils.translation import ugettext_lazy as _
11
12 from catalog import models as catalog_models
13 from ecommerce import models as ecommerce_models
14 from pages import models as pages_models
15
16
17 def randomize_slug(slug: str) -> str:
18 slug_hash = ''.join(
19 random.choices(string.ascii_lowercase, k=settings.SLUG_HASH_SIZE)
20 )
21 return f'{slug}_{slug_hash}'
22
23
24 class SECategoryQuerySet(catalog_models.CategoryQuerySet):
25 def get_categories_tree_with_pictures(self) -> 'SECategoryQuerySet':
26 categories_with_pictures = (
27 self
28 .filter(products__page__images__isnull=False)
29 .distinct()
30 )
31
32 return categories_with_pictures.get_ancestors(include_self=True)
33
34
35 class SECategoryManager(
36 catalog_models.CategoryManager.from_queryset(SECategoryQuerySet)
37 ):
38 pass
39
40
41 class Category(catalog_models.AbstractCategory, pages_models.SyncPageMixin):
42
43 objects = SECategoryManager()
44 uuid = models.UUIDField(default=uuid4, editable=False)
45
46 @classmethod
47 def get_default_parent(cls):
48 return pages_models.CustomPage.objects.filter(slug='catalog').first()
49
50 @property
51 def image(self):
52 products = self.products.all()
53 return products[0].image if products else None
54
55 def get_absolute_url(self):
56 return reverse('category', args=(self.page.slug,))
57
58
59 class Product(
60 catalog_models.AbstractProduct,
61 catalog_models.AbstractPosition,
62 pages_models.SyncPageMixin
63 ):
64
65 # That's why we are needed to explicitly add objects manager here
66 # because of Django special managers behaviour.
67 # Se se#480 for details.
68 objects = catalog_models.ProductManager()
69
70 category = models.ForeignKey(
71 Category,
72 on_delete=models.CASCADE,
73 null=True,
74 related_name='products',
75 verbose_name=_('category'),
76 )
77
78 tags = models.ManyToManyField(
79 'Tag',
80 related_name='products',
81 blank=True,
82 verbose_name=_('tags'),
83 )
84
85 # @todo #910:30m Explore and doc what is vendor_code.
86 vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))
87 uuid = models.UUIDField(default=uuid4, editable=False)
88 purchase_price = models.FloatField(
89 default=0, verbose_name=_('purchase_price'))
90 wholesale_small = models.FloatField(
91 default=0, verbose_name=_('wholesale_small'))
92 wholesale_medium = models.FloatField(
93 default=0, verbose_name=_('wholesale_medium'))
94 wholesale_large = models.FloatField(
95 default=0, verbose_name=_('wholesale_large'))
96
97 in_pack = models.PositiveSmallIntegerField(
98 default=1,
99 verbose_name=_('in pack'),
100 )
101
102 def get_absolute_url(self):
103 return reverse('product', args=(self.vendor_code,))
104
105 @property
106 def average_rate(self):
107 """Return rounded to first decimal averaged rating."""
108 rating = self.product_feedbacks.aggregate(
109 avg=models.Avg('rating')).get('avg', 0)
110 return round(rating, 1)
111
112 @property
113 def feedback_count(self):
114 return self.product_feedbacks.count()
115
116 @property
117 def feedback(self):
118 return self.product_feedbacks.all().order_by('-date')
119
120 def get_params(self):
121 return Tag.objects.filter_by_products([self]).group_tags()
122
123 def get_brand_name(self) -> str:
124 brand: typing.Optional['Tag'] = Tag.objects.get_brands([self]).get(self)
125 return brand.name if brand else ''
126
127
128 class ProductFeedback(models.Model):
129 product = models.ForeignKey(
130 Product, on_delete=models.CASCADE, null=True,
131 related_name='product_feedbacks'
132 )
133
134 date = models.DateTimeField(
135 auto_now=True, db_index=True, verbose_name=_('date'))
136 name = models.CharField(
137 max_length=255, db_index=True, verbose_name=_('name'))
138 rating = models.PositiveSmallIntegerField(
139 default=1, db_index=True, verbose_name=_('rating'))
140 dignities = models.TextField(
141 default='', blank=True, verbose_name=_('dignities'))
142 limitations = models.TextField(
143 default='', blank=True, verbose_name=_('limitations'))
144 general = models.TextField(
145 default='', blank=True, verbose_name=_('limitations'))
146
147
148 class ItemsEnum(enum.EnumMeta):
149 """
150 Provide dict-like `items` method.
151
152 https://docs.python.org/3/library/enum.html#enum-classes
153 """
154
155 def items(self):
156 return [(i.name, i.value) for i in self]
157
158 def __repr__(self):
159 fields = ', '.join(i.name for i in self)
160 return f"<enum '{self.__name__}: {fields}'>"
161
162
163 class PaymentOptions(enum.Enum, metaclass=ItemsEnum):
164 cash = 'Наличные'
165 cashless = 'Безналичные и денежные переводы'
166 AC = 'Банковская карта'
167 PC = 'Яндекс.Деньги'
168 GP = 'Связной (терминал)'
169 AB = 'Альфа-Клик'
170
171 @staticmethod
172 def default():
173 return PaymentOptions.cash
174
175
176 class Order(ecommerce_models.Order):
177 address = models.TextField(blank=True, default='')
178 payment_type = models.CharField(
179 max_length=255,
180 choices=PaymentOptions.items(),
181 default=PaymentOptions.default().name,
182 )
183 comment = models.TextField(blank=True, default='')
184 # total price - total purchase price
185 revenue = models.FloatField(default=0, null=True, verbose_name=_('revenue'))
186
187 @property
188 def payment_type_label(self):
189 """Return label for an order's payment option."""
190 return PaymentOptions[self.payment_type].value
191
192 def set_positions(self, cart):
193 """
194 Save cart's state into Order instance.
195
196 @todo #589:60m Create Cart model.
197 See details here: https://github.com/fidals/shopelectro/pull/590#discussion_r222544672
198 """
199 self.revenue = cart.total_revenue()
200 self.save()
201 for id_, position in cart:
202 self.positions.create(
203 order=self,
204 product_id=id_,
205 vendor_code=position['vendor_code'],
206 name=position['name'],
207 price=position['price'],
208 quantity=position['quantity'],
209 )
210 return self
211
212
213 class CategoryPage(pages_models.ModelPage):
214 """Create proxy model for Admin."""
215
216 class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)
217 proxy = True
218
219 # noinspection PyTypeChecker
220 objects = pages_models.ModelPage.create_model_page_managers(Category)
221
222
223 class ProductPage(pages_models.ModelPage):
224 """Create proxy model for Admin."""
225
226 class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)
227 proxy = True
228
229 # noinspection PyTypeChecker
230 objects = (
231 pages_models.ModelPage
232 .create_model_page_managers(Product)
233 )
234
235
236 class TagGroupManager(models.Manager):
237
238 def get_pack(self):
239 return self.get_queryset().get(uuid=settings.PACK_GROUP_UUID)
240
241
242 class TagGroup(catalog_models.TagGroup):
243
244 objects = TagGroupManager()
245
246
247 class TagQuerySet(catalog_models.TagQuerySet):
248
249 def products(self):
250 ids = self.values_list('products__id', flat=True)
251 return Product.objects.filter(id__in=ids).distinct()
252
253
254 class TagManager(catalog_models.TagManager.from_queryset(TagQuerySet)):
255
256 def get_packs(self):
257 return TagGroup.objects.get_pack().tags.all()
258
259
260 class Tag(catalog_models.Tag):
261 group = models.ForeignKey(
262 TagGroup, on_delete=models.CASCADE, null=True, related_name='tags',
263 )
264
265 objects = TagManager()
266
[end of shopelectro/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/shopelectro/models.py b/shopelectro/models.py
--- a/shopelectro/models.py
+++ b/shopelectro/models.py
@@ -82,7 +82,13 @@
verbose_name=_('tags'),
)
- # @todo #910:30m Explore and doc what is vendor_code.
+ # `vendor_code` is a code that refers to the particular stock keeping unit (SKU).
+ # You can treat it as public unique id. Use it to publicly identify a product.
+
+ # We bring codes from 1C database and sync with products.
+ # We doesn't use the id field instead, because it is auto-increment sequence,
+ # that can't be changed easily. We decided to avoid that complexity.
+ # https://www.postgresql.org/docs/current/functions-sequence.html
vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))
uuid = models.UUIDField(default=uuid4, editable=False)
purchase_price = models.FloatField(
|
{"golden_diff": "diff --git a/shopelectro/models.py b/shopelectro/models.py\n--- a/shopelectro/models.py\n+++ b/shopelectro/models.py\n@@ -82,7 +82,13 @@\n verbose_name=_('tags'),\n )\n \n- # @todo #910:30m Explore and doc what is vendor_code.\n+ # `vendor_code` is a code that refers to the particular stock keeping unit (SKU).\n+ # You can treat it as public unique id. Use it to publicly identify a product.\n+\n+ # We bring codes from 1C database and sync with products.\n+ # We doesn't use the id field instead, because it is auto-increment sequence,\n+ # that can't be changed easily. We decided to avoid that complexity.\n+ # https://www.postgresql.org/docs/current/functions-sequence.html\n vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))\n uuid = models.UUIDField(default=uuid4, editable=False)\n purchase_price = models.FloatField(\n", "issue": "models.py:85: Explore and doc what is vendor_code.\nThe puzzle `910-f547fee5` from #910 has to be resolved: \n\nhttps://github.com/fidals/shopelectro/blob/e7de650f01b749f584f374923d7863083cc83b3c/shopelectro/models.py#L85-L85\n\nThe puzzle was created by duker33 on 28-Jun-19. \n\nEstimate: 30 minutes, role: DEV. \n\nIf you have any technical questions, don't ask me, submit new tickets instead. The task will be \\\"done\\\" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html). \n\n", "before_files": [{"content": "import enum\nimport random\nimport string\nimport typing\nfrom uuid import uuid4\n\nfrom django.conf import settings\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom catalog import models as catalog_models\nfrom ecommerce import models as ecommerce_models\nfrom pages import models as pages_models\n\n\ndef randomize_slug(slug: str) -> str:\n slug_hash = ''.join(\n random.choices(string.ascii_lowercase, k=settings.SLUG_HASH_SIZE)\n )\n return f'{slug}_{slug_hash}'\n\n\nclass SECategoryQuerySet(catalog_models.CategoryQuerySet):\n def get_categories_tree_with_pictures(self) -> 'SECategoryQuerySet':\n categories_with_pictures = (\n self\n .filter(products__page__images__isnull=False)\n .distinct()\n )\n\n return categories_with_pictures.get_ancestors(include_self=True)\n\n\nclass SECategoryManager(\n catalog_models.CategoryManager.from_queryset(SECategoryQuerySet)\n):\n pass\n\n\nclass Category(catalog_models.AbstractCategory, pages_models.SyncPageMixin):\n\n objects = SECategoryManager()\n uuid = models.UUIDField(default=uuid4, editable=False)\n\n @classmethod\n def get_default_parent(cls):\n return pages_models.CustomPage.objects.filter(slug='catalog').first()\n\n @property\n def image(self):\n products = self.products.all()\n return products[0].image if products else None\n\n def get_absolute_url(self):\n return reverse('category', args=(self.page.slug,))\n\n\nclass Product(\n catalog_models.AbstractProduct,\n catalog_models.AbstractPosition,\n pages_models.SyncPageMixin\n):\n\n # That's why we are needed to explicitly add objects manager here\n # because of Django special managers behaviour.\n # Se se#480 for details.\n objects = catalog_models.ProductManager()\n\n category = models.ForeignKey(\n Category,\n on_delete=models.CASCADE,\n null=True,\n related_name='products',\n verbose_name=_('category'),\n )\n\n tags = models.ManyToManyField(\n 'Tag',\n 
related_name='products',\n blank=True,\n verbose_name=_('tags'),\n )\n\n # @todo #910:30m Explore and doc what is vendor_code.\n vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))\n uuid = models.UUIDField(default=uuid4, editable=False)\n purchase_price = models.FloatField(\n default=0, verbose_name=_('purchase_price'))\n wholesale_small = models.FloatField(\n default=0, verbose_name=_('wholesale_small'))\n wholesale_medium = models.FloatField(\n default=0, verbose_name=_('wholesale_medium'))\n wholesale_large = models.FloatField(\n default=0, verbose_name=_('wholesale_large'))\n\n in_pack = models.PositiveSmallIntegerField(\n default=1,\n verbose_name=_('in pack'),\n )\n\n def get_absolute_url(self):\n return reverse('product', args=(self.vendor_code,))\n\n @property\n def average_rate(self):\n \"\"\"Return rounded to first decimal averaged rating.\"\"\"\n rating = self.product_feedbacks.aggregate(\n avg=models.Avg('rating')).get('avg', 0)\n return round(rating, 1)\n\n @property\n def feedback_count(self):\n return self.product_feedbacks.count()\n\n @property\n def feedback(self):\n return self.product_feedbacks.all().order_by('-date')\n\n def get_params(self):\n return Tag.objects.filter_by_products([self]).group_tags()\n\n def get_brand_name(self) -> str:\n brand: typing.Optional['Tag'] = Tag.objects.get_brands([self]).get(self)\n return brand.name if brand else ''\n\n\nclass ProductFeedback(models.Model):\n product = models.ForeignKey(\n Product, on_delete=models.CASCADE, null=True,\n related_name='product_feedbacks'\n )\n\n date = models.DateTimeField(\n auto_now=True, db_index=True, verbose_name=_('date'))\n name = models.CharField(\n max_length=255, db_index=True, verbose_name=_('name'))\n rating = models.PositiveSmallIntegerField(\n default=1, db_index=True, verbose_name=_('rating'))\n dignities = models.TextField(\n default='', blank=True, verbose_name=_('dignities'))\n limitations = models.TextField(\n default='', blank=True, verbose_name=_('limitations'))\n general = models.TextField(\n default='', blank=True, verbose_name=_('limitations'))\n\n\nclass ItemsEnum(enum.EnumMeta):\n \"\"\"\n Provide dict-like `items` method.\n\n https://docs.python.org/3/library/enum.html#enum-classes\n \"\"\"\n\n def items(self):\n return [(i.name, i.value) for i in self]\n\n def __repr__(self):\n fields = ', '.join(i.name for i in self)\n return f\"<enum '{self.__name__}: {fields}'>\"\n\n\nclass PaymentOptions(enum.Enum, metaclass=ItemsEnum):\n cash = '\u041d\u0430\u043b\u0438\u0447\u043d\u044b\u0435'\n cashless = '\u0411\u0435\u0437\u043d\u0430\u043b\u0438\u0447\u043d\u044b\u0435 \u0438 \u0434\u0435\u043d\u0435\u0436\u043d\u044b\u0435 \u043f\u0435\u0440\u0435\u0432\u043e\u0434\u044b'\n AC = '\u0411\u0430\u043d\u043a\u043e\u0432\u0441\u043a\u0430\u044f \u043a\u0430\u0440\u0442\u0430'\n PC = '\u042f\u043d\u0434\u0435\u043a\u0441.\u0414\u0435\u043d\u044c\u0433\u0438'\n GP = '\u0421\u0432\u044f\u0437\u043d\u043e\u0439 (\u0442\u0435\u0440\u043c\u0438\u043d\u0430\u043b)'\n AB = '\u0410\u043b\u044c\u0444\u0430-\u041a\u043b\u0438\u043a'\n\n @staticmethod\n def default():\n return PaymentOptions.cash\n\n\nclass Order(ecommerce_models.Order):\n address = models.TextField(blank=True, default='')\n payment_type = models.CharField(\n max_length=255,\n choices=PaymentOptions.items(),\n default=PaymentOptions.default().name,\n )\n comment = models.TextField(blank=True, default='')\n # total price - total purchase price\n revenue = models.FloatField(default=0, null=True, 
verbose_name=_('revenue'))\n\n @property\n def payment_type_label(self):\n \"\"\"Return label for an order's payment option.\"\"\"\n return PaymentOptions[self.payment_type].value\n\n def set_positions(self, cart):\n \"\"\"\n Save cart's state into Order instance.\n\n @todo #589:60m Create Cart model.\n See details here: https://github.com/fidals/shopelectro/pull/590#discussion_r222544672\n \"\"\"\n self.revenue = cart.total_revenue()\n self.save()\n for id_, position in cart:\n self.positions.create(\n order=self,\n product_id=id_,\n vendor_code=position['vendor_code'],\n name=position['name'],\n price=position['price'],\n quantity=position['quantity'],\n )\n return self\n\n\nclass CategoryPage(pages_models.ModelPage):\n \"\"\"Create proxy model for Admin.\"\"\"\n\n class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)\n proxy = True\n\n # noinspection PyTypeChecker\n objects = pages_models.ModelPage.create_model_page_managers(Category)\n\n\nclass ProductPage(pages_models.ModelPage):\n \"\"\"Create proxy model for Admin.\"\"\"\n\n class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)\n proxy = True\n\n # noinspection PyTypeChecker\n objects = (\n pages_models.ModelPage\n .create_model_page_managers(Product)\n )\n\n\nclass TagGroupManager(models.Manager):\n\n def get_pack(self):\n return self.get_queryset().get(uuid=settings.PACK_GROUP_UUID)\n\n\nclass TagGroup(catalog_models.TagGroup):\n\n objects = TagGroupManager()\n\n\nclass TagQuerySet(catalog_models.TagQuerySet):\n\n def products(self):\n ids = self.values_list('products__id', flat=True)\n return Product.objects.filter(id__in=ids).distinct()\n\n\nclass TagManager(catalog_models.TagManager.from_queryset(TagQuerySet)):\n\n def get_packs(self):\n return TagGroup.objects.get_pack().tags.all()\n\n\nclass Tag(catalog_models.Tag):\n group = models.ForeignKey(\n TagGroup, on_delete=models.CASCADE, null=True, related_name='tags',\n )\n\n objects = TagManager()\n", "path": "shopelectro/models.py"}]}
| 3,195 | 225 |
gh_patches_debug_37837
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-3173
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Configurable timeouts for InferenceGraph
/kind feature
**Describe the solution you'd like**
InferenceServices in a Serverless deployment of KServe allow you to configure the Knative `timeoutSeconds` field by way of the `timeout` field in ComponentExtensionSpec. We'd like the same feature for an InferenceGraph.
**Anything else you would like to add:**
If we cannot update the InferenceGraph spec at this time, I would request that the InferenceGraph controller at the very least be able to check the existing InferenceServices for Knative-backed components and match the highest `timeoutSeconds`.
Thanks for your consideration!
</issue>
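A rough sketch of how the requested knob could look from the Python SDK, assuming the `timeout` field added by the patch later in this entry (the value would map to Knative's `timeoutSeconds` on the graph's service); the empty `nodes` dict is just a placeholder.

```python
# Hypothetical usage of the proposed field; router nodes elided for brevity.
from kserve.models.v1alpha1_inference_graph_spec import V1alpha1InferenceGraphSpec

spec = V1alpha1InferenceGraphSpec(
    nodes={},      # real graphs would define router nodes here
    timeout=300,   # proposed: seconds before a request to the graph times out
)
print(spec.to_dict())
```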
<code>
[start of python/kserve/kserve/models/v1alpha1_inference_graph_spec.py]
1 # Copyright 2023 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # coding: utf-8
16
17 """
18 KServe
19
20 Python SDK for KServe # noqa: E501
21
22 The version of the OpenAPI document: v0.1
23 Generated by: https://openapi-generator.tech
24 """
25
26
27 import pprint
28 import re # noqa: F401
29
30 import six
31
32 from kserve.configuration import Configuration
33
34
35 class V1alpha1InferenceGraphSpec(object):
36 """NOTE: This class is auto generated by OpenAPI Generator.
37 Ref: https://openapi-generator.tech
38
39 Do not edit the class manually.
40 """
41
42 """
43 Attributes:
44 openapi_types (dict): The key is attribute name
45 and the value is attribute type.
46 attribute_map (dict): The key is attribute name
47 and the value is json key in definition.
48 """
49 openapi_types = {
50 'affinity': 'V1Affinity',
51 'nodes': 'dict(str, V1alpha1InferenceRouter)',
52 'resources': 'V1ResourceRequirements'
53 }
54
55 attribute_map = {
56 'affinity': 'affinity',
57 'nodes': 'nodes',
58 'resources': 'resources'
59 }
60
61 def __init__(self, affinity=None, nodes=None, resources=None, local_vars_configuration=None): # noqa: E501
62 """V1alpha1InferenceGraphSpec - a model defined in OpenAPI""" # noqa: E501
63 if local_vars_configuration is None:
64 local_vars_configuration = Configuration()
65 self.local_vars_configuration = local_vars_configuration
66
67 self._affinity = None
68 self._nodes = None
69 self._resources = None
70 self.discriminator = None
71
72 if affinity is not None:
73 self.affinity = affinity
74 self.nodes = nodes
75 if resources is not None:
76 self.resources = resources
77
78 @property
79 def affinity(self):
80 """Gets the affinity of this V1alpha1InferenceGraphSpec. # noqa: E501
81
82
83 :return: The affinity of this V1alpha1InferenceGraphSpec. # noqa: E501
84 :rtype: V1Affinity
85 """
86 return self._affinity
87
88 @affinity.setter
89 def affinity(self, affinity):
90 """Sets the affinity of this V1alpha1InferenceGraphSpec.
91
92
93 :param affinity: The affinity of this V1alpha1InferenceGraphSpec. # noqa: E501
94 :type: V1Affinity
95 """
96
97 self._affinity = affinity
98
99 @property
100 def nodes(self):
101 """Gets the nodes of this V1alpha1InferenceGraphSpec. # noqa: E501
102
103 Map of InferenceGraph router nodes Each node defines the router which can be different routing types # noqa: E501
104
105 :return: The nodes of this V1alpha1InferenceGraphSpec. # noqa: E501
106 :rtype: dict(str, V1alpha1InferenceRouter)
107 """
108 return self._nodes
109
110 @nodes.setter
111 def nodes(self, nodes):
112 """Sets the nodes of this V1alpha1InferenceGraphSpec.
113
114 Map of InferenceGraph router nodes Each node defines the router which can be different routing types # noqa: E501
115
116 :param nodes: The nodes of this V1alpha1InferenceGraphSpec. # noqa: E501
117 :type: dict(str, V1alpha1InferenceRouter)
118 """
119 if self.local_vars_configuration.client_side_validation and nodes is None: # noqa: E501
120 raise ValueError("Invalid value for `nodes`, must not be `None`") # noqa: E501
121
122 self._nodes = nodes
123
124 @property
125 def resources(self):
126 """Gets the resources of this V1alpha1InferenceGraphSpec. # noqa: E501
127
128
129 :return: The resources of this V1alpha1InferenceGraphSpec. # noqa: E501
130 :rtype: V1ResourceRequirements
131 """
132 return self._resources
133
134 @resources.setter
135 def resources(self, resources):
136 """Sets the resources of this V1alpha1InferenceGraphSpec.
137
138
139 :param resources: The resources of this V1alpha1InferenceGraphSpec. # noqa: E501
140 :type: V1ResourceRequirements
141 """
142
143 self._resources = resources
144
145 def to_dict(self):
146 """Returns the model properties as a dict"""
147 result = {}
148
149 for attr, _ in six.iteritems(self.openapi_types):
150 value = getattr(self, attr)
151 if isinstance(value, list):
152 result[attr] = list(map(
153 lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
154 value
155 ))
156 elif hasattr(value, "to_dict"):
157 result[attr] = value.to_dict()
158 elif isinstance(value, dict):
159 result[attr] = dict(map(
160 lambda item: (item[0], item[1].to_dict())
161 if hasattr(item[1], "to_dict") else item,
162 value.items()
163 ))
164 else:
165 result[attr] = value
166
167 return result
168
169 def to_str(self):
170 """Returns the string representation of the model"""
171 return pprint.pformat(self.to_dict())
172
173 def __repr__(self):
174 """For `print` and `pprint`"""
175 return self.to_str()
176
177 def __eq__(self, other):
178 """Returns true if both objects are equal"""
179 if not isinstance(other, V1alpha1InferenceGraphSpec):
180 return False
181
182 return self.to_dict() == other.to_dict()
183
184 def __ne__(self, other):
185 """Returns true if both objects are not equal"""
186 if not isinstance(other, V1alpha1InferenceGraphSpec):
187 return True
188
189 return self.to_dict() != other.to_dict()
190
[end of python/kserve/kserve/models/v1alpha1_inference_graph_spec.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py b/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py
--- a/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py
+++ b/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py
@@ -49,16 +49,18 @@
openapi_types = {
'affinity': 'V1Affinity',
'nodes': 'dict(str, V1alpha1InferenceRouter)',
- 'resources': 'V1ResourceRequirements'
+ 'resources': 'V1ResourceRequirements',
+ 'timeout': 'int'
}
attribute_map = {
'affinity': 'affinity',
'nodes': 'nodes',
- 'resources': 'resources'
+ 'resources': 'resources',
+ 'timeout': 'timeout'
}
- def __init__(self, affinity=None, nodes=None, resources=None, local_vars_configuration=None): # noqa: E501
+ def __init__(self, affinity=None, nodes=None, resources=None, timeout=None, local_vars_configuration=None): # noqa: E501
"""V1alpha1InferenceGraphSpec - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
@@ -67,6 +69,7 @@
self._affinity = None
self._nodes = None
self._resources = None
+ self._timeout = None
self.discriminator = None
if affinity is not None:
@@ -74,6 +77,8 @@
self.nodes = nodes
if resources is not None:
self.resources = resources
+ if timeout is not None:
+ self.timeout = timeout
@property
def affinity(self):
@@ -142,6 +147,29 @@
self._resources = resources
+ @property
+ def timeout(self):
+ """Gets the timeout of this V1alpha1InferenceGraphSpec. # noqa: E501
+
+ TimeoutSeconds specifies the number of seconds to wait before timing out a request to the component. # noqa: E501
+
+ :return: The timeout of this V1alpha1InferenceGraphSpec. # noqa: E501
+ :rtype: int
+ """
+ return self._timeout
+
+ @timeout.setter
+ def timeout(self, timeout):
+ """Sets the timeout of this V1alpha1InferenceGraphSpec.
+
+ TimeoutSeconds specifies the number of seconds to wait before timing out a request to the component. # noqa: E501
+
+ :param timeout: The timeout of this V1alpha1InferenceGraphSpec. # noqa: E501
+ :type: int
+ """
+
+ self._timeout = timeout
+
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
|
{"golden_diff": "diff --git a/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py b/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py\n--- a/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py\n+++ b/python/kserve/kserve/models/v1alpha1_inference_graph_spec.py\n@@ -49,16 +49,18 @@\n openapi_types = {\n 'affinity': 'V1Affinity',\n 'nodes': 'dict(str, V1alpha1InferenceRouter)',\n- 'resources': 'V1ResourceRequirements'\n+ 'resources': 'V1ResourceRequirements',\n+ 'timeout': 'int'\n }\n \n attribute_map = {\n 'affinity': 'affinity',\n 'nodes': 'nodes',\n- 'resources': 'resources'\n+ 'resources': 'resources',\n+ 'timeout': 'timeout'\n }\n \n- def __init__(self, affinity=None, nodes=None, resources=None, local_vars_configuration=None): # noqa: E501\n+ def __init__(self, affinity=None, nodes=None, resources=None, timeout=None, local_vars_configuration=None): # noqa: E501\n \"\"\"V1alpha1InferenceGraphSpec - a model defined in OpenAPI\"\"\" # noqa: E501\n if local_vars_configuration is None:\n local_vars_configuration = Configuration()\n@@ -67,6 +69,7 @@\n self._affinity = None\n self._nodes = None\n self._resources = None\n+ self._timeout = None\n self.discriminator = None\n \n if affinity is not None:\n@@ -74,6 +77,8 @@\n self.nodes = nodes\n if resources is not None:\n self.resources = resources\n+ if timeout is not None:\n+ self.timeout = timeout\n \n @property\n def affinity(self):\n@@ -142,6 +147,29 @@\n \n self._resources = resources\n \n+ @property\n+ def timeout(self):\n+ \"\"\"Gets the timeout of this V1alpha1InferenceGraphSpec. # noqa: E501\n+\n+ TimeoutSeconds specifies the number of seconds to wait before timing out a request to the component. # noqa: E501\n+\n+ :return: The timeout of this V1alpha1InferenceGraphSpec. # noqa: E501\n+ :rtype: int\n+ \"\"\"\n+ return self._timeout\n+\n+ @timeout.setter\n+ def timeout(self, timeout):\n+ \"\"\"Sets the timeout of this V1alpha1InferenceGraphSpec.\n+\n+ TimeoutSeconds specifies the number of seconds to wait before timing out a request to the component. # noqa: E501\n+\n+ :param timeout: The timeout of this V1alpha1InferenceGraphSpec. # noqa: E501\n+ :type: int\n+ \"\"\"\n+\n+ self._timeout = timeout\n+\n def to_dict(self):\n \"\"\"Returns the model properties as a dict\"\"\"\n result = {}\n", "issue": "Configurable timeouts for InferenceGraph\n/kind feature\r\n\r\n**Describe the solution you'd like**\r\n\r\nInferenceServices in a Serverless deployment of KServe allow you to configure the Knative `timeoutSeconds` field by way of the `timeout` field in ComponentExtensionSpec. 
We'd like the same feature for an InferenceGraph.\r\n\r\n**Anything else you would like to add:**\r\n\r\nIf we cannot update the InfereceGraph spec at this time, I would request the InferenceGraph controller at the very least be able to check the existing InferenceServices for Knative-backed components and match the highest `timeoutSeconds`.\r\n\r\nThanks for your consideration!\n", "before_files": [{"content": "# Copyright 2023 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# coding: utf-8\n\n\"\"\"\n KServe\n\n Python SDK for KServe # noqa: E501\n\n The version of the OpenAPI document: v0.1\n Generated by: https://openapi-generator.tech\n\"\"\"\n\n\nimport pprint\nimport re # noqa: F401\n\nimport six\n\nfrom kserve.configuration import Configuration\n\n\nclass V1alpha1InferenceGraphSpec(object):\n \"\"\"NOTE: This class is auto generated by OpenAPI Generator.\n Ref: https://openapi-generator.tech\n\n Do not edit the class manually.\n \"\"\"\n\n \"\"\"\n Attributes:\n openapi_types (dict): The key is attribute name\n and the value is attribute type.\n attribute_map (dict): The key is attribute name\n and the value is json key in definition.\n \"\"\"\n openapi_types = {\n 'affinity': 'V1Affinity',\n 'nodes': 'dict(str, V1alpha1InferenceRouter)',\n 'resources': 'V1ResourceRequirements'\n }\n\n attribute_map = {\n 'affinity': 'affinity',\n 'nodes': 'nodes',\n 'resources': 'resources'\n }\n\n def __init__(self, affinity=None, nodes=None, resources=None, local_vars_configuration=None): # noqa: E501\n \"\"\"V1alpha1InferenceGraphSpec - a model defined in OpenAPI\"\"\" # noqa: E501\n if local_vars_configuration is None:\n local_vars_configuration = Configuration()\n self.local_vars_configuration = local_vars_configuration\n\n self._affinity = None\n self._nodes = None\n self._resources = None\n self.discriminator = None\n\n if affinity is not None:\n self.affinity = affinity\n self.nodes = nodes\n if resources is not None:\n self.resources = resources\n\n @property\n def affinity(self):\n \"\"\"Gets the affinity of this V1alpha1InferenceGraphSpec. # noqa: E501\n\n\n :return: The affinity of this V1alpha1InferenceGraphSpec. # noqa: E501\n :rtype: V1Affinity\n \"\"\"\n return self._affinity\n\n @affinity.setter\n def affinity(self, affinity):\n \"\"\"Sets the affinity of this V1alpha1InferenceGraphSpec.\n\n\n :param affinity: The affinity of this V1alpha1InferenceGraphSpec. # noqa: E501\n :type: V1Affinity\n \"\"\"\n\n self._affinity = affinity\n\n @property\n def nodes(self):\n \"\"\"Gets the nodes of this V1alpha1InferenceGraphSpec. # noqa: E501\n\n Map of InferenceGraph router nodes Each node defines the router which can be different routing types # noqa: E501\n\n :return: The nodes of this V1alpha1InferenceGraphSpec. 
# noqa: E501\n :rtype: dict(str, V1alpha1InferenceRouter)\n \"\"\"\n return self._nodes\n\n @nodes.setter\n def nodes(self, nodes):\n \"\"\"Sets the nodes of this V1alpha1InferenceGraphSpec.\n\n Map of InferenceGraph router nodes Each node defines the router which can be different routing types # noqa: E501\n\n :param nodes: The nodes of this V1alpha1InferenceGraphSpec. # noqa: E501\n :type: dict(str, V1alpha1InferenceRouter)\n \"\"\"\n if self.local_vars_configuration.client_side_validation and nodes is None: # noqa: E501\n raise ValueError(\"Invalid value for `nodes`, must not be `None`\") # noqa: E501\n\n self._nodes = nodes\n\n @property\n def resources(self):\n \"\"\"Gets the resources of this V1alpha1InferenceGraphSpec. # noqa: E501\n\n\n :return: The resources of this V1alpha1InferenceGraphSpec. # noqa: E501\n :rtype: V1ResourceRequirements\n \"\"\"\n return self._resources\n\n @resources.setter\n def resources(self, resources):\n \"\"\"Sets the resources of this V1alpha1InferenceGraphSpec.\n\n\n :param resources: The resources of this V1alpha1InferenceGraphSpec. # noqa: E501\n :type: V1ResourceRequirements\n \"\"\"\n\n self._resources = resources\n\n def to_dict(self):\n \"\"\"Returns the model properties as a dict\"\"\"\n result = {}\n\n for attr, _ in six.iteritems(self.openapi_types):\n value = getattr(self, attr)\n if isinstance(value, list):\n result[attr] = list(map(\n lambda x: x.to_dict() if hasattr(x, \"to_dict\") else x,\n value\n ))\n elif hasattr(value, \"to_dict\"):\n result[attr] = value.to_dict()\n elif isinstance(value, dict):\n result[attr] = dict(map(\n lambda item: (item[0], item[1].to_dict())\n if hasattr(item[1], \"to_dict\") else item,\n value.items()\n ))\n else:\n result[attr] = value\n\n return result\n\n def to_str(self):\n \"\"\"Returns the string representation of the model\"\"\"\n return pprint.pformat(self.to_dict())\n\n def __repr__(self):\n \"\"\"For `print` and `pprint`\"\"\"\n return self.to_str()\n\n def __eq__(self, other):\n \"\"\"Returns true if both objects are equal\"\"\"\n if not isinstance(other, V1alpha1InferenceGraphSpec):\n return False\n\n return self.to_dict() == other.to_dict()\n\n def __ne__(self, other):\n \"\"\"Returns true if both objects are not equal\"\"\"\n if not isinstance(other, V1alpha1InferenceGraphSpec):\n return True\n\n return self.to_dict() != other.to_dict()\n", "path": "python/kserve/kserve/models/v1alpha1_inference_graph_spec.py"}]}
| 2,602 | 685 |
gh_patches_debug_32909
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-1538
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stop an in-progress stream if a user's role is revoked
If a user's streaming permission is removed via the revoke stream command, we want to stop any stream that may be in-progress.
If a user's video role drops off due to the normal schedule, then we don't need to do this.
`discord.ActivityType.streaming` shows whether a user is streaming.
This implementation should work even if the member doesn't have the video role anymore.
Concept approved by @MrHemlock
</issue>
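For illustration only (this is not the repository's patch): assuming a `discord.Member` fetched from the guild and the stock discord.py API (`Member.activities`, `ActivityType.streaming`, `VoiceState.self_stream`), the detection the issue describes could be sketched as:

```py
def is_streaming(member: discord.Member) -> bool:
    # Rich-presence check suggested in the issue.
    has_streaming_activity = any(
        activity.type == discord.ActivityType.streaming
        for activity in member.activities
    )
    # Voice-state check: self_stream is True while a "Go Live" stream is running.
    go_live = member.voice is not None and member.voice.self_stream
    return has_streaming_activity or go_live
```

Note that discord.py offers no direct call to terminate another member's stream, so an actual fix still needs a workaround such as briefly moving the member to another voice channel and back.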
<code>
[start of bot/exts/moderation/stream.py]
1 import logging
2 from datetime import timedelta, timezone
3 from operator import itemgetter
4
5 import arrow
6 import discord
7 from arrow import Arrow
8 from async_rediscache import RedisCache
9 from discord.ext import commands
10
11 from bot.bot import Bot
12 from bot.constants import Colours, Emojis, Guild, MODERATION_ROLES, Roles, STAFF_ROLES, VideoPermission
13 from bot.converters import Expiry
14 from bot.pagination import LinePaginator
15 from bot.utils.scheduling import Scheduler
16 from bot.utils.time import format_infraction_with_duration
17
18 log = logging.getLogger(__name__)
19
20
21 class Stream(commands.Cog):
22 """Grant and revoke streaming permissions from members."""
23
24 # Stores tasks to remove streaming permission
25 # RedisCache[discord.Member.id, UtcPosixTimestamp]
26 task_cache = RedisCache()
27
28 def __init__(self, bot: Bot):
29 self.bot = bot
30 self.scheduler = Scheduler(self.__class__.__name__)
31 self.reload_task = self.bot.loop.create_task(self._reload_tasks_from_redis())
32
33 def cog_unload(self) -> None:
34 """Cancel all scheduled tasks."""
35 self.reload_task.cancel()
36 self.reload_task.add_done_callback(lambda _: self.scheduler.cancel_all())
37
38 async def _revoke_streaming_permission(self, member: discord.Member) -> None:
39 """Remove the streaming permission from the given Member."""
40 await self.task_cache.delete(member.id)
41 await member.remove_roles(discord.Object(Roles.video), reason="Streaming access revoked")
42
43 async def _reload_tasks_from_redis(self) -> None:
44 """Reload outstanding tasks from redis on startup, delete the task if the member has since left the server."""
45 await self.bot.wait_until_guild_available()
46 items = await self.task_cache.items()
47 for key, value in items:
48 member = self.bot.get_guild(Guild.id).get_member(key)
49
50 if not member:
51 # Member isn't found in the cache
52 try:
53 member = await self.bot.get_guild(Guild.id).fetch_member(key)
54 except discord.errors.NotFound:
55 log.debug(
56 f"Member {key} left the guild before we could schedule "
57 "the revoking of their streaming permissions."
58 )
59 await self.task_cache.delete(key)
60 continue
61 except discord.HTTPException:
62 log.exception(f"Exception while trying to retrieve member {key} from Discord.")
63 continue
64
65 revoke_time = Arrow.utcfromtimestamp(value)
66 log.debug(f"Scheduling {member} ({member.id}) to have streaming permission revoked at {revoke_time}")
67 self.scheduler.schedule_at(
68 revoke_time,
69 key,
70 self._revoke_streaming_permission(member)
71 )
72
73 @commands.command(aliases=("streaming",))
74 @commands.has_any_role(*MODERATION_ROLES)
75 async def stream(self, ctx: commands.Context, member: discord.Member, duration: Expiry = None) -> None:
76 """
77 Temporarily grant streaming permissions to a member for a given duration.
78
79 A unit of time should be appended to the duration.
80 Units (∗case-sensitive):
81 \u2003`y` - years
82 \u2003`m` - months∗
83 \u2003`w` - weeks
84 \u2003`d` - days
85 \u2003`h` - hours
86 \u2003`M` - minutes∗
87 \u2003`s` - seconds
88
89 Alternatively, an ISO 8601 timestamp can be provided for the duration.
90 """
91 log.trace(f"Attempting to give temporary streaming permission to {member} ({member.id}).")
92
93 if duration is None:
94 # Use default duration and convert back to datetime as Embed.timestamp doesn't support Arrow
95 duration = arrow.utcnow() + timedelta(minutes=VideoPermission.default_permission_duration)
96 duration = duration.datetime
97 elif duration.tzinfo is None:
98 # Make duration tz-aware.
99 # ISODateTime could already include tzinfo, this check is so it isn't overwritten.
100 duration.replace(tzinfo=timezone.utc)
101
102 # Check if the member already has streaming permission
103 already_allowed = any(Roles.video == role.id for role in member.roles)
104 if already_allowed:
105 await ctx.send(f"{Emojis.cross_mark} {member.mention} can already stream.")
106 log.debug(f"{member} ({member.id}) already has permission to stream.")
107 return
108
109 # Schedule task to remove streaming permission from Member and add it to task cache
110 self.scheduler.schedule_at(duration, member.id, self._revoke_streaming_permission(member))
111 await self.task_cache.set(member.id, duration.timestamp())
112
113 await member.add_roles(discord.Object(Roles.video), reason="Temporary streaming access granted")
114
115 # Use embed as embed timestamps do timezone conversions.
116 embed = discord.Embed(
117 description=f"{Emojis.check_mark} {member.mention} can now stream.",
118 colour=Colours.soft_green
119 )
120 embed.set_footer(text=f"Streaming permission has been given to {member} until")
121 embed.timestamp = duration
122
123 # Mention in content as mentions in embeds don't ping
124 await ctx.send(content=member.mention, embed=embed)
125
126 # Convert here for nicer logging
127 revoke_time = format_infraction_with_duration(str(duration))
128 log.debug(f"Successfully gave {member} ({member.id}) permission to stream until {revoke_time}.")
129
130 @commands.command(aliases=("pstream",))
131 @commands.has_any_role(*MODERATION_ROLES)
132 async def permanentstream(self, ctx: commands.Context, member: discord.Member) -> None:
133 """Permanently grants the given member the permission to stream."""
134 log.trace(f"Attempting to give permanent streaming permission to {member} ({member.id}).")
135
136 # Check if the member already has streaming permission
137 if any(Roles.video == role.id for role in member.roles):
138 if member.id in self.scheduler:
139 # Member has temp permission, so cancel the task to revoke later and delete from cache
140 self.scheduler.cancel(member.id)
141 await self.task_cache.delete(member.id)
142
143 await ctx.send(f"{Emojis.check_mark} Permanently granted {member.mention} the permission to stream.")
144 log.debug(
145 f"Successfully upgraded temporary streaming permission for {member} ({member.id}) to permanent."
146 )
147 return
148
149 await ctx.send(f"{Emojis.cross_mark} This member can already stream.")
150 log.debug(f"{member} ({member.id}) already had permanent streaming permission.")
151 return
152
153 await member.add_roles(discord.Object(Roles.video), reason="Permanent streaming access granted")
154 await ctx.send(f"{Emojis.check_mark} Permanently granted {member.mention} the permission to stream.")
155 log.debug(f"Successfully gave {member} ({member.id}) permanent streaming permission.")
156
157 @commands.command(aliases=("unstream", "rstream"))
158 @commands.has_any_role(*MODERATION_ROLES)
159 async def revokestream(self, ctx: commands.Context, member: discord.Member) -> None:
160 """Revoke the permission to stream from the given member."""
161 log.trace(f"Attempting to remove streaming permission from {member} ({member.id}).")
162
163 # Check if the member already has streaming permission
164 if any(Roles.video == role.id for role in member.roles):
165 if member.id in self.scheduler:
166 # Member has temp permission, so cancel the task to revoke later and delete from cache
167 self.scheduler.cancel(member.id)
168 await self.task_cache.delete(member.id)
169 await self._revoke_streaming_permission(member)
170
171 await ctx.send(f"{Emojis.check_mark} Revoked the permission to stream from {member.mention}.")
172 log.debug(f"Successfully revoked streaming permission from {member} ({member.id}).")
173 return
174
175 await ctx.send(f"{Emojis.cross_mark} This member doesn't have video permissions to remove!")
176 log.debug(f"{member} ({member.id}) didn't have the streaming permission to remove!")
177
178 @commands.command(aliases=('lstream',))
179 @commands.has_any_role(*MODERATION_ROLES)
180 async def liststream(self, ctx: commands.Context) -> None:
181 """Lists all non-staff users who have permission to stream."""
182 non_staff_members_with_stream = [
183 member
184 for member in ctx.guild.get_role(Roles.video).members
185 if not any(role.id in STAFF_ROLES for role in member.roles)
186 ]
187
188 # List of tuples (UtcPosixTimestamp, str)
189 # So that the list can be sorted on the UtcPosixTimestamp before the message is passed to the paginator.
190 streamer_info = []
191 for member in non_staff_members_with_stream:
192 if revoke_time := await self.task_cache.get(member.id):
193 # Member only has temporary streaming perms
194 revoke_delta = Arrow.utcfromtimestamp(revoke_time).humanize()
195 message = f"{member.mention} will have stream permissions revoked {revoke_delta}."
196 else:
197 message = f"{member.mention} has permanent streaming permissions."
198
199 # If revoke_time is None use max timestamp to force sort to put them at the end
200 streamer_info.append(
201 (revoke_time or Arrow.max.timestamp(), message)
202 )
203
204 if streamer_info:
205 # Sort based on duration left of streaming perms
206 streamer_info.sort(key=itemgetter(0))
207
208 # Only output the message in the pagination
209 lines = [line[1] for line in streamer_info]
210 embed = discord.Embed(
211 title=f"Members with streaming permission (`{len(lines)}` total)",
212 colour=Colours.soft_green
213 )
214 await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)
215 else:
216 await ctx.send("No members with stream permissions found.")
217
218
219 def setup(bot: Bot) -> None:
220 """Loads the Stream cog."""
221 bot.add_cog(Stream(bot))
222
[end of bot/exts/moderation/stream.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bot/exts/moderation/stream.py b/bot/exts/moderation/stream.py
--- a/bot/exts/moderation/stream.py
+++ b/bot/exts/moderation/stream.py
@@ -70,6 +70,28 @@
self._revoke_streaming_permission(member)
)
+ async def _suspend_stream(self, ctx: commands.Context, member: discord.Member) -> None:
+ """Suspend a member's stream."""
+ await self.bot.wait_until_guild_available()
+ voice_state = member.voice
+
+ if not voice_state:
+ return
+
+ # If the user is streaming.
+ if voice_state.self_stream:
+ # End user's stream by moving them to AFK voice channel and back.
+ original_vc = voice_state.channel
+ await member.move_to(ctx.guild.afk_channel)
+ await member.move_to(original_vc)
+
+ # Notify.
+ await ctx.send(f"{member.mention}'s stream has been suspended!")
+ log.debug(f"Successfully suspended stream from {member} ({member.id}).")
+ return
+
+ log.debug(f"No stream found to suspend from {member} ({member.id}).")
+
@commands.command(aliases=("streaming",))
@commands.has_any_role(*MODERATION_ROLES)
async def stream(self, ctx: commands.Context, member: discord.Member, duration: Expiry = None) -> None:
@@ -170,10 +192,12 @@
await ctx.send(f"{Emojis.check_mark} Revoked the permission to stream from {member.mention}.")
log.debug(f"Successfully revoked streaming permission from {member} ({member.id}).")
- return
- await ctx.send(f"{Emojis.cross_mark} This member doesn't have video permissions to remove!")
- log.debug(f"{member} ({member.id}) didn't have the streaming permission to remove!")
+ else:
+ await ctx.send(f"{Emojis.cross_mark} This member doesn't have video permissions to remove!")
+ log.debug(f"{member} ({member.id}) didn't have the streaming permission to remove!")
+
+ await self._suspend_stream(ctx, member)
@commands.command(aliases=('lstream',))
@commands.has_any_role(*MODERATION_ROLES)
|
{"golden_diff": "diff --git a/bot/exts/moderation/stream.py b/bot/exts/moderation/stream.py\n--- a/bot/exts/moderation/stream.py\n+++ b/bot/exts/moderation/stream.py\n@@ -70,6 +70,28 @@\n self._revoke_streaming_permission(member)\n )\n \n+ async def _suspend_stream(self, ctx: commands.Context, member: discord.Member) -> None:\n+ \"\"\"Suspend a member's stream.\"\"\"\n+ await self.bot.wait_until_guild_available()\n+ voice_state = member.voice\n+\n+ if not voice_state:\n+ return\n+\n+ # If the user is streaming.\n+ if voice_state.self_stream:\n+ # End user's stream by moving them to AFK voice channel and back.\n+ original_vc = voice_state.channel\n+ await member.move_to(ctx.guild.afk_channel)\n+ await member.move_to(original_vc)\n+\n+ # Notify.\n+ await ctx.send(f\"{member.mention}'s stream has been suspended!\")\n+ log.debug(f\"Successfully suspended stream from {member} ({member.id}).\")\n+ return\n+\n+ log.debug(f\"No stream found to suspend from {member} ({member.id}).\")\n+\n @commands.command(aliases=(\"streaming\",))\n @commands.has_any_role(*MODERATION_ROLES)\n async def stream(self, ctx: commands.Context, member: discord.Member, duration: Expiry = None) -> None:\n@@ -170,10 +192,12 @@\n \n await ctx.send(f\"{Emojis.check_mark} Revoked the permission to stream from {member.mention}.\")\n log.debug(f\"Successfully revoked streaming permission from {member} ({member.id}).\")\n- return\n \n- await ctx.send(f\"{Emojis.cross_mark} This member doesn't have video permissions to remove!\")\n- log.debug(f\"{member} ({member.id}) didn't have the streaming permission to remove!\")\n+ else:\n+ await ctx.send(f\"{Emojis.cross_mark} This member doesn't have video permissions to remove!\")\n+ log.debug(f\"{member} ({member.id}) didn't have the streaming permission to remove!\")\n+\n+ await self._suspend_stream(ctx, member)\n \n @commands.command(aliases=('lstream',))\n @commands.has_any_role(*MODERATION_ROLES)\n", "issue": "Stop an in-progress stream if a user's role is revoked\nIf a user's streaming permission is removed via the revoke stream command, we want to stop any stream that may be in-progress.\r\n\r\nIf a user's video role drops off due to the normal schedule, then we don't need to do this.\r\n\r\n`discord.ActivityType.streaming` shows whether a user is streaming.\r\n\r\nThis implementation should work even if the member don't have the video role anymore.\r\n\r\nConcept approved by @MrHemlock \n", "before_files": [{"content": "import logging\nfrom datetime import timedelta, timezone\nfrom operator import itemgetter\n\nimport arrow\nimport discord\nfrom arrow import Arrow\nfrom async_rediscache import RedisCache\nfrom discord.ext import commands\n\nfrom bot.bot import Bot\nfrom bot.constants import Colours, Emojis, Guild, MODERATION_ROLES, Roles, STAFF_ROLES, VideoPermission\nfrom bot.converters import Expiry\nfrom bot.pagination import LinePaginator\nfrom bot.utils.scheduling import Scheduler\nfrom bot.utils.time import format_infraction_with_duration\n\nlog = logging.getLogger(__name__)\n\n\nclass Stream(commands.Cog):\n \"\"\"Grant and revoke streaming permissions from members.\"\"\"\n\n # Stores tasks to remove streaming permission\n # RedisCache[discord.Member.id, UtcPosixTimestamp]\n task_cache = RedisCache()\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.scheduler = Scheduler(self.__class__.__name__)\n self.reload_task = self.bot.loop.create_task(self._reload_tasks_from_redis())\n\n def cog_unload(self) -> None:\n \"\"\"Cancel all scheduled tasks.\"\"\"\n 
self.reload_task.cancel()\n self.reload_task.add_done_callback(lambda _: self.scheduler.cancel_all())\n\n async def _revoke_streaming_permission(self, member: discord.Member) -> None:\n \"\"\"Remove the streaming permission from the given Member.\"\"\"\n await self.task_cache.delete(member.id)\n await member.remove_roles(discord.Object(Roles.video), reason=\"Streaming access revoked\")\n\n async def _reload_tasks_from_redis(self) -> None:\n \"\"\"Reload outstanding tasks from redis on startup, delete the task if the member has since left the server.\"\"\"\n await self.bot.wait_until_guild_available()\n items = await self.task_cache.items()\n for key, value in items:\n member = self.bot.get_guild(Guild.id).get_member(key)\n\n if not member:\n # Member isn't found in the cache\n try:\n member = await self.bot.get_guild(Guild.id).fetch_member(key)\n except discord.errors.NotFound:\n log.debug(\n f\"Member {key} left the guild before we could schedule \"\n \"the revoking of their streaming permissions.\"\n )\n await self.task_cache.delete(key)\n continue\n except discord.HTTPException:\n log.exception(f\"Exception while trying to retrieve member {key} from Discord.\")\n continue\n\n revoke_time = Arrow.utcfromtimestamp(value)\n log.debug(f\"Scheduling {member} ({member.id}) to have streaming permission revoked at {revoke_time}\")\n self.scheduler.schedule_at(\n revoke_time,\n key,\n self._revoke_streaming_permission(member)\n )\n\n @commands.command(aliases=(\"streaming\",))\n @commands.has_any_role(*MODERATION_ROLES)\n async def stream(self, ctx: commands.Context, member: discord.Member, duration: Expiry = None) -> None:\n \"\"\"\n Temporarily grant streaming permissions to a member for a given duration.\n\n A unit of time should be appended to the duration.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n Alternatively, an ISO 8601 timestamp can be provided for the duration.\n \"\"\"\n log.trace(f\"Attempting to give temporary streaming permission to {member} ({member.id}).\")\n\n if duration is None:\n # Use default duration and convert back to datetime as Embed.timestamp doesn't support Arrow\n duration = arrow.utcnow() + timedelta(minutes=VideoPermission.default_permission_duration)\n duration = duration.datetime\n elif duration.tzinfo is None:\n # Make duration tz-aware.\n # ISODateTime could already include tzinfo, this check is so it isn't overwritten.\n duration.replace(tzinfo=timezone.utc)\n\n # Check if the member already has streaming permission\n already_allowed = any(Roles.video == role.id for role in member.roles)\n if already_allowed:\n await ctx.send(f\"{Emojis.cross_mark} {member.mention} can already stream.\")\n log.debug(f\"{member} ({member.id}) already has permission to stream.\")\n return\n\n # Schedule task to remove streaming permission from Member and add it to task cache\n self.scheduler.schedule_at(duration, member.id, self._revoke_streaming_permission(member))\n await self.task_cache.set(member.id, duration.timestamp())\n\n await member.add_roles(discord.Object(Roles.video), reason=\"Temporary streaming access granted\")\n\n # Use embed as embed timestamps do timezone conversions.\n embed = discord.Embed(\n description=f\"{Emojis.check_mark} {member.mention} can now stream.\",\n colour=Colours.soft_green\n )\n embed.set_footer(text=f\"Streaming permission has been given to {member} until\")\n embed.timestamp = 
duration\n\n # Mention in content as mentions in embeds don't ping\n await ctx.send(content=member.mention, embed=embed)\n\n # Convert here for nicer logging\n revoke_time = format_infraction_with_duration(str(duration))\n log.debug(f\"Successfully gave {member} ({member.id}) permission to stream until {revoke_time}.\")\n\n @commands.command(aliases=(\"pstream\",))\n @commands.has_any_role(*MODERATION_ROLES)\n async def permanentstream(self, ctx: commands.Context, member: discord.Member) -> None:\n \"\"\"Permanently grants the given member the permission to stream.\"\"\"\n log.trace(f\"Attempting to give permanent streaming permission to {member} ({member.id}).\")\n\n # Check if the member already has streaming permission\n if any(Roles.video == role.id for role in member.roles):\n if member.id in self.scheduler:\n # Member has temp permission, so cancel the task to revoke later and delete from cache\n self.scheduler.cancel(member.id)\n await self.task_cache.delete(member.id)\n\n await ctx.send(f\"{Emojis.check_mark} Permanently granted {member.mention} the permission to stream.\")\n log.debug(\n f\"Successfully upgraded temporary streaming permission for {member} ({member.id}) to permanent.\"\n )\n return\n\n await ctx.send(f\"{Emojis.cross_mark} This member can already stream.\")\n log.debug(f\"{member} ({member.id}) already had permanent streaming permission.\")\n return\n\n await member.add_roles(discord.Object(Roles.video), reason=\"Permanent streaming access granted\")\n await ctx.send(f\"{Emojis.check_mark} Permanently granted {member.mention} the permission to stream.\")\n log.debug(f\"Successfully gave {member} ({member.id}) permanent streaming permission.\")\n\n @commands.command(aliases=(\"unstream\", \"rstream\"))\n @commands.has_any_role(*MODERATION_ROLES)\n async def revokestream(self, ctx: commands.Context, member: discord.Member) -> None:\n \"\"\"Revoke the permission to stream from the given member.\"\"\"\n log.trace(f\"Attempting to remove streaming permission from {member} ({member.id}).\")\n\n # Check if the member already has streaming permission\n if any(Roles.video == role.id for role in member.roles):\n if member.id in self.scheduler:\n # Member has temp permission, so cancel the task to revoke later and delete from cache\n self.scheduler.cancel(member.id)\n await self.task_cache.delete(member.id)\n await self._revoke_streaming_permission(member)\n\n await ctx.send(f\"{Emojis.check_mark} Revoked the permission to stream from {member.mention}.\")\n log.debug(f\"Successfully revoked streaming permission from {member} ({member.id}).\")\n return\n\n await ctx.send(f\"{Emojis.cross_mark} This member doesn't have video permissions to remove!\")\n log.debug(f\"{member} ({member.id}) didn't have the streaming permission to remove!\")\n\n @commands.command(aliases=('lstream',))\n @commands.has_any_role(*MODERATION_ROLES)\n async def liststream(self, ctx: commands.Context) -> None:\n \"\"\"Lists all non-staff users who have permission to stream.\"\"\"\n non_staff_members_with_stream = [\n member\n for member in ctx.guild.get_role(Roles.video).members\n if not any(role.id in STAFF_ROLES for role in member.roles)\n ]\n\n # List of tuples (UtcPosixTimestamp, str)\n # So that the list can be sorted on the UtcPosixTimestamp before the message is passed to the paginator.\n streamer_info = []\n for member in non_staff_members_with_stream:\n if revoke_time := await self.task_cache.get(member.id):\n # Member only has temporary streaming perms\n revoke_delta = 
Arrow.utcfromtimestamp(revoke_time).humanize()\n message = f\"{member.mention} will have stream permissions revoked {revoke_delta}.\"\n else:\n message = f\"{member.mention} has permanent streaming permissions.\"\n\n # If revoke_time is None use max timestamp to force sort to put them at the end\n streamer_info.append(\n (revoke_time or Arrow.max.timestamp(), message)\n )\n\n if streamer_info:\n # Sort based on duration left of streaming perms\n streamer_info.sort(key=itemgetter(0))\n\n # Only output the message in the pagination\n lines = [line[1] for line in streamer_info]\n embed = discord.Embed(\n title=f\"Members with streaming permission (`{len(lines)}` total)\",\n colour=Colours.soft_green\n )\n await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)\n else:\n await ctx.send(\"No members with stream permissions found.\")\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Loads the Stream cog.\"\"\"\n bot.add_cog(Stream(bot))\n", "path": "bot/exts/moderation/stream.py"}]}
| 3,324 | 505 |
gh_patches_debug_9269
|
rasdani/github-patches
|
git_diff
|
autogluon__autogluon-2915
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update scikit-learn-intelex version
- [ ] Check if scikit-learn-intelex can be upgraded.
</issue>
<code>
[start of tabular/setup.py]
1 #!/usr/bin/env python
2 ###########################
3 # This code block is a HACK (!), but is necessary to avoid code duplication. Do NOT alter these lines.
4 import os
5 from setuptools import setup
6 import importlib.util
7 filepath = os.path.abspath(os.path.dirname(__file__))
8 filepath_import = os.path.join(filepath, '..', 'core', 'src', 'autogluon', 'core', '_setup_utils.py')
9 spec = importlib.util.spec_from_file_location("ag_min_dependencies", filepath_import)
10 ag = importlib.util.module_from_spec(spec)
11 # Identical to `from autogluon.core import _setup_utils as ag`, but works without `autogluon.core` being installed.
12 spec.loader.exec_module(ag)
13 ###########################
14
15 import sys
16
17 version = ag.load_version_file()
18 version = ag.update_version(version)
19
20 submodule = 'tabular'
21 install_requires = [
22 # version ranges added in ag.get_dependency_version_ranges()
23 'numpy', # version range defined in `core/_setup_utils.py`
24 'scipy', # version range defined in `core/_setup_utils.py`
25 'pandas', # version range defined in `core/_setup_utils.py`
26 'scikit-learn', # version range defined in `core/_setup_utils.py`
27 'networkx', # version range defined in `core/_setup_utils.py`
28 f'{ag.PACKAGE_NAME}.core=={version}',
29 f'{ag.PACKAGE_NAME}.features=={version}',
30 ]
31
32 extras_require = {
33 'lightgbm': [
34 'lightgbm>=3.3,<3.4',
35 ],
36 'catboost': [
37 'catboost>=1.0,<1.2',
38 ],
39 # FIXME: Debug why xgboost 1.6 has 4x+ slower inference on multiclass datasets compared to 1.4
40 # It is possibly only present on MacOS, haven't tested linux.
41 # XGBoost made API breaking changes in 1.6 with custom metric and callback support, so we don't support older versions.
42 'xgboost': [
43 'xgboost>=1.6,<1.8',
44 ],
45 'fastai': [
46 'torch>=1.9,<1.14',
47 'fastai>=2.3.1,<2.8',
48 ],
49 'ray': [
50 f'{ag.PACKAGE_NAME}.core[all]=={version}',
51 ],
52 'skex': [
53 'scikit-learn-intelex>=2021.6,<2021.8',
54 ],
55 'imodels': [
56 'imodels>=1.3.10,<1.4.0', # 1.3.8/1.3.9 either remove/renamed attribute `complexity_` causing failures. https://github.com/csinva/imodels/issues/147
57 ],
58 'vowpalwabbit': [
59 # FIXME: 9.5+ causes VW to save an empty model which always predicts 0. Confirmed on MacOS (Intel CPU). Unknown how to fix.
60 'vowpalwabbit>=9,<9.5',
61 ],
62 'skl2onnx': [
63 'skl2onnx>=1.13.0,<1.14.0',
64 # For macOS, there isn't a onnxruntime-gpu package installed with skl2onnx.
65 # Therefore, we install onnxruntime explicitly here just for macOS.
66 'onnxruntime>=1.13.0,<1.14.0'
67 ] if sys.platform == 'darwin' else [
68 'skl2onnx>=1.13.0,<1.14.0'
69 ]
70 }
71
72 all_requires = []
73 # TODO: Consider adding 'skex' to 'all'
74 for extra_package in ['lightgbm', 'catboost', 'xgboost', 'fastai', 'ray']:
75 all_requires += extras_require[extra_package]
76 all_requires = list(set(all_requires))
77 extras_require['all'] = all_requires
78
79
80 test_requires = []
81 for test_package in ['imodels', 'vowpalwabbit', 'skl2onnx']:
82 test_requires += extras_require[test_package]
83 extras_require['tests'] = test_requires
84 install_requires = ag.get_dependency_version_ranges(install_requires)
85
86 if __name__ == '__main__':
87 ag.create_version_file(version=version, submodule=submodule)
88 setup_args = ag.default_setup_args(version=version, submodule=submodule)
89 setup(
90 install_requires=install_requires,
91 extras_require=extras_require,
92 **setup_args,
93 )
94
[end of tabular/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tabular/setup.py b/tabular/setup.py
--- a/tabular/setup.py
+++ b/tabular/setup.py
@@ -50,7 +50,8 @@
f'{ag.PACKAGE_NAME}.core[all]=={version}',
],
'skex': [
- 'scikit-learn-intelex>=2021.6,<2021.8',
+ # Note: 2021.7 released on Sep 2022, version 2022.x doesn't exist (went directly from 2021.7 to 2023.0)
+ 'scikit-learn-intelex>=2021.7,<2023.1',
],
'imodels': [
'imodels>=1.3.10,<1.4.0', # 1.3.8/1.3.9 either remove/renamed attribute `complexity_` causing failures. https://github.com/csinva/imodels/issues/147
|
{"golden_diff": "diff --git a/tabular/setup.py b/tabular/setup.py\n--- a/tabular/setup.py\n+++ b/tabular/setup.py\n@@ -50,7 +50,8 @@\n f'{ag.PACKAGE_NAME}.core[all]=={version}',\n ],\n 'skex': [\n- 'scikit-learn-intelex>=2021.6,<2021.8',\n+ # Note: 2021.7 released on Sep 2022, version 2022.x doesn't exist (went directly from 2021.7 to 2023.0)\n+ 'scikit-learn-intelex>=2021.7,<2023.1',\n ],\n 'imodels': [\n 'imodels>=1.3.10,<1.4.0', # 1.3.8/1.3.9 either remove/renamed attribute `complexity_` causing failures. https://github.com/csinva/imodels/issues/147\n", "issue": "Update scikit-learn-intelex version\n- [ ] Check if scikit-learn-intelex can be upgraded.\n", "before_files": [{"content": "#!/usr/bin/env python\n###########################\n# This code block is a HACK (!), but is necessary to avoid code duplication. Do NOT alter these lines.\nimport os\nfrom setuptools import setup\nimport importlib.util\nfilepath = os.path.abspath(os.path.dirname(__file__))\nfilepath_import = os.path.join(filepath, '..', 'core', 'src', 'autogluon', 'core', '_setup_utils.py')\nspec = importlib.util.spec_from_file_location(\"ag_min_dependencies\", filepath_import)\nag = importlib.util.module_from_spec(spec)\n# Identical to `from autogluon.core import _setup_utils as ag`, but works without `autogluon.core` being installed.\nspec.loader.exec_module(ag)\n###########################\n\nimport sys\n\nversion = ag.load_version_file()\nversion = ag.update_version(version)\n\nsubmodule = 'tabular'\ninstall_requires = [\n # version ranges added in ag.get_dependency_version_ranges()\n 'numpy', # version range defined in `core/_setup_utils.py`\n 'scipy', # version range defined in `core/_setup_utils.py`\n 'pandas', # version range defined in `core/_setup_utils.py`\n 'scikit-learn', # version range defined in `core/_setup_utils.py`\n 'networkx', # version range defined in `core/_setup_utils.py`\n f'{ag.PACKAGE_NAME}.core=={version}',\n f'{ag.PACKAGE_NAME}.features=={version}',\n]\n\nextras_require = {\n 'lightgbm': [\n 'lightgbm>=3.3,<3.4',\n ],\n 'catboost': [\n 'catboost>=1.0,<1.2',\n ],\n # FIXME: Debug why xgboost 1.6 has 4x+ slower inference on multiclass datasets compared to 1.4\n # It is possibly only present on MacOS, haven't tested linux.\n # XGBoost made API breaking changes in 1.6 with custom metric and callback support, so we don't support older versions.\n 'xgboost': [\n 'xgboost>=1.6,<1.8',\n ],\n 'fastai': [\n 'torch>=1.9,<1.14',\n 'fastai>=2.3.1,<2.8',\n ],\n 'ray': [\n f'{ag.PACKAGE_NAME}.core[all]=={version}',\n ],\n 'skex': [\n 'scikit-learn-intelex>=2021.6,<2021.8',\n ],\n 'imodels': [\n 'imodels>=1.3.10,<1.4.0', # 1.3.8/1.3.9 either remove/renamed attribute `complexity_` causing failures. https://github.com/csinva/imodels/issues/147\n ],\n 'vowpalwabbit': [\n # FIXME: 9.5+ causes VW to save an empty model which always predicts 0. Confirmed on MacOS (Intel CPU). 
Unknown how to fix.\n 'vowpalwabbit>=9,<9.5',\n ],\n 'skl2onnx': [\n 'skl2onnx>=1.13.0,<1.14.0',\n # For macOS, there isn't a onnxruntime-gpu package installed with skl2onnx.\n # Therefore, we install onnxruntime explicitly here just for macOS.\n 'onnxruntime>=1.13.0,<1.14.0'\n ] if sys.platform == 'darwin' else [\n 'skl2onnx>=1.13.0,<1.14.0'\n ]\n}\n\nall_requires = []\n# TODO: Consider adding 'skex' to 'all'\nfor extra_package in ['lightgbm', 'catboost', 'xgboost', 'fastai', 'ray']:\n all_requires += extras_require[extra_package]\nall_requires = list(set(all_requires))\nextras_require['all'] = all_requires\n\n\ntest_requires = []\nfor test_package in ['imodels', 'vowpalwabbit', 'skl2onnx']:\n test_requires += extras_require[test_package]\nextras_require['tests'] = test_requires\ninstall_requires = ag.get_dependency_version_ranges(install_requires)\n\nif __name__ == '__main__':\n ag.create_version_file(version=version, submodule=submodule)\n setup_args = ag.default_setup_args(version=version, submodule=submodule)\n setup(\n install_requires=install_requires,\n extras_require=extras_require,\n **setup_args,\n )\n", "path": "tabular/setup.py"}]}
| 1,737 | 237 |
gh_patches_debug_2274
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1844
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Event (registration) status message in the API
### Is your feature request related to a problem? Please describe.
Currently, the event status messages (like 'you cannot cancel your registration without having to pay a fine') are hardcoded, and whenever we update them, we must also update the app.
### Describe the solution you'd like
Put the message in the API
### Additional context
Also checkout #1381
</issue>
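For illustration only, and assuming the Django REST Framework serializer shown below, one way to expose a member-specific status message is an extra `SerializerMethodField` in the serializer class body; the helper name used here is hypothetical, and the project's actual change may simply expose an existing model field instead:

```py
registration_status_message = serializers.SerializerMethodField("_registration_status_message")

def _registration_status_message(self, instance):
    # Hypothetical helper; the real services module may name or structure this differently.
    member = self.context["request"].member
    return services.registration_status_message(member, instance)
```

The new field name would also need to be listed in `Meta.fields` for it to appear in API responses.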
<code>
[start of website/events/api/v2/serializers/event.py]
1 from rest_framework import serializers
2
3 from activemembers.api.v2.serializers.member_group import MemberGroupSerializer
4 from announcements.api.v2.serializers import SlideSerializer
5 from documents.api.v2.serializers.document import DocumentSerializer
6 from events import services
7 from events.api.v2.serializers.event_registration import EventRegistrationSerializer
8 from events.models import Event, EventRegistration
9 from thaliawebsite.api.v2.serializers import CleanedHTMLSerializer
10 from utils.snippets import create_google_maps_url
11
12
13 class EventSerializer(serializers.ModelSerializer):
14 """Serializer for events."""
15
16 class Meta:
17 model = Event
18 fields = (
19 "pk",
20 "title",
21 "description",
22 "start",
23 "end",
24 "category",
25 "registration_start",
26 "registration_end",
27 "cancel_deadline",
28 "optional_registrations",
29 "location",
30 "price",
31 "fine",
32 "num_participants",
33 "max_participants",
34 "no_registration_message",
35 "has_fields",
36 "food_event",
37 "maps_url",
38 "user_permissions",
39 "user_registration",
40 "organiser",
41 "slide",
42 "documents",
43 )
44
45 description = CleanedHTMLSerializer()
46 organiser = MemberGroupSerializer()
47 user_registration = serializers.SerializerMethodField("_user_registration")
48 num_participants = serializers.SerializerMethodField("_num_participants")
49 maps_url = serializers.SerializerMethodField("_maps_url")
50 price = serializers.DecimalField(max_digits=5, decimal_places=2)
51 fine = serializers.DecimalField(max_digits=5, decimal_places=2)
52 slide = SlideSerializer()
53 documents = DocumentSerializer(many=True)
54 user_permissions = serializers.SerializerMethodField("_user_permissions")
55
56 def _user_registration(self, instance):
57 try:
58 if self.context["request"].member:
59 reg = instance.eventregistration_set.get(
60 member=self.context["request"].member, date_cancelled=None
61 )
62 return EventRegistrationSerializer(
63 reg,
64 context=self.context,
65 fields=("pk", "present", "queue_position", "date", "payment"),
66 ).data
67 except EventRegistration.DoesNotExist:
68 pass
69 return None
70
71 def _num_participants(self, instance):
72 if (
73 instance.max_participants
74 and instance.participants.count() > instance.max_participants
75 ):
76 return instance.max_participants
77 return instance.participants.count()
78
79 def _user_permissions(self, instance):
80 member = self.context["request"].member
81 return services.event_permissions(member, instance)
82
83 def _maps_url(self, instance):
84 return create_google_maps_url(instance.map_location, zoom=13, size="450x250")
85
[end of website/events/api/v2/serializers/event.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py
--- a/website/events/api/v2/serializers/event.py
+++ b/website/events/api/v2/serializers/event.py
@@ -32,6 +32,7 @@
"num_participants",
"max_participants",
"no_registration_message",
+ "cancel_too_late_message",
"has_fields",
"food_event",
"maps_url",
|
{"golden_diff": "diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py\n--- a/website/events/api/v2/serializers/event.py\n+++ b/website/events/api/v2/serializers/event.py\n@@ -32,6 +32,7 @@\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n+ \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n", "issue": "Event (registration) status message in the API\n### Is your feature request related to a problem? Please describe.\r\nCurrently, the event status messages (like 'you cannot cancel your registration without having to pay a fine') are hardcoded and whenever we update them, we must also update the app\r\n\r\n### Describe the solution you'd like\r\nPut the message in the API\r\n\r\n### Additional context\r\nAlso checkout #1381 \n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom announcements.api.v2.serializers import SlideSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event, EventRegistration\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"title\",\n \"description\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organiser\",\n \"slide\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organiser = MemberGroupSerializer()\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n price = serializers.DecimalField(max_digits=5, decimal_places=2)\n fine = serializers.DecimalField(max_digits=5, decimal_places=2)\n slide = SlideSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n\n def _user_registration(self, instance):\n try:\n if self.context[\"request\"].member:\n reg = instance.eventregistration_set.get(\n member=self.context[\"request\"].member, date_cancelled=None\n )\n return EventRegistrationSerializer(\n reg,\n context=self.context,\n fields=(\"pk\", \"present\", \"queue_position\", \"date\", \"payment\"),\n ).data\n except EventRegistration.DoesNotExist:\n pass\n return None\n\n def _num_participants(self, instance):\n if (\n instance.max_participants\n and instance.participants.count() > instance.max_participants\n ):\n return instance.max_participants\n return instance.participants.count()\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance)\n\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}]}
| 1,358 | 111 |
gh_patches_debug_49253
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-1903
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot delete constrained parameter [bug]
### **Issue Description**
Deleting a constrained parameter throws an error.
In the function `param_store.__delitem__()`, the line
`unconstrained_value = constrained_value.unconstrained()`
throws
`AttributeError: 'Tensor' object has no attribute 'unconstrained'`
### **Environment**
OS: Windows 8.1
Python Version: 3.6.8
Pytorch Version: 1.1.0
Pyro Version: 0.3.3
This error was also present using Pytorch 1.0 and Pyro 0.3.1.
### **Code Snippet**
```py
import torch
import pyro
from torch.distributions import constraints
param_store = pyro.get_param_store()
a = pyro.param('a', torch.ones(3))
print(param_store.keys()) #dict_keys(['a'])
param_store.__delitem__('a') #Works fine
print(param_store.keys()) #dict_keys([])
b = pyro.param('b', torch.ones(3), constraint=constraints.positive)
print(param_store.keys()) #dict_keys(['b'])
param_store.__delitem__('b') #AttributeError: 'Tensor' object has no attribute 'unconstrained'
```
</issue>
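Some context before the code below: `ParamStoreDict._params` maps each parameter name to its *unconstrained* tensor (see `__setitem__`), so the value popped inside `__delitem__` is already the unconstrained tensor and has no `.unconstrained()` attribute, which is exactly the reported `AttributeError`. A minimal sketch of one possible fix (not necessarily the patch that was actually merged) pops that tensor directly:

```py
def __delitem__(self, name):
    """
    Remove a parameter from the param store.
    """
    # _params maps name -> unconstrained tensor, so no .unconstrained() call is needed.
    unconstrained_value = self._params.pop(name)
    self._param_to_name.pop(unconstrained_value)
    self._constraints.pop(name)
```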
<code>
[start of pyro/params/param_store.py]
1 from __future__ import absolute_import, division, print_function
2
3 import re
4 import warnings
5 import weakref
6
7 import torch
8 from torch.distributions import constraints, transform_to
9
10
11 class ParamStoreDict(object):
12 """
13 Global store for parameters in Pyro. This is basically a key-value store.
14 The typical user interacts with the ParamStore primarily through the
15 primitive `pyro.param`.
16
17 See `Intro Part II <http://pyro.ai/examples/intro_part_ii.html>`_ for further discussion
18 and `SVI Part I <http://pyro.ai/examples/svi_part_i.html>`_ for some examples.
19
20 Some things to bear in mind when using parameters in Pyro:
21
22 - parameters must be assigned unique names
23 - the `init_tensor` argument to `pyro.param` is only used the first time that a given (named)
24 parameter is registered with Pyro.
25 - for this reason, a user may need to use the `clear()` method if working in a REPL in order to
26 get the desired behavior. this method can also be invoked with `pyro.clear_param_store()`.
27 - the internal name of a parameter within a PyTorch `nn.Module` that has been registered with
28 Pyro is prepended with the Pyro name of the module. so nothing prevents the user from having
29 two different modules each of which contains a parameter named `weight`. by contrast, a user
30 can only have one top-level parameter named `weight` (outside of any module).
31 - parameters can be saved and loaded from disk using `save` and `load`.
32 """
33
34 # -------------------------------------------------------------------------------
35 # New dict-like interface
36
37 def __init__(self):
38 """
39 initialize ParamStore data structures
40 """
41 self._params = {} # dictionary from param name to param
42 self._param_to_name = {} # dictionary from unconstrained param to param name
43 self._constraints = {} # dictionary from param name to constraint object
44
45 def clear(self):
46 """
47 Clear the ParamStore
48 """
49 self._params = {}
50 self._param_to_name = {}
51 self._constraints = {}
52
53 def items(self):
54 """
55 Iterate over ``(name, constrained_param)`` pairs.
56 """
57 for name in self._params:
58 yield name, self[name]
59
60 def keys(self):
61 """
62 Iterate over param names.
63 """
64 return self._params.keys()
65
66 def values(self):
67 """
68 Iterate over constrained parameter values.
69 """
70 for name, constrained_param in self.items():
71 yield constrained_param
72
73 def __bool__(self):
74 return bool(self._params)
75
76 def __len__(self):
77 return len(self._params)
78
79 def __contains__(self, name):
80 return name in self._params
81
82 def __iter__(self):
83 """
84 Iterate over param names.
85 """
86 return iter(self.keys())
87
88 def __delitem__(self, name):
89 """
90 Remove a parameter from the param store.
91 """
92 constrained_value = self._params.pop(name)
93 unconstrained_value = constrained_value.unconstrained()
94 self._param_to_name.pop(unconstrained_value)
95 self._constraints.pop(name)
96
97 def __getitem__(self, name):
98 """
99 Get the constrained value of a named parameter.
100 """
101 unconstrained_value = self._params[name]
102
103 # compute the constrained value
104 constraint = self._constraints[name]
105 constrained_value = transform_to(constraint)(unconstrained_value)
106 constrained_value.unconstrained = weakref.ref(unconstrained_value)
107
108 return constrained_value
109
110 def __setitem__(self, name, new_constrained_value):
111 """
112 Set the constrained value of an existing parameter, or the value of a
113 new unconstrained parameter. To declare a new parameter with
114 constraint, use :meth:`setdefault`.
115 """
116 # store constraint, defaulting to unconstrained
117 constraint = self._constraints.setdefault(name, constraints.real)
118
119 # compute the unconstrained value
120 with torch.no_grad():
121 # FIXME should we .detach() the new_constrained_value?
122 unconstrained_value = transform_to(constraint).inv(new_constrained_value)
123 unconstrained_value = unconstrained_value.contiguous()
124 unconstrained_value.requires_grad_(True)
125
126 # store a bidirectional mapping between name and unconstrained tensor
127 self._params[name] = unconstrained_value
128 self._param_to_name[unconstrained_value] = name
129
130 def setdefault(self, name, init_constrained_value, constraint=constraints.real):
131 """
132 Retrieve a constrained parameter value from the if it exists, otherwise
133 set the initial value. Note that this is a little fancier than
134 :meth:`dict.setdefault`.
135
136 If the parameter already exists, ``init_constrained_tensor`` will be ignored. To avoid
137 expensive creation of ``init_constrained_tensor`` you can wrap it in a ``lambda`` that
138 will only be evaluated if the parameter does not already exist::
139
140 param_store.get("foo", lambda: (0.001 * torch.randn(1000, 1000)).exp(),
141 constraint=constraints.positive)
142
143 :param str name: parameter name
144 :param init_constrained_value: initial constrained value
145 :type init_constrained_value: torch.Tensor or callable returning a torch.Tensor
146 :param constraint: torch constraint object
147 :type constraint: torch.distributions.constraints.Constraint
148 :returns: constrained parameter value
149 :rtype: torch.Tensor
150 """
151 if name not in self._params:
152 # set the constraint
153 self._constraints[name] = constraint
154
155 # evaluate the lazy value
156 if callable(init_constrained_value):
157 init_constrained_value = init_constrained_value()
158
159 # set the initial value
160 self[name] = init_constrained_value
161
162 # get the param, which is guaranteed to exist
163 return self[name]
164
165 # -------------------------------------------------------------------------------
166 # Old non-dict interface
167
168 def named_parameters(self):
169 """
170 Returns an iterator over ``(name, unconstrained_value)`` tuples for
171 each parameter in the ParamStore.
172 """
173 return self._params.items()
174
175 def get_all_param_names(self):
176 warnings.warn("ParamStore.get_all_param_names() is deprecated; use .keys() instead.",
177 DeprecationWarning)
178 return self.keys()
179
180 def replace_param(self, param_name, new_param, old_param):
181 warnings.warn("ParamStore.replace_param() is deprecated; use .__setitem__() instead.",
182 DeprecationWarning)
183 assert self._params[param_name] is old_param.unconstrained()
184 self[param_name] = new_param
185
186 def get_param(self, name, init_tensor=None, constraint=constraints.real, event_dim=None):
187 """
188 Get parameter from its name. If it does not yet exist in the
189 ParamStore, it will be created and stored.
190 The Pyro primitive `pyro.param` dispatches to this method.
191
192 :param name: parameter name
193 :type name: str
194 :param init_tensor: initial tensor
195 :type init_tensor: torch.Tensor
196 :param constraint: torch constraint
197 :type constraint: torch.distributions.constraints.Constraint
198 :param int event_dim: (ignored)
199 :returns: parameter
200 :rtype: torch.Tensor
201 """
202 if init_tensor is None:
203 return self[name]
204 else:
205 return self.setdefault(name, init_tensor, constraint)
206
207 def match(self, name):
208 """
209 Get all parameters that match regex. The parameter must exist.
210
211 :param name: regular expression
212 :type name: str
213 :returns: dict with key param name and value torch Tensor
214 """
215 pattern = re.compile(name)
216 return {name: self[name] for name in self if pattern.match(name)}
217
218 def param_name(self, p):
219 """
220 Get parameter name from parameter
221
222 :param p: parameter
223 :returns: parameter name
224 """
225 return self._param_to_name.get(p)
226
227 def get_state(self):
228 """
229 Get the ParamStore state.
230 """
231 state = {
232 'params': self._params,
233 'constraints': self._constraints,
234 }
235 return state
236
237 def set_state(self, state):
238 """
239 Set the ParamStore state using state from a previous get_state() call
240 """
241 assert isinstance(state, dict), "malformed ParamStore state"
242 assert set(state.keys()) == set(['params', 'constraints']), \
243 "malformed ParamStore keys {}".format(state.keys())
244
245 for param_name, param in state['params'].items():
246 self._params[param_name] = param
247 self._param_to_name[param] = param_name
248
249 for param_name, constraint in state['constraints'].items():
250 if isinstance(constraint, type(constraints.real)):
251 # Work around lack of hash & equality comparison on constraints.
252 constraint = constraints.real
253 self._constraints[param_name] = constraint
254
255 def save(self, filename):
256 """
257 Save parameters to disk
258
259 :param filename: file name to save to
260 :type filename: str
261 """
262 with open(filename, "wb") as output_file:
263 torch.save(self.get_state(), output_file)
264
265 def load(self, filename, map_location=None):
266 """
267 Loads parameters from disk
268
269 .. note::
270
271 If using :meth:`pyro.module` on parameters loaded from
272 disk, be sure to set the ``update_module_params`` flag::
273
274 pyro.get_param_store().load('saved_params.save')
275 pyro.module('module', nn, update_module_params=True)
276
277 :param filename: file name to load from
278 :type filename: str
279 :param map_location: specifies how to remap storage locations
280 :type map_location: function, torch.device, string or a dict
281 """
282 with open(filename, "rb") as input_file:
283 state = torch.load(input_file, map_location)
284 self.set_state(state)
285
286
287 # used to create fully-formed param names, e.g. mymodule$$$mysubmodule.weight
288 _MODULE_NAMESPACE_DIVIDER = "$$$"
289
290
291 def param_with_module_name(pyro_name, param_name):
292 return _MODULE_NAMESPACE_DIVIDER.join([pyro_name, param_name])
293
294
295 def module_from_param_with_module_name(param_name):
296 return param_name.split(_MODULE_NAMESPACE_DIVIDER)[0]
297
298
299 def user_param_name(param_name):
300 if _MODULE_NAMESPACE_DIVIDER in param_name:
301 return param_name.split(_MODULE_NAMESPACE_DIVIDER)[1]
302 return param_name
303
[end of pyro/params/param_store.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyro/params/param_store.py b/pyro/params/param_store.py
--- a/pyro/params/param_store.py
+++ b/pyro/params/param_store.py
@@ -89,8 +89,7 @@
"""
Remove a parameter from the param store.
"""
- constrained_value = self._params.pop(name)
- unconstrained_value = constrained_value.unconstrained()
+ unconstrained_value = self._params.pop(name)
self._param_to_name.pop(unconstrained_value)
self._constraints.pop(name)
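With this change, `__delitem__` pops the stored unconstrained tensor directly instead of calling `.unconstrained()` on a freshly built constrained value, so the failing lines from the issue should now run to completion. A minimal sketch, assuming a Pyro build that includes the patch above:

```python
# Re-run of the issue's reproduction against the patched ParamStoreDict.
import torch
import pyro
from torch.distributions import constraints

param_store = pyro.get_param_store()
b = pyro.param('b', torch.ones(3), constraint=constraints.positive)
print(param_store.keys())      # dict_keys(['b'])
param_store.__delitem__('b')   # pops the unconstrained tensor; no AttributeError
print(param_store.keys())      # dict_keys([])
```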
|
{"golden_diff": "diff --git a/pyro/params/param_store.py b/pyro/params/param_store.py\n--- a/pyro/params/param_store.py\n+++ b/pyro/params/param_store.py\n@@ -89,8 +89,7 @@\n \"\"\"\n Remove a parameter from the param store.\n \"\"\"\n- constrained_value = self._params.pop(name)\n- unconstrained_value = constrained_value.unconstrained()\n+ unconstrained_value = self._params.pop(name)\n self._param_to_name.pop(unconstrained_value)\n self._constraints.pop(name)\n", "issue": "Cannot delete constrained parameter [bug]\n### **Issue Description**\r\nDeleting a constrained parameter throws an error.\r\nIn the function `param_store.__delitem__()`, the line \r\n`unconstrained_value = constrained_value.unconstrained()`\r\nthrows\r\n`AttributeError: 'Tensor' object has no attribute 'unconstrained'`\r\n\r\n### **Environment**\r\nOS: Windows 8.1\r\nPython Version: 3.6.8\r\nPytorch Version: 1.1.0\r\nPyro Version: 0.3.3\r\n\r\nThis error was also present using Pytorch 1.0 and Pyro 0.3.1.\r\n### **Code Snippet**\r\n```py\r\nimport torch\r\nimport pyro\r\nfrom torch.distributions import constraints\r\n\r\nparam_store = pyro.get_param_store()\r\na = pyro.param('a', torch.ones(3))\r\nprint(param_store.keys()) #dict_keys(['a'])\r\nparam_store.__delitem__('a') #Works fine\r\nprint(param_store.keys()) #dict_keys([])\r\nb = pyro.param('b', torch.ones(3), constraint=constraints.positive)\r\nprint(param_store.keys()) #dict_keys(['b'])\r\nparam_store.__delitem__('b') #AttributeError: 'Tensor' object has no attribute 'unconstrained'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport re\nimport warnings\nimport weakref\n\nimport torch\nfrom torch.distributions import constraints, transform_to\n\n\nclass ParamStoreDict(object):\n \"\"\"\n Global store for parameters in Pyro. This is basically a key-value store.\n The typical user interacts with the ParamStore primarily through the\n primitive `pyro.param`.\n\n See `Intro Part II <http://pyro.ai/examples/intro_part_ii.html>`_ for further discussion\n and `SVI Part I <http://pyro.ai/examples/svi_part_i.html>`_ for some examples.\n\n Some things to bear in mind when using parameters in Pyro:\n\n - parameters must be assigned unique names\n - the `init_tensor` argument to `pyro.param` is only used the first time that a given (named)\n parameter is registered with Pyro.\n - for this reason, a user may need to use the `clear()` method if working in a REPL in order to\n get the desired behavior. this method can also be invoked with `pyro.clear_param_store()`.\n - the internal name of a parameter within a PyTorch `nn.Module` that has been registered with\n Pyro is prepended with the Pyro name of the module. so nothing prevents the user from having\n two different modules each of which contains a parameter named `weight`. 
by contrast, a user\n can only have one top-level parameter named `weight` (outside of any module).\n - parameters can be saved and loaded from disk using `save` and `load`.\n \"\"\"\n\n # -------------------------------------------------------------------------------\n # New dict-like interface\n\n def __init__(self):\n \"\"\"\n initialize ParamStore data structures\n \"\"\"\n self._params = {} # dictionary from param name to param\n self._param_to_name = {} # dictionary from unconstrained param to param name\n self._constraints = {} # dictionary from param name to constraint object\n\n def clear(self):\n \"\"\"\n Clear the ParamStore\n \"\"\"\n self._params = {}\n self._param_to_name = {}\n self._constraints = {}\n\n def items(self):\n \"\"\"\n Iterate over ``(name, constrained_param)`` pairs.\n \"\"\"\n for name in self._params:\n yield name, self[name]\n\n def keys(self):\n \"\"\"\n Iterate over param names.\n \"\"\"\n return self._params.keys()\n\n def values(self):\n \"\"\"\n Iterate over constrained parameter values.\n \"\"\"\n for name, constrained_param in self.items():\n yield constrained_param\n\n def __bool__(self):\n return bool(self._params)\n\n def __len__(self):\n return len(self._params)\n\n def __contains__(self, name):\n return name in self._params\n\n def __iter__(self):\n \"\"\"\n Iterate over param names.\n \"\"\"\n return iter(self.keys())\n\n def __delitem__(self, name):\n \"\"\"\n Remove a parameter from the param store.\n \"\"\"\n constrained_value = self._params.pop(name)\n unconstrained_value = constrained_value.unconstrained()\n self._param_to_name.pop(unconstrained_value)\n self._constraints.pop(name)\n\n def __getitem__(self, name):\n \"\"\"\n Get the constrained value of a named parameter.\n \"\"\"\n unconstrained_value = self._params[name]\n\n # compute the constrained value\n constraint = self._constraints[name]\n constrained_value = transform_to(constraint)(unconstrained_value)\n constrained_value.unconstrained = weakref.ref(unconstrained_value)\n\n return constrained_value\n\n def __setitem__(self, name, new_constrained_value):\n \"\"\"\n Set the constrained value of an existing parameter, or the value of a\n new unconstrained parameter. To declare a new parameter with\n constraint, use :meth:`setdefault`.\n \"\"\"\n # store constraint, defaulting to unconstrained\n constraint = self._constraints.setdefault(name, constraints.real)\n\n # compute the unconstrained value\n with torch.no_grad():\n # FIXME should we .detach() the new_constrained_value?\n unconstrained_value = transform_to(constraint).inv(new_constrained_value)\n unconstrained_value = unconstrained_value.contiguous()\n unconstrained_value.requires_grad_(True)\n\n # store a bidirectional mapping between name and unconstrained tensor\n self._params[name] = unconstrained_value\n self._param_to_name[unconstrained_value] = name\n\n def setdefault(self, name, init_constrained_value, constraint=constraints.real):\n \"\"\"\n Retrieve a constrained parameter value from the if it exists, otherwise\n set the initial value. Note that this is a little fancier than\n :meth:`dict.setdefault`.\n\n If the parameter already exists, ``init_constrained_tensor`` will be ignored. 
To avoid\n expensive creation of ``init_constrained_tensor`` you can wrap it in a ``lambda`` that\n will only be evaluated if the parameter does not already exist::\n\n param_store.get(\"foo\", lambda: (0.001 * torch.randn(1000, 1000)).exp(),\n constraint=constraints.positive)\n\n :param str name: parameter name\n :param init_constrained_value: initial constrained value\n :type init_constrained_value: torch.Tensor or callable returning a torch.Tensor\n :param constraint: torch constraint object\n :type constraint: torch.distributions.constraints.Constraint\n :returns: constrained parameter value\n :rtype: torch.Tensor\n \"\"\"\n if name not in self._params:\n # set the constraint\n self._constraints[name] = constraint\n\n # evaluate the lazy value\n if callable(init_constrained_value):\n init_constrained_value = init_constrained_value()\n\n # set the initial value\n self[name] = init_constrained_value\n\n # get the param, which is guaranteed to exist\n return self[name]\n\n # -------------------------------------------------------------------------------\n # Old non-dict interface\n\n def named_parameters(self):\n \"\"\"\n Returns an iterator over ``(name, unconstrained_value)`` tuples for\n each parameter in the ParamStore.\n \"\"\"\n return self._params.items()\n\n def get_all_param_names(self):\n warnings.warn(\"ParamStore.get_all_param_names() is deprecated; use .keys() instead.\",\n DeprecationWarning)\n return self.keys()\n\n def replace_param(self, param_name, new_param, old_param):\n warnings.warn(\"ParamStore.replace_param() is deprecated; use .__setitem__() instead.\",\n DeprecationWarning)\n assert self._params[param_name] is old_param.unconstrained()\n self[param_name] = new_param\n\n def get_param(self, name, init_tensor=None, constraint=constraints.real, event_dim=None):\n \"\"\"\n Get parameter from its name. If it does not yet exist in the\n ParamStore, it will be created and stored.\n The Pyro primitive `pyro.param` dispatches to this method.\n\n :param name: parameter name\n :type name: str\n :param init_tensor: initial tensor\n :type init_tensor: torch.Tensor\n :param constraint: torch constraint\n :type constraint: torch.distributions.constraints.Constraint\n :param int event_dim: (ignored)\n :returns: parameter\n :rtype: torch.Tensor\n \"\"\"\n if init_tensor is None:\n return self[name]\n else:\n return self.setdefault(name, init_tensor, constraint)\n\n def match(self, name):\n \"\"\"\n Get all parameters that match regex. 
The parameter must exist.\n\n :param name: regular expression\n :type name: str\n :returns: dict with key param name and value torch Tensor\n \"\"\"\n pattern = re.compile(name)\n return {name: self[name] for name in self if pattern.match(name)}\n\n def param_name(self, p):\n \"\"\"\n Get parameter name from parameter\n\n :param p: parameter\n :returns: parameter name\n \"\"\"\n return self._param_to_name.get(p)\n\n def get_state(self):\n \"\"\"\n Get the ParamStore state.\n \"\"\"\n state = {\n 'params': self._params,\n 'constraints': self._constraints,\n }\n return state\n\n def set_state(self, state):\n \"\"\"\n Set the ParamStore state using state from a previous get_state() call\n \"\"\"\n assert isinstance(state, dict), \"malformed ParamStore state\"\n assert set(state.keys()) == set(['params', 'constraints']), \\\n \"malformed ParamStore keys {}\".format(state.keys())\n\n for param_name, param in state['params'].items():\n self._params[param_name] = param\n self._param_to_name[param] = param_name\n\n for param_name, constraint in state['constraints'].items():\n if isinstance(constraint, type(constraints.real)):\n # Work around lack of hash & equality comparison on constraints.\n constraint = constraints.real\n self._constraints[param_name] = constraint\n\n def save(self, filename):\n \"\"\"\n Save parameters to disk\n\n :param filename: file name to save to\n :type filename: str\n \"\"\"\n with open(filename, \"wb\") as output_file:\n torch.save(self.get_state(), output_file)\n\n def load(self, filename, map_location=None):\n \"\"\"\n Loads parameters from disk\n\n .. note::\n\n If using :meth:`pyro.module` on parameters loaded from\n disk, be sure to set the ``update_module_params`` flag::\n\n pyro.get_param_store().load('saved_params.save')\n pyro.module('module', nn, update_module_params=True)\n\n :param filename: file name to load from\n :type filename: str\n :param map_location: specifies how to remap storage locations\n :type map_location: function, torch.device, string or a dict\n \"\"\"\n with open(filename, \"rb\") as input_file:\n state = torch.load(input_file, map_location)\n self.set_state(state)\n\n\n# used to create fully-formed param names, e.g. mymodule$$$mysubmodule.weight\n_MODULE_NAMESPACE_DIVIDER = \"$$$\"\n\n\ndef param_with_module_name(pyro_name, param_name):\n return _MODULE_NAMESPACE_DIVIDER.join([pyro_name, param_name])\n\n\ndef module_from_param_with_module_name(param_name):\n return param_name.split(_MODULE_NAMESPACE_DIVIDER)[0]\n\n\ndef user_param_name(param_name):\n if _MODULE_NAMESPACE_DIVIDER in param_name:\n return param_name.split(_MODULE_NAMESPACE_DIVIDER)[1]\n return param_name\n", "path": "pyro/params/param_store.py"}]}
| 3,901 | 123 |
gh_patches_debug_25304
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-8744
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Home page - The 'Continue learning on your own' section is not displayed
## Observed behavior
The 'Continue learning on your own' section is not displayed on the Home page even though I have completed all assigned class resources and have some non-class resources in progress.
## Expected behavior
The 'Continue learning on your own' section should be displayed as specified in [Notion](https://www.notion.so/Home-Page-QA-8ffb1011fa034e21bc1f52a2aad585ac).
## Steps to reproduce the issue
1. Install the following [build.](https://github.com/learningequality/kolibri/releases/tag/v0.15.0-beta2)
2. As a Coach, create a new Learner user, enroll that user in a Class, and assign a lesson with resources.
3. Sign in with the Learner user and complete the assigned resources.
4. Start looking at some non-class resources so that they get marked as in progress.
5. Go back to the Home page and look for the 'Continue learning on your own' section
## Additional information

## Usage Details
- OS: Windows 10
- Browser: Chrome
</issue>
<code>
[start of kolibri/plugins/learn/viewsets.py]
1 from django.db.models import Count
2 from django.db.models import OuterRef
3 from django.db.models import Q
4 from django.db.models import Subquery
5 from django.db.models import Sum
6 from django.db.models.fields import IntegerField
7 from rest_framework.permissions import IsAuthenticated
8 from rest_framework.response import Response
9 from rest_framework.views import APIView
10
11 from kolibri.core.api import ReadOnlyValuesViewset
12 from kolibri.core.auth.api import KolibriAuthPermissionsFilter
13 from kolibri.core.auth.models import Classroom
14 from kolibri.core.auth.models import Facility
15 from kolibri.core.content.api import ContentNodeProgressViewset
16 from kolibri.core.content.api import ContentNodeViewset
17 from kolibri.core.content.api import UserContentNodeViewset
18 from kolibri.core.exams.models import Exam
19 from kolibri.core.lessons.models import Lesson
20 from kolibri.core.logger.models import AttemptLog
21 from kolibri.core.logger.models import MasteryLog
22
23
24 contentnode_progress_viewset = ContentNodeProgressViewset()
25 contentnode_viewset = ContentNodeViewset()
26 user_contentnode_viewset = UserContentNodeViewset()
27
28
29 class LearnStateView(APIView):
30 def get(self, request, format=None):
31 if request.user.is_anonymous():
32 default_facility = Facility.get_default_facility()
33 can_download_content = (
34 default_facility.dataset.show_download_button_in_learn
35 if default_facility
36 else True
37 )
38 return Response(
39 {
40 "in_classes": False,
41 "can_download_content": can_download_content,
42 }
43 )
44 return Response(
45 {
46 "in_classes": request.user.memberships.exists(),
47 "can_download_content": request.user.dataset.show_download_button_in_learn,
48 }
49 )
50
51
52 class LearnerClassroomViewset(ReadOnlyValuesViewset):
53 """
54 Returns all Classrooms for which the requesting User is a member,
55 along with all associated assignments.
56 """
57
58 filter_backends = (KolibriAuthPermissionsFilter,)
59 permission_classes = (IsAuthenticated,)
60
61 values = ("id", "name")
62
63 def get_queryset(self):
64 if self.request.user.is_anonymous():
65 return Classroom.objects.none()
66 return Classroom.objects.filter(membership__user=self.request.user)
67
68 def consolidate(self, items, queryset):
69 if not items:
70 return items
71 lessons = (
72 Lesson.objects.filter(
73 lesson_assignments__collection__membership__user=self.request.user,
74 is_active=True,
75 collection__in=(c["id"] for c in items),
76 )
77 .distinct()
78 .values(
79 "description", "id", "is_active", "title", "resources", "collection"
80 )
81 )
82 lesson_contentnode_ids = set()
83 for lesson in lessons:
84 lesson_contentnode_ids |= {
85 resource["contentnode_id"] for resource in lesson["resources"]
86 }
87
88 contentnode_progress = contentnode_progress_viewset.serialize_list(
89 self.request, {"ids": lesson_contentnode_ids}
90 )
91
92 contentnodes = contentnode_viewset.serialize_list(
93 self.request, {"ids": lesson_contentnode_ids}
94 )
95
96 progress_map = {l["content_id"]: l["progress"] for l in contentnode_progress}
97
98 contentnode_map = {c["id"]: c for c in contentnodes}
99
100 for lesson in lessons:
101 lesson["progress"] = {
102 "resource_progress": sum(
103 (
104 progress_map[resource["content_id"]]
105 for resource in lesson["resources"]
106 if resource["content_id"] in progress_map
107 )
108 ),
109 "total_resources": len(lesson["resources"]),
110 }
111 for resource in lesson["resources"]:
112 resource["progress"] = progress_map.get(resource["content_id"], 0)
113 resource["contentnode"] = contentnode_map.get(
114 resource["contentnode_id"], None
115 )
116
117 user_masterylog_content_ids = MasteryLog.objects.filter(
118 user=self.request.user
119 ).values("summarylog__content_id")
120
121 exams = (
122 Exam.objects.filter(
123 assignments__collection__membership__user=self.request.user,
124 collection__in=(c["id"] for c in items),
125 )
126 .filter(Q(active=True) | Q(id__in=user_masterylog_content_ids))
127 .annotate(
128 closed=Subquery(
129 MasteryLog.objects.filter(
130 summarylog__content_id=OuterRef("id"), user=self.request.user
131 ).values("complete")[:1]
132 ),
133 score=Subquery(
134 AttemptLog.objects.filter(
135 sessionlog__content_id=OuterRef("id"), user=self.request.user
136 )
137 .order_by()
138 .values_list("item")
139 .distinct()
140 .values("masterylog")
141 .annotate(total_correct=Sum("correct"))
142 .values("total_correct"),
143 output_field=IntegerField(),
144 ),
145 answer_count=Subquery(
146 AttemptLog.objects.filter(
147 sessionlog__content_id=OuterRef("id"), user=self.request.user
148 )
149 .order_by()
150 .values_list("item")
151 .distinct()
152 .values("masterylog")
153 .annotate(total_complete=Count("id"))
154 .values("total_complete"),
155 output_field=IntegerField(),
156 ),
157 )
158 .distinct()
159 .values(
160 "collection",
161 "active",
162 "archive",
163 "id",
164 "question_count",
165 "title",
166 "closed",
167 "answer_count",
168 "score",
169 )
170 )
171
172 for exam in exams:
173 closed = exam.pop("closed")
174 score = exam.pop("score")
175 answer_count = exam.pop("answer_count")
176 if closed is not None:
177 exam["progress"] = {
178 "closed": closed,
179 "score": score,
180 "answer_count": answer_count,
181 "started": True,
182 }
183 else:
184 exam["progress"] = {
185 "score": None,
186 "answer_count": None,
187 "closed": None,
188 "started": False,
189 }
190 out_items = []
191 for item in items:
192 item["assignments"] = {
193 "exams": [exam for exam in exams if exam["collection"] == item["id"]],
194 "lessons": [
195 lesson for lesson in lessons if lesson["collection"] == item["id"]
196 ],
197 }
198 out_items.append(item)
199 return out_items
200
201
202 learner_classroom_viewset = LearnerClassroomViewset()
203
204
205 def _resumable_resources(classrooms):
206 for classroom in classrooms:
207 for lesson in classroom["assignments"]["lessons"]:
208 for resource in lesson["resources"]:
209 yield resource["progress"] > 0
210
211
212 class LearnHomePageHydrationView(APIView):
213 def get(self, request, format=None):
214 classrooms = []
215 resumable_resources = []
216 resumable_resources_progress = []
217 if not request.user.is_anonymous():
218 classrooms = learner_classroom_viewset.serialize_list(request)
219 if not classrooms or not any(_resumable_resources(classrooms)):
220 resumable_resources = user_contentnode_viewset.serialize_list(
221 request, {"resume": True, "max_results": 12}
222 )
223 resumable_resources_progress = (
224 contentnode_progress_viewset.serialize_list(
225 request, {"resume": True, "max_results": 12}
226 )
227 )
228
229 return Response(
230 {
231 "classrooms": classrooms,
232 "resumable_resources": resumable_resources,
233 "resumable_resources_progress": resumable_resources_progress,
234 }
235 )
236
237
238 def _map_lesson_classroom(item):
239 return {
240 "id": item.pop("collection__id"),
241 "name": item.pop("collection__name"),
242 "parent": item.pop("collection__parent_id"),
243 }
244
245
246 class LearnerLessonViewset(ReadOnlyValuesViewset):
247 """
248 Special Viewset for Learners to view Lessons to which they are assigned.
249 The core Lesson Viewset is locked down to Admin users only.
250 """
251
252 permission_classes = (IsAuthenticated,)
253
254 values = (
255 "id",
256 "title",
257 "description",
258 "resources",
259 "is_active",
260 "collection",
261 "collection__id",
262 "collection__name",
263 "collection__parent_id",
264 )
265
266 field_map = {"classroom": _map_lesson_classroom}
267
268 def get_queryset(self):
269 if self.request.user.is_anonymous():
270 return Lesson.objects.none()
271 return Lesson.objects.filter(
272 lesson_assignments__collection__membership__user=self.request.user,
273 is_active=True,
274 )
275
[end of kolibri/plugins/learn/viewsets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kolibri/plugins/learn/viewsets.py b/kolibri/plugins/learn/viewsets.py
--- a/kolibri/plugins/learn/viewsets.py
+++ b/kolibri/plugins/learn/viewsets.py
@@ -85,12 +85,20 @@
resource["contentnode_id"] for resource in lesson["resources"]
}
- contentnode_progress = contentnode_progress_viewset.serialize_list(
- self.request, {"ids": lesson_contentnode_ids}
+ contentnode_progress = (
+ contentnode_progress_viewset.serialize_list(
+ self.request, {"ids": lesson_contentnode_ids}
+ )
+ if lesson_contentnode_ids
+ else []
)
- contentnodes = contentnode_viewset.serialize_list(
- self.request, {"ids": lesson_contentnode_ids}
+ contentnodes = (
+ contentnode_viewset.serialize_list(
+ self.request, {"ids": lesson_contentnode_ids}
+ )
+ if lesson_contentnode_ids
+ else []
)
progress_map = {l["content_id"]: l["progress"] for l in contentnode_progress}
@@ -206,7 +214,7 @@
for classroom in classrooms:
for lesson in classroom["assignments"]["lessons"]:
for resource in lesson["resources"]:
- yield resource["progress"] > 0
+ yield 0 < resource["progress"] < 1
class LearnHomePageHydrationView(APIView):
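The second hunk also tightens what counts as a resumable class resource: progress now has to be strictly between 0 and 1, so lessons the learner has fully completed no longer suppress the 'Continue learning on your own' fallback. A small self-contained sketch of that check with hypothetical progress values, independent of the Django models:

```python
# Sketch of the patched _resumable_resources condition with made-up data.
def resumable(classrooms):
    for classroom in classrooms:
        for lesson in classroom["assignments"]["lessons"]:
            for resource in lesson["resources"]:
                yield 0 < resource["progress"] < 1

classrooms = [{"assignments": {"lessons": [{"resources": [
    {"progress": 1.0},   # completed assigned resource
    {"progress": 0.0},   # never started
]}]}}]

# No class resource is partially done, so the home page hydration view
# falls back to the learner's own in-progress resources.
assert not any(resumable(classrooms))
```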
|
{"golden_diff": "diff --git a/kolibri/plugins/learn/viewsets.py b/kolibri/plugins/learn/viewsets.py\n--- a/kolibri/plugins/learn/viewsets.py\n+++ b/kolibri/plugins/learn/viewsets.py\n@@ -85,12 +85,20 @@\n resource[\"contentnode_id\"] for resource in lesson[\"resources\"]\n }\n \n- contentnode_progress = contentnode_progress_viewset.serialize_list(\n- self.request, {\"ids\": lesson_contentnode_ids}\n+ contentnode_progress = (\n+ contentnode_progress_viewset.serialize_list(\n+ self.request, {\"ids\": lesson_contentnode_ids}\n+ )\n+ if lesson_contentnode_ids\n+ else []\n )\n \n- contentnodes = contentnode_viewset.serialize_list(\n- self.request, {\"ids\": lesson_contentnode_ids}\n+ contentnodes = (\n+ contentnode_viewset.serialize_list(\n+ self.request, {\"ids\": lesson_contentnode_ids}\n+ )\n+ if lesson_contentnode_ids\n+ else []\n )\n \n progress_map = {l[\"content_id\"]: l[\"progress\"] for l in contentnode_progress}\n@@ -206,7 +214,7 @@\n for classroom in classrooms:\n for lesson in classroom[\"assignments\"][\"lessons\"]:\n for resource in lesson[\"resources\"]:\n- yield resource[\"progress\"] > 0\n+ yield 0 < resource[\"progress\"] < 1\n \n \n class LearnHomePageHydrationView(APIView):\n", "issue": "Home page - The 'Continue learning on your own' section is not displayed\n## Observed behavior\r\nThe 'Continue learning on your own' section is not displayed at the Home page even though I have completed all assigned class resources and have some non-class resources in progress.\r\n\r\n## Expected behavior\r\nThe 'Continue learning on your own' section should be displayed as specified in [Notion](https://www.notion.so/Home-Page-QA-8ffb1011fa034e21bc1f52a2aad585ac).\r\n\r\n## Steps to reproduce the issue\r\n1. Install the following [build.](https://github.com/learningequality/kolibri/releases/tag/v0.15.0-beta2)\r\n2. As a Coach create a new Learner user, enroll that user to a Class and assign a lesson with resources.\r\n3. Sign in with the Learner user and complete the assigned resources.\r\n4. Start looking at some non-classes resources so that those will get marked as being in progress.\r\n5. 
Go back to the Home page and look for the 'Continue learning on your own' section\r\n\r\n## Additional information\r\n\r\n\r\n\r\n## Usage Details\r\n - OS: Windows 10\r\n - Browser: Chrome\n", "before_files": [{"content": "from django.db.models import Count\nfrom django.db.models import OuterRef\nfrom django.db.models import Q\nfrom django.db.models import Subquery\nfrom django.db.models import Sum\nfrom django.db.models.fields import IntegerField\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom kolibri.core.api import ReadOnlyValuesViewset\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.content.api import ContentNodeProgressViewset\nfrom kolibri.core.content.api import ContentNodeViewset\nfrom kolibri.core.content.api import UserContentNodeViewset\nfrom kolibri.core.exams.models import Exam\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.logger.models import AttemptLog\nfrom kolibri.core.logger.models import MasteryLog\n\n\ncontentnode_progress_viewset = ContentNodeProgressViewset()\ncontentnode_viewset = ContentNodeViewset()\nuser_contentnode_viewset = UserContentNodeViewset()\n\n\nclass LearnStateView(APIView):\n def get(self, request, format=None):\n if request.user.is_anonymous():\n default_facility = Facility.get_default_facility()\n can_download_content = (\n default_facility.dataset.show_download_button_in_learn\n if default_facility\n else True\n )\n return Response(\n {\n \"in_classes\": False,\n \"can_download_content\": can_download_content,\n }\n )\n return Response(\n {\n \"in_classes\": request.user.memberships.exists(),\n \"can_download_content\": request.user.dataset.show_download_button_in_learn,\n }\n )\n\n\nclass LearnerClassroomViewset(ReadOnlyValuesViewset):\n \"\"\"\n Returns all Classrooms for which the requesting User is a member,\n along with all associated assignments.\n \"\"\"\n\n filter_backends = (KolibriAuthPermissionsFilter,)\n permission_classes = (IsAuthenticated,)\n\n values = (\"id\", \"name\")\n\n def get_queryset(self):\n if self.request.user.is_anonymous():\n return Classroom.objects.none()\n return Classroom.objects.filter(membership__user=self.request.user)\n\n def consolidate(self, items, queryset):\n if not items:\n return items\n lessons = (\n Lesson.objects.filter(\n lesson_assignments__collection__membership__user=self.request.user,\n is_active=True,\n collection__in=(c[\"id\"] for c in items),\n )\n .distinct()\n .values(\n \"description\", \"id\", \"is_active\", \"title\", \"resources\", \"collection\"\n )\n )\n lesson_contentnode_ids = set()\n for lesson in lessons:\n lesson_contentnode_ids |= {\n resource[\"contentnode_id\"] for resource in lesson[\"resources\"]\n }\n\n contentnode_progress = contentnode_progress_viewset.serialize_list(\n self.request, {\"ids\": lesson_contentnode_ids}\n )\n\n contentnodes = contentnode_viewset.serialize_list(\n self.request, {\"ids\": lesson_contentnode_ids}\n )\n\n progress_map = {l[\"content_id\"]: l[\"progress\"] for l in contentnode_progress}\n\n contentnode_map = {c[\"id\"]: c for c in contentnodes}\n\n for lesson in lessons:\n lesson[\"progress\"] = {\n \"resource_progress\": sum(\n (\n progress_map[resource[\"content_id\"]]\n for resource in lesson[\"resources\"]\n if resource[\"content_id\"] in progress_map\n )\n ),\n \"total_resources\": 
len(lesson[\"resources\"]),\n }\n for resource in lesson[\"resources\"]:\n resource[\"progress\"] = progress_map.get(resource[\"content_id\"], 0)\n resource[\"contentnode\"] = contentnode_map.get(\n resource[\"contentnode_id\"], None\n )\n\n user_masterylog_content_ids = MasteryLog.objects.filter(\n user=self.request.user\n ).values(\"summarylog__content_id\")\n\n exams = (\n Exam.objects.filter(\n assignments__collection__membership__user=self.request.user,\n collection__in=(c[\"id\"] for c in items),\n )\n .filter(Q(active=True) | Q(id__in=user_masterylog_content_ids))\n .annotate(\n closed=Subquery(\n MasteryLog.objects.filter(\n summarylog__content_id=OuterRef(\"id\"), user=self.request.user\n ).values(\"complete\")[:1]\n ),\n score=Subquery(\n AttemptLog.objects.filter(\n sessionlog__content_id=OuterRef(\"id\"), user=self.request.user\n )\n .order_by()\n .values_list(\"item\")\n .distinct()\n .values(\"masterylog\")\n .annotate(total_correct=Sum(\"correct\"))\n .values(\"total_correct\"),\n output_field=IntegerField(),\n ),\n answer_count=Subquery(\n AttemptLog.objects.filter(\n sessionlog__content_id=OuterRef(\"id\"), user=self.request.user\n )\n .order_by()\n .values_list(\"item\")\n .distinct()\n .values(\"masterylog\")\n .annotate(total_complete=Count(\"id\"))\n .values(\"total_complete\"),\n output_field=IntegerField(),\n ),\n )\n .distinct()\n .values(\n \"collection\",\n \"active\",\n \"archive\",\n \"id\",\n \"question_count\",\n \"title\",\n \"closed\",\n \"answer_count\",\n \"score\",\n )\n )\n\n for exam in exams:\n closed = exam.pop(\"closed\")\n score = exam.pop(\"score\")\n answer_count = exam.pop(\"answer_count\")\n if closed is not None:\n exam[\"progress\"] = {\n \"closed\": closed,\n \"score\": score,\n \"answer_count\": answer_count,\n \"started\": True,\n }\n else:\n exam[\"progress\"] = {\n \"score\": None,\n \"answer_count\": None,\n \"closed\": None,\n \"started\": False,\n }\n out_items = []\n for item in items:\n item[\"assignments\"] = {\n \"exams\": [exam for exam in exams if exam[\"collection\"] == item[\"id\"]],\n \"lessons\": [\n lesson for lesson in lessons if lesson[\"collection\"] == item[\"id\"]\n ],\n }\n out_items.append(item)\n return out_items\n\n\nlearner_classroom_viewset = LearnerClassroomViewset()\n\n\ndef _resumable_resources(classrooms):\n for classroom in classrooms:\n for lesson in classroom[\"assignments\"][\"lessons\"]:\n for resource in lesson[\"resources\"]:\n yield resource[\"progress\"] > 0\n\n\nclass LearnHomePageHydrationView(APIView):\n def get(self, request, format=None):\n classrooms = []\n resumable_resources = []\n resumable_resources_progress = []\n if not request.user.is_anonymous():\n classrooms = learner_classroom_viewset.serialize_list(request)\n if not classrooms or not any(_resumable_resources(classrooms)):\n resumable_resources = user_contentnode_viewset.serialize_list(\n request, {\"resume\": True, \"max_results\": 12}\n )\n resumable_resources_progress = (\n contentnode_progress_viewset.serialize_list(\n request, {\"resume\": True, \"max_results\": 12}\n )\n )\n\n return Response(\n {\n \"classrooms\": classrooms,\n \"resumable_resources\": resumable_resources,\n \"resumable_resources_progress\": resumable_resources_progress,\n }\n )\n\n\ndef _map_lesson_classroom(item):\n return {\n \"id\": item.pop(\"collection__id\"),\n \"name\": item.pop(\"collection__name\"),\n \"parent\": item.pop(\"collection__parent_id\"),\n }\n\n\nclass LearnerLessonViewset(ReadOnlyValuesViewset):\n \"\"\"\n Special Viewset for Learners to 
view Lessons to which they are assigned.\n The core Lesson Viewset is locked down to Admin users only.\n \"\"\"\n\n permission_classes = (IsAuthenticated,)\n\n values = (\n \"id\",\n \"title\",\n \"description\",\n \"resources\",\n \"is_active\",\n \"collection\",\n \"collection__id\",\n \"collection__name\",\n \"collection__parent_id\",\n )\n\n field_map = {\"classroom\": _map_lesson_classroom}\n\n def get_queryset(self):\n if self.request.user.is_anonymous():\n return Lesson.objects.none()\n return Lesson.objects.filter(\n lesson_assignments__collection__membership__user=self.request.user,\n is_active=True,\n )\n", "path": "kolibri/plugins/learn/viewsets.py"}]}
| 3,408 | 319 |
gh_patches_debug_28
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-1889
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Run tests on Windows in CI
</issue>
<code>
[start of docs/getting_started/flask_example.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # flask_example.py
16 import flask
17 import requests
18
19 from opentelemetry import trace
20 from opentelemetry.instrumentation.flask import FlaskInstrumentor
21 from opentelemetry.instrumentation.requests import RequestsInstrumentor
22 from opentelemetry.sdk.trace import TracerProvider
23 from opentelemetry.sdk.trace.export import (
24 BatchSpanProcessor,
25 ConsoleSpanExporter,
26 )
27
28 trace.set_tracer_provider(TracerProvider())
29 trace.get_tracer_provider().add_span_processor(
30 BatchSpanProcessor(ConsoleSpanExporter())
31 )
32
33 app = flask.Flask(__name__)
34 FlaskInstrumentor().instrument_app(app)
35 RequestsInstrumentor().instrument()
36
37 tracer = trace.get_tracer(__name__)
38
39
40 @app.route("/")
41 def hello():
42 with tracer.start_as_current_span("example-request"):
43 requests.get("http://www.example.com")
44 return "hello"
45
46
47 app.run(debug=True, port=5000)
48
[end of docs/getting_started/flask_example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/getting_started/flask_example.py b/docs/getting_started/flask_example.py
--- a/docs/getting_started/flask_example.py
+++ b/docs/getting_started/flask_example.py
@@ -44,4 +44,4 @@
return "hello"
-app.run(debug=True, port=5000)
+app.run(port=5000)
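Dropping `debug=True` also disables Flask's auto-reloader, which re-executes the module in a child process; that is presumably what tripped up the documentation tests on Windows runners. If debug output were still wanted locally, one common variant (not what the patch does) is to keep the debugger but disable only the reloader:

```python
# Variant sketch: debugger on, auto-restart child process off.
import flask

app = flask.Flask(__name__)


@app.route("/")
def hello():
    return "hello"


app.run(debug=True, use_reloader=False, port=5000)
```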
|
{"golden_diff": "diff --git a/docs/getting_started/flask_example.py b/docs/getting_started/flask_example.py\n--- a/docs/getting_started/flask_example.py\n+++ b/docs/getting_started/flask_example.py\n@@ -44,4 +44,4 @@\n return \"hello\"\n \n \n-app.run(debug=True, port=5000)\n+app.run(port=5000)\n", "issue": "Run tests on Windows in CI\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# flask_example.py\nimport flask\nimport requests\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.flask import FlaskInstrumentor\nfrom opentelemetry.instrumentation.requests import RequestsInstrumentor\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import (\n BatchSpanProcessor,\n ConsoleSpanExporter,\n)\n\ntrace.set_tracer_provider(TracerProvider())\ntrace.get_tracer_provider().add_span_processor(\n BatchSpanProcessor(ConsoleSpanExporter())\n)\n\napp = flask.Flask(__name__)\nFlaskInstrumentor().instrument_app(app)\nRequestsInstrumentor().instrument()\n\ntracer = trace.get_tracer(__name__)\n\n\[email protected](\"/\")\ndef hello():\n with tracer.start_as_current_span(\"example-request\"):\n requests.get(\"http://www.example.com\")\n return \"hello\"\n\n\napp.run(debug=True, port=5000)\n", "path": "docs/getting_started/flask_example.py"}]}
| 951 | 84 |
gh_patches_debug_43703
|
rasdani/github-patches
|
git_diff
|
huggingface__optimum-1660
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Whisper-v3 ValueError: Transformers now supports natively BetterTransformer optimizations
### System Info
```shell
Nvidia Docker Container 23.12
xFormers 0.0.24+6600003.d20240116
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.decoderF: available
[email protected]: available
[email protected]: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: unavailable
memory_efficient_attention.tritonflashattB: unavailable
memory_efficient_attention.triton_splitKF: available
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
pytorch.version: 2.2.0a0+81ea7a4
pytorch.cuda: available
gpu.compute_capability: 8.9
gpu.name: NVIDIA GeForce RTX 4090
dcgm_profiler: unavailable
build.info: available
build.cuda_version: 1203
build.python_version: 3.10.12
build.torch_version: 2.2.0a0+81ea7a4
build.env.TORCH_CUDA_ARCH_LIST: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX
build.env.XFORMERS_BUILD_TYPE: None
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: None
build.nvcc_version: 12.3.107
source.privacy: open source
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
Use the code from:
https://huggingface.co/spaces/primeline/whisper-german/blob/main/app.py
```bash
python app.py
Traceback (most recent call last):
File "/app/app.py", line 23, in <module>
model = model.to_bettertransformer()
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4314, in to_bettertransformer
return BetterTransformer.transform(self)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/optimum/bettertransformer/transformation.py", line 211, in transform
raise ValueError(
ValueError: Transformers now supports natively BetterTransformer optimizations (torch.nn.functional.scaled_dot_product_attention) for the model type whisper. Please upgrade to transformers>=4.36 and torch>=2.1.1 to use it. Details: https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention
```
### Expected behavior
The script runs end to end without raising the `ValueError`.
</issue>
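Independently of how Optimum resolves the check internally, the traceback already names a caller-side remedy: with transformers>=4.36 and torch>=2.1.1, Whisper uses `torch.nn.functional.scaled_dot_product_attention` natively, so the manual `model.to_bettertransformer()` call can simply be dropped. A minimal sketch under that assumption; the checkpoint name is a placeholder rather than the linked app's actual configuration:

```python
# Sketch: rely on native SDPA attention instead of BetterTransformer.
import torch
from transformers import AutoModelForSpeechSeq2Seq

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-large-v3",            # placeholder checkpoint
    torch_dtype=torch.float16,
    attn_implementation="sdpa",           # native scaled_dot_product_attention
)
# No model.to_bettertransformer() call is needed on these versions.
```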
<code>
[start of optimum/utils/import_utils.py]
1 # Copyright 2022 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Import utilities."""
15
16 import importlib.util
17 import inspect
18 import sys
19 from collections import OrderedDict
20 from contextlib import contextmanager
21 from typing import Tuple, Union
22
23 import numpy as np
24 import packaging
25 from transformers.utils import is_torch_available
26
27
28 def _is_package_available(pkg_name: str, return_version: bool = False) -> Union[Tuple[bool, str], bool]:
29 # Check we're not importing a "pkg_name" directory somewhere but the actual library by trying to grab the version
30 package_exists = importlib.util.find_spec(pkg_name) is not None
31 package_version = "N/A"
32 if package_exists:
33 try:
34 package_version = importlib.metadata.version(pkg_name)
35 package_exists = True
36 except importlib.metadata.PackageNotFoundError:
37 package_exists = False
38 if return_version:
39 return package_exists, package_version
40 else:
41 return package_exists
42
43
44 # The package importlib_metadata is in a different place, depending on the python version.
45 if sys.version_info < (3, 8):
46 import importlib_metadata
47 else:
48 import importlib.metadata as importlib_metadata
49
50
51 TORCH_MINIMUM_VERSION = packaging.version.parse("1.11.0")
52 TRANSFORMERS_MINIMUM_VERSION = packaging.version.parse("4.25.0")
53 DIFFUSERS_MINIMUM_VERSION = packaging.version.parse("0.18.0")
54 AUTOGPTQ_MINIMUM_VERSION = packaging.version.parse("0.4.99") # Allows 0.5.0.dev0
55
56
57 # This is the minimal required version to support some ONNX Runtime features
58 ORT_QUANTIZE_MINIMUM_VERSION = packaging.version.parse("1.4.0")
59
60
61 _onnx_available = _is_package_available("onnx")
62
63 # importlib.metadata.version seem to not be robust with the ONNX Runtime extensions (`onnxruntime-gpu`, etc.)
64 _onnxruntime_available = importlib.util.find_spec("onnxruntime") is not None
65
66 _pydantic_available = _is_package_available("pydantic")
67 _accelerate_available = _is_package_available("accelerate")
68 _diffusers_available = _is_package_available("diffusers")
69 _auto_gptq_available = _is_package_available("auto_gptq")
70 _timm_available = _is_package_available("timm")
71 _sentence_transformers_available = _is_package_available("sentence_transformers")
72
73 torch_version = None
74 if is_torch_available():
75 torch_version = packaging.version.parse(importlib_metadata.version("torch"))
76
77 _is_torch_onnx_support_available = is_torch_available() and (
78 TORCH_MINIMUM_VERSION.major,
79 TORCH_MINIMUM_VERSION.minor,
80 ) <= (
81 torch_version.major,
82 torch_version.minor,
83 )
84
85
86 _diffusers_version = None
87 if _diffusers_available:
88 try:
89 _diffusers_version = importlib_metadata.version("diffusers")
90 except importlib_metadata.PackageNotFoundError:
91 _diffusers_available = False
92
93
94 def is_torch_onnx_support_available():
95 return _is_torch_onnx_support_available
96
97
98 def is_onnx_available():
99 return _onnx_available
100
101
102 def is_onnxruntime_available():
103 try:
104 # Try to import the source file of onnxruntime - if you run the tests from `tests` the function gets
105 # confused since there a folder named `onnxruntime` in `tests`. Therefore, `_onnxruntime_available`
106 # will be set to `True` even if not installed.
107 mod = importlib.import_module("onnxruntime")
108 inspect.getsourcefile(mod)
109 except Exception:
110 return False
111 return _onnxruntime_available
112
113
114 def is_pydantic_available():
115 return _pydantic_available
116
117
118 def is_accelerate_available():
119 return _accelerate_available
120
121
122 def is_diffusers_available():
123 return _diffusers_available
124
125
126 def is_timm_available():
127 return _timm_available
128
129
130 def is_sentence_transformers_available():
131 return _sentence_transformers_available
132
133
134 def is_auto_gptq_available():
135 if _auto_gptq_available:
136 version_autogptq = packaging.version.parse(importlib_metadata.version("auto_gptq"))
137 if AUTOGPTQ_MINIMUM_VERSION < version_autogptq:
138 return True
139 else:
140 raise ImportError(
141 f"Found an incompatible version of auto-gptq. Found version {version_autogptq}, but only version above {AUTOGPTQ_MINIMUM_VERSION} are supported"
142 )
143
144
145 @contextmanager
146 def check_if_pytorch_greater(target_version: str, message: str):
147 r"""
148 A context manager that does nothing except checking if the PyTorch version is greater than `pt_version`
149 """
150 import torch
151
152 if not packaging.version.parse(torch.__version__) >= packaging.version.parse(target_version):
153 raise ImportError(
154 f"Found an incompatible version of PyTorch. Found version {torch.__version__}, but only {target_version} and above are supported. {message}"
155 )
156 try:
157 yield
158 finally:
159 pass
160
161
162 def check_if_transformers_greater(target_version: Union[str, packaging.version.Version]) -> bool:
163 """
164 Checks whether the current install of transformers is greater than or equal to the target version.
165
166 Args:
167 target_version (`Union[str, packaging.version.Version]`): version used as the reference for comparison.
168
169 Returns:
170 bool: whether the check is True or not.
171 """
172 import transformers
173
174 if isinstance(target_version, str):
175 target_version = packaging.version.parse(target_version)
176
177 return packaging.version.parse(transformers.__version__) >= target_version
178
179
180 def check_if_diffusers_greater(target_version: str) -> bool:
181 """
182 Checks whether the current install of diffusers is greater than or equal to the target version.
183
184 Args:
185 target_version (str): version used as the reference for comparison.
186
187 Returns:
188 bool: whether the check is True or not.
189 """
190 if not _diffusers_available:
191 return False
192
193 return packaging.version.parse(_diffusers_version) >= packaging.version.parse(target_version)
194
195
196 @contextmanager
197 def require_numpy_strictly_lower(version: str, message: str):
198 if not packaging.version.parse(np.__version__) < packaging.version.parse(version):
199 raise ImportError(
200 f"Found an incompatible version of numpy. Found version {np.__version__}, but expected numpy<{version}. {message}"
201 )
202 try:
203 yield
204 finally:
205 pass
206
207
208 DIFFUSERS_IMPORT_ERROR = """
209 {0} requires the diffusers library but it was not found in your environment. You can install it with pip: `pip install
210 diffusers`. Please note that you may need to restart your runtime after installation.
211 """
212
213 TRANSFORMERS_IMPORT_ERROR = """requires the transformers>={0} library but it was not found in your environment. You can install it with pip: `pip install
214 -U transformers`. Please note that you may need to restart your runtime after installation.
215 """
216
217 BACKENDS_MAPPING = OrderedDict(
218 [
219 ("diffusers", (is_diffusers_available, DIFFUSERS_IMPORT_ERROR)),
220 (
221 "transformers_431",
222 (lambda: check_if_transformers_greater("4.31"), "{0} " + TRANSFORMERS_IMPORT_ERROR.format("4.31")),
223 ),
224 (
225 "transformers_432",
226 (lambda: check_if_transformers_greater("4.32"), "{0} " + TRANSFORMERS_IMPORT_ERROR.format("4.32")),
227 ),
228 (
229 "transformers_434",
230 (lambda: check_if_transformers_greater("4.34"), "{0} " + TRANSFORMERS_IMPORT_ERROR.format("4.34")),
231 ),
232 ]
233 )
234
235
236 def requires_backends(obj, backends):
237 if not isinstance(backends, (list, tuple)):
238 backends = [backends]
239
240 name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
241 checks = (BACKENDS_MAPPING[backend] for backend in backends)
242 failed = [msg.format(name) for available, msg in checks if not available()]
243 if failed:
244 raise ImportError("".join(failed))
245
246
247 # Copied from: https://github.com/huggingface/transformers/blob/v4.26.0/src/transformers/utils/import_utils.py#L1041
248 class DummyObject(type):
249 """
250 Metaclass for the dummy objects. Any class inheriting from it will return the ImportError generated by
251 `requires_backend` each time a user tries to access any method of that class.
252 """
253
254 def __getattr__(cls, key):
255 if key.startswith("_"):
256 return super().__getattr__(cls, key)
257 requires_backends(cls, cls._backends)
258
[end of optimum/utils/import_utils.py]
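A note on the listing above: these helpers are availability and version gates meant to be consumed by the rest of the package. As a rough usage sketch (the calling class below is hypothetical and not part of the repository), a backend-gated component might look like this:

```python
# Hypothetical consumer of the helpers defined in import_utils.py;
# assumes the module is importable as optimum.utils.import_utils.
from optimum.utils.import_utils import (
    check_if_transformers_greater,
    is_onnxruntime_available,
    requires_backends,
)


class OnnxRuntimeFeature:
    # Keys must match BACKENDS_MAPPING, e.g. "diffusers" or "transformers_432".
    _backends = ["transformers_432"]

    def __init__(self):
        # Raises a combined ImportError listing every missing backend.
        requires_backends(self, self._backends)
        if not is_onnxruntime_available():
            raise ImportError("onnxruntime is required for OnnxRuntimeFeature.")
        # Feature-gate newer behaviour on the installed transformers version.
        self.has_new_cache_api = check_if_transformers_greater("4.34")
```

The `DummyObject` metaclass at the end of the file automates the same check at attribute-access time for placeholder classes whose backend is absent.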
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/optimum/utils/import_utils.py b/optimum/utils/import_utils.py
--- a/optimum/utils/import_utils.py
+++ b/optimum/utils/import_utils.py
@@ -21,7 +21,7 @@
from typing import Tuple, Union
import numpy as np
-import packaging
+from packaging import version
from transformers.utils import is_torch_available
@@ -48,14 +48,14 @@
import importlib.metadata as importlib_metadata
-TORCH_MINIMUM_VERSION = packaging.version.parse("1.11.0")
-TRANSFORMERS_MINIMUM_VERSION = packaging.version.parse("4.25.0")
-DIFFUSERS_MINIMUM_VERSION = packaging.version.parse("0.18.0")
-AUTOGPTQ_MINIMUM_VERSION = packaging.version.parse("0.4.99") # Allows 0.5.0.dev0
+TORCH_MINIMUM_VERSION = version.parse("1.11.0")
+TRANSFORMERS_MINIMUM_VERSION = version.parse("4.25.0")
+DIFFUSERS_MINIMUM_VERSION = version.parse("0.18.0")
+AUTOGPTQ_MINIMUM_VERSION = version.parse("0.4.99") # Allows 0.5.0.dev0
# This is the minimal required version to support some ONNX Runtime features
-ORT_QUANTIZE_MINIMUM_VERSION = packaging.version.parse("1.4.0")
+ORT_QUANTIZE_MINIMUM_VERSION = version.parse("1.4.0")
_onnx_available = _is_package_available("onnx")
@@ -72,7 +72,7 @@
torch_version = None
if is_torch_available():
- torch_version = packaging.version.parse(importlib_metadata.version("torch"))
+ torch_version = version.parse(importlib_metadata.version("torch"))
_is_torch_onnx_support_available = is_torch_available() and (
TORCH_MINIMUM_VERSION.major,
@@ -133,7 +133,7 @@
def is_auto_gptq_available():
if _auto_gptq_available:
- version_autogptq = packaging.version.parse(importlib_metadata.version("auto_gptq"))
+ version_autogptq = version.parse(importlib_metadata.version("auto_gptq"))
if AUTOGPTQ_MINIMUM_VERSION < version_autogptq:
return True
else:
@@ -149,7 +149,7 @@
"""
import torch
- if not packaging.version.parse(torch.__version__) >= packaging.version.parse(target_version):
+ if not version.parse(torch.__version__) >= version.parse(target_version):
raise ImportError(
f"Found an incompatible version of PyTorch. Found version {torch.__version__}, but only {target_version} and above are supported. {message}"
)
@@ -159,7 +159,7 @@
pass
-def check_if_transformers_greater(target_version: Union[str, packaging.version.Version]) -> bool:
+def check_if_transformers_greater(target_version: Union[str, version.Version]) -> bool:
"""
Checks whether the current install of transformers is greater than or equal to the target version.
@@ -172,9 +172,9 @@
import transformers
if isinstance(target_version, str):
- target_version = packaging.version.parse(target_version)
+ target_version = version.parse(target_version)
- return packaging.version.parse(transformers.__version__) >= target_version
+ return version.parse(transformers.__version__) >= target_version
def check_if_diffusers_greater(target_version: str) -> bool:
@@ -190,12 +190,12 @@
if not _diffusers_available:
return False
- return packaging.version.parse(_diffusers_version) >= packaging.version.parse(target_version)
+ return version.parse(_diffusers_version) >= version.parse(target_version)
@contextmanager
-def require_numpy_strictly_lower(version: str, message: str):
- if not packaging.version.parse(np.__version__) < packaging.version.parse(version):
+def require_numpy_strictly_lower(package_version: str, message: str):
+ if not version.parse(np.__version__) < version.parse(package_version):
raise ImportError(
f"Found an incompatible version of numpy. Found version {np.__version__}, but expected numpy<{version}. {message}"
)
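The patch above is essentially a mechanical switch from attribute access on the top-level ``packaging`` module to an explicit ``from packaging import version`` import, plus renaming the ``version`` parameter that the new import would otherwise shadow. A minimal standalone sketch of the pattern it standardizes on (not taken from the repository):

```python
# Sketch of the import style adopted by the patch above.
# `import packaging` by itself does not guarantee that the `version`
# submodule is loaded, so it is imported explicitly instead.
import importlib.metadata

from packaging import version

TRANSFORMERS_MINIMUM = version.parse("4.25.0")


def transformers_is_new_enough() -> bool:
    # Assumes transformers is installed; otherwise importlib.metadata raises.
    installed = version.parse(importlib.metadata.version("transformers"))
    return installed >= TRANSFORMERS_MINIMUM
```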
|
{"golden_diff": "diff --git a/optimum/utils/import_utils.py b/optimum/utils/import_utils.py\n--- a/optimum/utils/import_utils.py\n+++ b/optimum/utils/import_utils.py\n@@ -21,7 +21,7 @@\n from typing import Tuple, Union\n \n import numpy as np\n-import packaging\n+from packaging import version\n from transformers.utils import is_torch_available\n \n \n@@ -48,14 +48,14 @@\n import importlib.metadata as importlib_metadata\n \n \n-TORCH_MINIMUM_VERSION = packaging.version.parse(\"1.11.0\")\n-TRANSFORMERS_MINIMUM_VERSION = packaging.version.parse(\"4.25.0\")\n-DIFFUSERS_MINIMUM_VERSION = packaging.version.parse(\"0.18.0\")\n-AUTOGPTQ_MINIMUM_VERSION = packaging.version.parse(\"0.4.99\") # Allows 0.5.0.dev0\n+TORCH_MINIMUM_VERSION = version.parse(\"1.11.0\")\n+TRANSFORMERS_MINIMUM_VERSION = version.parse(\"4.25.0\")\n+DIFFUSERS_MINIMUM_VERSION = version.parse(\"0.18.0\")\n+AUTOGPTQ_MINIMUM_VERSION = version.parse(\"0.4.99\") # Allows 0.5.0.dev0\n \n \n # This is the minimal required version to support some ONNX Runtime features\n-ORT_QUANTIZE_MINIMUM_VERSION = packaging.version.parse(\"1.4.0\")\n+ORT_QUANTIZE_MINIMUM_VERSION = version.parse(\"1.4.0\")\n \n \n _onnx_available = _is_package_available(\"onnx\")\n@@ -72,7 +72,7 @@\n \n torch_version = None\n if is_torch_available():\n- torch_version = packaging.version.parse(importlib_metadata.version(\"torch\"))\n+ torch_version = version.parse(importlib_metadata.version(\"torch\"))\n \n _is_torch_onnx_support_available = is_torch_available() and (\n TORCH_MINIMUM_VERSION.major,\n@@ -133,7 +133,7 @@\n \n def is_auto_gptq_available():\n if _auto_gptq_available:\n- version_autogptq = packaging.version.parse(importlib_metadata.version(\"auto_gptq\"))\n+ version_autogptq = version.parse(importlib_metadata.version(\"auto_gptq\"))\n if AUTOGPTQ_MINIMUM_VERSION < version_autogptq:\n return True\n else:\n@@ -149,7 +149,7 @@\n \"\"\"\n import torch\n \n- if not packaging.version.parse(torch.__version__) >= packaging.version.parse(target_version):\n+ if not version.parse(torch.__version__) >= version.parse(target_version):\n raise ImportError(\n f\"Found an incompatible version of PyTorch. Found version {torch.__version__}, but only {target_version} and above are supported. 
{message}\"\n )\n@@ -159,7 +159,7 @@\n pass\n \n \n-def check_if_transformers_greater(target_version: Union[str, packaging.version.Version]) -> bool:\n+def check_if_transformers_greater(target_version: Union[str, version.Version]) -> bool:\n \"\"\"\n Checks whether the current install of transformers is greater than or equal to the target version.\n \n@@ -172,9 +172,9 @@\n import transformers\n \n if isinstance(target_version, str):\n- target_version = packaging.version.parse(target_version)\n+ target_version = version.parse(target_version)\n \n- return packaging.version.parse(transformers.__version__) >= target_version\n+ return version.parse(transformers.__version__) >= target_version\n \n \n def check_if_diffusers_greater(target_version: str) -> bool:\n@@ -190,12 +190,12 @@\n if not _diffusers_available:\n return False\n \n- return packaging.version.parse(_diffusers_version) >= packaging.version.parse(target_version)\n+ return version.parse(_diffusers_version) >= version.parse(target_version)\n \n \n @contextmanager\n-def require_numpy_strictly_lower(version: str, message: str):\n- if not packaging.version.parse(np.__version__) < packaging.version.parse(version):\n+def require_numpy_strictly_lower(package_version: str, message: str):\n+ if not version.parse(np.__version__) < version.parse(package_version):\n raise ImportError(\n f\"Found an incompatible version of numpy. Found version {np.__version__}, but expected numpy<{version}. {message}\"\n )\n", "issue": "Whisper-v3 ValueError: Transformers now supports natively BetterTransformer optimizations\n### System Info\n\n```shell\nNvidia Docker Container 23.12\r\n\r\n\r\nxFormers 0.0.24+6600003.d20240116\r\nmemory_efficient_attention.cutlassF: available\r\nmemory_efficient_attention.cutlassB: available\r\nmemory_efficient_attention.decoderF: available\r\[email protected]: available\r\[email protected]: available\r\nmemory_efficient_attention.smallkF: available\r\nmemory_efficient_attention.smallkB: available\r\nmemory_efficient_attention.tritonflashattF: unavailable\r\nmemory_efficient_attention.tritonflashattB: unavailable\r\nmemory_efficient_attention.triton_splitKF: available\r\nindexing.scaled_index_addF: available\r\nindexing.scaled_index_addB: available\r\nindexing.index_select: available\r\nswiglu.dual_gemm_silu: available\r\nswiglu.gemm_fused_operand_sum: available\r\nswiglu.fused.p.cpp: available\r\nis_triton_available: True\r\npytorch.version: 2.2.0a0+81ea7a4\r\npytorch.cuda: available\r\ngpu.compute_capability: 8.9\r\ngpu.name: NVIDIA GeForce RTX 4090\r\ndcgm_profiler: unavailable\r\nbuild.info: available\r\nbuild.cuda_version: 1203\r\nbuild.python_version: 3.10.12\r\nbuild.torch_version: 2.2.0a0+81ea7a4\r\nbuild.env.TORCH_CUDA_ARCH_LIST: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX\r\nbuild.env.XFORMERS_BUILD_TYPE: None\r\nbuild.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None\r\nbuild.env.NVCC_FLAGS: None\r\nbuild.env.XFORMERS_PACKAGE_FROM: None\r\nbuild.nvcc_version: 12.3.107\r\nsource.privacy: open source\n```\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nuse code:\r\n\r\nhttps://huggingface.co/spaces/primeline/whisper-german/blob/main/app.py\r\n\r\n```bash\r\npython app.py \r\nTraceback (most recent call last):\r\n File \"/app/app.py\", line 
23, in <module>\r\n model = model.to_bettertransformer()\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py\", line 4314, in to_bettertransformer\r\n return BetterTransformer.transform(self)\r\n File \"/usr/lib/python3.10/contextlib.py\", line 79, in inner\r\n return func(*args, **kwds)\r\n File \"/usr/local/lib/python3.10/dist-packages/optimum/bettertransformer/transformation.py\", line 211, in transform\r\n raise ValueError(\r\nValueError: Transformers now supports natively BetterTransformer optimizations (torch.nn.functional.scaled_dot_product_attention) for the model type whisper. Please upgrade to transformers>=4.36 and torch>=2.1.1 to use it. Details: https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention\r\n```\n\n### Expected behavior\n\neverything is ok.\n", "before_files": [{"content": "# Copyright 2022 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Import utilities.\"\"\"\n\nimport importlib.util\nimport inspect\nimport sys\nfrom collections import OrderedDict\nfrom contextlib import contextmanager\nfrom typing import Tuple, Union\n\nimport numpy as np\nimport packaging\nfrom transformers.utils import is_torch_available\n\n\ndef _is_package_available(pkg_name: str, return_version: bool = False) -> Union[Tuple[bool, str], bool]:\n # Check we're not importing a \"pkg_name\" directory somewhere but the actual library by trying to grab the version\n package_exists = importlib.util.find_spec(pkg_name) is not None\n package_version = \"N/A\"\n if package_exists:\n try:\n package_version = importlib.metadata.version(pkg_name)\n package_exists = True\n except importlib.metadata.PackageNotFoundError:\n package_exists = False\n if return_version:\n return package_exists, package_version\n else:\n return package_exists\n\n\n# The package importlib_metadata is in a different place, depending on the python version.\nif sys.version_info < (3, 8):\n import importlib_metadata\nelse:\n import importlib.metadata as importlib_metadata\n\n\nTORCH_MINIMUM_VERSION = packaging.version.parse(\"1.11.0\")\nTRANSFORMERS_MINIMUM_VERSION = packaging.version.parse(\"4.25.0\")\nDIFFUSERS_MINIMUM_VERSION = packaging.version.parse(\"0.18.0\")\nAUTOGPTQ_MINIMUM_VERSION = packaging.version.parse(\"0.4.99\") # Allows 0.5.0.dev0\n\n\n# This is the minimal required version to support some ONNX Runtime features\nORT_QUANTIZE_MINIMUM_VERSION = packaging.version.parse(\"1.4.0\")\n\n\n_onnx_available = _is_package_available(\"onnx\")\n\n# importlib.metadata.version seem to not be robust with the ONNX Runtime extensions (`onnxruntime-gpu`, etc.)\n_onnxruntime_available = importlib.util.find_spec(\"onnxruntime\") is not None\n\n_pydantic_available = _is_package_available(\"pydantic\")\n_accelerate_available = _is_package_available(\"accelerate\")\n_diffusers_available = _is_package_available(\"diffusers\")\n_auto_gptq_available = 
_is_package_available(\"auto_gptq\")\n_timm_available = _is_package_available(\"timm\")\n_sentence_transformers_available = _is_package_available(\"sentence_transformers\")\n\ntorch_version = None\nif is_torch_available():\n torch_version = packaging.version.parse(importlib_metadata.version(\"torch\"))\n\n_is_torch_onnx_support_available = is_torch_available() and (\n TORCH_MINIMUM_VERSION.major,\n TORCH_MINIMUM_VERSION.minor,\n) <= (\n torch_version.major,\n torch_version.minor,\n)\n\n\n_diffusers_version = None\nif _diffusers_available:\n try:\n _diffusers_version = importlib_metadata.version(\"diffusers\")\n except importlib_metadata.PackageNotFoundError:\n _diffusers_available = False\n\n\ndef is_torch_onnx_support_available():\n return _is_torch_onnx_support_available\n\n\ndef is_onnx_available():\n return _onnx_available\n\n\ndef is_onnxruntime_available():\n try:\n # Try to import the source file of onnxruntime - if you run the tests from `tests` the function gets\n # confused since there a folder named `onnxruntime` in `tests`. Therefore, `_onnxruntime_available`\n # will be set to `True` even if not installed.\n mod = importlib.import_module(\"onnxruntime\")\n inspect.getsourcefile(mod)\n except Exception:\n return False\n return _onnxruntime_available\n\n\ndef is_pydantic_available():\n return _pydantic_available\n\n\ndef is_accelerate_available():\n return _accelerate_available\n\n\ndef is_diffusers_available():\n return _diffusers_available\n\n\ndef is_timm_available():\n return _timm_available\n\n\ndef is_sentence_transformers_available():\n return _sentence_transformers_available\n\n\ndef is_auto_gptq_available():\n if _auto_gptq_available:\n version_autogptq = packaging.version.parse(importlib_metadata.version(\"auto_gptq\"))\n if AUTOGPTQ_MINIMUM_VERSION < version_autogptq:\n return True\n else:\n raise ImportError(\n f\"Found an incompatible version of auto-gptq. Found version {version_autogptq}, but only version above {AUTOGPTQ_MINIMUM_VERSION} are supported\"\n )\n\n\n@contextmanager\ndef check_if_pytorch_greater(target_version: str, message: str):\n r\"\"\"\n A context manager that does nothing except checking if the PyTorch version is greater than `pt_version`\n \"\"\"\n import torch\n\n if not packaging.version.parse(torch.__version__) >= packaging.version.parse(target_version):\n raise ImportError(\n f\"Found an incompatible version of PyTorch. Found version {torch.__version__}, but only {target_version} and above are supported. 
{message}\"\n )\n try:\n yield\n finally:\n pass\n\n\ndef check_if_transformers_greater(target_version: Union[str, packaging.version.Version]) -> bool:\n \"\"\"\n Checks whether the current install of transformers is greater than or equal to the target version.\n\n Args:\n target_version (`Union[str, packaging.version.Version]`): version used as the reference for comparison.\n\n Returns:\n bool: whether the check is True or not.\n \"\"\"\n import transformers\n\n if isinstance(target_version, str):\n target_version = packaging.version.parse(target_version)\n\n return packaging.version.parse(transformers.__version__) >= target_version\n\n\ndef check_if_diffusers_greater(target_version: str) -> bool:\n \"\"\"\n Checks whether the current install of diffusers is greater than or equal to the target version.\n\n Args:\n target_version (str): version used as the reference for comparison.\n\n Returns:\n bool: whether the check is True or not.\n \"\"\"\n if not _diffusers_available:\n return False\n\n return packaging.version.parse(_diffusers_version) >= packaging.version.parse(target_version)\n\n\n@contextmanager\ndef require_numpy_strictly_lower(version: str, message: str):\n if not packaging.version.parse(np.__version__) < packaging.version.parse(version):\n raise ImportError(\n f\"Found an incompatible version of numpy. Found version {np.__version__}, but expected numpy<{version}. {message}\"\n )\n try:\n yield\n finally:\n pass\n\n\nDIFFUSERS_IMPORT_ERROR = \"\"\"\n{0} requires the diffusers library but it was not found in your environment. You can install it with pip: `pip install\ndiffusers`. Please note that you may need to restart your runtime after installation.\n\"\"\"\n\nTRANSFORMERS_IMPORT_ERROR = \"\"\"requires the transformers>={0} library but it was not found in your environment. You can install it with pip: `pip install\n-U transformers`. Please note that you may need to restart your runtime after installation.\n\"\"\"\n\nBACKENDS_MAPPING = OrderedDict(\n [\n (\"diffusers\", (is_diffusers_available, DIFFUSERS_IMPORT_ERROR)),\n (\n \"transformers_431\",\n (lambda: check_if_transformers_greater(\"4.31\"), \"{0} \" + TRANSFORMERS_IMPORT_ERROR.format(\"4.31\")),\n ),\n (\n \"transformers_432\",\n (lambda: check_if_transformers_greater(\"4.32\"), \"{0} \" + TRANSFORMERS_IMPORT_ERROR.format(\"4.32\")),\n ),\n (\n \"transformers_434\",\n (lambda: check_if_transformers_greater(\"4.34\"), \"{0} \" + TRANSFORMERS_IMPORT_ERROR.format(\"4.34\")),\n ),\n ]\n)\n\n\ndef requires_backends(obj, backends):\n if not isinstance(backends, (list, tuple)):\n backends = [backends]\n\n name = obj.__name__ if hasattr(obj, \"__name__\") else obj.__class__.__name__\n checks = (BACKENDS_MAPPING[backend] for backend in backends)\n failed = [msg.format(name) for available, msg in checks if not available()]\n if failed:\n raise ImportError(\"\".join(failed))\n\n\n# Copied from: https://github.com/huggingface/transformers/blob/v4.26.0/src/transformers/utils/import_utils.py#L1041\nclass DummyObject(type):\n \"\"\"\n Metaclass for the dummy objects. Any class inheriting from it will return the ImportError generated by\n `requires_backend` each time a user tries to access any method of that class.\n \"\"\"\n\n def __getattr__(cls, key):\n if key.startswith(\"_\"):\n return super().__getattr__(cls, key)\n requires_backends(cls, cls._backends)\n", "path": "optimum/utils/import_utils.py"}]}
| 4,088 | 945 |
gh_patches_debug_37092
|
rasdani/github-patches
|
git_diff
|
PennyLaneAI__pennylane-4800
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] compile pre-decomposes at most once
### Expected behavior
I would expect that if I apply qml.compile() to a template and to its adjoint, the resulting decompositions would be equivalent up to taking the adjoint. However, the depth of the decomposition is different:
<img width="300" alt="Captura de pantalla 2023-11-07 a las 13 58 38" src="https://github.com/PennyLaneAI/pennylane/assets/65235481/b6d7525e-e751-47ea-824e-d1bda5701f73">
<img width="300" alt="Captura de pantalla 2023-11-07 a las 13 58 49" src="https://github.com/PennyLaneAI/pennylane/assets/65235481/967a1e09-0203-4501-b0c2-3c2e345cc5d7">
As we can see, in one case it expands the QFT and in the other it does not. I would have liked them to be treated equivalently, so that both drawings show the block at the same level (QFT or QFT†).
### Actual behavior
Explained above
### Additional information
_No response_
### Source code
```python
import pennylane as qml
dev = qml.device("default.qubit")
@qml.qnode(dev)
@qml.compile()
def circuit():
#qml.QuantumPhaseEstimation(qml.PauliZ(0), estimation_wires = range(1,3))
qml.adjoint(qml.QuantumPhaseEstimation)(qml.PauliZ(0), estimation_wires = range(1,3))
return qml.state()
qml.draw_mpl(circuit)()
```
### Tracebacks
_No response_
### System information
```shell
Name: PennyLane
Version: 0.33.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning
Platform info: Linux-5.15.120+-x86_64-with-glibc2.35
Python version: 3.10.12
Numpy version: 1.23.5
Scipy version: 1.11.3
Installed devices:
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
</issue>
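For reference, the non-adjoint case that the screenshots in the issue compare against corresponds to running the commented-out line as its own circuit; a sketch with the same device and wires as the snippet in the issue:

```python
import pennylane as qml

dev = qml.device("default.qubit")

@qml.qnode(dev)
@qml.compile()
def circuit_forward():
    # Same template as in the issue, without qml.adjoint, for a side-by-side drawing.
    qml.QuantumPhaseEstimation(qml.PauliZ(0), estimation_wires=range(1, 3))
    return qml.state()

qml.draw_mpl(circuit_forward)()
```

Drawing both circuits side by side makes the depth mismatch described in the issue easy to see.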
<code>
[start of pennylane/transforms/compile.py]
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Code for the high-level quantum function transform that executes compilation."""
15 # pylint: disable=too-many-branches
16 from functools import partial
17 from typing import Sequence, Callable
18
19 from pennylane.queuing import QueuingManager
20 from pennylane.ops import __all__ as all_ops
21 from pennylane.tape import QuantumTape
22 from pennylane.transforms.core import transform, TransformDispatcher
23 from pennylane.transforms.optimization import (
24 cancel_inverses,
25 commute_controlled,
26 merge_rotations,
27 remove_barrier,
28 )
29
30
31 default_pipeline = [commute_controlled, cancel_inverses, merge_rotations, remove_barrier]
32
33
34 @transform
35 def compile(
36 tape: QuantumTape, pipeline=None, basis_set=None, num_passes=1, expand_depth=5
37 ) -> (Sequence[QuantumTape], Callable):
38 """Compile a circuit by applying a series of transforms to a quantum function.
39
40 The default set of transforms includes (in order):
41
42 - pushing all commuting single-qubit gates as far right as possible
43 (:func:`~pennylane.transforms.commute_controlled`)
44 - cancellation of adjacent inverse gates
45 (:func:`~pennylane.transforms.cancel_inverses`)
46 - merging adjacent rotations of the same type
47 (:func:`~pennylane.transforms.merge_rotations`)
48
49 Args:
50 tape (QNode or QuantumTape or Callable): A quantum circuit.
51 pipeline (list[Callable]): A list of
52 tape and/or quantum function transforms to apply.
53 basis_set (list[str]): A list of basis gates. When expanding the tape,
54 expansion will continue until gates in the specific set are
55 reached. If no basis set is specified, no expansion will be done.
56 num_passes (int): The number of times to apply the set of transforms in
57 ``pipeline``. The default is to perform each transform once;
58 however, doing so may produce a new circuit where applying the set
59 of transforms again may yield further improvement, so the number of
60 such passes can be adjusted.
61 expand_depth (int): When ``basis_set`` is specified, the depth to use
62 for tape expansion into the basis gates.
63
64 Returns:
65 qnode (QNode) or quantum function (Callable) or tuple[List[QuantumTape], function]: The compiled circuit. The output type is explained in :func:`qml.transform <pennylane.transform>`.
66
67 **Example**
68
69 >>> dev = qml.device('default.qubit', wires=[0, 1, 2])
70
71 You can apply the transform directly on a :class:`QNode`:
72
73 .. code-block:: python
74
75 @compile
76 @qml.qnode(device=dev)
77 def circuit(x, y, z):
78 qml.Hadamard(wires=0)
79 qml.Hadamard(wires=1)
80 qml.Hadamard(wires=2)
81 qml.RZ(z, wires=2)
82 qml.CNOT(wires=[2, 1])
83 qml.RX(z, wires=0)
84 qml.CNOT(wires=[1, 0])
85 qml.RX(x, wires=0)
86 qml.CNOT(wires=[1, 0])
87 qml.RZ(-z, wires=2)
88 qml.RX(y, wires=2)
89 qml.PauliY(wires=2)
90 qml.CY(wires=[1, 2])
91 return qml.expval(qml.PauliZ(wires=0))
92
93 The default compilation pipeline is applied before execution.
94
95 Consider the following quantum function:
96
97 .. code-block:: python
98
99 def qfunc(x, y, z):
100 qml.Hadamard(wires=0)
101 qml.Hadamard(wires=1)
102 qml.Hadamard(wires=2)
103 qml.RZ(z, wires=2)
104 qml.CNOT(wires=[2, 1])
105 qml.RX(z, wires=0)
106 qml.CNOT(wires=[1, 0])
107 qml.RX(x, wires=0)
108 qml.CNOT(wires=[1, 0])
109 qml.RZ(-z, wires=2)
110 qml.RX(y, wires=2)
111 qml.PauliY(wires=2)
112 qml.CY(wires=[1, 2])
113 return qml.expval(qml.PauliZ(wires=0))
114
115 Visually, the original function looks like this:
116
117 >>> qnode = qml.QNode(qfunc, dev)
118 >>> print(qml.draw(qnode)(0.2, 0.3, 0.4))
119 0: ──H──RX(0.40)────╭X──────────RX(0.20)─╭X────┤ <Z>
120 1: ──H───────────╭X─╰●───────────────────╰●─╭●─┤
121 2: ──H──RZ(0.40)─╰●──RZ(-0.40)──RX(0.30)──Y─╰Y─┤
122
123 We can compile it down to a smaller set of gates using the ``qml.compile``
124 transform.
125
126 >>> compiled_qfunc = qml.compile(qfunc)
127 >>> compiled_qnode = qml.QNode(compiled_qfunc, dev)
128 >>> print(qml.draw(compiled_qnode)(0.2, 0.3, 0.4))
129 0: ──H──RX(0.60)─────────────────┤ <Z>
130 1: ──H─╭X──────────────────╭●────┤
131 2: ──H─╰●─────────RX(0.30)─╰Y──Y─┤
132
133 You can change up the set of transforms by passing a custom ``pipeline`` to
134 ``qml.compile``. The pipeline is a list of transform functions. Furthermore,
135 you can specify a number of passes (repetitions of the pipeline), and a list
136 of gates into which the compiler will first attempt to decompose the
137 existing operations prior to applying any optimization transforms.
138
139 .. code-block:: python3
140
141 compiled_qfunc = qml.compile(
142 pipeline=[
143 partial(qml.transforms.commute_controlled, direction="left"),
144 partial(qml.transforms.merge_rotations, atol=1e-6),
145 qml.transforms.cancel_inverses
146 ],
147 basis_set=["CNOT", "RX", "RY", "RZ"],
148 num_passes=2
149 )(qfunc)
150
151 compiled_qnode = qml.QNode(compiled_qfunc, dev)
152
153 print(qml.draw(compiled_qnode)(0.2, 0.3, 0.4))
154
155 .. code-block::
156
157 0: ──RZ(1.57)──RX(1.57)──RZ(1.57)──RX(0.60)─────────────────────────────────────────────────────
158 1: ──RZ(1.57)──RX(1.57)──RZ(1.57)─╭X─────────RZ(1.57)─────────────────────────────────────────╭●
159 2: ──RZ(1.57)──RX(1.57)──RZ(1.57)─╰●─────────RX(0.30)──RZ(1.57)──RY(3.14)──RZ(1.57)──RY(1.57)─╰X
160
161 ────────────────┤ <Z>
162 ─────────────╭●─┤
163 ───RY(-1.57)─╰X─┤
164
165 """
166 # Ensure that everything in the pipeline is a valid qfunc or tape transform
167 if pipeline is None:
168 pipeline = default_pipeline
169 else:
170 for p in pipeline:
171 p_func = p.func if isinstance(p, partial) else p
172 if not isinstance(p_func, TransformDispatcher):
173 raise ValueError("Invalid transform function {p} passed to compile.")
174
175 if num_passes < 1 or not isinstance(num_passes, int):
176 raise ValueError("Number of passes must be an integer with value at least 1.")
177
178 # Expand the tape; this is done to unroll any templates that may be present,
179 # as well as to decompose over a specified basis set
180 # First, though, we have to stop whatever tape may be recording so that we
181 # don't queue anything as a result of the expansion or transform pipeline
182
183 with QueuingManager.stop_recording():
184 if basis_set is not None:
185 expanded_tape = tape.expand(
186 depth=expand_depth, stop_at=lambda obj: obj.name in basis_set
187 )
188 else:
189 # Expands out anything that is not a single operation (i.e., the templates)
190 # expand barriers when `only_visual=True`
191 def stop_at(obj):
192 return (obj.name in all_ops) and (not getattr(obj, "only_visual", False))
193
194 expanded_tape = tape.expand(stop_at=stop_at)
195
196 # Apply the full set of compilation transforms num_passes times
197 for _ in range(num_passes):
198 for transf in pipeline:
199 tapes, _ = transf(expanded_tape)
200 expanded_tape = tapes[0]
201
202 new_tape = type(tape)(
203 expanded_tape.operations, expanded_tape.measurements, shots=expanded_tape.shots
204 )
205
206 def null_postprocessing(results):
207 """A postprocesing function returned by a transform that only converts the batch of results
208 into a result for a single ``QuantumTape``.
209 """
210 return results[0]
211
212 return [new_tape], null_postprocessing
213
[end of pennylane/transforms/compile.py]
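The behaviour in the issue comes from the branch at lines 184-194 of the listing above: without a ``basis_set``, ``tape.expand`` runs at its default depth and stops at any named PennyLane operation, so templates are pre-decomposed at most once, whereas an explicit ``basis_set`` triggers expansion up to ``expand_depth``. Nested templates can therefore survive in one circuit but not in the other, depending on how many wrapper levels they sit behind. A sketch of forcing a uniform decomposition by passing an explicit basis (the gate names here are illustrative only):

```python
import pennylane as qml

dev = qml.device("default.qubit")

# Illustrative target gate set; any set of operation names works the same way.
BASIS = ["Hadamard", "CNOT", "RX", "RZ", "ControlledPhaseShift"]

@qml.qnode(dev)
@qml.compile(basis_set=BASIS, expand_depth=5)
def adjoint_qpe():
    qml.adjoint(qml.QuantumPhaseEstimation)(qml.PauliZ(0), estimation_wires=range(1, 3))
    return qml.state()

print(qml.draw(adjoint_qpe)())
```

With an explicit basis, both the template and its adjoint are driven toward the same gate set, which sidesteps the asymmetry until the expansion logic itself is changed.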
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pennylane/transforms/compile.py b/pennylane/transforms/compile.py
--- a/pennylane/transforms/compile.py
+++ b/pennylane/transforms/compile.py
@@ -52,14 +52,15 @@
tape and/or quantum function transforms to apply.
basis_set (list[str]): A list of basis gates. When expanding the tape,
expansion will continue until gates in the specific set are
- reached. If no basis set is specified, no expansion will be done.
+ reached. If no basis set is specified, a default of
+ ``pennylane.ops.__all__`` will be used. This decomposes templates and
+ operator arithmetic.
num_passes (int): The number of times to apply the set of transforms in
``pipeline``. The default is to perform each transform once;
however, doing so may produce a new circuit where applying the set
of transforms again may yield further improvement, so the number of
such passes can be adjusted.
- expand_depth (int): When ``basis_set`` is specified, the depth to use
- for tape expansion into the basis gates.
+ expand_depth (int): The depth to use for tape expansion into the basis gates.
Returns:
qnode (QNode) or quantum function (Callable) or tuple[List[QuantumTape], function]: The compiled circuit. The output type is explained in :func:`qml.transform <pennylane.transform>`.
@@ -181,17 +182,12 @@
# don't queue anything as a result of the expansion or transform pipeline
with QueuingManager.stop_recording():
- if basis_set is not None:
- expanded_tape = tape.expand(
- depth=expand_depth, stop_at=lambda obj: obj.name in basis_set
- )
- else:
- # Expands out anything that is not a single operation (i.e., the templates)
- # expand barriers when `only_visual=True`
- def stop_at(obj):
- return (obj.name in all_ops) and (not getattr(obj, "only_visual", False))
-
- expanded_tape = tape.expand(stop_at=stop_at)
+ basis_set = basis_set or all_ops
+
+ def stop_at(obj):
+ return obj.name in basis_set and (not getattr(obj, "only_visual", False))
+
+ expanded_tape = tape.expand(depth=expand_depth, stop_at=stop_at)
# Apply the full set of compilation transforms num_passes times
for _ in range(num_passes):
|
{"golden_diff": "diff --git a/pennylane/transforms/compile.py b/pennylane/transforms/compile.py\n--- a/pennylane/transforms/compile.py\n+++ b/pennylane/transforms/compile.py\n@@ -52,14 +52,15 @@\n tape and/or quantum function transforms to apply.\n basis_set (list[str]): A list of basis gates. When expanding the tape,\n expansion will continue until gates in the specific set are\n- reached. If no basis set is specified, no expansion will be done.\n+ reached. If no basis set is specified, a default of\n+ ``pennylane.ops.__all__`` will be used. This decomposes templates and\n+ operator arithmetic.\n num_passes (int): The number of times to apply the set of transforms in\n ``pipeline``. The default is to perform each transform once;\n however, doing so may produce a new circuit where applying the set\n of transforms again may yield further improvement, so the number of\n such passes can be adjusted.\n- expand_depth (int): When ``basis_set`` is specified, the depth to use\n- for tape expansion into the basis gates.\n+ expand_depth (int): The depth to use for tape expansion into the basis gates.\n \n Returns:\n qnode (QNode) or quantum function (Callable) or tuple[List[QuantumTape], function]: The compiled circuit. The output type is explained in :func:`qml.transform <pennylane.transform>`.\n@@ -181,17 +182,12 @@\n # don't queue anything as a result of the expansion or transform pipeline\n \n with QueuingManager.stop_recording():\n- if basis_set is not None:\n- expanded_tape = tape.expand(\n- depth=expand_depth, stop_at=lambda obj: obj.name in basis_set\n- )\n- else:\n- # Expands out anything that is not a single operation (i.e., the templates)\n- # expand barriers when `only_visual=True`\n- def stop_at(obj):\n- return (obj.name in all_ops) and (not getattr(obj, \"only_visual\", False))\n-\n- expanded_tape = tape.expand(stop_at=stop_at)\n+ basis_set = basis_set or all_ops\n+\n+ def stop_at(obj):\n+ return obj.name in basis_set and (not getattr(obj, \"only_visual\", False))\n+\n+ expanded_tape = tape.expand(depth=expand_depth, stop_at=stop_at)\n \n # Apply the full set of compilation transforms num_passes times\n for _ in range(num_passes):\n", "issue": "[BUG] compile pre-decomposes at most once\n### Expected behavior\r\n\r\nI would expect that if I do qml.compile() of a template and its adjoint template, the decompositions would be equivalent but adjoined. However the depth of the decomposition is different:\r\n\r\n<img width=\"300\" alt=\"Captura de pantalla 2023-11-07 a las 13 58 38\" src=\"https://github.com/PennyLaneAI/pennylane/assets/65235481/b6d7525e-e751-47ea-824e-d1bda5701f73\">\r\n\r\n<img width=\"300\" alt=\"Captura de pantalla 2023-11-07 a las 13 58 49\" src=\"https://github.com/PennyLaneAI/pennylane/assets/65235481/967a1e09-0203-4501-b0c2-3c2e345cc5d7\">\r\n\r\nAs we can see in one case it expands the QFT and in another it does not. 
I would have liked that they were equivalent so in both cases it shows (QFT or QFT^t)\r\n\r\n\r\n\r\n### Actual behavior\r\n\r\nExplained above\r\n\r\n### Additional information\r\n\r\n_No response_\r\n\r\n### Source code\r\n\r\n```shell\r\nimport pennylane as qml\r\n\r\ndev = qml.device(\"default.qubit\")\r\n\r\[email protected](dev)\r\[email protected]()\r\ndef circuit():\r\n #qml.QuantumPhaseEstimation(qml.PauliZ(0), estimation_wires = range(1,3))\r\n qml.adjoint(qml.QuantumPhaseEstimation)(qml.PauliZ(0), estimation_wires = range(1,3))\r\n return qml.state()\r\n\r\nqml.draw_mpl(circuit)()\r\n```\r\n\r\n\r\n### Tracebacks\r\n\r\n_No response_\r\n\r\n### System information\r\n\r\n```shell\r\nName: PennyLane\r\nVersion: 0.33.0\r\nSummary: PennyLane is a Python quantum machine learning library by Xanadu Inc.\r\nHome-page: https://github.com/PennyLaneAI/pennylane\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License 2.0\r\nLocation: /usr/local/lib/python3.10/dist-packages\r\nRequires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions\r\nRequired-by: PennyLane-Lightning\r\n\r\nPlatform info: Linux-5.15.120+-x86_64-with-glibc2.35\r\nPython version: 3.10.12\r\nNumpy version: 1.23.5\r\nScipy version: 1.11.3\r\nInstalled devices:\r\n```\r\n\r\n\r\n### Existing GitHub issues\r\n\r\n- [X] I have searched existing GitHub issues to make sure the issue does not already exist.\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Code for the high-level quantum function transform that executes compilation.\"\"\"\n# pylint: disable=too-many-branches\nfrom functools import partial\nfrom typing import Sequence, Callable\n\nfrom pennylane.queuing import QueuingManager\nfrom pennylane.ops import __all__ as all_ops\nfrom pennylane.tape import QuantumTape\nfrom pennylane.transforms.core import transform, TransformDispatcher\nfrom pennylane.transforms.optimization import (\n cancel_inverses,\n commute_controlled,\n merge_rotations,\n remove_barrier,\n)\n\n\ndefault_pipeline = [commute_controlled, cancel_inverses, merge_rotations, remove_barrier]\n\n\n@transform\ndef compile(\n tape: QuantumTape, pipeline=None, basis_set=None, num_passes=1, expand_depth=5\n) -> (Sequence[QuantumTape], Callable):\n \"\"\"Compile a circuit by applying a series of transforms to a quantum function.\n\n The default set of transforms includes (in order):\n\n - pushing all commuting single-qubit gates as far right as possible\n (:func:`~pennylane.transforms.commute_controlled`)\n - cancellation of adjacent inverse gates\n (:func:`~pennylane.transforms.cancel_inverses`)\n - merging adjacent rotations of the same type\n (:func:`~pennylane.transforms.merge_rotations`)\n\n Args:\n tape (QNode or QuantumTape or Callable): A quantum circuit.\n pipeline (list[Callable]): A list of\n tape and/or quantum function transforms to apply.\n basis_set (list[str]): 
A list of basis gates. When expanding the tape,\n expansion will continue until gates in the specific set are\n reached. If no basis set is specified, no expansion will be done.\n num_passes (int): The number of times to apply the set of transforms in\n ``pipeline``. The default is to perform each transform once;\n however, doing so may produce a new circuit where applying the set\n of transforms again may yield further improvement, so the number of\n such passes can be adjusted.\n expand_depth (int): When ``basis_set`` is specified, the depth to use\n for tape expansion into the basis gates.\n\n Returns:\n qnode (QNode) or quantum function (Callable) or tuple[List[QuantumTape], function]: The compiled circuit. The output type is explained in :func:`qml.transform <pennylane.transform>`.\n\n **Example**\n\n >>> dev = qml.device('default.qubit', wires=[0, 1, 2])\n\n You can apply the transform directly on a :class:`QNode`:\n\n .. code-block:: python\n\n @compile\n @qml.qnode(device=dev)\n def circuit(x, y, z):\n qml.Hadamard(wires=0)\n qml.Hadamard(wires=1)\n qml.Hadamard(wires=2)\n qml.RZ(z, wires=2)\n qml.CNOT(wires=[2, 1])\n qml.RX(z, wires=0)\n qml.CNOT(wires=[1, 0])\n qml.RX(x, wires=0)\n qml.CNOT(wires=[1, 0])\n qml.RZ(-z, wires=2)\n qml.RX(y, wires=2)\n qml.PauliY(wires=2)\n qml.CY(wires=[1, 2])\n return qml.expval(qml.PauliZ(wires=0))\n\n The default compilation pipeline is applied before execution.\n\n Consider the following quantum function:\n\n .. code-block:: python\n\n def qfunc(x, y, z):\n qml.Hadamard(wires=0)\n qml.Hadamard(wires=1)\n qml.Hadamard(wires=2)\n qml.RZ(z, wires=2)\n qml.CNOT(wires=[2, 1])\n qml.RX(z, wires=0)\n qml.CNOT(wires=[1, 0])\n qml.RX(x, wires=0)\n qml.CNOT(wires=[1, 0])\n qml.RZ(-z, wires=2)\n qml.RX(y, wires=2)\n qml.PauliY(wires=2)\n qml.CY(wires=[1, 2])\n return qml.expval(qml.PauliZ(wires=0))\n\n Visually, the original function looks like this:\n\n >>> qnode = qml.QNode(qfunc, dev)\n >>> print(qml.draw(qnode)(0.2, 0.3, 0.4))\n 0: \u2500\u2500H\u2500\u2500RX(0.40)\u2500\u2500\u2500\u2500\u256dX\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500RX(0.20)\u2500\u256dX\u2500\u2500\u2500\u2500\u2524 <Z>\n 1: \u2500\u2500H\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256dX\u2500\u2570\u25cf\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2570\u25cf\u2500\u256d\u25cf\u2500\u2524\n 2: \u2500\u2500H\u2500\u2500RZ(0.40)\u2500\u2570\u25cf\u2500\u2500RZ(-0.40)\u2500\u2500RX(0.30)\u2500\u2500Y\u2500\u2570Y\u2500\u2524\n\n We can compile it down to a smaller set of gates using the ``qml.compile``\n transform.\n\n >>> compiled_qfunc = qml.compile(qfunc)\n >>> compiled_qnode = qml.QNode(compiled_qfunc, dev)\n >>> print(qml.draw(compiled_qnode)(0.2, 0.3, 0.4))\n 0: \u2500\u2500H\u2500\u2500RX(0.60)\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 <Z>\n 1: \u2500\u2500H\u2500\u256dX\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256d\u25cf\u2500\u2500\u2500\u2500\u2524\n 2: \u2500\u2500H\u2500\u2570\u25cf\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500RX(0.30)\u2500\u2570Y\u2500\u2500Y\u2500\u2524\n\n You can change up the set of transforms by passing a custom ``pipeline`` to\n ``qml.compile``. The pipeline is a list of transform functions. 
Furthermore,\n you can specify a number of passes (repetitions of the pipeline), and a list\n of gates into which the compiler will first attempt to decompose the\n existing operations prior to applying any optimization transforms.\n\n .. code-block:: python3\n\n compiled_qfunc = qml.compile(\n pipeline=[\n partial(qml.transforms.commute_controlled, direction=\"left\"),\n partial(qml.transforms.merge_rotations, atol=1e-6),\n qml.transforms.cancel_inverses\n ],\n basis_set=[\"CNOT\", \"RX\", \"RY\", \"RZ\"],\n num_passes=2\n )(qfunc)\n\n compiled_qnode = qml.QNode(compiled_qfunc, dev)\n\n print(qml.draw(compiled_qnode)(0.2, 0.3, 0.4))\n\n .. code-block::\n\n 0: \u2500\u2500RZ(1.57)\u2500\u2500RX(1.57)\u2500\u2500RZ(1.57)\u2500\u2500RX(0.60)\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 1: \u2500\u2500RZ(1.57)\u2500\u2500RX(1.57)\u2500\u2500RZ(1.57)\u2500\u256dX\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500RZ(1.57)\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256d\u25cf\n 2: \u2500\u2500RZ(1.57)\u2500\u2500RX(1.57)\u2500\u2500RZ(1.57)\u2500\u2570\u25cf\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500RX(0.30)\u2500\u2500RZ(1.57)\u2500\u2500RY(3.14)\u2500\u2500RZ(1.57)\u2500\u2500RY(1.57)\u2500\u2570X\n\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 <Z>\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256d\u25cf\u2500\u2524\n \u2500\u2500\u2500RY(-1.57)\u2500\u2570X\u2500\u2524\n\n \"\"\"\n # Ensure that everything in the pipeline is a valid qfunc or tape transform\n if pipeline is None:\n pipeline = default_pipeline\n else:\n for p in pipeline:\n p_func = p.func if isinstance(p, partial) else p\n if not isinstance(p_func, TransformDispatcher):\n raise ValueError(\"Invalid transform function {p} passed to compile.\")\n\n if num_passes < 1 or not isinstance(num_passes, int):\n raise ValueError(\"Number of passes must be an integer with value at least 1.\")\n\n # Expand the tape; this is done to unroll any templates that may be present,\n # as well as to decompose over a specified basis set\n # First, though, we have to stop whatever tape may be recording so that we\n # don't queue anything as a result of the expansion or transform pipeline\n\n with QueuingManager.stop_recording():\n if basis_set is not None:\n expanded_tape = tape.expand(\n depth=expand_depth, stop_at=lambda obj: obj.name in basis_set\n )\n else:\n # Expands out anything that is not a single operation (i.e., the templates)\n # expand barriers when `only_visual=True`\n def stop_at(obj):\n return (obj.name in all_ops) and (not getattr(obj, \"only_visual\", False))\n\n expanded_tape = tape.expand(stop_at=stop_at)\n\n # Apply the full set of compilation transforms num_passes times\n for _ in range(num_passes):\n for transf in pipeline:\n tapes, _ = transf(expanded_tape)\n expanded_tape = tapes[0]\n\n new_tape = type(tape)(\n expanded_tape.operations, expanded_tape.measurements, shots=expanded_tape.shots\n )\n\n def null_postprocessing(results):\n \"\"\"A 
postprocesing function returned by a transform that only converts the batch of results\n into a result for a single ``QuantumTape``.\n \"\"\"\n return results[0]\n\n return [new_tape], null_postprocessing\n", "path": "pennylane/transforms/compile.py"}]}
| 4,048 | 572 |
gh_patches_debug_25501
|
rasdani/github-patches
|
git_diff
|
azavea__raster-vision-784
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plugin configuration values cannot handle list of values
The plugin section of the configuration file is set up to be parsed as a JSON list. However, a multi-item list causes a parse exception when Everett reads the configuration file.
E.g.:
```
[PLUGINS]
modules = [ "tests.test_plugin" ]
```
works, while
```
[PLUGINS]
modules = [ "tests.test_plugin", "tests.test_plugin" ]
```
does not, and errors with:
```
File "/opt/src/rastervision/registry.py", line 179, in initialize_config
verbosity=verbosity)
File "/opt/src/rastervision/rv_config.py", line 146, in __init__
ConfigIniEnv(config_file_locations),
File "/usr/local/lib/python3.5/dist-packages/everett/manager.py", line 602, in __init__
self.cfg.update(ConfigIniEnv.parse_ini_file(path))
File "/usr/local/lib/python3.5/dist-packages/everett/manager.py", line 606, in parse_ini_file
cfgobj = ConfigObj(path)
File "/usr/local/lib/python3.5/dist-packages/configobj.py", line 1229, in __init__
self._load(infile, configspec)
File "/usr/local/lib/python3.5/dist-packages/configobj.py", line 1318, in _load
raise error
File "<string>", line None
configobj.ParseError: Parse error in value at line 2.
```
</issue>
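The traceback above bottoms out in configobj, the INI parser behind Everett's ``ConfigIniEnv``. configobj has its own list syntax (plain comma-separated values) and rejects the JSON-style brackets once a comma is involved, which is why the single-item form happens to parse while the two-item form does not. A small standalone reproduction, mirroring the two examples from the issue:

```python
# Standalone reproduction of the parse failure described above.
import configobj

GOOD = ['[PLUGINS]', 'modules = [ "tests.test_plugin" ]']
BAD = ['[PLUGINS]', 'modules = [ "tests.test_plugin", "tests.test_plugin" ]']

print(configobj.ConfigObj(GOOD)["PLUGINS"]["modules"])  # parses as one raw string

try:
    configobj.ConfigObj(BAD)
except configobj.ParseError as err:
    print("ParseError:", err)  # "Parse error in value at line 2."
```

Everett only hands the raw string back to Raster Vision, so a fix has to either avoid commas inside brackets in the INI file or make the plugin loader accept configobj's native comma-separated form, which is the direction the patch later in this record takes.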
<code>
[start of rastervision/plugin.py]
1 import os
2 import json
3 import importlib
4
5 from pluginbase import PluginBase
6
7 import rastervision as rv
8 from rastervision.protos.plugin_pb2 import PluginConfig as PluginConfigMsg
9 from rastervision.utils.files import download_if_needed
10
11
12 class PluginError(Exception):
13 pass
14
15
16 class PluginRegistry:
17 @staticmethod
18 def get_instance():
19 return rv._registry._get_plugin_registry()
20
21 def __init__(self, plugin_config, rv_home):
22 """Initializes this plugin registry.
23
24 A plugin registry is passed to plugins in a call
25 to their "register_plugin" method.
26
27 Args:
28 plugin_config - the everett ConfigManager for the plugin
29 section of the application configuration.
30 """
31 self.plugin_root_dir = os.path.join(rv_home, 'plugins')
32 self.config_builders = {}
33 self.default_raster_sources = []
34 self.default_vector_sources = []
35 self.default_label_sources = []
36 self.default_label_stores = []
37 self.default_evaluators = []
38 self.experiment_runners = {}
39 self.filesystems = []
40
41 plugin_files = json.loads(plugin_config('files', default='[]'))
42 self._load_from_files(plugin_files)
43 self.plugin_files = plugin_files
44
45 plugin_modules = json.loads(plugin_config('modules', default='[]'))
46 self._load_from_modules(plugin_modules)
47 self.plugin_modules = plugin_modules
48
49 def _load_plugin(self, plugin, identifier):
50 # Check the plugin is valid
51 if not hasattr(plugin, 'register_plugin'):
52 raise PluginError('Plugin at {} does not have '
53 '"register_plugin" method.'.format(identifier))
54
55 register_method = getattr(plugin, 'register_plugin')
56 if not callable(register_method):
57 raise PluginError('Plugin at {} has a '
58 '"register_plugin" attribute, '
59 'but it is not callable'.format(identifier))
60
61 # TODO: Log loading plugin.
62 register_method(self)
63
64 def _load_from_files(self, plugin_paths):
65 if not plugin_paths:
66 return
67
68 self.plugin_sources = []
69
70 plugin_base = PluginBase(package='rastervision.plugins')
71 for uri in plugin_paths:
72 plugin_name = os.path.splitext(os.path.basename(uri))[0]
73 plugin_path = os.path.join(self.plugin_root_dir, plugin_name)
74 fs = rv._registry.get_file_system(uri, search_plugins=False)
75 local_path = download_if_needed(uri, plugin_path, fs=fs)
76 local_dir = os.path.dirname(local_path)
77
78 plugin_source = plugin_base.make_plugin_source(
79 searchpath=[local_dir])
80
81 # We're required to hang onto the source
82 # to keep it from getting GC'd.
83 self.plugin_sources.append(plugin_source)
84
85 self._load_plugin(plugin_source.load_plugin(plugin_name), uri)
86
87 def _load_from_modules(self, plugin_modules):
88 if not plugin_modules:
89 return
90
91 for module in plugin_modules:
92 plugin = importlib.import_module(module)
93 self._load_plugin(plugin, module)
94
95 def add_plugins_from_proto(self, plugin_msg):
96 new_plugin_files = list(
97 set(plugin_msg.plugin_uris) - set(self.plugin_files))
98 self._load_from_files(new_plugin_files)
99 self.plugin_files.extend(new_plugin_files)
100
101 new_plugin_modules = list(
102 set(plugin_msg.plugin_modules) - set(self.plugin_modules))
103 self._load_from_modules(new_plugin_modules)
104 self.plugin_modules.extend(new_plugin_modules)
105
106 def to_proto(self):
107 """Returns a protobuf message that records the
108 plugin sources for plugins that are currently loaded
109 in the registry.
110 """
111 return PluginConfigMsg(
112 plugin_uris=self.plugin_files, plugin_modules=self.plugin_modules)
113
114 def register_config_builder(self, group, key, builder_class):
115 """Registers a ConfigBuilder as a plugin.
116
117 Args:
118 group - The Config group, e.g. rv.BACKEND, rv.TASK.
119 key - The key used for this plugin. This will be used to
120 construct the builder in a ".builder(key)" call.
121 builder_class - The subclass of ConfigBuilder that builds
122 the Config for this plugin.
123 """
124 if (group, key) in self.config_builders:
125 raise PluginError('ConfigBuilder already registered for group '
126 '{} and key {}'.format(group, key))
127 self.config_builders[(group, key)] = builder_class
128
129 def register_default_raster_source(self, provider_class):
130 """Registers a RasterSourceDefaultProvider for use as a plugin."""
131
132 self.default_raster_sources.append(provider_class)
133
134 def register_default_vector_source(self, provider_class):
135 """Registers a VectorSourceDefaultProvider for use as a plugin."""
136 self.default_vector_sources.append(provider_class)
137
138 def register_default_label_source(self, provider_class):
139 """Registers a LabelSourceDefaultProvider for use as a plugin."""
140 self.default_label_sources.append(provider_class)
141
142 def register_default_label_store(self, provider_class):
143 """Registers a LabelStoreDefaultProvider for use as a plugin."""
144 self.default_label_stores.append(provider_class)
145
146 def register_default_evaluator(self, provider_class):
147 """Registers an EvaluatorDefaultProvider for use as a plugin."""
148 self.default_evaluators.append(provider_class)
149
150 def register_experiment_runner(self, runner_key, runner_class):
151 """Registers an ExperimentRunner as a plugin.
152
153 Args:
154 runner_key - The key used to reference this plugin runner.
155 This is a string that will match the command line
156 argument used to reference this runner; e.g. if the
157 key is "FOO_RUNNER", then users can use the runner
158 by issuing a "rastervision run foo_runner ..." command.
159 runner_class - The class of the ExperimentRunner plugin.
160 """
161 if runner_key in self.experiment_runners:
162 raise PluginError('ExperimentRunner already registered for '
163 'key {}'.format(runner_key))
164 self.experiment_runners[runner_key] = runner_class
165
166 def register_filesystem(self, filesystem_class):
167 """Registers a FileSystem as a plugin."""
168 self.filesystems.append(filesystem_class)
169
[end of rastervision/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rastervision/plugin.py b/rastervision/plugin.py
--- a/rastervision/plugin.py
+++ b/rastervision/plugin.py
@@ -13,6 +13,24 @@
pass
+def load_conf_list(s):
+ """Loads a list of items from the config.
+
+ Lists should be comma separated.
+
+ This takes into account that previous versions of Raster Vision
+ allowed for a `[ "module" ]` like syntax, even though that didn't
+ work for multi-value lists.
+ """
+ try:
+ # A comma separated list of values will be transformed to
+ # having a list-like string, with ' instead of ". Replacing
+ # single quotes with double quotes lets us parse it as a JSON list.
+ return json.loads(s.replace("'", '"'))
+ except json.JSONDecodeError:
+ return list(map(lambda x: x.strip(), s.split(',')))
+
+
class PluginRegistry:
@staticmethod
def get_instance():
@@ -38,11 +56,11 @@
self.experiment_runners = {}
self.filesystems = []
- plugin_files = json.loads(plugin_config('files', default='[]'))
+ plugin_files = load_conf_list(plugin_config('files', default='[]'))
self._load_from_files(plugin_files)
self.plugin_files = plugin_files
- plugin_modules = json.loads(plugin_config('modules', default='[]'))
+ plugin_modules = load_conf_list(plugin_config('modules', default='[]'))
self._load_from_modules(plugin_modules)
self.plugin_modules = plugin_modules
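
For reference, a minimal standalone sketch of how the `load_conf_list` helper added in the diff above handles both configuration syntaxes. The helper body is copied from the diff; the two input strings are made-up config values used only for illustration.

```python
import json


def load_conf_list(s):
    """Parse a plugin config value that may use either the old
    JSON-list-like syntax or a plain comma separated list."""
    try:
        # Single quotes are swapped for double quotes so the old
        # `[ 'module' ]` style still parses as a JSON list.
        return json.loads(s.replace("'", '"'))
    except json.JSONDecodeError:
        # Otherwise treat it as a comma separated list of names.
        return list(map(lambda x: x.strip(), s.split(',')))


# Old single-item syntax keeps working.
print(load_conf_list('[ "tests.test_plugin" ]'))
# A plain comma separated multi-value list now works as well.
print(load_conf_list('tests.test_plugin, tests.other_plugin'))
```

In both cases the result is a plain Python list of module names, which is what `_load_from_files` and `_load_from_modules` iterate over.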
|
{"golden_diff": "diff --git a/rastervision/plugin.py b/rastervision/plugin.py\n--- a/rastervision/plugin.py\n+++ b/rastervision/plugin.py\n@@ -13,6 +13,24 @@\n pass\n \n \n+def load_conf_list(s):\n+ \"\"\"Loads a list of items from the config.\n+\n+ Lists should be comma separated.\n+\n+ This takes into account that previous versions of Raster Vision\n+ allowed for a `[ \"module\" ]` like syntax, even though that didn't\n+ work for multi-value lists.\n+ \"\"\"\n+ try:\n+ # A comma separated list of values will be transformed to\n+ # having a list-like string, with ' instead of \". Replacing\n+ # single quotes with double quotes lets us parse it as a JSON list.\n+ return json.loads(s.replace(\"'\", '\"'))\n+ except json.JSONDecodeError:\n+ return list(map(lambda x: x.strip(), s.split(',')))\n+\n+\n class PluginRegistry:\n @staticmethod\n def get_instance():\n@@ -38,11 +56,11 @@\n self.experiment_runners = {}\n self.filesystems = []\n \n- plugin_files = json.loads(plugin_config('files', default='[]'))\n+ plugin_files = load_conf_list(plugin_config('files', default='[]'))\n self._load_from_files(plugin_files)\n self.plugin_files = plugin_files\n \n- plugin_modules = json.loads(plugin_config('modules', default='[]'))\n+ plugin_modules = load_conf_list(plugin_config('modules', default='[]'))\n self._load_from_modules(plugin_modules)\n self.plugin_modules = plugin_modules\n", "issue": "Plugin configuration values cannot handle list of values\nThe plugin component of the configuration file is set up to be parsed as a JSON list. However, a multi-item list causes a parse exception when Everett parses the configuration.\r\n\r\nE.g.:\r\n\r\n```\r\n[PLUGINS]\r\nmodules = [ \"tests.test_plugin\" ]\r\n```\r\n\r\nworks, while\r\n\r\n```\r\n[PLUGINS]\r\nmodules = [ \"tests.test_plugin\", \"tests.test_plugin\" ]\r\n```\r\n\r\ndoes not, and errors with:\r\n\r\n```\r\n File \"/opt/src/rastervision/registry.py\", line 179, in initialize_config\r\n verbosity=verbosity)\r\n File \"/opt/src/rastervision/rv_config.py\", line 146, in __init__\r\n ConfigIniEnv(config_file_locations),\r\n File \"/usr/local/lib/python3.5/dist-packages/everett/manager.py\", line 602, in __init__\r\n self.cfg.update(ConfigIniEnv.parse_ini_file(path))\r\n File \"/usr/local/lib/python3.5/dist-packages/everett/manager.py\", line 606, in parse_ini_file\r\n cfgobj = ConfigObj(path)\r\n File \"/usr/local/lib/python3.5/dist-packages/configobj.py\", line 1229, in __init__\r\n self._load(infile, configspec)\r\n File \"/usr/local/lib/python3.5/dist-packages/configobj.py\", line 1318, in _load\r\n raise error\r\n File \"<string>\", line None\r\nconfigobj.ParseError: Parse error in value at line 2.\r\n```\n", "before_files": [{"content": "import os\nimport json\nimport importlib\n\nfrom pluginbase import PluginBase\n\nimport rastervision as rv\nfrom rastervision.protos.plugin_pb2 import PluginConfig as PluginConfigMsg\nfrom rastervision.utils.files import download_if_needed\n\n\nclass PluginError(Exception):\n pass\n\n\nclass PluginRegistry:\n @staticmethod\n def get_instance():\n return rv._registry._get_plugin_registry()\n\n def __init__(self, plugin_config, rv_home):\n \"\"\"Initializes this plugin registry.\n\n A plugin registry is passed to plugins in a call\n to their \"register_plugin\" method.\n\n Args:\n plugin_config - the everett ConfigManager for the plugin\n section of the application configuration.\n \"\"\"\n self.plugin_root_dir = os.path.join(rv_home, 'plugins')\n self.config_builders = {}\n self.default_raster_sources = []\n 
self.default_vector_sources = []\n self.default_label_sources = []\n self.default_label_stores = []\n self.default_evaluators = []\n self.experiment_runners = {}\n self.filesystems = []\n\n plugin_files = json.loads(plugin_config('files', default='[]'))\n self._load_from_files(plugin_files)\n self.plugin_files = plugin_files\n\n plugin_modules = json.loads(plugin_config('modules', default='[]'))\n self._load_from_modules(plugin_modules)\n self.plugin_modules = plugin_modules\n\n def _load_plugin(self, plugin, identifier):\n # Check the plugin is valid\n if not hasattr(plugin, 'register_plugin'):\n raise PluginError('Plugin at {} does not have '\n '\"register_plugin\" method.'.format(identifier))\n\n register_method = getattr(plugin, 'register_plugin')\n if not callable(register_method):\n raise PluginError('Plugin at {} has a '\n '\"register_plugin\" attribute, '\n 'but it is not callable'.format(identifier))\n\n # TODO: Log loading plugin.\n register_method(self)\n\n def _load_from_files(self, plugin_paths):\n if not plugin_paths:\n return\n\n self.plugin_sources = []\n\n plugin_base = PluginBase(package='rastervision.plugins')\n for uri in plugin_paths:\n plugin_name = os.path.splitext(os.path.basename(uri))[0]\n plugin_path = os.path.join(self.plugin_root_dir, plugin_name)\n fs = rv._registry.get_file_system(uri, search_plugins=False)\n local_path = download_if_needed(uri, plugin_path, fs=fs)\n local_dir = os.path.dirname(local_path)\n\n plugin_source = plugin_base.make_plugin_source(\n searchpath=[local_dir])\n\n # We're required to hang onto the source\n # to keep it from getting GC'd.\n self.plugin_sources.append(plugin_source)\n\n self._load_plugin(plugin_source.load_plugin(plugin_name), uri)\n\n def _load_from_modules(self, plugin_modules):\n if not plugin_modules:\n return\n\n for module in plugin_modules:\n plugin = importlib.import_module(module)\n self._load_plugin(plugin, module)\n\n def add_plugins_from_proto(self, plugin_msg):\n new_plugin_files = list(\n set(plugin_msg.plugin_uris) - set(self.plugin_files))\n self._load_from_files(new_plugin_files)\n self.plugin_files.extend(new_plugin_files)\n\n new_plugin_modules = list(\n set(plugin_msg.plugin_modules) - set(self.plugin_modules))\n self._load_from_modules(new_plugin_modules)\n self.plugin_modules.extend(new_plugin_modules)\n\n def to_proto(self):\n \"\"\"Returns a protobuf message that records the\n plugin sources for plugins that are currently loaded\n in the registry.\n \"\"\"\n return PluginConfigMsg(\n plugin_uris=self.plugin_files, plugin_modules=self.plugin_modules)\n\n def register_config_builder(self, group, key, builder_class):\n \"\"\"Registers a ConfigBuilder as a plugin.\n\n Args:\n group - The Config group, e.g. rv.BACKEND, rv.TASK.\n key - The key used for this plugin. 
This will be used to\n construct the builder in a \".builder(key)\" call.\n builder_class - The subclass of ConfigBuilder that builds\n the Config for this plugin.\n \"\"\"\n if (group, key) in self.config_builders:\n raise PluginError('ConfigBuilder already registered for group '\n '{} and key {}'.format(group, key))\n self.config_builders[(group, key)] = builder_class\n\n def register_default_raster_source(self, provider_class):\n \"\"\"Registers a RasterSourceDefaultProvider for use as a plugin.\"\"\"\n\n self.default_raster_sources.append(provider_class)\n\n def register_default_vector_source(self, provider_class):\n \"\"\"Registers a VectorSourceDefaultProvider for use as a plugin.\"\"\"\n self.default_vector_sources.append(provider_class)\n\n def register_default_label_source(self, provider_class):\n \"\"\"Registers a LabelSourceDefaultProvider for use as a plugin.\"\"\"\n self.default_label_sources.append(provider_class)\n\n def register_default_label_store(self, provider_class):\n \"\"\"Registers a LabelStoreDefaultProvider for use as a plugin.\"\"\"\n self.default_label_stores.append(provider_class)\n\n def register_default_evaluator(self, provider_class):\n \"\"\"Registers an EvaluatorDefaultProvider for use as a plugin.\"\"\"\n self.default_evaluators.append(provider_class)\n\n def register_experiment_runner(self, runner_key, runner_class):\n \"\"\"Registers an ExperimentRunner as a plugin.\n\n Args:\n runner_key - The key used to reference this plugin runner.\n This is a string that will match the command line\n argument used to reference this runner; e.g. if the\n key is \"FOO_RUNNER\", then users can use the runner\n by issuing a \"rastervision run foo_runner ...\" command.\n runner_class - The class of the ExperimentRunner plugin.\n \"\"\"\n if runner_key in self.experiment_runners:\n raise PluginError('ExperimentRunner already registered for '\n 'key {}'.format(runner_key))\n self.experiment_runners[runner_key] = runner_class\n\n def register_filesystem(self, filesystem_class):\n \"\"\"Registers a FileSystem as a plugin.\"\"\"\n self.filesystems.append(filesystem_class)\n", "path": "rastervision/plugin.py"}]}
| 2,571 | 356 |
gh_patches_debug_14093
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1925
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove deprecation warning: Resource `mapping` is deprecated, use `schema`
```
tests/core/resource/test_viewset.py::ViewSetTest::test_a_default_schema_is_added_when_method_doesnt_match
/home/mathieu/Code/Mozilla/kinto/kinto/core/resource/viewset.py:160: DeprecationWarning: Resource `mapping` is deprecated, use `schema`
warnings.warn(message, DeprecationWarning)
```
</issue>
<code>
[start of kinto/core/resource/viewset.py]
1 import functools
2 import warnings
3
4 import colander
5 from cornice.validators import colander_validator
6 from pyramid.settings import asbool
7
8 from kinto.core import authorization
9
10 from .schema import (
11 PermissionsSchema,
12 RequestSchema,
13 PayloadRequestSchema,
14 PatchHeaderSchema,
15 CollectionQuerySchema,
16 CollectionGetQuerySchema,
17 RecordGetQuerySchema,
18 RecordSchema,
19 ResourceReponses,
20 ShareableResourseResponses,
21 )
22
23
24 CONTENT_TYPES = ["application/json"]
25
26 PATCH_CONTENT_TYPES = ["application/merge-patch+json"]
27
28
29 class StrictSchema(colander.MappingSchema):
30 @staticmethod
31 def schema_type():
32 return colander.Mapping(unknown="raise")
33
34
35 class PartialSchema(colander.MappingSchema):
36 @staticmethod
37 def schema_type():
38 return colander.Mapping(unknown="ignore")
39
40
41 class SimpleSchema(colander.MappingSchema):
42 @staticmethod
43 def schema_type():
44 return colander.Mapping(unknown="preserve")
45
46
47 class ViewSet:
48 """The default ViewSet object.
49
50 A viewset contains all the information needed to register
51 any resource in the Cornice registry.
52
53 It provides the same features as ``cornice.resource()``, except
54 that it is much more flexible and extensible.
55 """
56
57 service_name = "{resource_name}-{endpoint_type}"
58 collection_path = "/{resource_name}s"
59 record_path = "/{resource_name}s/{{id}}"
60
61 collection_methods = ("GET", "POST", "DELETE")
62 record_methods = ("GET", "PUT", "PATCH", "DELETE")
63
64 readonly_methods = ("GET", "OPTIONS", "HEAD")
65
66 factory = authorization.RouteFactory
67
68 responses = ResourceReponses()
69
70 service_arguments = {"description": "Collection of {resource_name}"}
71
72 default_arguments = {
73 "permission": authorization.PRIVATE,
74 "accept": CONTENT_TYPES,
75 "schema": RequestSchema(),
76 }
77
78 default_post_arguments = {"content_type": CONTENT_TYPES, "schema": PayloadRequestSchema()}
79
80 default_put_arguments = {"content_type": CONTENT_TYPES, "schema": PayloadRequestSchema()}
81
82 default_patch_arguments = {
83 "content_type": CONTENT_TYPES + PATCH_CONTENT_TYPES,
84 "schema": PayloadRequestSchema().bind(header=PatchHeaderSchema()),
85 }
86
87 default_collection_arguments = {
88 "schema": RequestSchema().bind(querystring=CollectionQuerySchema())
89 }
90 collection_get_arguments = {
91 "schema": RequestSchema().bind(querystring=CollectionGetQuerySchema()),
92 "cors_headers": (
93 "Next-Page",
94 "Total-Records",
95 "Last-Modified",
96 "ETag",
97 "Cache-Control",
98 "Expires",
99 "Pragma",
100 ),
101 }
102 collection_post_arguments = {"schema": PayloadRequestSchema()}
103 default_record_arguments = {}
104 record_get_arguments = {
105 "schema": RequestSchema().bind(querystring=RecordGetQuerySchema()),
106 "cors_headers": ("Last-Modified", "ETag", "Cache-Control", "Expires", "Pragma"),
107 }
108
109 def __init__(self, **kwargs):
110 self.update(**kwargs)
111 self.record_arguments = functools.partial(self.get_view_arguments, "record")
112 self.collection_arguments = functools.partial(self.get_view_arguments, "collection")
113
114 def update(self, **kwargs):
115 """Update viewset attributes with provided values."""
116 self.__dict__.update(**kwargs)
117
118 def get_view_arguments(self, endpoint_type, resource_cls, method):
119 """Return the Pyramid/Cornice view arguments for the given endpoint
120 type and method.
121
122 :param str endpoint_type: either "collection" or "record".
123 :param resource_cls: the resource class.
124 :param str method: the HTTP method.
125 """
126 args = {**self.default_arguments}
127 default_arguments = getattr(self, f"default_{endpoint_type}_arguments")
128 args.update(**default_arguments)
129
130 by_http_verb = f"default_{method.lower()}_arguments"
131 method_args = getattr(self, by_http_verb, {})
132 args.update(**method_args)
133
134 by_method = f"{endpoint_type}_{method.lower()}_arguments"
135 endpoint_args = getattr(self, by_method, {})
136 args.update(**endpoint_args)
137
138 request_schema = args.get("schema", RequestSchema())
139 record_schema = self.get_record_schema(resource_cls, method)
140 request_schema = request_schema.bind(body=record_schema)
141 response_schemas = self.responses.get_and_bind(endpoint_type, method, record=record_schema)
142
143 args["schema"] = request_schema
144 args["response_schemas"] = response_schemas
145
146 validators = args.get("validators", [])
147 validators.append(colander_validator)
148 args["validators"] = validators
149
150 return args
151
152 def get_record_schema(self, resource_cls, method):
153 """Return the Cornice schema for the given method.
154 """
155 if method.lower() in ("patch", "delete"):
156 resource_schema = SimpleSchema
157 else:
158 resource_schema = resource_cls.schema
159 if hasattr(resource_cls, "mapping"):
160 message = "Resource `mapping` is deprecated, use `schema`"
161 warnings.warn(message, DeprecationWarning)
162 resource_schema = resource_cls.mapping.__class__
163
164 record_schema = RecordSchema().bind(data=resource_schema())
165
166 return record_schema
167
168 def get_view(self, endpoint_type, method):
169 """Return the view method name located on the resource object, for the
170 given type and method.
171
172 * For collections, this will be "collection_{method|lower}
173 * For records, this will be "{method|lower}.
174 """
175 if endpoint_type == "record":
176 return method.lower()
177 return f"{endpoint_type}_{method.lower()}"
178
179 def get_name(self, resource_cls):
180 """Returns the name of the resource.
181 """
182 # Provided on viewset during registration.
183 if "name" in self.__dict__:
184 return self.__dict__["name"]
185
186 # Attribute on resource class (but not @property)
187 has_class_attr = hasattr(resource_cls, "name") and not callable(resource_cls.name)
188 if has_class_attr:
189 return resource_cls.name
190
191 # Use classname
192 return resource_cls.__name__.lower()
193
194 def get_service_name(self, endpoint_type, resource_cls):
195 """Returns the name of the service, depending a given type and
196 resource.
197 """
198 return self.service_name.format(
199 resource_name=self.get_name(resource_cls), endpoint_type=endpoint_type
200 )
201
202 def get_service_arguments(self):
203 return {**self.service_arguments}
204
205 def is_endpoint_enabled(self, endpoint_type, resource_name, method, settings):
206 """Returns if the given endpoint is enabled or not.
207
208 Uses the settings to tell so.
209 """
210 readonly_enabled = asbool(settings.get("readonly"))
211 readonly_method = method.lower() in [m.lower() for m in self.readonly_methods]
212 if readonly_enabled and not readonly_method:
213 return False
214
215 setting_enabled = f"{endpoint_type}_{resource_name}_{method.lower()}_enabled"
216 return asbool(settings.get(setting_enabled, True))
217
218
219 class ShareableViewSet(ViewSet):
220 """A ShareableViewSet will register the given resource with a schema
221 that supports permissions.
222
223 The views will rely on dynamic permissions (e.g. create with PUT if
224 record does not exist), and solicit the cliquet RouteFactory.
225 """
226
227 responses = ShareableResourseResponses()
228
229 def get_record_schema(self, resource_cls, method):
230 """Return the Cornice schema for the given method.
231 """
232 record_schema = super(ShareableViewSet, self).get_record_schema(resource_cls, method)
233 allowed_permissions = resource_cls.permissions
234 permissions = PermissionsSchema(
235 name="permissions", missing=colander.drop, permissions=allowed_permissions
236 )
237 record_schema = record_schema.bind(permissions=permissions)
238 return record_schema
239
240 def get_view_arguments(self, endpoint_type, resource_cls, method):
241 args = super().get_view_arguments(endpoint_type, resource_cls, method)
242 args["permission"] = authorization.DYNAMIC
243 return args
244
245 def get_service_arguments(self):
246 args = super().get_service_arguments()
247 args["factory"] = self.factory
248 return args
249
[end of kinto/core/resource/viewset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/core/resource/viewset.py b/kinto/core/resource/viewset.py
--- a/kinto/core/resource/viewset.py
+++ b/kinto/core/resource/viewset.py
@@ -1,5 +1,4 @@
import functools
-import warnings
import colander
from cornice.validators import colander_validator
@@ -156,10 +155,6 @@
resource_schema = SimpleSchema
else:
resource_schema = resource_cls.schema
- if hasattr(resource_cls, "mapping"):
- message = "Resource `mapping` is deprecated, use `schema`"
- warnings.warn(message, DeprecationWarning)
- resource_schema = resource_cls.mapping.__class__
record_schema = RecordSchema().bind(data=resource_schema())
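
As a rough sketch of what the simplified `get_record_schema` now expects from a resource class (a `schema` attribute pointing at a colander mapping, with no `mapping` fallback), here is a hypothetical resource definition. The `Bookmark` and `BookmarkSchema` names and fields are invented for this example and are not part of Kinto.

```python
import colander


class BookmarkSchema(colander.MappingSchema):
    """Hypothetical record schema used only for this illustration."""
    url = colander.SchemaNode(colander.String())
    title = colander.SchemaNode(colander.String(), missing=colander.drop)


class Bookmark:
    # The viewset reads `schema` directly; after the patch a legacy
    # `mapping` attribute is simply ignored instead of being shimmed.
    schema = BookmarkSchema


# get_record_schema() instantiates resource_cls.schema and binds it as
# the `data` child of RecordSchema().
print(isinstance(Bookmark.schema(), colander.MappingSchema))  # True
```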
|
{"golden_diff": "diff --git a/kinto/core/resource/viewset.py b/kinto/core/resource/viewset.py\n--- a/kinto/core/resource/viewset.py\n+++ b/kinto/core/resource/viewset.py\n@@ -1,5 +1,4 @@\n import functools\n-import warnings\n \n import colander\n from cornice.validators import colander_validator\n@@ -156,10 +155,6 @@\n resource_schema = SimpleSchema\n else:\n resource_schema = resource_cls.schema\n- if hasattr(resource_cls, \"mapping\"):\n- message = \"Resource `mapping` is deprecated, use `schema`\"\n- warnings.warn(message, DeprecationWarning)\n- resource_schema = resource_cls.mapping.__class__\n \n record_schema = RecordSchema().bind(data=resource_schema())\n", "issue": "Remove deprecation warning: Resource `mapping` is deprecated, use `schema`\n```\r\ntests/core/resource/test_viewset.py::ViewSetTest::test_a_default_schema_is_added_when_method_doesnt_match\r\n /home/mathieu/Code/Mozilla/kinto/kinto/core/resource/viewset.py:160: DeprecationWarning: Resource `mapping` is deprecated, use `schema`\r\n warnings.warn(message, DeprecationWarning)\r\n```\r\n\n", "before_files": [{"content": "import functools\nimport warnings\n\nimport colander\nfrom cornice.validators import colander_validator\nfrom pyramid.settings import asbool\n\nfrom kinto.core import authorization\n\nfrom .schema import (\n PermissionsSchema,\n RequestSchema,\n PayloadRequestSchema,\n PatchHeaderSchema,\n CollectionQuerySchema,\n CollectionGetQuerySchema,\n RecordGetQuerySchema,\n RecordSchema,\n ResourceReponses,\n ShareableResourseResponses,\n)\n\n\nCONTENT_TYPES = [\"application/json\"]\n\nPATCH_CONTENT_TYPES = [\"application/merge-patch+json\"]\n\n\nclass StrictSchema(colander.MappingSchema):\n @staticmethod\n def schema_type():\n return colander.Mapping(unknown=\"raise\")\n\n\nclass PartialSchema(colander.MappingSchema):\n @staticmethod\n def schema_type():\n return colander.Mapping(unknown=\"ignore\")\n\n\nclass SimpleSchema(colander.MappingSchema):\n @staticmethod\n def schema_type():\n return colander.Mapping(unknown=\"preserve\")\n\n\nclass ViewSet:\n \"\"\"The default ViewSet object.\n\n A viewset contains all the information needed to register\n any resource in the Cornice registry.\n\n It provides the same features as ``cornice.resource()``, except\n that it is much more flexible and extensible.\n \"\"\"\n\n service_name = \"{resource_name}-{endpoint_type}\"\n collection_path = \"/{resource_name}s\"\n record_path = \"/{resource_name}s/{{id}}\"\n\n collection_methods = (\"GET\", \"POST\", \"DELETE\")\n record_methods = (\"GET\", \"PUT\", \"PATCH\", \"DELETE\")\n\n readonly_methods = (\"GET\", \"OPTIONS\", \"HEAD\")\n\n factory = authorization.RouteFactory\n\n responses = ResourceReponses()\n\n service_arguments = {\"description\": \"Collection of {resource_name}\"}\n\n default_arguments = {\n \"permission\": authorization.PRIVATE,\n \"accept\": CONTENT_TYPES,\n \"schema\": RequestSchema(),\n }\n\n default_post_arguments = {\"content_type\": CONTENT_TYPES, \"schema\": PayloadRequestSchema()}\n\n default_put_arguments = {\"content_type\": CONTENT_TYPES, \"schema\": PayloadRequestSchema()}\n\n default_patch_arguments = {\n \"content_type\": CONTENT_TYPES + PATCH_CONTENT_TYPES,\n \"schema\": PayloadRequestSchema().bind(header=PatchHeaderSchema()),\n }\n\n default_collection_arguments = {\n \"schema\": RequestSchema().bind(querystring=CollectionQuerySchema())\n }\n collection_get_arguments = {\n \"schema\": RequestSchema().bind(querystring=CollectionGetQuerySchema()),\n \"cors_headers\": (\n \"Next-Page\",\n 
\"Total-Records\",\n \"Last-Modified\",\n \"ETag\",\n \"Cache-Control\",\n \"Expires\",\n \"Pragma\",\n ),\n }\n collection_post_arguments = {\"schema\": PayloadRequestSchema()}\n default_record_arguments = {}\n record_get_arguments = {\n \"schema\": RequestSchema().bind(querystring=RecordGetQuerySchema()),\n \"cors_headers\": (\"Last-Modified\", \"ETag\", \"Cache-Control\", \"Expires\", \"Pragma\"),\n }\n\n def __init__(self, **kwargs):\n self.update(**kwargs)\n self.record_arguments = functools.partial(self.get_view_arguments, \"record\")\n self.collection_arguments = functools.partial(self.get_view_arguments, \"collection\")\n\n def update(self, **kwargs):\n \"\"\"Update viewset attributes with provided values.\"\"\"\n self.__dict__.update(**kwargs)\n\n def get_view_arguments(self, endpoint_type, resource_cls, method):\n \"\"\"Return the Pyramid/Cornice view arguments for the given endpoint\n type and method.\n\n :param str endpoint_type: either \"collection\" or \"record\".\n :param resource_cls: the resource class.\n :param str method: the HTTP method.\n \"\"\"\n args = {**self.default_arguments}\n default_arguments = getattr(self, f\"default_{endpoint_type}_arguments\")\n args.update(**default_arguments)\n\n by_http_verb = f\"default_{method.lower()}_arguments\"\n method_args = getattr(self, by_http_verb, {})\n args.update(**method_args)\n\n by_method = f\"{endpoint_type}_{method.lower()}_arguments\"\n endpoint_args = getattr(self, by_method, {})\n args.update(**endpoint_args)\n\n request_schema = args.get(\"schema\", RequestSchema())\n record_schema = self.get_record_schema(resource_cls, method)\n request_schema = request_schema.bind(body=record_schema)\n response_schemas = self.responses.get_and_bind(endpoint_type, method, record=record_schema)\n\n args[\"schema\"] = request_schema\n args[\"response_schemas\"] = response_schemas\n\n validators = args.get(\"validators\", [])\n validators.append(colander_validator)\n args[\"validators\"] = validators\n\n return args\n\n def get_record_schema(self, resource_cls, method):\n \"\"\"Return the Cornice schema for the given method.\n \"\"\"\n if method.lower() in (\"patch\", \"delete\"):\n resource_schema = SimpleSchema\n else:\n resource_schema = resource_cls.schema\n if hasattr(resource_cls, \"mapping\"):\n message = \"Resource `mapping` is deprecated, use `schema`\"\n warnings.warn(message, DeprecationWarning)\n resource_schema = resource_cls.mapping.__class__\n\n record_schema = RecordSchema().bind(data=resource_schema())\n\n return record_schema\n\n def get_view(self, endpoint_type, method):\n \"\"\"Return the view method name located on the resource object, for the\n given type and method.\n\n * For collections, this will be \"collection_{method|lower}\n * For records, this will be \"{method|lower}.\n \"\"\"\n if endpoint_type == \"record\":\n return method.lower()\n return f\"{endpoint_type}_{method.lower()}\"\n\n def get_name(self, resource_cls):\n \"\"\"Returns the name of the resource.\n \"\"\"\n # Provided on viewset during registration.\n if \"name\" in self.__dict__:\n return self.__dict__[\"name\"]\n\n # Attribute on resource class (but not @property)\n has_class_attr = hasattr(resource_cls, \"name\") and not callable(resource_cls.name)\n if has_class_attr:\n return resource_cls.name\n\n # Use classname\n return resource_cls.__name__.lower()\n\n def get_service_name(self, endpoint_type, resource_cls):\n \"\"\"Returns the name of the service, depending a given type and\n resource.\n \"\"\"\n return self.service_name.format(\n 
resource_name=self.get_name(resource_cls), endpoint_type=endpoint_type\n )\n\n def get_service_arguments(self):\n return {**self.service_arguments}\n\n def is_endpoint_enabled(self, endpoint_type, resource_name, method, settings):\n \"\"\"Returns if the given endpoint is enabled or not.\n\n Uses the settings to tell so.\n \"\"\"\n readonly_enabled = asbool(settings.get(\"readonly\"))\n readonly_method = method.lower() in [m.lower() for m in self.readonly_methods]\n if readonly_enabled and not readonly_method:\n return False\n\n setting_enabled = f\"{endpoint_type}_{resource_name}_{method.lower()}_enabled\"\n return asbool(settings.get(setting_enabled, True))\n\n\nclass ShareableViewSet(ViewSet):\n \"\"\"A ShareableViewSet will register the given resource with a schema\n that supports permissions.\n\n The views will rely on dynamic permissions (e.g. create with PUT if\n record does not exist), and solicit the cliquet RouteFactory.\n \"\"\"\n\n responses = ShareableResourseResponses()\n\n def get_record_schema(self, resource_cls, method):\n \"\"\"Return the Cornice schema for the given method.\n \"\"\"\n record_schema = super(ShareableViewSet, self).get_record_schema(resource_cls, method)\n allowed_permissions = resource_cls.permissions\n permissions = PermissionsSchema(\n name=\"permissions\", missing=colander.drop, permissions=allowed_permissions\n )\n record_schema = record_schema.bind(permissions=permissions)\n return record_schema\n\n def get_view_arguments(self, endpoint_type, resource_cls, method):\n args = super().get_view_arguments(endpoint_type, resource_cls, method)\n args[\"permission\"] = authorization.DYNAMIC\n return args\n\n def get_service_arguments(self):\n args = super().get_service_arguments()\n args[\"factory\"] = self.factory\n return args\n", "path": "kinto/core/resource/viewset.py"}]}
| 3,026 | 165 |
gh_patches_debug_21692
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-4303
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Beta update assert error in dynamic rcnn
When beta is updated, the new beta is set to the median of `beta_history`. That median can be 0.0, but the beta used in the loss function must be greater than 0, so an assertion error occurs. The relevant line:
https://github.com/open-mmlab/mmdetection/blob/5d07dda8719b0c67c11ad015410bc111e9f90200/mmdet/models/roi_heads/dynamic_roi_head.py#L144
The code to fix it could be as follows:
```python
if np.abs(np.median(self.beta_history)) < 1e-15:
    new_beta = self.bbox_head.loss_bbox.beta
else:
    new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
                   np.median(self.beta_history))
```
I am willing to create a PR to fix it as well.
</issue>
<code>
[start of mmdet/models/roi_heads/dynamic_roi_head.py]
1 import numpy as np
2 import torch
3
4 from mmdet.core import bbox2roi
5 from mmdet.models.losses import SmoothL1Loss
6 from ..builder import HEADS
7 from .standard_roi_head import StandardRoIHead
8
9
10 @HEADS.register_module()
11 class DynamicRoIHead(StandardRoIHead):
12 """RoI head for `Dynamic R-CNN <https://arxiv.org/abs/2004.06002>`_."""
13
14 def __init__(self, **kwargs):
15 super(DynamicRoIHead, self).__init__(**kwargs)
16 assert isinstance(self.bbox_head.loss_bbox, SmoothL1Loss)
17 # the IoU history of the past `update_iter_interval` iterations
18 self.iou_history = []
19 # the beta history of the past `update_iter_interval` iterations
20 self.beta_history = []
21
22 def forward_train(self,
23 x,
24 img_metas,
25 proposal_list,
26 gt_bboxes,
27 gt_labels,
28 gt_bboxes_ignore=None,
29 gt_masks=None):
30 """Forward function for training.
31
32 Args:
33 x (list[Tensor]): list of multi-level img features.
34
35 img_metas (list[dict]): list of image info dict where each dict
36 has: 'img_shape', 'scale_factor', 'flip', and may also contain
37 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
38 For details on the values of these keys see
39 `mmdet/datasets/pipelines/formatting.py:Collect`.
40
41 proposals (list[Tensors]): list of region proposals.
42
43 gt_bboxes (list[Tensor]): each item are the truth boxes for each
44 image in [tl_x, tl_y, br_x, br_y] format.
45
46 gt_labels (list[Tensor]): class indices corresponding to each box
47
48 gt_bboxes_ignore (None | list[Tensor]): specify which bounding
49 boxes can be ignored when computing the loss.
50
51 gt_masks (None | Tensor) : true segmentation masks for each box
52 used if the architecture supports a segmentation task.
53
54 Returns:
55 dict[str, Tensor]: a dictionary of loss components
56 """
57 # assign gts and sample proposals
58 if self.with_bbox or self.with_mask:
59 num_imgs = len(img_metas)
60 if gt_bboxes_ignore is None:
61 gt_bboxes_ignore = [None for _ in range(num_imgs)]
62 sampling_results = []
63 cur_iou = []
64 for i in range(num_imgs):
65 assign_result = self.bbox_assigner.assign(
66 proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i],
67 gt_labels[i])
68 sampling_result = self.bbox_sampler.sample(
69 assign_result,
70 proposal_list[i],
71 gt_bboxes[i],
72 gt_labels[i],
73 feats=[lvl_feat[i][None] for lvl_feat in x])
74 # record the `iou_topk`-th largest IoU in an image
75 iou_topk = min(self.train_cfg.dynamic_rcnn.iou_topk,
76 len(assign_result.max_overlaps))
77 ious, _ = torch.topk(assign_result.max_overlaps, iou_topk)
78 cur_iou.append(ious[-1].item())
79 sampling_results.append(sampling_result)
80 # average the current IoUs over images
81 cur_iou = np.mean(cur_iou)
82 self.iou_history.append(cur_iou)
83
84 losses = dict()
85 # bbox head forward and loss
86 if self.with_bbox:
87 bbox_results = self._bbox_forward_train(x, sampling_results,
88 gt_bboxes, gt_labels,
89 img_metas)
90 losses.update(bbox_results['loss_bbox'])
91
92 # mask head forward and loss
93 if self.with_mask:
94 mask_results = self._mask_forward_train(x, sampling_results,
95 bbox_results['bbox_feats'],
96 gt_masks, img_metas)
97 losses.update(mask_results['loss_mask'])
98
99 # update IoU threshold and SmoothL1 beta
100 update_iter_interval = self.train_cfg.dynamic_rcnn.update_iter_interval
101 if len(self.iou_history) % update_iter_interval == 0:
102 new_iou_thr, new_beta = self.update_hyperparameters()
103
104 return losses
105
106 def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels,
107 img_metas):
108 num_imgs = len(img_metas)
109 rois = bbox2roi([res.bboxes for res in sampling_results])
110 bbox_results = self._bbox_forward(x, rois)
111
112 bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes,
113 gt_labels, self.train_cfg)
114 # record the `beta_topk`-th smallest target
115 # `bbox_targets[2]` and `bbox_targets[3]` stand for bbox_targets
116 # and bbox_weights, respectively
117 pos_inds = bbox_targets[3][:, 0].nonzero().squeeze(1)
118 num_pos = len(pos_inds)
119 cur_target = bbox_targets[2][pos_inds, :2].abs().mean(dim=1)
120 beta_topk = min(self.train_cfg.dynamic_rcnn.beta_topk * num_imgs,
121 num_pos)
122 cur_target = torch.kthvalue(cur_target, beta_topk)[0].item()
123 self.beta_history.append(cur_target)
124 loss_bbox = self.bbox_head.loss(bbox_results['cls_score'],
125 bbox_results['bbox_pred'], rois,
126 *bbox_targets)
127
128 bbox_results.update(loss_bbox=loss_bbox)
129 return bbox_results
130
131 def update_hyperparameters(self):
132 """Update hyperparameters like IoU thresholds for assigner and beta for
133 SmoothL1 loss based on the training statistics.
134
135 Returns:
136 tuple[float]: the updated ``iou_thr`` and ``beta``.
137 """
138 new_iou_thr = max(self.train_cfg.dynamic_rcnn.initial_iou,
139 np.mean(self.iou_history))
140 self.iou_history = []
141 self.bbox_assigner.pos_iou_thr = new_iou_thr
142 self.bbox_assigner.neg_iou_thr = new_iou_thr
143 self.bbox_assigner.min_pos_iou = new_iou_thr
144 new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
145 np.median(self.beta_history))
146 self.beta_history = []
147 self.bbox_head.loss_bbox.beta = new_beta
148 return new_iou_thr, new_beta
149
[end of mmdet/models/roi_heads/dynamic_roi_head.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmdet/models/roi_heads/dynamic_roi_head.py b/mmdet/models/roi_heads/dynamic_roi_head.py
--- a/mmdet/models/roi_heads/dynamic_roi_head.py
+++ b/mmdet/models/roi_heads/dynamic_roi_head.py
@@ -6,6 +6,8 @@
from ..builder import HEADS
from .standard_roi_head import StandardRoIHead
+EPS = 1e-15
+
@HEADS.register_module()
class DynamicRoIHead(StandardRoIHead):
@@ -141,8 +143,12 @@
self.bbox_assigner.pos_iou_thr = new_iou_thr
self.bbox_assigner.neg_iou_thr = new_iou_thr
self.bbox_assigner.min_pos_iou = new_iou_thr
- new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
- np.median(self.beta_history))
+ if (np.median(self.beta_history) < EPS):
+ # avoid 0 or too small value for new_beta
+ new_beta = self.bbox_head.loss_bbox.beta
+ else:
+ new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
+ np.median(self.beta_history))
self.beta_history = []
self.bbox_head.loss_bbox.beta = new_beta
return new_iou_thr, new_beta
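
A small self-contained sketch of the corner case the added `EPS` guard covers: when more than half of the recorded regression targets are zero, `np.median` returns 0.0, which would violate the requirement that the SmoothL1 beta be greater than 0. The history values and the two stand-in variables below are fabricated for illustration.

```python
import numpy as np

EPS = 1e-15
initial_beta = 1.0   # stands in for train_cfg.dynamic_rcnn.initial_beta
current_beta = 0.5   # stands in for bbox_head.loss_bbox.beta

beta_history = [0.0, 0.0, 0.0, 0.02, 0.03]  # mostly-zero targets
median = np.median(beta_history)            # -> 0.0

if median < EPS:
    # Keep the previous beta rather than setting it to 0.0.
    new_beta = current_beta
else:
    new_beta = min(initial_beta, median)

print(median, new_beta)  # 0.0 0.5
```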
|
{"golden_diff": "diff --git a/mmdet/models/roi_heads/dynamic_roi_head.py b/mmdet/models/roi_heads/dynamic_roi_head.py\n--- a/mmdet/models/roi_heads/dynamic_roi_head.py\n+++ b/mmdet/models/roi_heads/dynamic_roi_head.py\n@@ -6,6 +6,8 @@\n from ..builder import HEADS\n from .standard_roi_head import StandardRoIHead\n \n+EPS = 1e-15\n+\n \n @HEADS.register_module()\n class DynamicRoIHead(StandardRoIHead):\n@@ -141,8 +143,12 @@\n self.bbox_assigner.pos_iou_thr = new_iou_thr\n self.bbox_assigner.neg_iou_thr = new_iou_thr\n self.bbox_assigner.min_pos_iou = new_iou_thr\n- new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\n- np.median(self.beta_history))\n+ if (np.median(self.beta_history) < EPS):\n+ # avoid 0 or too small value for new_beta\n+ new_beta = self.bbox_head.loss_bbox.beta\n+ else:\n+ new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\n+ np.median(self.beta_history))\n self.beta_history = []\n self.bbox_head.loss_bbox.beta = new_beta\n return new_iou_thr, new_beta\n", "issue": "Beta update assert error in dynamic rcnn\nWhen the beta updates, new beta will be the median of the beta_history. It might be 0.0 but the beta used in loss function should be greater than 0.\r\nThe assert error occurred.\r\n[https://github.com/open-mmlab/mmdetection/blob/5d07dda8719b0c67c11ad015410bc111e9f90200/mmdet/models/roi_heads/dynamic_roi_head.py#L144](url)\r\nThe code to fix it could be as follows:\r\n`if np.abs(np.median(self.beta_history)) < 1e-15:\r\n new_beta = self.bbox_head.loss_bbox.beta\r\n else:\r\n new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\r\n np.median(self.beta_history))`\r\nI am willing to create a PR to fix it as well.\nBeta update assert error in dynamic rcnn\nWhen the beta updates, new beta will be the median of the beta_history. 
It might be 0.0 but the beta used in loss function should be greater than 0.\r\nThe assert error occurred.\r\n[https://github.com/open-mmlab/mmdetection/blob/5d07dda8719b0c67c11ad015410bc111e9f90200/mmdet/models/roi_heads/dynamic_roi_head.py#L144](url)\r\nThe code to fix it could be as follows:\r\n`if np.abs(np.median(self.beta_history)) < 1e-15:\r\n new_beta = self.bbox_head.loss_bbox.beta\r\n else:\r\n new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\r\n np.median(self.beta_history))`\r\nI am willing to create a PR to fix it as well.\n", "before_files": [{"content": "import numpy as np\nimport torch\n\nfrom mmdet.core import bbox2roi\nfrom mmdet.models.losses import SmoothL1Loss\nfrom ..builder import HEADS\nfrom .standard_roi_head import StandardRoIHead\n\n\[email protected]_module()\nclass DynamicRoIHead(StandardRoIHead):\n \"\"\"RoI head for `Dynamic R-CNN <https://arxiv.org/abs/2004.06002>`_.\"\"\"\n\n def __init__(self, **kwargs):\n super(DynamicRoIHead, self).__init__(**kwargs)\n assert isinstance(self.bbox_head.loss_bbox, SmoothL1Loss)\n # the IoU history of the past `update_iter_interval` iterations\n self.iou_history = []\n # the beta history of the past `update_iter_interval` iterations\n self.beta_history = []\n\n def forward_train(self,\n x,\n img_metas,\n proposal_list,\n gt_bboxes,\n gt_labels,\n gt_bboxes_ignore=None,\n gt_masks=None):\n \"\"\"Forward function for training.\n\n Args:\n x (list[Tensor]): list of multi-level img features.\n\n img_metas (list[dict]): list of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n `mmdet/datasets/pipelines/formatting.py:Collect`.\n\n proposals (list[Tensors]): list of region proposals.\n\n gt_bboxes (list[Tensor]): each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n\n gt_labels (list[Tensor]): class indices corresponding to each box\n\n gt_bboxes_ignore (None | list[Tensor]): specify which bounding\n boxes can be ignored when computing the loss.\n\n gt_masks (None | Tensor) : true segmentation masks for each box\n used if the architecture supports a segmentation task.\n\n Returns:\n dict[str, Tensor]: a dictionary of loss components\n \"\"\"\n # assign gts and sample proposals\n if self.with_bbox or self.with_mask:\n num_imgs = len(img_metas)\n if gt_bboxes_ignore is None:\n gt_bboxes_ignore = [None for _ in range(num_imgs)]\n sampling_results = []\n cur_iou = []\n for i in range(num_imgs):\n assign_result = self.bbox_assigner.assign(\n proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i],\n gt_labels[i])\n sampling_result = self.bbox_sampler.sample(\n assign_result,\n proposal_list[i],\n gt_bboxes[i],\n gt_labels[i],\n feats=[lvl_feat[i][None] for lvl_feat in x])\n # record the `iou_topk`-th largest IoU in an image\n iou_topk = min(self.train_cfg.dynamic_rcnn.iou_topk,\n len(assign_result.max_overlaps))\n ious, _ = torch.topk(assign_result.max_overlaps, iou_topk)\n cur_iou.append(ious[-1].item())\n sampling_results.append(sampling_result)\n # average the current IoUs over images\n cur_iou = np.mean(cur_iou)\n self.iou_history.append(cur_iou)\n\n losses = dict()\n # bbox head forward and loss\n if self.with_bbox:\n bbox_results = self._bbox_forward_train(x, sampling_results,\n gt_bboxes, gt_labels,\n img_metas)\n losses.update(bbox_results['loss_bbox'])\n\n # mask head forward and loss\n if self.with_mask:\n mask_results = 
self._mask_forward_train(x, sampling_results,\n bbox_results['bbox_feats'],\n gt_masks, img_metas)\n losses.update(mask_results['loss_mask'])\n\n # update IoU threshold and SmoothL1 beta\n update_iter_interval = self.train_cfg.dynamic_rcnn.update_iter_interval\n if len(self.iou_history) % update_iter_interval == 0:\n new_iou_thr, new_beta = self.update_hyperparameters()\n\n return losses\n\n def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels,\n img_metas):\n num_imgs = len(img_metas)\n rois = bbox2roi([res.bboxes for res in sampling_results])\n bbox_results = self._bbox_forward(x, rois)\n\n bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes,\n gt_labels, self.train_cfg)\n # record the `beta_topk`-th smallest target\n # `bbox_targets[2]` and `bbox_targets[3]` stand for bbox_targets\n # and bbox_weights, respectively\n pos_inds = bbox_targets[3][:, 0].nonzero().squeeze(1)\n num_pos = len(pos_inds)\n cur_target = bbox_targets[2][pos_inds, :2].abs().mean(dim=1)\n beta_topk = min(self.train_cfg.dynamic_rcnn.beta_topk * num_imgs,\n num_pos)\n cur_target = torch.kthvalue(cur_target, beta_topk)[0].item()\n self.beta_history.append(cur_target)\n loss_bbox = self.bbox_head.loss(bbox_results['cls_score'],\n bbox_results['bbox_pred'], rois,\n *bbox_targets)\n\n bbox_results.update(loss_bbox=loss_bbox)\n return bbox_results\n\n def update_hyperparameters(self):\n \"\"\"Update hyperparameters like IoU thresholds for assigner and beta for\n SmoothL1 loss based on the training statistics.\n\n Returns:\n tuple[float]: the updated ``iou_thr`` and ``beta``.\n \"\"\"\n new_iou_thr = max(self.train_cfg.dynamic_rcnn.initial_iou,\n np.mean(self.iou_history))\n self.iou_history = []\n self.bbox_assigner.pos_iou_thr = new_iou_thr\n self.bbox_assigner.neg_iou_thr = new_iou_thr\n self.bbox_assigner.min_pos_iou = new_iou_thr\n new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\n np.median(self.beta_history))\n self.beta_history = []\n self.bbox_head.loss_bbox.beta = new_beta\n return new_iou_thr, new_beta\n", "path": "mmdet/models/roi_heads/dynamic_roi_head.py"}]}
| 2,660 | 298 |
gh_patches_debug_31349
|
rasdani/github-patches
|
git_diff
|
modoboa__modoboa-1823
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dnstools dkim TXT record evaluation fix
Description of the issue/feature this PR addresses:
The DKIM button was red even though my TXT record was set up correctly. I looked into the issue and found that the evaluation of the TXT record is not correct. According to the DNS standard, a TXT record longer than 255 characters has to be split into several strings: https://help.directadmin.com/item.php?id=552
Current behavior before PR:
I just merge the split parts back together.
Desired behavior after PR is merged:
The DKIM button is now green as it should be.
</issue>
<code>
[start of modoboa/dnstools/lib.py]
1 """dnstools library."""
2
3 import ipaddress
4
5 from django.utils.translation import ugettext as _
6
7 from modoboa.admin import lib as admin_lib
8
9 from . import constants
10
11
12 def get_spf_record(domain):
13 """Return SPF record for domain (if any)."""
14 records = admin_lib.get_dns_records(domain, "TXT")
15 if records is None:
16 return None
17 for record in records:
18 value = str(record).strip('"')
19 if value.startswith("v=spf1"):
20 return value
21 return None
22
23
24 def get_dkim_record(domain, selector):
25 """Return DKIM records form domain (if any)."""
26 name = "{}._domainkey.{}".format(selector, domain)
27 records = admin_lib.get_dns_records(name, "TXT")
28 if records is None:
29 return None
30 for record in records:
31 value = str(record).strip('"')
32 if value.startswith("v=DKIM1"):
33 return value
34 return None
35
36
37 def get_dmarc_record(domain):
38 """Return DMARC record for domain (if any)."""
39 name = "_dmarc.{}".format(domain)
40 records = admin_lib.get_dns_records(name, "TXT")
41 if records is None:
42 return None
43 for record in records:
44 value = str(record).strip('"')
45 if value.startswith("v=DMARC1"):
46 return value
47 return None
48
49
50 def _get_simple_record(name):
51 """We just want to know if name is declared."""
52 for rdtype in ["A", "CNAME", "AAAA"]:
53 records = admin_lib.get_dns_records(name, rdtype)
54 if records is not None:
55 break
56 else:
57 return None
58 for record in records:
59 value = str(record).strip('"')
60 break
61 return value
62
63
64 def get_autoconfig_record(domain):
65 """Return autoconfig record for domain (if any)."""
66 return _get_simple_record("autoconfig.{}".format(domain))
67
68
69 def get_autodiscover_record(domain):
70 """Return autodiscover record for domain (if any)."""
71 return _get_simple_record("autodiscover.{}".format(domain))
72
73
74 class DNSSyntaxError(Exception):
75 """Custom exception for DNS errors."""
76
77 pass
78
79
80 def check_spf_ip4(value):
81 """Check syntax of ip4 mechanism."""
82 parts = value.split(":")
83 if len(parts) != 2:
84 raise DNSSyntaxError(_("Wrong ip4 mechanism syntax"))
85 try:
86 ipaddress.ip_network(parts[1], False)
87 except ValueError:
88 raise DNSSyntaxError(_("Wrong IPv4 address format"))
89
90
91 def check_spf_ip6(value):
92 """Check syntax of ip6 mechanism."""
93 if not value.startswith("ip6:"):
94 raise DNSSyntaxError(_("Wrong ip6 mechanism syntax"))
95 value = value.replace("ip6:", "")
96 try:
97 ipaddress.ip_network(value, False)
98 except ValueError:
99 raise DNSSyntaxError(_("Wrong IPv6 address format"))
100
101
102 def _check_domain_and_mask(value, mechanism):
103 """Check for valid domain / mask."""
104 domain = None
105 mask = None
106 if ":" in value:
107 mechanism, domain = value.split(":")
108 if "/" in domain:
109 domain, mask = domain.split("/")
110 elif "/" in value:
111 mechanism, mask = value.split("/")
112 else:
113 raise DNSSyntaxError(
114 _("Invalid syntax for {} mechanism").format(mechanism))
115 if mask and (not mask.isdigit() or int(mask) > 32):
116 raise DNSSyntaxError(_("Invalid mask found {}").format(mask))
117
118
119 def check_spf_a(value):
120 """Check syntax of a mechanism."""
121 if value == "a":
122 return
123 _check_domain_and_mask(value, "a")
124
125
126 def check_spf_mx(value):
127 """Check syntax of mx mechanism."""
128 if value == "mx":
129 return
130 _check_domain_and_mask(value, "mx")
131
132
133 def _check_simple(value, mechanism):
134 """Simple check."""
135 if value == mechanism:
136 return
137 parts = value.split(":")
138 if len(parts) != 2:
139 raise DNSSyntaxError(
140 _("Invalid syntax for {} mechanism").format(mechanism))
141
142
143 def check_spf_ptr(value):
144 """Check syntax of ptr mechanism."""
145 _check_simple(value, "ptr")
146
147
148 def check_spf_exists(value):
149 """Check syntax of ptr mechanism."""
150 _check_simple(value, "exists")
151
152
153 def check_spf_include(value):
154 """Check syntax of include mechanism."""
155 _check_simple(value, "include")
156
157
158 def check_spf_syntax(record):
159 """Check if record has a valid SPF syntax."""
160 if not record.startswith("v=spf1"):
161 raise DNSSyntaxError(_("Not an SPF record"))
162 parts = record.split(" ")[1:]
163 modifiers = []
164 mechanisms = []
165 for part in parts:
166 if part == "":
167 continue
168 qualifier = None
169 if part[0] in ["+", "-", "~", "?"]:
170 qualifier = part[0]
171 part = part[1:]
172 if part == "all":
173 continue
174 for mechanism in constants.SPF_MECHANISMS:
175 if part.startswith(mechanism):
176 globals()["check_spf_{}".format(mechanism)](part)
177 mechanisms.append(mechanism)
178 break
179 else:
180 # Look for modifier
181 modifier = part.split("=")
182 if len(modifier) != 2:
183 raise DNSSyntaxError(_("Unknown mechanism {}").format(part))
184 if modifier[0] not in ["redirect", "exp"]:
185 raise DNSSyntaxError(_("Unknown modifier {}").format(
186 modifier[0]))
187 if modifier[0] in modifiers:
188 raise DNSSyntaxError(_("Duplicate modifier {} found").format(
189 modifier[0]))
190 modifiers.append(modifier[0])
191 if not len(mechanisms) and not len(modifiers):
192 raise DNSSyntaxError(_("No mechanism found"))
193 return None
194
195
196 def check_dkim_syntax(record):
197 """Check if record has a valid DKIM syntax."""
198 if not record.startswith("v=DKIM1"):
199 raise DNSSyntaxError(_("Not a valid DKIM record"))
200 key = None
201 for tag in record.split(";")[1:]:
202 tag = tag.strip(" ")
203 if tag == "":
204 continue
205 parts = tag.split("=", 1)
206 if len(parts) != 2:
207 raise DNSSyntaxError(_("Invalid tag {}").format(tag))
208 name = parts[0].strip(" ")
209 if name == "p":
210 key = "".join(part.strip('"') for part in parts[1].split(" "))
211 if key is None:
212 raise DNSSyntaxError(_("No key found in record"))
213 return key
214
215
216 def check_dmarc_tag_string_value(tag, value):
217 """Check if value is valid for tag."""
218 tdef = constants.DMARC_TAGS[tag]
219 error = _("Wrong value {} for tag {}").format(value, tag)
220 if "values" in tdef and value not in tdef["values"]:
221 raise DNSSyntaxError(error)
222 elif "regex" in tdef and tdef["regex"].match(value) is None:
223 raise DNSSyntaxError(error)
224
225
226 def check_dmarc_tag(tag, value):
227 """Check if tag is valid."""
228 tdef = constants.DMARC_TAGS[tag]
229 ttype = tdef.get("type", "string")
230 if ttype == "list":
231 for svalue in value.split(","):
232 check_dmarc_tag_string_value(tag, svalue)
233 elif ttype == "int":
234 error = _("Wrong value {} for tag {}:").format(value, tag)
235 try:
236 value = int(value)
237 except ValueError:
238 raise DNSSyntaxError(error + _(" not an integer"))
239 if "min_value" in tdef and value < tdef["min_value"]:
240 raise DNSSyntaxError(
241 error + _(" less than {}").format(tdef["min_value"]))
242 if "max_value" in tdef and value > tdef["max_value"]:
243 raise DNSSyntaxError(
244 error + _(" greater than {}").format(tdef["max_value"]))
245 else:
246 check_dmarc_tag_string_value(tag, value)
247
248
249 def check_dmarc_syntax(record):
250 """Check if record has a valid DMARC syntax."""
251 if not record.startswith("v=DMARC1"):
252 raise DNSSyntaxError(_("Not a valid DMARC record"))
253 tags = {}
254 for tag in record.split(";")[1:]:
255 if tag == "":
256 continue
257 tag = tag.strip(" ")
258 parts = tag.split("=")
259 if len(parts) != 2:
260 raise DNSSyntaxError(_("Invalid tag {}").format(tag))
261 name = parts[0].strip(" ")
262 if name not in constants.DMARC_TAGS:
263 raise DNSSyntaxError(_("Unknown tag {}").format(name))
264 value = parts[1].strip(" ")
265 check_dmarc_tag(name, value)
266 tags[name] = value
267 if "p" not in tags:
268 raise DNSSyntaxError(_("Missing required p tag"))
269
[end of modoboa/dnstools/lib.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modoboa/dnstools/lib.py b/modoboa/dnstools/lib.py
--- a/modoboa/dnstools/lib.py
+++ b/modoboa/dnstools/lib.py
@@ -9,42 +9,36 @@
from . import constants
-def get_spf_record(domain):
- """Return SPF record for domain (if any)."""
- records = admin_lib.get_dns_records(domain, "TXT")
+def _get_record_type_value(records, rr_type):
if records is None:
return None
for record in records:
- value = str(record).strip('"')
- if value.startswith("v=spf1"):
+ # Multiple strings are separated by a space as described in:
+ # https://tools.ietf.org/html/rfc4408#section-3.1.3
+ value = str(record).replace('" "', '').strip('"')
+ if value.startswith('v={}'.format(rr_type)):
return value
return None
+def get_spf_record(domain):
+ """Return SPF record for domain (if any)."""
+ records = admin_lib.get_dns_records(domain, "TXT")
+ return _get_record_type_value(records, 'spf1')
+
+
def get_dkim_record(domain, selector):
"""Return DKIM records form domain (if any)."""
name = "{}._domainkey.{}".format(selector, domain)
records = admin_lib.get_dns_records(name, "TXT")
- if records is None:
- return None
- for record in records:
- value = str(record).strip('"')
- if value.startswith("v=DKIM1"):
- return value
- return None
+ return _get_record_type_value(records, 'DKIM1')
def get_dmarc_record(domain):
"""Return DMARC record for domain (if any)."""
name = "_dmarc.{}".format(domain)
records = admin_lib.get_dns_records(name, "TXT")
- if records is None:
- return None
- for record in records:
- value = str(record).strip('"')
- if value.startswith("v=DMARC1"):
- return value
- return None
+ return _get_record_type_value(records, 'DMARC1')
def _get_simple_record(name):
|
{"golden_diff": "diff --git a/modoboa/dnstools/lib.py b/modoboa/dnstools/lib.py\n--- a/modoboa/dnstools/lib.py\n+++ b/modoboa/dnstools/lib.py\n@@ -9,42 +9,36 @@\n from . import constants\n \n \n-def get_spf_record(domain):\n- \"\"\"Return SPF record for domain (if any).\"\"\"\n- records = admin_lib.get_dns_records(domain, \"TXT\")\n+def _get_record_type_value(records, rr_type):\n if records is None:\n return None\n for record in records:\n- value = str(record).strip('\"')\n- if value.startswith(\"v=spf1\"):\n+ # Multiple strings are separated by a space as described in:\n+ # https://tools.ietf.org/html/rfc4408#section-3.1.3\n+ value = str(record).replace('\" \"', '').strip('\"')\n+ if value.startswith('v={}'.format(rr_type)):\n return value\n return None\n \n \n+def get_spf_record(domain):\n+ \"\"\"Return SPF record for domain (if any).\"\"\"\n+ records = admin_lib.get_dns_records(domain, \"TXT\")\n+ return _get_record_type_value(records, 'spf1')\n+\n+\n def get_dkim_record(domain, selector):\n \"\"\"Return DKIM records form domain (if any).\"\"\"\n name = \"{}._domainkey.{}\".format(selector, domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n- if records is None:\n- return None\n- for record in records:\n- value = str(record).strip('\"')\n- if value.startswith(\"v=DKIM1\"):\n- return value\n- return None\n+ return _get_record_type_value(records, 'DKIM1')\n \n \n def get_dmarc_record(domain):\n \"\"\"Return DMARC record for domain (if any).\"\"\"\n name = \"_dmarc.{}\".format(domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n- if records is None:\n- return None\n- for record in records:\n- value = str(record).strip('\"')\n- if value.startswith(\"v=DMARC1\"):\n- return value\n- return None\n+ return _get_record_type_value(records, 'DMARC1')\n \n \n def _get_simple_record(name):\n", "issue": "dnstools dkim TXT record evaluation fix\nDescription of the issue/feature this PR addresses:\r\nThe DKIM button was red, but my TXT record setted well. So i looked about the issue, and i founded the evaluation of the TXT record is not good. According to DNS standard, a TXT record what is longer than 256 character need to split to several strings: https://help.directadmin.com/item.php?id=552\r\n\r\nCurrent behavior before PR:\r\nI just merge the splitted parts together.\r\n\r\nDesired behavior after PR is merged:\r\nThe DKIM button is now green as it should be.\n", "before_files": [{"content": "\"\"\"dnstools library.\"\"\"\n\nimport ipaddress\n\nfrom django.utils.translation import ugettext as _\n\nfrom modoboa.admin import lib as admin_lib\n\nfrom . 
import constants\n\n\ndef get_spf_record(domain):\n \"\"\"Return SPF record for domain (if any).\"\"\"\n records = admin_lib.get_dns_records(domain, \"TXT\")\n if records is None:\n return None\n for record in records:\n value = str(record).strip('\"')\n if value.startswith(\"v=spf1\"):\n return value\n return None\n\n\ndef get_dkim_record(domain, selector):\n \"\"\"Return DKIM records form domain (if any).\"\"\"\n name = \"{}._domainkey.{}\".format(selector, domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n if records is None:\n return None\n for record in records:\n value = str(record).strip('\"')\n if value.startswith(\"v=DKIM1\"):\n return value\n return None\n\n\ndef get_dmarc_record(domain):\n \"\"\"Return DMARC record for domain (if any).\"\"\"\n name = \"_dmarc.{}\".format(domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n if records is None:\n return None\n for record in records:\n value = str(record).strip('\"')\n if value.startswith(\"v=DMARC1\"):\n return value\n return None\n\n\ndef _get_simple_record(name):\n \"\"\"We just want to know if name is declared.\"\"\"\n for rdtype in [\"A\", \"CNAME\", \"AAAA\"]:\n records = admin_lib.get_dns_records(name, rdtype)\n if records is not None:\n break\n else:\n return None\n for record in records:\n value = str(record).strip('\"')\n break\n return value\n\n\ndef get_autoconfig_record(domain):\n \"\"\"Return autoconfig record for domain (if any).\"\"\"\n return _get_simple_record(\"autoconfig.{}\".format(domain))\n\n\ndef get_autodiscover_record(domain):\n \"\"\"Return autodiscover record for domain (if any).\"\"\"\n return _get_simple_record(\"autodiscover.{}\".format(domain))\n\n\nclass DNSSyntaxError(Exception):\n \"\"\"Custom exception for DNS errors.\"\"\"\n\n pass\n\n\ndef check_spf_ip4(value):\n \"\"\"Check syntax of ip4 mechanism.\"\"\"\n parts = value.split(\":\")\n if len(parts) != 2:\n raise DNSSyntaxError(_(\"Wrong ip4 mechanism syntax\"))\n try:\n ipaddress.ip_network(parts[1], False)\n except ValueError:\n raise DNSSyntaxError(_(\"Wrong IPv4 address format\"))\n\n\ndef check_spf_ip6(value):\n \"\"\"Check syntax of ip6 mechanism.\"\"\"\n if not value.startswith(\"ip6:\"):\n raise DNSSyntaxError(_(\"Wrong ip6 mechanism syntax\"))\n value = value.replace(\"ip6:\", \"\")\n try:\n ipaddress.ip_network(value, False)\n except ValueError:\n raise DNSSyntaxError(_(\"Wrong IPv6 address format\"))\n\n\ndef _check_domain_and_mask(value, mechanism):\n \"\"\"Check for valid domain / mask.\"\"\"\n domain = None\n mask = None\n if \":\" in value:\n mechanism, domain = value.split(\":\")\n if \"/\" in domain:\n domain, mask = domain.split(\"/\")\n elif \"/\" in value:\n mechanism, mask = value.split(\"/\")\n else:\n raise DNSSyntaxError(\n _(\"Invalid syntax for {} mechanism\").format(mechanism))\n if mask and (not mask.isdigit() or int(mask) > 32):\n raise DNSSyntaxError(_(\"Invalid mask found {}\").format(mask))\n\n\ndef check_spf_a(value):\n \"\"\"Check syntax of a mechanism.\"\"\"\n if value == \"a\":\n return\n _check_domain_and_mask(value, \"a\")\n\n\ndef check_spf_mx(value):\n \"\"\"Check syntax of mx mechanism.\"\"\"\n if value == \"mx\":\n return\n _check_domain_and_mask(value, \"mx\")\n\n\ndef _check_simple(value, mechanism):\n \"\"\"Simple check.\"\"\"\n if value == mechanism:\n return\n parts = value.split(\":\")\n if len(parts) != 2:\n raise DNSSyntaxError(\n _(\"Invalid syntax for {} mechanism\").format(mechanism))\n\n\ndef check_spf_ptr(value):\n \"\"\"Check syntax of ptr mechanism.\"\"\"\n 
_check_simple(value, \"ptr\")\n\n\ndef check_spf_exists(value):\n \"\"\"Check syntax of ptr mechanism.\"\"\"\n _check_simple(value, \"exists\")\n\n\ndef check_spf_include(value):\n \"\"\"Check syntax of include mechanism.\"\"\"\n _check_simple(value, \"include\")\n\n\ndef check_spf_syntax(record):\n \"\"\"Check if record has a valid SPF syntax.\"\"\"\n if not record.startswith(\"v=spf1\"):\n raise DNSSyntaxError(_(\"Not an SPF record\"))\n parts = record.split(\" \")[1:]\n modifiers = []\n mechanisms = []\n for part in parts:\n if part == \"\":\n continue\n qualifier = None\n if part[0] in [\"+\", \"-\", \"~\", \"?\"]:\n qualifier = part[0]\n part = part[1:]\n if part == \"all\":\n continue\n for mechanism in constants.SPF_MECHANISMS:\n if part.startswith(mechanism):\n globals()[\"check_spf_{}\".format(mechanism)](part)\n mechanisms.append(mechanism)\n break\n else:\n # Look for modifier\n modifier = part.split(\"=\")\n if len(modifier) != 2:\n raise DNSSyntaxError(_(\"Unknown mechanism {}\").format(part))\n if modifier[0] not in [\"redirect\", \"exp\"]:\n raise DNSSyntaxError(_(\"Unknown modifier {}\").format(\n modifier[0]))\n if modifier[0] in modifiers:\n raise DNSSyntaxError(_(\"Duplicate modifier {} found\").format(\n modifier[0]))\n modifiers.append(modifier[0])\n if not len(mechanisms) and not len(modifiers):\n raise DNSSyntaxError(_(\"No mechanism found\"))\n return None\n\n\ndef check_dkim_syntax(record):\n \"\"\"Check if record has a valid DKIM syntax.\"\"\"\n if not record.startswith(\"v=DKIM1\"):\n raise DNSSyntaxError(_(\"Not a valid DKIM record\"))\n key = None\n for tag in record.split(\";\")[1:]:\n tag = tag.strip(\" \")\n if tag == \"\":\n continue\n parts = tag.split(\"=\", 1)\n if len(parts) != 2:\n raise DNSSyntaxError(_(\"Invalid tag {}\").format(tag))\n name = parts[0].strip(\" \")\n if name == \"p\":\n key = \"\".join(part.strip('\"') for part in parts[1].split(\" \"))\n if key is None:\n raise DNSSyntaxError(_(\"No key found in record\"))\n return key\n\n\ndef check_dmarc_tag_string_value(tag, value):\n \"\"\"Check if value is valid for tag.\"\"\"\n tdef = constants.DMARC_TAGS[tag]\n error = _(\"Wrong value {} for tag {}\").format(value, tag)\n if \"values\" in tdef and value not in tdef[\"values\"]:\n raise DNSSyntaxError(error)\n elif \"regex\" in tdef and tdef[\"regex\"].match(value) is None:\n raise DNSSyntaxError(error)\n\n\ndef check_dmarc_tag(tag, value):\n \"\"\"Check if tag is valid.\"\"\"\n tdef = constants.DMARC_TAGS[tag]\n ttype = tdef.get(\"type\", \"string\")\n if ttype == \"list\":\n for svalue in value.split(\",\"):\n check_dmarc_tag_string_value(tag, svalue)\n elif ttype == \"int\":\n error = _(\"Wrong value {} for tag {}:\").format(value, tag)\n try:\n value = int(value)\n except ValueError:\n raise DNSSyntaxError(error + _(\" not an integer\"))\n if \"min_value\" in tdef and value < tdef[\"min_value\"]:\n raise DNSSyntaxError(\n error + _(\" less than {}\").format(tdef[\"min_value\"]))\n if \"max_value\" in tdef and value > tdef[\"max_value\"]:\n raise DNSSyntaxError(\n error + _(\" greater than {}\").format(tdef[\"max_value\"]))\n else:\n check_dmarc_tag_string_value(tag, value)\n\n\ndef check_dmarc_syntax(record):\n \"\"\"Check if record has a valid DMARC syntax.\"\"\"\n if not record.startswith(\"v=DMARC1\"):\n raise DNSSyntaxError(_(\"Not a valid DMARC record\"))\n tags = {}\n for tag in record.split(\";\")[1:]:\n if tag == \"\":\n continue\n tag = tag.strip(\" \")\n parts = tag.split(\"=\")\n if len(parts) != 2:\n raise 
DNSSyntaxError(_(\"Invalid tag {}\").format(tag))\n name = parts[0].strip(\" \")\n if name not in constants.DMARC_TAGS:\n raise DNSSyntaxError(_(\"Unknown tag {}\").format(name))\n value = parts[1].strip(\" \")\n check_dmarc_tag(name, value)\n tags[name] = value\n if \"p\" not in tags:\n raise DNSSyntaxError(_(\"Missing required p tag\"))\n", "path": "modoboa/dnstools/lib.py"}]}
| 3,345 | 508 |
gh_patches_debug_11592
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-184
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API.AI Error when network is disconnected
The following error should be caught and warned about cleanly.
```
ERROR asyncio: Task exception was never retrieved
future: <Task finished coro=<parse_apiai() done, defined at /Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py:34> exception=ClientConnectorError(8, 'Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]')>
Traceback (most recent call last):
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 375, in connect
proto = yield from self._create_connection(req)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 632, in _create_connection
_, proto = yield from self._create_direct_connection(req)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 643, in _create_direct_connection
hosts = yield from self._resolve_host(req.url.raw_host, req.port)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 615, in _resolve_host
self._resolver.resolve(host, port, family=self._family)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/resolver.py", line 30, in resolve
host, port, type=socket.SOCK_STREAM, family=family)
File "/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/socket.py", line 743, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py", line 42, in parse_apiai
result = await call_apiai(message, config)
File "/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py", line 27, in call_apiai
headers=headers)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py", line 621, in __await__
resp = yield from self._coro
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py", line 225, in _request
conn = yield from self._connector.connect(req)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 380, in connect
.format(key, exc.strerror)) from exc
aiohttp.client_exceptions.ClientConnectorError: [Errno 8] Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]
```
</issue>
<code>
[start of opsdroid/parsers/apiai.py]
1 """A helper function for parsing and executing api.ai skills."""
2
3 import logging
4 import json
5
6 import aiohttp
7
8
9 _LOGGER = logging.getLogger(__name__)
10
11
12 async def call_apiai(message, config):
13 """Call the api.ai api and return the response."""
14 async with aiohttp.ClientSession() as session:
15 payload = {
16 "v": "20150910",
17 "lang": "en",
18 "sessionId": message.connector.name,
19 "query": message.text
20 }
21 headers = {
22 "Authorization": "Bearer " + config['access-token'],
23 "Content-Type": "application/json"
24 }
25 resp = await session.post("https://api.api.ai/v1/query",
26 data=json.dumps(payload),
27 headers=headers)
28 result = await resp.json()
29 _LOGGER.debug("api.ai response - " + json.dumps(result))
30
31 return result
32
33
34 async def parse_apiai(opsdroid, message, config):
35 """Parse a message against all apiai skills."""
36 # pylint: disable=broad-except
37 # We want to catch all exceptions coming from a skill module and not
38 # halt the application. If a skill throws an exception it just doesn't
39 # give a response to the user, so an error response should be given.
40 if 'access-token' in config:
41
42 result = await call_apiai(message, config)
43
44 if result["status"]["code"] >= 300:
45 _LOGGER.error("api.ai error - " +
46 str(result["status"]["code"]) + " " +
47 result["status"]["errorType"])
48 return
49
50 if "min-score" in config and \
51 result["result"]["score"] < config["min-score"]:
52 _LOGGER.debug("api.ai score lower than min-score")
53 return
54
55 if result:
56 for skill in opsdroid.skills:
57
58 if "apiai_action" in skill or "apiai_intent" in skill:
59 if ("action" in result["result"] and
60 skill["apiai_action"] in
61 result["result"]["action"]) \
62 or ("intentName" in result["result"] and
63 skill["apiai_intent"] in
64 result["result"]["intentName"]):
65 message.apiai = result
66 try:
67 await skill["skill"](opsdroid, skill["config"],
68 message)
69 except Exception:
70 await message.respond(
71 "Whoops there has been an error")
72 await message.respond(
73 "Check the log for details")
74 _LOGGER.exception("Exception when parsing '" +
75 message.text +
76 "' against skill '" +
77 result["result"]["action"] + "'")
78
[end of opsdroid/parsers/apiai.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opsdroid/parsers/apiai.py b/opsdroid/parsers/apiai.py
--- a/opsdroid/parsers/apiai.py
+++ b/opsdroid/parsers/apiai.py
@@ -38,8 +38,11 @@
# halt the application. If a skill throws an exception it just doesn't
# give a response to the user, so an error response should be given.
if 'access-token' in config:
-
- result = await call_apiai(message, config)
+ try:
+ result = await call_apiai(message, config)
+ except aiohttp.ClientOSError:
+ _LOGGER.error("No response from api.ai, check your network.")
+ return
if result["status"]["code"] >= 300:
_LOGGER.error("api.ai error - " +
|
{"golden_diff": "diff --git a/opsdroid/parsers/apiai.py b/opsdroid/parsers/apiai.py\n--- a/opsdroid/parsers/apiai.py\n+++ b/opsdroid/parsers/apiai.py\n@@ -38,8 +38,11 @@\n # halt the application. If a skill throws an exception it just doesn't\n # give a response to the user, so an error response should be given.\n if 'access-token' in config:\n-\n- result = await call_apiai(message, config)\n+ try:\n+ result = await call_apiai(message, config)\n+ except aiohttp.ClientOSError:\n+ _LOGGER.error(\"No response from api.ai, check your network.\")\n+ return\n \n if result[\"status\"][\"code\"] >= 300:\n _LOGGER.error(\"api.ai error - \" +\n", "issue": "API.AI Error when network is disconnected\nThe following error should be caught and warned about cleanly.\r\n\r\n```\r\nERROR asyncio: Task exception was never retrieved\r\nfuture: <Task finished coro=<parse_apiai() done, defined at /Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py:34> exception=ClientConnectorError(8, 'Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]')>\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 375, in connect\r\n proto = yield from self._create_connection(req)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 632, in _create_connection\r\n _, proto = yield from self._create_direct_connection(req)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 643, in _create_direct_connection\r\n hosts = yield from self._resolve_host(req.url.raw_host, req.port)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 615, in _resolve_host\r\n self._resolver.resolve(host, port, family=self._family)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/resolver.py\", line 30, in resolve\r\n host, port, type=socket.SOCK_STREAM, family=family)\r\n File \"/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py\", line 55, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/socket.py\", line 743, in getaddrinfo\r\n for res in _socket.getaddrinfo(host, port, family, type, proto, flags):\r\nsocket.gaierror: [Errno 8] nodename nor servname provided, or not known\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py\", line 42, in parse_apiai\r\n result = await call_apiai(message, config)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py\", line 27, in call_apiai\r\n headers=headers)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py\", line 621, in __await__\r\n resp = yield from self._coro\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py\", line 225, in _request\r\n conn = yield from self._connector.connect(req)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 380, in connect\r\n .format(key, exc.strerror)) from 
exc\r\naiohttp.client_exceptions.ClientConnectorError: [Errno 8] Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]\r\n```\n", "before_files": [{"content": "\"\"\"A helper function for parsing and executing api.ai skills.\"\"\"\n\nimport logging\nimport json\n\nimport aiohttp\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def call_apiai(message, config):\n \"\"\"Call the api.ai api and return the response.\"\"\"\n async with aiohttp.ClientSession() as session:\n payload = {\n \"v\": \"20150910\",\n \"lang\": \"en\",\n \"sessionId\": message.connector.name,\n \"query\": message.text\n }\n headers = {\n \"Authorization\": \"Bearer \" + config['access-token'],\n \"Content-Type\": \"application/json\"\n }\n resp = await session.post(\"https://api.api.ai/v1/query\",\n data=json.dumps(payload),\n headers=headers)\n result = await resp.json()\n _LOGGER.debug(\"api.ai response - \" + json.dumps(result))\n\n return result\n\n\nasync def parse_apiai(opsdroid, message, config):\n \"\"\"Parse a message against all apiai skills.\"\"\"\n # pylint: disable=broad-except\n # We want to catch all exceptions coming from a skill module and not\n # halt the application. If a skill throws an exception it just doesn't\n # give a response to the user, so an error response should be given.\n if 'access-token' in config:\n\n result = await call_apiai(message, config)\n\n if result[\"status\"][\"code\"] >= 300:\n _LOGGER.error(\"api.ai error - \" +\n str(result[\"status\"][\"code\"]) + \" \" +\n result[\"status\"][\"errorType\"])\n return\n\n if \"min-score\" in config and \\\n result[\"result\"][\"score\"] < config[\"min-score\"]:\n _LOGGER.debug(\"api.ai score lower than min-score\")\n return\n\n if result:\n for skill in opsdroid.skills:\n\n if \"apiai_action\" in skill or \"apiai_intent\" in skill:\n if (\"action\" in result[\"result\"] and\n skill[\"apiai_action\"] in\n result[\"result\"][\"action\"]) \\\n or (\"intentName\" in result[\"result\"] and\n skill[\"apiai_intent\"] in\n result[\"result\"][\"intentName\"]):\n message.apiai = result\n try:\n await skill[\"skill\"](opsdroid, skill[\"config\"],\n message)\n except Exception:\n await message.respond(\n \"Whoops there has been an error\")\n await message.respond(\n \"Check the log for details\")\n _LOGGER.exception(\"Exception when parsing '\" +\n message.text +\n \"' against skill '\" +\n result[\"result\"][\"action\"] + \"'\")\n", "path": "opsdroid/parsers/apiai.py"}]}
| 2,142 | 193 |
gh_patches_debug_910
|
rasdani/github-patches
|
git_diff
|
d2l-ai__d2l-vi-115
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test
</issue>
<code>
[start of utils.py]
1 # encoding=utf8
2 import codecs
3 import filecmp
4 import re
5 import sys
6 import argparse
7
8 # reload(sys)
9 # sys.setdefaultencoding('utf8')
10
11 BEGIN_BLOCK_COMMENT = '<!--\n'
12 END_BLOCK_COMMENT = '-->\n\n'
13 TRANSLATE_INDICATOR = '*dịch đoạn phía trên*'
14 HEADER_INDICATOR = ' *dịch tiêu đề phía trên*\n'
15 IMAGE_CAPTION_INDICATOR = '*dịch chú thích ảnh phía trên*'
16 START_FILE = '<!-- ===================== Bắt đầu dịch Phần 1 ==================== -->\n'
17 END_FILE = '<!-- ===================== Kết thúc dịch Phần 1 ==================== -->\n'
18 SUFIX_PATH = 'contributors_template_vn.md'
19
20 # Our special mark in markdown, e.g. :label:`chapter_intro`
21 MARK_RE_MD = re.compile(':([-\/\\._\w\d]+):`([\*-\/\\\._\w\d]+)`')
22
23 parser = argparse.ArgumentParser(description='Dịch Dive into Deep Learning')
24 parser.add_argument('--convert', type=str, help='path to md file')
25
26
27 def is_blank_line(line):
28 return line.strip() == ''
29
30
31 class MyLine(object):
32 def __init__(self, line_str, in_code_block):
33 self.line_str = line_str.replace(' -- ', ' \-\- ')
34 self.in_code_block = in_code_block
35 self.end_comment_if_next_line_blank = None
36
37 def process(self, file_writer, last_line):
38 if self.in_code_block:
39 file_writer.write(self.line_str)
40 else:
41 self._process(file_writer, last_line)
42 return self
43
44 def _process(self, file_writer, last_line):
45 raise NotImplementedError
46
47
48 class NormalLine(MyLine):
49 def __init__(self, line_str, in_code_block):
50 super(NormalLine, self).__init__(line_str, in_code_block)
51 self.end_comment_if_next_line_blank = True
52
53 def _process(self, file_writer, last_line):
54 if isinstance(last_line, BlankLine):
55 file_writer.write(BEGIN_BLOCK_COMMENT)
56 file_writer.write(self.line_str)
57
58
59 class BlankLine(MyLine):
60 def __init__(self, line_str, in_code_block):
61 super(BlankLine, self).__init__(line_str, in_code_block)
62 self.end_comment_if_next_line_blank = False
63
64 def _process(self, file_writer, last_line):
65 if last_line.end_comment_if_next_line_blank:
66 file_writer.write(END_BLOCK_COMMENT)
67 file_writer.write(TRANSLATE_INDICATOR)
68 file_writer.write('\n')
69 file_writer.write('\n')
70
71
72 class HeaderLine(MyLine):
73 def __init__(self, line_str, in_code_block):
74 super(HeaderLine, self).__init__(line_str, in_code_block)
75 self.end_comment_if_next_line_blank = False
76 self.heading = 0
77 cnt = 0
78 for char in self.line_str:
79 if char == '#':
80 cnt += 1
81 elif char == ' ':
82 self.heading = cnt
83 break
84 else:
85 assert False, self.line_str
86
87 def _process(self, file_writer, last_line):
88 assert isinstance(last_line, BlankLine),\
89 last_line.line_str
90 file_writer.write(BEGIN_BLOCK_COMMENT)
91 file_writer.write(self.line_str)
92 file_writer.write(END_BLOCK_COMMENT)
93 file_writer.write('#'*self.heading + HEADER_INDICATOR)
94
95
96 class ImageLine(MyLine):
97 def __init(self, line_str, in_code_block):
98 assert not in_code_block
99 super(ImageLine, self).__init__(line_str, in_code_block)
100
101 def _process(self, file_writer, last_line):
102 close_square_bracket_id = self.line_str.index(']')
103 assert self.line_str[close_square_bracket_id+1] == '(', self.line_str
104 # assert self.line_str.endswith(')'), self.line_str
105 file_writer.write(BEGIN_BLOCK_COMMENT)
106 file_writer.write(self.line_str)
107 file_writer.write(END_BLOCK_COMMENT)
108 file_writer.write(
109 '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]
110 )
111
112
113 class CodeMarkerLine(MyLine):
114 def __init__(self, line_str, in_code_block):
115 super(CodeMarkerLine, self).__init__(line_str, in_code_block)
116 self.end_comment_if_next_line_blank = False
117
118 def _process(self, file_writer, last_line):
119 """ the print is printed in the super class"""
120 file_writer.write(self.line_str)
121
122
123
124 class MathLine(MyLine):
125 def __init__(self, line_str, in_code_block):
126 super(MathLine, self).__init__(line_str, in_code_block)
127 self.end_comment_if_next_line_blank = False
128
129 def _process(self, file_writer, last_line):
130 file_writer.write(self.line_str)
131 return self
132
133
134 class LabelLine(MyLine):
135 def __init__(self, line_str, in_code_block):
136 super(LabelLine, self).__init__(line_str, in_code_block)
137 self.end_comment_if_next_line_blank = False
138
139 def _process(self, file_writer, last_line):
140 # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\nthis_line: {}'.format(
141 # last_line.line_str, self.line_str
142 # )
143 file_writer.write(self.line_str)
144 # file_writer.write('\n')
145 return self
146
147
148 def block_comment(input_md, output_md, add_prefix_suffix=False):
149 last_line = BlankLine('', False)
150 in_code_block = False
151 with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\
152 codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\
153 codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:
154 if add_prefix_suffix:
155 output_handle.write(START_FILE)
156 output_handle.write('\n')
157 for line_str in input_handle:
158 line_str = line_str.rstrip() + '\n'
159 line_str = line_str.replace(' -- ', ' \-\- ')
160 match = MARK_RE_MD.match(line_str)
161 if is_blank_line(line_str):
162 line_type = BlankLine
163 elif line_str.startswith('#'):
164 line_type = HeaderLine
165 elif line_str.startswith('!['):
166 line_type = ImageLine
167 elif line_str.startswith('$'):
168 line_type = MathLine
169 elif line_str.startswith('```'):
170 in_code_block = not in_code_block
171 line_type = CodeMarkerLine
172 elif match is not None and match[1] in ['label', 'eqlabel']:
173 line_type = LabelLine
174 else:
175 line_type = NormalLine
176
177 this_line = line_type(line_str, in_code_block)
178 last_line = this_line.process(output_handle, last_line)
179
180 assert in_code_block is False
181
182 # TODO: simplify 5 lines below
183 if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\
184 or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):
185 print('skip')
186 else:
187 output_handle.write(END_BLOCK_COMMENT)
188 output_handle.write(TRANSLATE_INDICATOR)
189 if add_prefix_suffix:
190 output_handle.write('\n')
191 output_handle.write(END_FILE)
192 output_handle.write('\n')
193 for line in surfix_handle:
194 output_handle.write(line)
195
196
197 if __name__ == '__main__':
198 args = parser.parse_args()
199 input_md = args.convert
200 output_md = input_md[:-len('.md')] + '_vn.md'
201 block_comment(input_md, output_md, add_prefix_suffix=True)
202
[end of utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/utils.py b/utils.py
--- a/utils.py
+++ b/utils.py
@@ -5,9 +5,6 @@
import sys
import argparse
-# reload(sys)
-# sys.setdefaultencoding('utf8')
-
BEGIN_BLOCK_COMMENT = '<!--\n'
END_BLOCK_COMMENT = '-->\n\n'
TRANSLATE_INDICATOR = '*dịch đoạn phía trên*'
|
{"golden_diff": "diff --git a/utils.py b/utils.py\n--- a/utils.py\n+++ b/utils.py\n@@ -5,9 +5,6 @@\n import sys\n import argparse\n \n-# reload(sys)\n-# sys.setdefaultencoding('utf8')\n-\n BEGIN_BLOCK_COMMENT = '<!--\\n'\n END_BLOCK_COMMENT = '-->\\n\\n'\n TRANSLATE_INDICATOR = '*d\u1ecbch \u0111o\u1ea1n ph\u00eda tr\u00ean*'\n", "issue": "test\n\n", "before_files": [{"content": "# encoding=utf8\nimport codecs\nimport filecmp\nimport re\nimport sys\nimport argparse\n\n# reload(sys)\n# sys.setdefaultencoding('utf8')\n\nBEGIN_BLOCK_COMMENT = '<!--\\n'\nEND_BLOCK_COMMENT = '-->\\n\\n'\nTRANSLATE_INDICATOR = '*d\u1ecbch \u0111o\u1ea1n ph\u00eda tr\u00ean*'\nHEADER_INDICATOR = ' *d\u1ecbch ti\u00eau \u0111\u1ec1 ph\u00eda tr\u00ean*\\n'\nIMAGE_CAPTION_INDICATOR = '*d\u1ecbch ch\u00fa th\u00edch \u1ea3nh ph\u00eda tr\u00ean*'\nSTART_FILE = '<!-- ===================== B\u1eaft \u0111\u1ea7u d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nEND_FILE = '<!-- ===================== K\u1ebft th\u00fac d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nSUFIX_PATH = 'contributors_template_vn.md'\n\n# Our special mark in markdown, e.g. :label:`chapter_intro`\nMARK_RE_MD = re.compile(':([-\\/\\\\._\\w\\d]+):`([\\*-\\/\\\\\\._\\w\\d]+)`')\n\nparser = argparse.ArgumentParser(description='D\u1ecbch Dive into Deep Learning')\nparser.add_argument('--convert', type=str, help='path to md file')\n\n\ndef is_blank_line(line):\n return line.strip() == ''\n\n\nclass MyLine(object):\n def __init__(self, line_str, in_code_block):\n self.line_str = line_str.replace(' -- ', ' \\-\\- ')\n self.in_code_block = in_code_block\n self.end_comment_if_next_line_blank = None\n\n def process(self, file_writer, last_line):\n if self.in_code_block:\n file_writer.write(self.line_str)\n else:\n self._process(file_writer, last_line)\n return self\n\n def _process(self, file_writer, last_line):\n raise NotImplementedError\n\n\nclass NormalLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(NormalLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = True\n\n def _process(self, file_writer, last_line):\n if isinstance(last_line, BlankLine):\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n\n\nclass BlankLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(BlankLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n if last_line.end_comment_if_next_line_blank:\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(TRANSLATE_INDICATOR)\n file_writer.write('\\n')\n file_writer.write('\\n')\n\n\nclass HeaderLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(HeaderLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n self.heading = 0\n cnt = 0\n for char in self.line_str:\n if char == '#':\n cnt += 1\n elif char == ' ':\n self.heading = cnt\n break\n else:\n assert False, self.line_str\n\n def _process(self, file_writer, last_line):\n assert isinstance(last_line, BlankLine),\\\n last_line.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write('#'*self.heading + HEADER_INDICATOR)\n\n\nclass ImageLine(MyLine):\n def __init(self, line_str, in_code_block):\n assert not in_code_block\n super(ImageLine, self).__init__(line_str, in_code_block)\n\n def _process(self, file_writer, last_line):\n close_square_bracket_id = 
self.line_str.index(']')\n assert self.line_str[close_square_bracket_id+1] == '(', self.line_str\n # assert self.line_str.endswith(')'), self.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(\n '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]\n )\n\n\nclass CodeMarkerLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(CodeMarkerLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n \"\"\" the print is printed in the super class\"\"\"\n file_writer.write(self.line_str)\n\n\n\nclass MathLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(MathLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n file_writer.write(self.line_str)\n return self\n\n\nclass LabelLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(LabelLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\\nthis_line: {}'.format(\n # last_line.line_str, self.line_str\n # )\n file_writer.write(self.line_str)\n # file_writer.write('\\n')\n return self\n\n\ndef block_comment(input_md, output_md, add_prefix_suffix=False):\n last_line = BlankLine('', False)\n in_code_block = False\n with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\\\n codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\\\n codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:\n if add_prefix_suffix:\n output_handle.write(START_FILE)\n output_handle.write('\\n')\n for line_str in input_handle:\n line_str = line_str.rstrip() + '\\n'\n line_str = line_str.replace(' -- ', ' \\-\\- ')\n match = MARK_RE_MD.match(line_str)\n if is_blank_line(line_str):\n line_type = BlankLine\n elif line_str.startswith('#'):\n line_type = HeaderLine\n elif line_str.startswith('!['):\n line_type = ImageLine\n elif line_str.startswith('$'):\n line_type = MathLine\n elif line_str.startswith('```'):\n in_code_block = not in_code_block\n line_type = CodeMarkerLine\n elif match is not None and match[1] in ['label', 'eqlabel']:\n line_type = LabelLine\n else:\n line_type = NormalLine\n\n this_line = line_type(line_str, in_code_block)\n last_line = this_line.process(output_handle, last_line)\n\n assert in_code_block is False\n\n # TODO: simplify 5 lines below\n if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\\\n or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):\n print('skip')\n else:\n output_handle.write(END_BLOCK_COMMENT)\n output_handle.write(TRANSLATE_INDICATOR)\n if add_prefix_suffix:\n output_handle.write('\\n')\n output_handle.write(END_FILE)\n output_handle.write('\\n')\n for line in surfix_handle:\n output_handle.write(line)\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n input_md = args.convert\n output_md = input_md[:-len('.md')] + '_vn.md'\n block_comment(input_md, output_md, add_prefix_suffix=True)\n", "path": "utils.py"}]}
| 2,698 | 84 |
gh_patches_debug_9216
|
rasdani/github-patches
|
git_diff
|
interlegis__sapl-2102
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Audiências Públicas (Public Hearings) cannot be edited
When creating a Public Hearing and saving it, the legislative matter metadata entered in the form does not appear.
When clicking Edit, only the title of the created hearing appears.
Thanks
</issue>
<code>
[start of sapl/audiencia/forms.py]
1 from django import forms
2 from django.core.exceptions import ObjectDoesNotExist, ValidationError
3 from django.db import transaction
4 from django.utils.translation import ugettext_lazy as _
5 from sapl.audiencia.models import AudienciaPublica, TipoAudienciaPublica
6 from sapl.materia.models import MateriaLegislativa, TipoMateriaLegislativa
7 from sapl.utils import timezone
8
9 class AudienciaForm(forms.ModelForm):
10
11 data_atual = timezone.now()
12
13 tipo = forms.ModelChoiceField(required=True,
14 label='Tipo de Audiência Pública',
15 queryset=TipoAudienciaPublica.objects.all().order_by('nome'))
16
17 tipo_materia = forms.ModelChoiceField(
18 label=_('Tipo Matéria'),
19 required=True,
20 queryset=TipoMateriaLegislativa.objects.all(),
21 empty_label='Selecione',
22 )
23
24 numero_materia = forms.CharField(
25 label='Número Matéria', required=True)
26
27 ano_materia = forms.CharField(
28 label='Ano Matéria',
29 initial=int(data_atual.year),
30 required=True)
31
32 class Meta:
33 model = AudienciaPublica
34 fields = ['tipo', 'numero', 'nome',
35 'tema', 'data', 'hora_inicio', 'hora_fim',
36 'observacao', 'audiencia_cancelada', 'url_audio',
37 'url_video', 'upload_pauta', 'upload_ata',
38 'upload_anexo', 'tipo_materia', 'numero_materia',
39 'ano_materia']
40
41
42 def __init__(self, **kwargs):
43 super(AudienciaForm, self).__init__(**kwargs)
44
45 tipos = []
46
47 if not self.fields['tipo'].queryset:
48 tipos.append(TipoAudienciaPublica.objects.create(nome='Audiência Pública', tipo='A'))
49 tipos.append(TipoAudienciaPublica.objects.create(nome='Plebiscito', tipo='P'))
50 tipos.append(TipoAudienciaPublica.objects.create(nome='Referendo', tipo='R'))
51 tipos.append(TipoAudienciaPublica.objects.create(nome='Iniciativa Popular', tipo='I'))
52
53 for t in tipos:
54 t.save()
55
56
57 def clean(self):
58 cleaned_data = super(AudienciaForm, self).clean()
59 if not self.is_valid():
60 return cleaned_data
61
62 try:
63 materia = MateriaLegislativa.objects.get(
64 numero=self.cleaned_data['numero_materia'],
65 ano=self.cleaned_data['ano_materia'],
66 tipo=self.cleaned_data['tipo_materia'])
67 except ObjectDoesNotExist:
68 msg = _('A matéria a ser inclusa não existe no cadastro'
69 ' de matérias legislativas.')
70 raise ValidationError(msg)
71 else:
72 cleaned_data['materia'] = materia
73
74 if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:
75 if (self.cleaned_data['hora_fim'] <
76 self.cleaned_data['hora_inicio']):
77 msg = _('A hora de fim não pode ser anterior a hora de ínicio')
78 raise ValidationError(msg)
79
80 return self.cleaned_data
81
82 @transaction.atomic()
83 def save(self, commit=True):
84 audiencia = super(AudienciaForm, self).save(commit)
85 return audiencia
[end of sapl/audiencia/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sapl/audiencia/forms.py b/sapl/audiencia/forms.py
--- a/sapl/audiencia/forms.py
+++ b/sapl/audiencia/forms.py
@@ -77,9 +77,11 @@
msg = _('A hora de fim não pode ser anterior a hora de ínicio')
raise ValidationError(msg)
- return self.cleaned_data
+ return cleaned_data
@transaction.atomic()
def save(self, commit=True):
- audiencia = super(AudienciaForm, self).save(commit)
+ audiencia = super(AudienciaForm, self).save(False)
+ audiencia.materia = self.cleaned_data['materia']
+ audiencia.save()
return audiencia
\ No newline at end of file
|
{"golden_diff": "diff --git a/sapl/audiencia/forms.py b/sapl/audiencia/forms.py\n--- a/sapl/audiencia/forms.py\n+++ b/sapl/audiencia/forms.py\n@@ -77,9 +77,11 @@\n msg = _('A hora de fim n\u00e3o pode ser anterior a hora de \u00ednicio')\n raise ValidationError(msg)\n \n- return self.cleaned_data\n+ return cleaned_data\n \n @transaction.atomic()\n def save(self, commit=True):\n- audiencia = super(AudienciaForm, self).save(commit)\n+ audiencia = super(AudienciaForm, self).save(False)\n+ audiencia.materia = self.cleaned_data['materia']\n+ audiencia.save()\n return audiencia\n\\ No newline at end of file\n", "issue": "Audi\u00eancias P\u00fablicas sem possibilidade de Edi\u00e7\u00e3o\nAo criar uma Audi\u00eancia P\u00fablica e salva-la, n\u00e3o aparecem os metadados da mat\u00e9ria legislativa inseridas no preenchimento. \r\nAo clicar em Editar, s\u00f3 aparece o t\u00edtulo da audi\u00eancia criada.\r\ngrato\n", "before_files": [{"content": "from django import forms\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import transaction\nfrom django.utils.translation import ugettext_lazy as _\nfrom sapl.audiencia.models import AudienciaPublica, TipoAudienciaPublica\nfrom sapl.materia.models import MateriaLegislativa, TipoMateriaLegislativa\nfrom sapl.utils import timezone\n\nclass AudienciaForm(forms.ModelForm):\n\n data_atual = timezone.now()\n\n tipo = forms.ModelChoiceField(required=True,\n label='Tipo de Audi\u00eancia P\u00fablica',\n queryset=TipoAudienciaPublica.objects.all().order_by('nome'))\n\n tipo_materia = forms.ModelChoiceField(\n label=_('Tipo Mat\u00e9ria'),\n required=True,\n queryset=TipoMateriaLegislativa.objects.all(),\n empty_label='Selecione',\n )\n\n numero_materia = forms.CharField(\n label='N\u00famero Mat\u00e9ria', required=True)\n\n ano_materia = forms.CharField(\n label='Ano Mat\u00e9ria',\n initial=int(data_atual.year),\n required=True)\n\n class Meta:\n model = AudienciaPublica\n fields = ['tipo', 'numero', 'nome',\n 'tema', 'data', 'hora_inicio', 'hora_fim',\n 'observacao', 'audiencia_cancelada', 'url_audio',\n 'url_video', 'upload_pauta', 'upload_ata',\n 'upload_anexo', 'tipo_materia', 'numero_materia',\n 'ano_materia']\n\n\n def __init__(self, **kwargs):\n super(AudienciaForm, self).__init__(**kwargs)\n\n tipos = []\n\n if not self.fields['tipo'].queryset:\n tipos.append(TipoAudienciaPublica.objects.create(nome='Audi\u00eancia P\u00fablica', tipo='A'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Plebiscito', tipo='P'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Referendo', tipo='R'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Iniciativa Popular', tipo='I'))\n\n for t in tipos:\n t.save()\n\n\n def clean(self):\n cleaned_data = super(AudienciaForm, self).clean()\n if not self.is_valid():\n return cleaned_data\n\n try:\n materia = MateriaLegislativa.objects.get(\n numero=self.cleaned_data['numero_materia'],\n ano=self.cleaned_data['ano_materia'],\n tipo=self.cleaned_data['tipo_materia'])\n except ObjectDoesNotExist:\n msg = _('A mat\u00e9ria a ser inclusa n\u00e3o existe no cadastro'\n ' de mat\u00e9rias legislativas.')\n raise ValidationError(msg)\n else:\n cleaned_data['materia'] = materia\n\n if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:\n if (self.cleaned_data['hora_fim'] <\n self.cleaned_data['hora_inicio']):\n msg = _('A hora de fim n\u00e3o pode ser anterior a hora de \u00ednicio')\n raise ValidationError(msg)\n\n return self.cleaned_data\n\n 
@transaction.atomic()\n def save(self, commit=True):\n audiencia = super(AudienciaForm, self).save(commit)\n return audiencia", "path": "sapl/audiencia/forms.py"}]}
| 1,466 | 169 |
gh_patches_debug_43933
|
rasdani/github-patches
|
git_diff
|
huggingface__trl-566
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Potential bug of model output dim in reward trainer example
Hi!
I've been exploring the `examples/scripts/reward_trainer.py` and believe I've found a potential bug that I'd like to address.
## Potential bug.
To my understanding, the reward model should produce a scalar output. However, when we load the pre-trained model in the [example script](https://github.com/lvwerra/trl/blob/main/examples/scripts/reward_trainer.py#L75-L80), the model will be a binary classifier by default, with a 2-dim logit output:
```python
import torch
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
model = model.cpu()
with torch.no_grad():
out = model(**inputs)
print(out.logits.shape) # torch.Size([1, 2])
```
During training, this issue was hidden by [the loss computation part](https://github.com/lvwerra/trl/blob/main/trl/trainer/reward_trainer.py#L169-L176) of the reward trainer:
```python
rewards_chosen = model(
input_ids=inputs["input_ids_chosen"],
attention_mask=inputs["attention_mask_chosen"],
)[0]
rewards_rejected = model(
input_ids=inputs["input_ids_rejected"],
attention_mask=inputs["attention_mask_rejected"],
)[0]
loss = -torch.nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean()
```
There, the `rewards_rejected` and `rewards_chosen` are dim [sample, 2] matrices, which I think should be dim [sample, 1] instead. The `.mean()` operator hides the issue as it returns a scalar value.
Here is a [colab](https://colab.research.google.com/drive/1szzGC8H1DQ7329_idVnnOnPIPE6D7J4j?usp=sharing) that shows the dimensionality problem.
## Proposed fix
To resolve this, in the model training example, we can use:
```python
model = AutoModelForSequenceClassification.from_pretrained(
script_args.model_name,
quantization_config=quantization_config,
device_map=device_map,
trust_remote_code=script_args.trust_remote_code,
num_labels=1 # <---- make sure that the model output is a scalar
)
```
I'll submit a pull request if it is OK.
In addition, while exploring `examples/scripts/reward_trainer.py`, I noticed a few other places that can be improved. For example, there are currently no validation steps that show the generalization performance of the reward model; the `log_with` option in `ScriptArguments` is currently not being used anywhere and therefore has no effect. I'm happy to include the validation and fix the logging glitch in my pull request as well.
Tianlin
</issue>
<code>
[start of examples/scripts/reward_trainer.py]
1 # coding=utf-8
2 # Copyright 2023 The HuggingFace Inc. team. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 from dataclasses import dataclass, field
16 from typing import Optional
17
18 from datasets import load_dataset
19 from peft import LoraConfig
20 from tqdm import tqdm
21 from transformers import (
22 AutoModelForSequenceClassification,
23 AutoTokenizer,
24 BitsAndBytesConfig,
25 HfArgumentParser,
26 TrainingArguments,
27 )
28
29 from trl import RewardTrainer
30
31
32 tqdm.pandas()
33
34
35 # Define and parse arguments.
36 @dataclass
37 class ScriptArguments:
38 """
39 The name of the Casual LM model we wish to fine with SFTTrainer
40 """
41
42 model_name: Optional[str] = field(default="facebook/opt-350m", metadata={"help": "the model name"})
43 dataset_name: Optional[str] = field(default="Anthropic/hh-rlhf", metadata={"help": "the model name"})
44 dataset_text_field: Optional[str] = field(default="text", metadata={"help": "the text field of the dataset"})
45 log_with: Optional[str] = field(default=None, metadata={"help": "use 'wandb' to log with wandb"})
46 learning_rate: Optional[float] = field(default=1.41e-5, metadata={"help": "the learning rate"})
47 batch_size: Optional[int] = field(default=64, metadata={"help": "the batch size"})
48 seq_length: Optional[int] = field(default=512, metadata={"help": "Input sequence length"})
49 gradient_accumulation_steps: Optional[int] = field(
50 default=16, metadata={"help": "the number of gradient accumulation steps"}
51 )
52 load_in_8bit: Optional[bool] = field(default=False, metadata={"help": "load the model in 8 bits precision"})
53 load_in_4bit: Optional[bool] = field(default=False, metadata={"help": "load the model in 4 bits precision"})
54 use_peft: Optional[bool] = field(default=False, metadata={"help": "Wether to use PEFT or not to train adapters"})
55 trust_remote_code: Optional[bool] = field(default=True, metadata={"help": "Enable `trust_remote_code`"})
56 output_dir: Optional[str] = field(default="output", metadata={"help": "the output directory"})
57
58
59 parser = HfArgumentParser(ScriptArguments)
60 script_args = parser.parse_args_into_dataclasses()[0]
61
62 # Step 1: Load the model
63 if script_args.load_in_8bit and script_args.load_in_4bit:
64 raise ValueError("You can't load the model in 8 bits and 4 bits at the same time")
65 elif script_args.load_in_8bit or script_args.load_in_4bit:
66 quantization_config = BitsAndBytesConfig(
67 load_in_8bit=script_args.load_in_8bit, load_in_4bit=script_args.load_in_4bit
68 )
69 # This means: fit the entire model on the GPU:0
70 device_map = {"": 0}
71 else:
72 device_map = None
73 quantization_config = None
74
75 model = AutoModelForSequenceClassification.from_pretrained(
76 script_args.model_name,
77 quantization_config=quantization_config,
78 device_map=device_map,
79 trust_remote_code=script_args.trust_remote_code,
80 )
81
82 # Step 2: Load the dataset and pre-process it
83 tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)
84 dataset = load_dataset(script_args.dataset_name, split="train")
85 # Turn the dataset into pairs of post + summaries, where text_j is the preferred question + answer and text_k is the other.
86 # Then tokenize the dataset.
87 # Adapt this section to your needs for custom datasets
88
89
90 def preprocess_function(examples):
91 new_examples = {
92 "input_ids_chosen": [],
93 "attention_mask_chosen": [],
94 "input_ids_rejected": [],
95 "attention_mask_rejected": [],
96 }
97 for chosen, rejected in zip(examples["chosen"], examples["rejected"]):
98 tokenized_j = tokenizer(chosen, truncation=True)
99 tokenized_k = tokenizer(rejected, truncation=True)
100
101 new_examples["input_ids_chosen"].append(tokenized_j["input_ids"])
102 new_examples["attention_mask_chosen"].append(tokenized_j["attention_mask"])
103 new_examples["input_ids_rejected"].append(tokenized_k["input_ids"])
104 new_examples["attention_mask_rejected"].append(tokenized_k["attention_mask"])
105
106 return new_examples
107
108
109 # preprocess the dataset and filter out QAs that are longer than script_args.max_length
110 original_columns = dataset.column_names
111 train_dataset = dataset.map(
112 preprocess_function,
113 batched=True,
114 num_proc=4,
115 )
116 train_dataset = train_dataset.filter(
117 lambda x: len(x["input_ids_chosen"]) <= script_args.seq_length
118 and len(x["input_ids_rejected"]) <= script_args.seq_length
119 )
120
121
122 # Step 3: Define the training arguments
123 training_args = TrainingArguments(
124 output_dir=script_args.output_dir,
125 per_device_train_batch_size=script_args.batch_size,
126 gradient_accumulation_steps=script_args.gradient_accumulation_steps,
127 learning_rate=script_args.learning_rate,
128 )
129
130 # Step 4: Define the LoraConfig
131 if script_args.use_peft:
132 peft_config = LoraConfig(r=16, lora_alpha=16, bias="none", task_type="SEQ_CLS", modules_to_save=["scores"])
133 else:
134 peft_config = None
135
136 # Step 5: Define the Trainer
137 trainer = RewardTrainer(
138 model=model,
139 tokenizer=tokenizer,
140 args=training_args,
141 train_dataset=train_dataset,
142 peft_config=peft_config,
143 max_length=script_args.seq_length,
144 )
145
146 trainer.train()
147
[end of examples/scripts/reward_trainer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/scripts/reward_trainer.py b/examples/scripts/reward_trainer.py
--- a/examples/scripts/reward_trainer.py
+++ b/examples/scripts/reward_trainer.py
@@ -43,8 +43,13 @@
dataset_name: Optional[str] = field(default="Anthropic/hh-rlhf", metadata={"help": "the model name"})
dataset_text_field: Optional[str] = field(default="text", metadata={"help": "the text field of the dataset"})
log_with: Optional[str] = field(default=None, metadata={"help": "use 'wandb' to log with wandb"})
+ logging_steps: Optional[int] = field(default=500, metadata={"help": "the number of update steps between two logs"})
+ eval_split: Optional[str] = field(
+ default="none", metadata={"help": "the dataset split to evaluate on; default to 'none' (no evaluation)"}
+ )
learning_rate: Optional[float] = field(default=1.41e-5, metadata={"help": "the learning rate"})
batch_size: Optional[int] = field(default=64, metadata={"help": "the batch size"})
+ num_train_epochs: Optional[int] = field(default=1, metadata={"help": "the number of training epochs"})
seq_length: Optional[int] = field(default=512, metadata={"help": "Input sequence length"})
gradient_accumulation_steps: Optional[int] = field(
default=16, metadata={"help": "the number of gradient accumulation steps"}
@@ -77,11 +82,14 @@
quantization_config=quantization_config,
device_map=device_map,
trust_remote_code=script_args.trust_remote_code,
+ num_labels=1,
)
# Step 2: Load the dataset and pre-process it
tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)
-dataset = load_dataset(script_args.dataset_name, split="train")
+train_dataset = load_dataset(script_args.dataset_name, split="train")
+
+
# Turn the dataset into pairs of post + summaries, where text_j is the preferred question + answer and text_k is the other.
# Then tokenize the dataset.
# Adapt this section to your needs for custom datasets
@@ -107,8 +115,7 @@
# preprocess the dataset and filter out QAs that are longer than script_args.max_length
-original_columns = dataset.column_names
-train_dataset = dataset.map(
+train_dataset = train_dataset.map(
preprocess_function,
batched=True,
num_proc=4,
@@ -118,13 +125,34 @@
and len(x["input_ids_rejected"]) <= script_args.seq_length
)
+if script_args.eval_split == "none":
+ eval_dataset = None
+else:
+ eval_dataset = load_dataset(script_args.dataset_name, split=script_args.eval_split)
+
+ eval_dataset = eval_dataset.map(
+ preprocess_function,
+ batched=True,
+ num_proc=4,
+ )
+ eval_dataset = eval_dataset.filter(
+ lambda x: len(x["input_ids_chosen"]) <= script_args.seq_length
+ and len(x["input_ids_rejected"]) <= script_args.seq_length
+ )
+
# Step 3: Define the training arguments
training_args = TrainingArguments(
output_dir=script_args.output_dir,
per_device_train_batch_size=script_args.batch_size,
+ num_train_epochs=script_args.num_train_epochs,
gradient_accumulation_steps=script_args.gradient_accumulation_steps,
learning_rate=script_args.learning_rate,
+ report_to="wandb" if script_args.log_with == "wandb" else "tensorboard",
+ remove_unused_columns=False,
+ optim="adamw_torch",
+ logging_steps=script_args.logging_steps,
+ evaluation_strategy="steps" if script_args.eval_split != "none" else "no",
)
# Step 4: Define the LoraConfig
@@ -139,6 +167,7 @@
tokenizer=tokenizer,
args=training_args,
train_dataset=train_dataset,
+ eval_dataset=eval_dataset,
peft_config=peft_config,
max_length=script_args.seq_length,
)
|
{"golden_diff": "diff --git a/examples/scripts/reward_trainer.py b/examples/scripts/reward_trainer.py\n--- a/examples/scripts/reward_trainer.py\n+++ b/examples/scripts/reward_trainer.py\n@@ -43,8 +43,13 @@\n dataset_name: Optional[str] = field(default=\"Anthropic/hh-rlhf\", metadata={\"help\": \"the model name\"})\n dataset_text_field: Optional[str] = field(default=\"text\", metadata={\"help\": \"the text field of the dataset\"})\n log_with: Optional[str] = field(default=None, metadata={\"help\": \"use 'wandb' to log with wandb\"})\n+ logging_steps: Optional[int] = field(default=500, metadata={\"help\": \"the number of update steps between two logs\"})\n+ eval_split: Optional[str] = field(\n+ default=\"none\", metadata={\"help\": \"the dataset split to evaluate on; default to 'none' (no evaluation)\"}\n+ )\n learning_rate: Optional[float] = field(default=1.41e-5, metadata={\"help\": \"the learning rate\"})\n batch_size: Optional[int] = field(default=64, metadata={\"help\": \"the batch size\"})\n+ num_train_epochs: Optional[int] = field(default=1, metadata={\"help\": \"the number of training epochs\"})\n seq_length: Optional[int] = field(default=512, metadata={\"help\": \"Input sequence length\"})\n gradient_accumulation_steps: Optional[int] = field(\n default=16, metadata={\"help\": \"the number of gradient accumulation steps\"}\n@@ -77,11 +82,14 @@\n quantization_config=quantization_config,\n device_map=device_map,\n trust_remote_code=script_args.trust_remote_code,\n+ num_labels=1,\n )\n \n # Step 2: Load the dataset and pre-process it\n tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)\n-dataset = load_dataset(script_args.dataset_name, split=\"train\")\n+train_dataset = load_dataset(script_args.dataset_name, split=\"train\")\n+\n+\n # Turn the dataset into pairs of post + summaries, where text_j is the preferred question + answer and text_k is the other.\n # Then tokenize the dataset.\n # Adapt this section to your needs for custom datasets\n@@ -107,8 +115,7 @@\n \n \n # preprocess the dataset and filter out QAs that are longer than script_args.max_length\n-original_columns = dataset.column_names\n-train_dataset = dataset.map(\n+train_dataset = train_dataset.map(\n preprocess_function,\n batched=True,\n num_proc=4,\n@@ -118,13 +125,34 @@\n and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n )\n \n+if script_args.eval_split == \"none\":\n+ eval_dataset = None\n+else:\n+ eval_dataset = load_dataset(script_args.dataset_name, split=script_args.eval_split)\n+\n+ eval_dataset = eval_dataset.map(\n+ preprocess_function,\n+ batched=True,\n+ num_proc=4,\n+ )\n+ eval_dataset = eval_dataset.filter(\n+ lambda x: len(x[\"input_ids_chosen\"]) <= script_args.seq_length\n+ and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n+ )\n+\n \n # Step 3: Define the training arguments\n training_args = TrainingArguments(\n output_dir=script_args.output_dir,\n per_device_train_batch_size=script_args.batch_size,\n+ num_train_epochs=script_args.num_train_epochs,\n gradient_accumulation_steps=script_args.gradient_accumulation_steps,\n learning_rate=script_args.learning_rate,\n+ report_to=\"wandb\" if script_args.log_with == \"wandb\" else \"tensorboard\",\n+ remove_unused_columns=False,\n+ optim=\"adamw_torch\",\n+ logging_steps=script_args.logging_steps,\n+ evaluation_strategy=\"steps\" if script_args.eval_split != \"none\" else \"no\",\n )\n \n # Step 4: Define the LoraConfig\n@@ -139,6 +167,7 @@\n tokenizer=tokenizer,\n args=training_args,\n train_dataset=train_dataset,\n+ 
eval_dataset=eval_dataset,\n peft_config=peft_config,\n max_length=script_args.seq_length,\n )\n", "issue": "Potential bug of model output dim in reward trainer example\nHi!\r\n\r\nI've been exploring the `examples/scripts/reward_trainer.py` and believe I've found a potential bug that I'd like to address.\r\n\r\n## Potential bug.\r\nTo my understanding, the reward model should produce a scalar output. However, when we load the pre-trained model in the [example script](https://github.com/lvwerra/trl/blob/main/examples/scripts/reward_trainer.py#L75-L80), the model will be a binary classifier by default, with a 2-dim logit output:\r\n\r\n```python\r\nimport torch\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\nmodel = model.cpu()\r\nwith torch.no_grad():\r\n out = model(**inputs)\r\nprint(out.logits.shape) # torch.Size([1, 2])\r\n```\r\n\r\nDuring training, this issue was hidden by the [the loss computation part](https://github.com/lvwerra/trl/blob/main/trl/trainer/reward_trainer.py#L169-L176) of the reward trainer:\r\n\r\n```python\r\n rewards_chosen = model(\r\n input_ids=inputs[\"input_ids_chosen\"],\r\n attention_mask=inputs[\"attention_mask_chosen\"],\r\n )[0]\r\n\r\n rewards_rejected = model(\r\n input_ids=inputs[\"input_ids_rejected\"],\r\n attention_mask=inputs[\"attention_mask_rejected\"],\r\n )[0]\r\n\r\n loss = -torch.nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean()\r\n```\r\nThere, the `rewards_rejected` and `rewards_chosen` are dim [sample, 2] matrices, which I think should be dim [sample, 1] instead. The `.mean()` operator hides the issue as it returns a scalar value.\r\n\r\nHere is a [colab](https://colab.research.google.com/drive/1szzGC8H1DQ7329_idVnnOnPIPE6D7J4j?usp=sharing) that shows the dimensionality problem.\r\n\r\n## Proposed fix\r\n\r\nTo resolve this, in the model training example, we can use:\r\n\r\n```python\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n script_args.model_name,\r\n quantization_config=quantization_config,\r\n device_map=device_map,\r\n trust_remote_code=script_args.trust_remote_code,\r\n num_labels=1 # <---- make sure that the model output is a scalar\r\n)\r\n```\r\nI'll submit a pull request if it is OK. \r\n\r\nIn addition, while exploring `examples/scripts/reward_trainer.py`, I noticed a few other places that can be improved. For example, there is currently no validation steps that show the genearlization performance of the reward model; the `log_with` option in `ScriptArguments` is currently not being used anywhere and therefore has no effect. I'm happy to include the validation and fix the logging glitch in my pull request as well.\r\n\r\nTianlin\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2023 The HuggingFace Inc. team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nfrom datasets import load_dataset\nfrom peft import LoraConfig\nfrom tqdm import tqdm\nfrom transformers import (\n AutoModelForSequenceClassification,\n AutoTokenizer,\n BitsAndBytesConfig,\n HfArgumentParser,\n TrainingArguments,\n)\n\nfrom trl import RewardTrainer\n\n\ntqdm.pandas()\n\n\n# Define and parse arguments.\n@dataclass\nclass ScriptArguments:\n \"\"\"\n The name of the Casual LM model we wish to fine with SFTTrainer\n \"\"\"\n\n model_name: Optional[str] = field(default=\"facebook/opt-350m\", metadata={\"help\": \"the model name\"})\n dataset_name: Optional[str] = field(default=\"Anthropic/hh-rlhf\", metadata={\"help\": \"the model name\"})\n dataset_text_field: Optional[str] = field(default=\"text\", metadata={\"help\": \"the text field of the dataset\"})\n log_with: Optional[str] = field(default=None, metadata={\"help\": \"use 'wandb' to log with wandb\"})\n learning_rate: Optional[float] = field(default=1.41e-5, metadata={\"help\": \"the learning rate\"})\n batch_size: Optional[int] = field(default=64, metadata={\"help\": \"the batch size\"})\n seq_length: Optional[int] = field(default=512, metadata={\"help\": \"Input sequence length\"})\n gradient_accumulation_steps: Optional[int] = field(\n default=16, metadata={\"help\": \"the number of gradient accumulation steps\"}\n )\n load_in_8bit: Optional[bool] = field(default=False, metadata={\"help\": \"load the model in 8 bits precision\"})\n load_in_4bit: Optional[bool] = field(default=False, metadata={\"help\": \"load the model in 4 bits precision\"})\n use_peft: Optional[bool] = field(default=False, metadata={\"help\": \"Wether to use PEFT or not to train adapters\"})\n trust_remote_code: Optional[bool] = field(default=True, metadata={\"help\": \"Enable `trust_remote_code`\"})\n output_dir: Optional[str] = field(default=\"output\", metadata={\"help\": \"the output directory\"})\n\n\nparser = HfArgumentParser(ScriptArguments)\nscript_args = parser.parse_args_into_dataclasses()[0]\n\n# Step 1: Load the model\nif script_args.load_in_8bit and script_args.load_in_4bit:\n raise ValueError(\"You can't load the model in 8 bits and 4 bits at the same time\")\nelif script_args.load_in_8bit or script_args.load_in_4bit:\n quantization_config = BitsAndBytesConfig(\n load_in_8bit=script_args.load_in_8bit, load_in_4bit=script_args.load_in_4bit\n )\n # This means: fit the entire model on the GPU:0\n device_map = {\"\": 0}\nelse:\n device_map = None\n quantization_config = None\n\nmodel = AutoModelForSequenceClassification.from_pretrained(\n script_args.model_name,\n quantization_config=quantization_config,\n device_map=device_map,\n trust_remote_code=script_args.trust_remote_code,\n)\n\n# Step 2: Load the dataset and pre-process it\ntokenizer = AutoTokenizer.from_pretrained(script_args.model_name)\ndataset = load_dataset(script_args.dataset_name, split=\"train\")\n# Turn the dataset into pairs of 
post + summaries, where text_j is the preferred question + answer and text_k is the other.\n# Then tokenize the dataset.\n# Adapt this section to your needs for custom datasets\n\n\ndef preprocess_function(examples):\n new_examples = {\n \"input_ids_chosen\": [],\n \"attention_mask_chosen\": [],\n \"input_ids_rejected\": [],\n \"attention_mask_rejected\": [],\n }\n for chosen, rejected in zip(examples[\"chosen\"], examples[\"rejected\"]):\n tokenized_j = tokenizer(chosen, truncation=True)\n tokenized_k = tokenizer(rejected, truncation=True)\n\n new_examples[\"input_ids_chosen\"].append(tokenized_j[\"input_ids\"])\n new_examples[\"attention_mask_chosen\"].append(tokenized_j[\"attention_mask\"])\n new_examples[\"input_ids_rejected\"].append(tokenized_k[\"input_ids\"])\n new_examples[\"attention_mask_rejected\"].append(tokenized_k[\"attention_mask\"])\n\n return new_examples\n\n\n# preprocess the dataset and filter out QAs that are longer than script_args.max_length\noriginal_columns = dataset.column_names\ntrain_dataset = dataset.map(\n preprocess_function,\n batched=True,\n num_proc=4,\n)\ntrain_dataset = train_dataset.filter(\n lambda x: len(x[\"input_ids_chosen\"]) <= script_args.seq_length\n and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n)\n\n\n# Step 3: Define the training arguments\ntraining_args = TrainingArguments(\n output_dir=script_args.output_dir,\n per_device_train_batch_size=script_args.batch_size,\n gradient_accumulation_steps=script_args.gradient_accumulation_steps,\n learning_rate=script_args.learning_rate,\n)\n\n# Step 4: Define the LoraConfig\nif script_args.use_peft:\n peft_config = LoraConfig(r=16, lora_alpha=16, bias=\"none\", task_type=\"SEQ_CLS\", modules_to_save=[\"scores\"])\nelse:\n peft_config = None\n\n# Step 5: Define the Trainer\ntrainer = RewardTrainer(\n model=model,\n tokenizer=tokenizer,\n args=training_args,\n train_dataset=train_dataset,\n peft_config=peft_config,\n max_length=script_args.seq_length,\n)\n\ntrainer.train()\n", "path": "examples/scripts/reward_trainer.py"}]}
| 2,854 | 923 |
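A minimal way to see the effect of the `num_labels=1` change in the golden diff above is to compare the classification head's output shape with and without it. The sketch below follows the record's default checkpoint, but any sequence-classification-capable model behaves the same; it is an illustration, not part of the record:

```python
# Sketch: AutoModelForSequenceClassification defaults to a two-logit head,
# while a reward model should emit a single scalar per sequence.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "facebook/opt-350m"  # the record's default; any seq-classification checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)

default_head = AutoModelForSequenceClassification.from_pretrained(model_name)
scalar_head = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    print(default_head(**inputs).logits.shape)  # torch.Size([1, 2])
    print(scalar_head(**inputs).logits.shape)   # torch.Size([1, 1])
```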
gh_patches_debug_4862
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-420
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not working on macOS Sierra?
Attempting to utilize a collection of hooks from the default repo, I get the following:
```
An unexpected error has occurred: CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
Return code: 100
Expected return code: 0
Output:
Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3
Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python
ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning
ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
ERROR: virtualenv is not compatible with this system or executable
Errors: (none)
Traceback (most recent call last):
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/error_handler.py", line 47, in error_handler
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/main.py", line 157, in main
return run(runner, args)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 212, in run
return _run_hooks(repo_hooks, args, write, environ)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/staged_files_only.py", line 63, in staged_files_only
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 195, in run
repo_hooks = list(get_repo_hooks(runner))
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 141, in get_repo_hooks
for repo in runner.repositories:
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/runner.py", line 47, in repositories
repository.require_installed()
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py", line 117, in require_installed
self.install()
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py", line 187, in install
self.additional_dependencies[language_name][language_version],
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py", line 78, in install_environment
('pip', 'install', '.') + additional_dependencies,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py", line 58, in clean_path_on_failure
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py", line 74, in install_environment
repo_cmd_runner.run(venv_cmd)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/prefixed_command_runner.py", line 39, in run
return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py", line 189, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
pre_commit.util.CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
Return code: 100
Expected return code: 0
Output:
Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3
Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python
ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning
ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
ERROR: virtualenv is not compatible with this system or executable
Errors: (none)
```
Using the Python.org-provided Python 3.5 installer package. I'm already within a virtual environment at the point of execution.
</issue>
<code>
[start of pre_commit/languages/python.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6 import sys
7
8 from pre_commit.envcontext import envcontext
9 from pre_commit.envcontext import UNSET
10 from pre_commit.envcontext import Var
11 from pre_commit.languages import helpers
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.xargs import xargs
14
15
16 ENVIRONMENT_DIR = 'py_env'
17
18
19 def bin_dir(venv):
20 """On windows there's a different directory for the virtualenv"""
21 bin_part = 'Scripts' if os.name == 'nt' else 'bin'
22 return os.path.join(venv, bin_part)
23
24
25 def get_env_patch(venv):
26 return (
27 ('PYTHONHOME', UNSET),
28 ('VIRTUAL_ENV', venv),
29 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
30 )
31
32
33 @contextlib.contextmanager
34 def in_env(repo_cmd_runner, language_version):
35 envdir = os.path.join(
36 repo_cmd_runner.prefix_dir,
37 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
38 )
39 with envcontext(get_env_patch(envdir)):
40 yield
41
42
43 def norm_version(version):
44 if os.name == 'nt': # pragma: no cover (windows)
45 # Try looking up by name
46 if distutils.spawn.find_executable(version):
47 return version
48
49 # If it is in the form pythonx.x search in the default
50 # place on windows
51 if version.startswith('python'):
52 return r'C:\{}\python.exe'.format(version.replace('.', ''))
53
54 # Otherwise assume it is a path
55 return os.path.expanduser(version)
56
57
58 def install_environment(
59 repo_cmd_runner,
60 version='default',
61 additional_dependencies=(),
62 ):
63 additional_dependencies = tuple(additional_dependencies)
64 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
65
66 # Install a virtualenv
67 with clean_path_on_failure(repo_cmd_runner.path(directory)):
68 venv_cmd = [
69 sys.executable, '-m', 'virtualenv',
70 '{{prefix}}{}'.format(directory)
71 ]
72 if version != 'default':
73 venv_cmd.extend(['-p', norm_version(version)])
74 repo_cmd_runner.run(venv_cmd)
75 with in_env(repo_cmd_runner, version):
76 helpers.run_setup_cmd(
77 repo_cmd_runner,
78 ('pip', 'install', '.') + additional_dependencies,
79 )
80
81
82 def run_hook(repo_cmd_runner, hook, file_args):
83 with in_env(repo_cmd_runner, hook['language_version']):
84 return xargs((hook['entry'],) + tuple(hook['args']), file_args)
85
[end of pre_commit/languages/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -71,6 +71,8 @@
]
if version != 'default':
venv_cmd.extend(['-p', norm_version(version)])
+ else:
+ venv_cmd.extend(['-p', os.path.realpath(sys.executable)])
repo_cmd_runner.run(venv_cmd)
with in_env(repo_cmd_runner, version):
helpers.run_setup_cmd(
|
{"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -71,6 +71,8 @@\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n+ else:\n+ venv_cmd.extend(['-p', os.path.realpath(sys.executable)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version):\n helpers.run_setup_cmd(\n", "issue": "Not working on macOS Sierra?\nAttempting to utilize a collection of hooks from the default repo, I get the following:\n\n```\nAn unexpected error has occurred: CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\nReturn code: 100\nExpected return code: 0\nOutput: \n Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'\n New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3\n Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python\n ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning\n ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\n ERROR: virtualenv is not compatible with this system or executable\n\nErrors: (none)\n\nTraceback (most recent call last):\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/error_handler.py\", line 47, in error_handler\n yield\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/main.py\", line 157, in main\n return run(runner, args)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py\", line 212, in run\n return _run_hooks(repo_hooks, args, write, environ)\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/staged_files_only.py\", line 63, in staged_files_only\n yield\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py\", line 195, in run\n repo_hooks = list(get_repo_hooks(runner))\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py\", line 141, in get_repo_hooks\n for repo in runner.repositories:\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = self.func(obj)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/runner.py\", line 47, in repositories\n repository.require_installed()\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py\", line 117, in require_installed\n self.install()\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py\", line 187, in install\n self.additional_dependencies[language_name][language_version],\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py\", line 78, in install_environment\n ('pip', 'install', '.') + additional_dependencies,\n File 
\"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py\", line 58, in clean_path_on_failure\n yield\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py\", line 74, in install_environment\n repo_cmd_runner.run(venv_cmd)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/prefixed_command_runner.py\", line 39, in run\n return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py\", line 189, in cmd_output\n returncode, cmd, retcode, output=(stdout, stderr),\npre_commit.util.CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\nReturn code: 100\nExpected return code: 0\nOutput: \n Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'\n New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3\n Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python\n ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning\n ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\n ERROR: virtualenv is not compatible with this system or executable\n\nErrors: (none)\n```\n\nUsing the Python.org-provided Python 3.5 installer package. 
I'm already within a virtual environment at the point of execution.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv):\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv):\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n envdir = os.path.join(\n repo_cmd_runner.prefix_dir,\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version):\n helpers.run_setup_cmd(\n repo_cmd_runner,\n ('pip', 'install', '.') + additional_dependencies,\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']):\n return xargs((hook['entry'],) + tuple(hook['args']), file_args)\n", "path": "pre_commit/languages/python.py"}]}
| 2,623 | 117 |
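The one-line fix in the golden diff above boils down to always handing virtualenv an explicit interpreter path, resolved through any framework-build symlinks, so it stops mis-detecting `sys.prefix` on macOS. A small sketch of the resulting command construction, with an illustrative target directory:

```python
# Sketch of the patched command construction: when no explicit version is
# requested, point virtualenv at the real interpreter behind any symlinks,
# which avoids the broken sys.prefix detection seen in the traceback.
import os
import sys

def build_venv_cmd(directory, version="default"):
    cmd = [sys.executable, "-m", "virtualenv", directory]
    if version != "default":
        cmd.extend(["-p", version])  # user-requested interpreter, normalised elsewhere
    else:
        cmd.extend(["-p", os.path.realpath(sys.executable)])
    return cmd

print(build_venv_cmd("py_env-default"))  # directory name is illustrative
```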
gh_patches_debug_23913
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-1238
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show all user trades/actions in a new page
## Problem Definition
We have the user data action but users can't actually explore it in any way in the UI. That's not good and we should definitely offer a way to display all actions/trades in a dedicated page.
That would mean showing user's:
- Exchange trades
- Exchange deposit/withdrawals
- DEX trades (when we support them)
## Considerations
The number of visible actions will be limited depending on whether the user has premium or not. We need to think a bit here about how to make this a premium feature while also keeping it available to normal users. At the moment I can only think of a limit on the number of trades/actions shown.
## Task
## Backend
The backend already has an api call for this. It's actually being used with a filter for external location for the OTC trades.
## Frontend
The frontend needs a new dedicated page, user actions or something like that. There, all trades and deposits/withdrawals of all locations should be displayed. This is going to be a very long list in most cases, so we should think about optimizations and how to show the data in a visually appealing and also useful way.
I would imagine it would make sense to have different tables for trades and different ones for asset movements (deposits/withdrawals) etc. And perhaps a different component per location. Or simply offer filtering.
</issue>
<code>
[start of tools/data_faker/data_faker/actions.py]
1 import logging
2 import random
3 from typing import Tuple
4
5 from rotkehlchen.assets.asset import Asset
6 from rotkehlchen.balances.manual import ManuallyTrackedBalance
7 from rotkehlchen.constants.assets import A_BTC, A_EUR, A_USD, FIAT_CURRENCIES
8 from rotkehlchen.exchanges.data_structures import Trade, TradeType
9 from rotkehlchen.fval import FVal
10 from rotkehlchen.history import PriceHistorian
11 from rotkehlchen.serialization.deserialize import deserialize_location, pair_get_assets
12 from rotkehlchen.typing import Location, Timestamp, TradePair
13
14 STARTING_TIMESTAMP = 1464739200 # 01/06/2016
15 NUMBER_OF_TRADES = 5
16 STARTING_FUNDS = {A_EUR: FVal(100000), A_BTC: FVal(10)}
17
18 MAX_TRADE_DIFF_VARIANCE = 14400
19
20 ALLOWED_EXCHANGES = ['kraken', 'binance']
21 KRAKEN_PAIRS = [TradePair('ETH_EUR'), TradePair('BTC_EUR')]
22
23 MAX_TRADE_USD_VALUE = FVal(100)
24 MAX_FEE_USD_VALUE = 1
25
26
27 logger = logging.getLogger(__name__)
28
29
30 class ActionWriter():
31
32 def __init__(
33 self,
34 trades_number: int,
35 seconds_between_trades: int,
36 seconds_between_balance_save: int,
37 rotkehlchen,
38 fake_kraken,
39 fake_binance,
40 ):
41 self.seconds_between_trades = seconds_between_trades
42 self.seconds_between_balance_save = seconds_between_balance_save
43 self.trades_number = trades_number
44 self.current_ts = STARTING_TIMESTAMP
45 self.last_trade_ts = 0
46 self.last_balance_save_ts = 0
47 self.funds = STARTING_FUNDS
48 self.rotki = rotkehlchen
49 self.kraken = fake_kraken
50 self.binance = fake_binance
51
52 timestamp, _, _ = self.get_next_ts()
53 for asset, value in self.funds.items():
54 if asset.is_fiat():
55 self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(
56 asset=asset,
57 label=f'{asset.identifier} balance',
58 amount=value,
59 location=Location.BANKS,
60 tags=None,
61 )])
62 self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)
63
64 # divide our starting funds between exchanges and keep a part out
65 divide_by = len(ALLOWED_EXCHANGES) + 1
66 for asset, value in self.funds.items():
67 amount = value / divide_by
68 for exchange in ALLOWED_EXCHANGES:
69 timestamp, _, _ = self.get_next_ts()
70
71 skip_exchange = asset in FIAT_CURRENCIES and exchange != 'kraken'
72
73 if not skip_exchange:
74 getattr(self, exchange).deposit(
75 asset=asset,
76 amount=amount,
77 time=timestamp,
78 )
79 if asset in FIAT_CURRENCIES:
80 self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(
81 asset=asset,
82 label=f'{asset.identifier} balance',
83 amount=value,
84 location=Location.BANKS,
85 tags=None,
86 )])
87
88 self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)
89 self.last_balance_save_ts = timestamp
90
91 def maybe_save_balances(self, save_ts: Timestamp) -> None:
92 """Maybe Save all current balances in the fake user's DB at the current timestamp
93
94 If the save_ts is not after the time we save balances then nothing happens
95 """
96 if save_ts - self.last_balance_save_ts < self.seconds_between_balance_save:
97 return
98 self.rotki.query_balances(requested_save_data=True, timestamp=save_ts)
99 self.last_balance_save_ts = save_ts
100
101 def generate_history(self) -> None:
102 created_trades = 0
103 while created_trades <= self.trades_number:
104 current_ts, save_balances, make_trade = self.get_next_ts()
105
106 if make_trade:
107 self.create_action(created_trades, current_ts)
108 created_trades += 1
109
110 if save_balances:
111 self.maybe_save_balances(save_ts=current_ts)
112
113 @staticmethod
114 def query_historical_price(from_asset: Asset, to_asset: Asset, timestamp: Timestamp):
115 return PriceHistorian().query_historical_price(
116 from_asset=from_asset,
117 to_asset=to_asset,
118 timestamp=timestamp,
119 )
120
121 def increase_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:
122 if asset not in self.funds:
123 self.funds[asset] = amount
124 else:
125 self.funds[asset] += amount
126
127 getattr(self, exchange).increase_asset(asset, amount)
128
129 def decrease_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:
130 assert asset in self.funds, 'Asset should exist in funds'
131 assert amount <= self.funds[asset], 'We should have enough funds to decrease asset'
132 self.funds[asset] -= amount
133
134 getattr(self, exchange).decrease_asset(asset, amount)
135
136 def get_next_ts(self) -> Tuple[Timestamp, bool, bool]:
137 current_ts = self.current_ts
138 advance_by_secs = min(self.seconds_between_trades, self.seconds_between_balance_save)
139 secs_in_future = random.randint(
140 advance_by_secs,
141 advance_by_secs + MAX_TRADE_DIFF_VARIANCE,
142 )
143 self.current_ts += secs_in_future
144
145 save_balances = False
146 if self.current_ts - self.last_balance_save_ts >= self.seconds_between_balance_save:
147 save_balances = True
148
149 make_trade = False
150 if self.current_ts - self.last_trade_ts >= self.seconds_between_trades:
151 make_trade = True
152
153 return Timestamp(current_ts), save_balances, make_trade
154
155 def create_action(self, index: int, ts: Timestamp):
156 """Create a random trade action on a random exchange depending
157 on the funds that are available in that exchange"""
158 # choose an exchange at random
159 exchange_name = random.choice(ALLOWED_EXCHANGES)
160 exchange = getattr(self, exchange_name)
161 # choose a random pair at that exchange
162 pair = exchange.choose_pair(
163 timestamp=ts,
164 price_query=self.query_historical_price,
165 )
166 print(
167 f'Creating trade {index + 1} / {self.trades_number} in {exchange_name}'
168 f' for the pair: {pair} at timestamp {ts}',
169 )
170 # depending on our funds decide on what to do. Buy/sell
171 base, quote = pair_get_assets(pair)
172 if exchange.get_balance(base) is None:
173 action_type = TradeType.BUY
174 elif exchange.get_balance(quote) is None:
175 action_type = TradeType.SELL
176 else:
177 # TODO: trade the one we have most of
178 action_type = random.choice(list(TradeType))
179
180 # if we are buying we are going to spend from the quote asset
181 if action_type == TradeType.BUY:
182 spending_asset = quote
183 else: # selling spends from the base asset
184 spending_asset = base
185 # get a spending asset amount within our per-trade equivalent range and
186 # our available funds
187 spending_usd_rate = self.query_historical_price(spending_asset, A_USD, ts)
188 max_usd_in_spending_asset = spending_usd_rate * exchange.get_balance(spending_asset)
189 max_usd_equivalent_to_spend = min(max_usd_in_spending_asset, MAX_TRADE_USD_VALUE)
190 rate = self.query_historical_price(base, quote, ts)
191 usd_to_spend = FVal(random.uniform(0.01, float(max_usd_equivalent_to_spend)))
192 amount_in_spending_asset = usd_to_spend / spending_usd_rate
193 # if we are buying then the amount is the amount of asset we bought
194 if action_type == TradeType.BUY:
195 amount = amount_in_spending_asset / rate
196 # if we are selling the amount is the spending asset amount
197 else:
198 amount = amount_in_spending_asset
199
200 quote_asset_usd_rate = self.query_historical_price(quote, A_USD, ts)
201 fee_in_quote_currency = FVal(random.uniform(0, MAX_FEE_USD_VALUE)) / quote_asset_usd_rate
202
203 # create the trade
204 trade = Trade(
205 timestamp=ts,
206 location=deserialize_location(exchange_name),
207 pair=pair,
208 trade_type=action_type,
209 amount=amount,
210 rate=rate,
211 fee=fee_in_quote_currency,
212 fee_currency=quote,
213 link='',
214 notes='',
215 )
216 logger.info(f'Created trade: {trade}')
217
218 # Adjust our global and per exchange accounting
219 if action_type == TradeType.BUY:
220 # we buy so we increase our base asset by amount
221 self.increase_asset(base, amount, exchange_name)
222 # and decrease quote by amount * rate
223 self.decrease_asset(quote, amount * rate, exchange_name)
224 else:
225 # we sell so we increase our quote asset
226 self.increase_asset(quote, amount * rate, exchange_name)
227 # and decrease our base asset
228 self.decrease_asset(base, amount, exchange_name)
229
230 # finally add it to the exchange
231 exchange.append_trade(trade)
232
[end of tools/data_faker/data_faker/actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/data_faker/data_faker/actions.py b/tools/data_faker/data_faker/actions.py
--- a/tools/data_faker/data_faker/actions.py
+++ b/tools/data_faker/data_faker/actions.py
@@ -79,7 +79,7 @@
if asset in FIAT_CURRENCIES:
self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(
asset=asset,
- label=f'{asset.identifier} balance',
+ label=f'{asset.identifier} balance {timestamp}',
amount=value,
location=Location.BANKS,
tags=None,
@@ -104,8 +104,11 @@
current_ts, save_balances, make_trade = self.get_next_ts()
if make_trade:
- self.create_action(created_trades, current_ts)
- created_trades += 1
+ try:
+ self.create_action(created_trades, current_ts)
+ created_trades += 1
+ except Exception as e:
+ logger.error(f'failed to create trade: {e}')
if save_balances:
self.maybe_save_balances(save_ts=current_ts)
|
{"golden_diff": "diff --git a/tools/data_faker/data_faker/actions.py b/tools/data_faker/data_faker/actions.py\n--- a/tools/data_faker/data_faker/actions.py\n+++ b/tools/data_faker/data_faker/actions.py\n@@ -79,7 +79,7 @@\n if asset in FIAT_CURRENCIES:\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n- label=f'{asset.identifier} balance',\n+ label=f'{asset.identifier} balance {timestamp}',\n amount=value,\n location=Location.BANKS,\n tags=None,\n@@ -104,8 +104,11 @@\n current_ts, save_balances, make_trade = self.get_next_ts()\n \n if make_trade:\n- self.create_action(created_trades, current_ts)\n- created_trades += 1\n+ try:\n+ self.create_action(created_trades, current_ts)\n+ created_trades += 1\n+ except Exception as e:\n+ logger.error(f'failed to create trade: {e}')\n \n if save_balances:\n self.maybe_save_balances(save_ts=current_ts)\n", "issue": "Show all user trades/actions in a new page\n## Problem Definition\r\n\r\nWe have the user data action but users can't actually explore it in any way in the UI. That's not good and we should definitely offer a way to display all actions/trades in a dedicated page.\r\n\r\nThat would mean showing user's:\r\n- Exchange trades\r\n- Exchange deposit/withdrawals\r\n- DEX trades (when we support them)\r\n\r\n\r\n## Considerations\r\n\r\nNumber of visible actions will be limited whether user has premium or not. Need to think a bit here on how to make this a premium feature but also available to normal users. At the moment can only think of limit in the number of trades/actions shown.\r\n\r\n## Task\r\n\r\n## Backend\r\nThe backend already has an api call for this. It's actually being used with a filter for external location for the OTC trades.\r\n\r\n## Frontend\r\n\r\nThe frontend need a new dedicated page. User tactions or something like that. And there all trades, deposits/withdrawals of all locations should be displayed. This is going to be a very long list in most cases so we should think on optimizations and how to show the data in visually appealing and also useful way.\r\n\r\nI would imagine it would make sense to have different tables for trades, different for asset movements (deposit/withdrawals) etc. And perhaps a differen component per location. 
Or simply offer filtering.\r\n\n", "before_files": [{"content": "import logging\nimport random\nfrom typing import Tuple\n\nfrom rotkehlchen.assets.asset import Asset\nfrom rotkehlchen.balances.manual import ManuallyTrackedBalance\nfrom rotkehlchen.constants.assets import A_BTC, A_EUR, A_USD, FIAT_CURRENCIES\nfrom rotkehlchen.exchanges.data_structures import Trade, TradeType\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.history import PriceHistorian\nfrom rotkehlchen.serialization.deserialize import deserialize_location, pair_get_assets\nfrom rotkehlchen.typing import Location, Timestamp, TradePair\n\nSTARTING_TIMESTAMP = 1464739200 # 01/06/2016\nNUMBER_OF_TRADES = 5\nSTARTING_FUNDS = {A_EUR: FVal(100000), A_BTC: FVal(10)}\n\nMAX_TRADE_DIFF_VARIANCE = 14400\n\nALLOWED_EXCHANGES = ['kraken', 'binance']\nKRAKEN_PAIRS = [TradePair('ETH_EUR'), TradePair('BTC_EUR')]\n\nMAX_TRADE_USD_VALUE = FVal(100)\nMAX_FEE_USD_VALUE = 1\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ActionWriter():\n\n def __init__(\n self,\n trades_number: int,\n seconds_between_trades: int,\n seconds_between_balance_save: int,\n rotkehlchen,\n fake_kraken,\n fake_binance,\n ):\n self.seconds_between_trades = seconds_between_trades\n self.seconds_between_balance_save = seconds_between_balance_save\n self.trades_number = trades_number\n self.current_ts = STARTING_TIMESTAMP\n self.last_trade_ts = 0\n self.last_balance_save_ts = 0\n self.funds = STARTING_FUNDS\n self.rotki = rotkehlchen\n self.kraken = fake_kraken\n self.binance = fake_binance\n\n timestamp, _, _ = self.get_next_ts()\n for asset, value in self.funds.items():\n if asset.is_fiat():\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n label=f'{asset.identifier} balance',\n amount=value,\n location=Location.BANKS,\n tags=None,\n )])\n self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)\n\n # divide our starting funds between exchanges and keep a part out\n divide_by = len(ALLOWED_EXCHANGES) + 1\n for asset, value in self.funds.items():\n amount = value / divide_by\n for exchange in ALLOWED_EXCHANGES:\n timestamp, _, _ = self.get_next_ts()\n\n skip_exchange = asset in FIAT_CURRENCIES and exchange != 'kraken'\n\n if not skip_exchange:\n getattr(self, exchange).deposit(\n asset=asset,\n amount=amount,\n time=timestamp,\n )\n if asset in FIAT_CURRENCIES:\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n label=f'{asset.identifier} balance',\n amount=value,\n location=Location.BANKS,\n tags=None,\n )])\n\n self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)\n self.last_balance_save_ts = timestamp\n\n def maybe_save_balances(self, save_ts: Timestamp) -> None:\n \"\"\"Maybe Save all current balances in the fake user's DB at the current timestamp\n\n If the save_ts is not after the time we save balances then nothing happens\n \"\"\"\n if save_ts - self.last_balance_save_ts < self.seconds_between_balance_save:\n return\n self.rotki.query_balances(requested_save_data=True, timestamp=save_ts)\n self.last_balance_save_ts = save_ts\n\n def generate_history(self) -> None:\n created_trades = 0\n while created_trades <= self.trades_number:\n current_ts, save_balances, make_trade = self.get_next_ts()\n\n if make_trade:\n self.create_action(created_trades, current_ts)\n created_trades += 1\n\n if save_balances:\n self.maybe_save_balances(save_ts=current_ts)\n\n @staticmethod\n def query_historical_price(from_asset: Asset, to_asset: Asset, 
timestamp: Timestamp):\n return PriceHistorian().query_historical_price(\n from_asset=from_asset,\n to_asset=to_asset,\n timestamp=timestamp,\n )\n\n def increase_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:\n if asset not in self.funds:\n self.funds[asset] = amount\n else:\n self.funds[asset] += amount\n\n getattr(self, exchange).increase_asset(asset, amount)\n\n def decrease_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:\n assert asset in self.funds, 'Asset should exist in funds'\n assert amount <= self.funds[asset], 'We should have enough funds to decrease asset'\n self.funds[asset] -= amount\n\n getattr(self, exchange).decrease_asset(asset, amount)\n\n def get_next_ts(self) -> Tuple[Timestamp, bool, bool]:\n current_ts = self.current_ts\n advance_by_secs = min(self.seconds_between_trades, self.seconds_between_balance_save)\n secs_in_future = random.randint(\n advance_by_secs,\n advance_by_secs + MAX_TRADE_DIFF_VARIANCE,\n )\n self.current_ts += secs_in_future\n\n save_balances = False\n if self.current_ts - self.last_balance_save_ts >= self.seconds_between_balance_save:\n save_balances = True\n\n make_trade = False\n if self.current_ts - self.last_trade_ts >= self.seconds_between_trades:\n make_trade = True\n\n return Timestamp(current_ts), save_balances, make_trade\n\n def create_action(self, index: int, ts: Timestamp):\n \"\"\"Create a random trade action on a random exchange depending\n on the funds that are available in that exchange\"\"\"\n # choose an exchange at random\n exchange_name = random.choice(ALLOWED_EXCHANGES)\n exchange = getattr(self, exchange_name)\n # choose a random pair at that exchange\n pair = exchange.choose_pair(\n timestamp=ts,\n price_query=self.query_historical_price,\n )\n print(\n f'Creating trade {index + 1} / {self.trades_number} in {exchange_name}'\n f' for the pair: {pair} at timestamp {ts}',\n )\n # depending on our funds decide on what to do. 
Buy/sell\n base, quote = pair_get_assets(pair)\n if exchange.get_balance(base) is None:\n action_type = TradeType.BUY\n elif exchange.get_balance(quote) is None:\n action_type = TradeType.SELL\n else:\n # TODO: trade the one we have most of\n action_type = random.choice(list(TradeType))\n\n # if we are buying we are going to spend from the quote asset\n if action_type == TradeType.BUY:\n spending_asset = quote\n else: # selling spends from the base asset\n spending_asset = base\n # get a spending asset amount within our per-trade equivalent range and\n # our available funds\n spending_usd_rate = self.query_historical_price(spending_asset, A_USD, ts)\n max_usd_in_spending_asset = spending_usd_rate * exchange.get_balance(spending_asset)\n max_usd_equivalent_to_spend = min(max_usd_in_spending_asset, MAX_TRADE_USD_VALUE)\n rate = self.query_historical_price(base, quote, ts)\n usd_to_spend = FVal(random.uniform(0.01, float(max_usd_equivalent_to_spend)))\n amount_in_spending_asset = usd_to_spend / spending_usd_rate\n # if we are buying then the amount is the amount of asset we bought\n if action_type == TradeType.BUY:\n amount = amount_in_spending_asset / rate\n # if we are selling the amount is the spending asset amount\n else:\n amount = amount_in_spending_asset\n\n quote_asset_usd_rate = self.query_historical_price(quote, A_USD, ts)\n fee_in_quote_currency = FVal(random.uniform(0, MAX_FEE_USD_VALUE)) / quote_asset_usd_rate\n\n # create the trade\n trade = Trade(\n timestamp=ts,\n location=deserialize_location(exchange_name),\n pair=pair,\n trade_type=action_type,\n amount=amount,\n rate=rate,\n fee=fee_in_quote_currency,\n fee_currency=quote,\n link='',\n notes='',\n )\n logger.info(f'Created trade: {trade}')\n\n # Adjust our global and per exchange accounting\n if action_type == TradeType.BUY:\n # we buy so we increase our base asset by amount\n self.increase_asset(base, amount, exchange_name)\n # and decrease quote by amount * rate\n self.decrease_asset(quote, amount * rate, exchange_name)\n else:\n # we sell so we increase our quote asset\n self.increase_asset(quote, amount * rate, exchange_name)\n # and decrease our base asset\n self.decrease_asset(base, amount, exchange_name)\n\n # finally add it to the exchange\n exchange.append_trade(trade)\n", "path": "tools/data_faker/data_faker/actions.py"}]}
| 3,492 | 257 |
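The golden diff above mostly hardens the fake-history generator: balance labels gain a timestamp so repeated saves do not collide, and a failing fake trade is logged instead of aborting the run. A simplified sketch of the guarded loop, with `writer` standing in for the record's `ActionWriter`:

```python
# Simplified sketch: keep generating fake history even if one trade fails,
# logging the error instead of letting the exception escape the loop.
import logging

logger = logging.getLogger(__name__)

def generate_history(writer):
    created_trades = 0
    while created_trades <= writer.trades_number:
        current_ts, save_balances, make_trade = writer.get_next_ts()
        if make_trade:
            try:
                writer.create_action(created_trades, current_ts)
                created_trades += 1
            except Exception as e:
                logger.error(f'failed to create trade: {e}')
        if save_balances:
            writer.maybe_save_balances(save_ts=current_ts)
```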
gh_patches_debug_15313
|
rasdani/github-patches
|
git_diff
|
SciTools__cartopy-1027
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Doc: gallery code links don't need to be underlined
As an example, https://2-5282596-gh.circle-artifacts.com/0/root/project/docs/build/html/gallery/miscellanea/star_shaped_boundary.html#sphx-glr-gallery-miscellanea-star-shaped-boundary-py has all of the functions that have links underlined. This adds visual noise that distracts from the message.
We want to keep the links, but define the CSS to avoid the underlining.
</issue>
<code>
[start of docs/source/sphinxext/pre_sphinx_gallery.py]
1 """
2 Override sphinx_gallery's treatment of groups (folders) with cartopy's
3 ``__tags__`` semantics. This is tightly bound to the sphinx_gallery
4 implementation, hence the explicit version checking.
5
6 """
7 from collections import OrderedDict
8 import os.path
9 import shutil
10 import tempfile
11 import textwrap
12
13 import sphinx_gallery.gen_gallery
14 import sphinx_gallery.gen_rst
15 from sphinx_gallery.gen_rst import (
16 write_backreferences, extract_intro, _thumbnail_div,
17 generate_file_rst, sphinx_compatibility)
18
19
20 if sphinx_gallery.__version__ not in ['0.1.12']:
21 raise RuntimeError('not tested with this version of sphinx_gallery ({}). '
22 'Please modify this check, and validate sphinx_gallery'
23 ' behaves as expected.'
24 ''.format(sphinx_gallery.__version__))
25
26
27 GALLERY_HEADER = textwrap.dedent("""
28
29 Cartopy Gallery
30 ---------------
31
32 The following visual examples demonstrate some of the functionality of
33 Cartopy, particularly its matplotlib interface.
34
35 For a structured introduction to cartopy, including some of these
36 examples, see :ref:`getting-started-with-cartopy`.
37
38 """)
39
40
41 def example_groups(src_dir):
42 """Return a dictionary of {tag: [example filenames]} for the given dir."""
43
44 sorted_listdir = [fname for fname in sorted(os.listdir(src_dir))
45 if fname.endswith('.py') and not fname.startswith('_')]
46 tagged_examples = {}
47
48 for fname in sorted_listdir:
49 fpath = os.path.join(src_dir, fname)
50 __tags__ = []
51 with open(fpath, 'r') as fh:
52 for line in fh:
53 # Crudely remove the __tags__ line.
54 if line.startswith('__tags__ = '):
55 exec(line.strip(), locals(), globals())
56 for tag in __tags__:
57 tagged_examples.setdefault(tag, []).append(fname)
58 break
59 else:
60 tag = 'Miscellanea'
61 tagged_examples.setdefault(tag, []).append(fname)
62 return tagged_examples
63
64
65 def order_examples(tagged_examples):
66 """Order the tags and their examples."""
67 preferred_tag_order = ['Introductory',
68 'Lines and polygons',
69 'Scalar data',
70 'Vector data',
71 'Web services']
72
73 def sort_key(item):
74 tag = item[0]
75 try:
76 index = preferred_tag_order.index(tag)
77 except ValueError:
78 index = len(preferred_tag_order) + 1
79
80 return (index, tag.lower())
81 sorted_items = sorted(tagged_examples.items(), key=sort_key)
82 return OrderedDict(sorted_items)
83
84
85 def write_example(src_fpath, target_dir):
86 target_fpath = os.path.join(target_dir, os.path.basename(src_fpath))
87 with open(src_fpath, 'r') as fh:
88 with open(target_fpath, 'w') as fh_out:
89 for line in fh:
90 # Crudely remove the __tags__ line.
91 if line.startswith('__tags__ = '):
92 continue
93 fh_out.write(line)
94
95
96 def generate_dir_rst(src_dir, target_dir, gallery_conf, seen_backrefs):
97 """Generate the gallery reStructuredText for an example directory"""
98
99 fhindex = GALLERY_HEADER
100
101 if not os.path.exists(target_dir):
102 os.makedirs(target_dir)
103 tagged_examples = example_groups(src_dir)
104 tagged_examples = order_examples(tagged_examples)
105
106 computation_times = []
107 build_target_dir = os.path.relpath(target_dir, gallery_conf['src_dir'])
108
109 seen = set()
110 tmp_dir = tempfile.mkdtemp()
111
112 for tag, examples in tagged_examples.items():
113 sorted_listdir = examples
114
115 entries_text = []
116 iterator = sphinx_compatibility.status_iterator(
117 sorted_listdir,
118 'Generating gallery for %s ' % tag,
119 length=len(sorted_listdir))
120 for fname in iterator:
121 write_example(os.path.join(src_dir, fname), tmp_dir)
122 amount_of_code, time_elapsed = generate_file_rst(
123 fname, target_dir, tmp_dir, gallery_conf)
124
125 if fname not in seen:
126 seen.add(fname)
127 computation_times.append((time_elapsed, fname))
128
129 new_fname = os.path.join(src_dir, fname)
130 intro = extract_intro(new_fname)
131 this_entry = _thumbnail_div(build_target_dir, fname, intro) + textwrap.dedent("""
132
133 .. toctree::
134 :hidden:
135
136 /%s
137
138 """) % os.path.join(build_target_dir, fname[:-3]).replace(os.sep, '/') # noqa: E501
139
140 entries_text.append((amount_of_code, this_entry))
141
142 if gallery_conf['backreferences_dir']:
143 write_backreferences(seen_backrefs, gallery_conf,
144 target_dir, fname, intro)
145
146 # sort to have the smallest entries in the beginning
147 entries_text.sort()
148
149 fhindex += textwrap.dedent("""
150
151 {tag}
152 {tag_underline}
153
154 .. container:: gallery_images
155
156 """.format(tag=tag, tag_underline='-' * len(tag)))
157
158 for _, entry_text in entries_text:
159 fhindex += '\n '.join(entry_text.split('\n'))
160
161 # clear at the end of the section
162 fhindex += """.. raw:: html\n
163 <div style='clear:both'></div>\n\n"""
164
165 # Tidy up the temp directory
166 shutil.rmtree(tmp_dir)
167
168 return fhindex, computation_times
169
170
171 # Monkey-patch sphinx_gallery to handle cartopy's example format.
172 sphinx_gallery.gen_rst.generate_dir_rst = generate_dir_rst
173 sphinx_gallery.gen_gallery.generate_dir_rst = generate_dir_rst
174
175
176 def setup(app):
177 pass
178
[end of docs/source/sphinxext/pre_sphinx_gallery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/source/sphinxext/pre_sphinx_gallery.py b/docs/source/sphinxext/pre_sphinx_gallery.py
--- a/docs/source/sphinxext/pre_sphinx_gallery.py
+++ b/docs/source/sphinxext/pre_sphinx_gallery.py
@@ -47,13 +47,12 @@
for fname in sorted_listdir:
fpath = os.path.join(src_dir, fname)
- __tags__ = []
with open(fpath, 'r') as fh:
for line in fh:
# Crudely remove the __tags__ line.
if line.startswith('__tags__ = '):
exec(line.strip(), locals(), globals())
- for tag in __tags__:
+ for tag in __tags__: # noqa:
tagged_examples.setdefault(tag, []).append(fname)
break
else:
|
{"golden_diff": "diff --git a/docs/source/sphinxext/pre_sphinx_gallery.py b/docs/source/sphinxext/pre_sphinx_gallery.py\n--- a/docs/source/sphinxext/pre_sphinx_gallery.py\n+++ b/docs/source/sphinxext/pre_sphinx_gallery.py\n@@ -47,13 +47,12 @@\n \n for fname in sorted_listdir:\n fpath = os.path.join(src_dir, fname)\n- __tags__ = []\n with open(fpath, 'r') as fh:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n exec(line.strip(), locals(), globals())\n- for tag in __tags__:\n+ for tag in __tags__: # noqa:\n tagged_examples.setdefault(tag, []).append(fname)\n break\n else:\n", "issue": "Doc: gallery code links don't need to be underlined\nAs an example, https://2-5282596-gh.circle-artifacts.com/0/root/project/docs/build/html/gallery/miscellanea/star_shaped_boundary.html#sphx-glr-gallery-miscellanea-star-shaped-boundary-py has all of the functions that have links underlined. This adds visual noise that distracts from the message.\r\n\r\nWe want to keep the links, but define the CSS to avoid the underlining.\n", "before_files": [{"content": "\"\"\"\nOverride sphinx_gallery's treatment of groups (folders) with cartopy's\n``__tags__`` semantics. This is tightly bound to the sphinx_gallery\nimplementation, hence the explicit version checking.\n\n\"\"\"\nfrom collections import OrderedDict\nimport os.path\nimport shutil\nimport tempfile\nimport textwrap\n\nimport sphinx_gallery.gen_gallery\nimport sphinx_gallery.gen_rst\nfrom sphinx_gallery.gen_rst import (\n write_backreferences, extract_intro, _thumbnail_div,\n generate_file_rst, sphinx_compatibility)\n\n\nif sphinx_gallery.__version__ not in ['0.1.12']:\n raise RuntimeError('not tested with this version of sphinx_gallery ({}). '\n 'Please modify this check, and validate sphinx_gallery'\n ' behaves as expected.'\n ''.format(sphinx_gallery.__version__))\n\n\nGALLERY_HEADER = textwrap.dedent(\"\"\"\n\n Cartopy Gallery\n ---------------\n\n The following visual examples demonstrate some of the functionality of\n Cartopy, particularly its matplotlib interface.\n\n For a structured introduction to cartopy, including some of these\n examples, see :ref:`getting-started-with-cartopy`.\n\n\"\"\")\n\n\ndef example_groups(src_dir):\n \"\"\"Return a dictionary of {tag: [example filenames]} for the given dir.\"\"\"\n\n sorted_listdir = [fname for fname in sorted(os.listdir(src_dir))\n if fname.endswith('.py') and not fname.startswith('_')]\n tagged_examples = {}\n\n for fname in sorted_listdir:\n fpath = os.path.join(src_dir, fname)\n __tags__ = []\n with open(fpath, 'r') as fh:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n exec(line.strip(), locals(), globals())\n for tag in __tags__:\n tagged_examples.setdefault(tag, []).append(fname)\n break\n else:\n tag = 'Miscellanea'\n tagged_examples.setdefault(tag, []).append(fname)\n return tagged_examples\n\n\ndef order_examples(tagged_examples):\n \"\"\"Order the tags and their examples.\"\"\"\n preferred_tag_order = ['Introductory',\n 'Lines and polygons',\n 'Scalar data',\n 'Vector data',\n 'Web services']\n\n def sort_key(item):\n tag = item[0]\n try:\n index = preferred_tag_order.index(tag)\n except ValueError:\n index = len(preferred_tag_order) + 1\n\n return (index, tag.lower())\n sorted_items = sorted(tagged_examples.items(), key=sort_key)\n return OrderedDict(sorted_items)\n\n\ndef write_example(src_fpath, target_dir):\n target_fpath = os.path.join(target_dir, os.path.basename(src_fpath))\n with open(src_fpath, 
'r') as fh:\n with open(target_fpath, 'w') as fh_out:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n continue\n fh_out.write(line)\n\n\ndef generate_dir_rst(src_dir, target_dir, gallery_conf, seen_backrefs):\n \"\"\"Generate the gallery reStructuredText for an example directory\"\"\"\n\n fhindex = GALLERY_HEADER\n\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n tagged_examples = example_groups(src_dir)\n tagged_examples = order_examples(tagged_examples)\n\n computation_times = []\n build_target_dir = os.path.relpath(target_dir, gallery_conf['src_dir'])\n\n seen = set()\n tmp_dir = tempfile.mkdtemp()\n\n for tag, examples in tagged_examples.items():\n sorted_listdir = examples\n\n entries_text = []\n iterator = sphinx_compatibility.status_iterator(\n sorted_listdir,\n 'Generating gallery for %s ' % tag,\n length=len(sorted_listdir))\n for fname in iterator:\n write_example(os.path.join(src_dir, fname), tmp_dir)\n amount_of_code, time_elapsed = generate_file_rst(\n fname, target_dir, tmp_dir, gallery_conf)\n\n if fname not in seen:\n seen.add(fname)\n computation_times.append((time_elapsed, fname))\n\n new_fname = os.path.join(src_dir, fname)\n intro = extract_intro(new_fname)\n this_entry = _thumbnail_div(build_target_dir, fname, intro) + textwrap.dedent(\"\"\"\n\n .. toctree::\n :hidden:\n\n /%s\n\n \"\"\") % os.path.join(build_target_dir, fname[:-3]).replace(os.sep, '/') # noqa: E501\n\n entries_text.append((amount_of_code, this_entry))\n\n if gallery_conf['backreferences_dir']:\n write_backreferences(seen_backrefs, gallery_conf,\n target_dir, fname, intro)\n\n # sort to have the smallest entries in the beginning\n entries_text.sort()\n\n fhindex += textwrap.dedent(\"\"\"\n\n {tag}\n {tag_underline}\n\n .. container:: gallery_images\n\n \"\"\".format(tag=tag, tag_underline='-' * len(tag)))\n\n for _, entry_text in entries_text:\n fhindex += '\\n '.join(entry_text.split('\\n'))\n\n # clear at the end of the section\n fhindex += \"\"\".. raw:: html\\n\n <div style='clear:both'></div>\\n\\n\"\"\"\n\n # Tidy up the temp directory\n shutil.rmtree(tmp_dir)\n\n return fhindex, computation_times\n\n\n# Monkey-patch sphinx_gallery to handle cartopy's example format.\nsphinx_gallery.gen_rst.generate_dir_rst = generate_dir_rst\nsphinx_gallery.gen_gallery.generate_dir_rst = generate_dir_rst\n\n\ndef setup(app):\n pass\n", "path": "docs/source/sphinxext/pre_sphinx_gallery.py"}]}
| 2,308 | 179 |
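Illustrative sketch (not part of the recorded example above): the cartopy issue asks to keep the gallery links but drop the underline via CSS. One hedged way to wire such a stylesheet in would be through the `setup()` hook that `pre_sphinx_gallery.py` already defines as a no-op; the stylesheet name and the CSS selectors below are assumptions for illustration, not verified sphinx-gallery class names, while `add_css_file` is the standard Sphinx API for this.

```python
# Hypothetical wiring of a gallery stylesheet; assumes a file
# docs/source/_static/gallery_overrides.css containing a rule along the
# lines of:  div.body a code { text-decoration: none; }
# (selector is an assumption, adjust to the generated gallery markup).
def setup(app):
    # Sphinx >= 1.8 exposes add_css_file(); older releases used add_stylesheet().
    app.add_css_file("gallery_overrides.css")
```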
gh_patches_debug_33694
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-1467
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Project templating using TemplatedConfigLoader raises ValidationError in 0.18.0
## Description
When following the guide for [Template configuration](https://kedro.readthedocs.io/en/stable/kedro_project_setup/configuration.html#template-configuration), Kedro 0.18.0 raises ValidationError as `kedro.config.templated_config.TemplatedConfigLoader` is not recognized as a subclass of `kedro.config.config.ConfigLoader`.
## Context
This bug hinders me from using the `TemplatedConfigLoader`, according to the documentation.
## Steps to Reproduce
1. Create new Kedro project
```
kedro new --starter=pandas-iris
cd new-kedro-project
```
2. Create and activate venv. Install requirements.
```
python -m venv my_venv
source my_venv/bin/activate
pip install -r src/requirements.txt
```
3. Open the `settings.py` file.
```
nano src/new_kedro_project/settings.py
```
Uncomment lines 28,29,31,32,33 as suggested in the guide. Expected config:
```
# Class that manages how configuration is loaded.
from kedro.config import TemplatedConfigLoader
CONFIG_LOADER_CLASS = TemplatedConfigLoader
# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.
CONFIG_LOADER_ARGS = {
"globals_pattern": "*globals.yml",
}
```
4. Run the default pipeline
```
kedro run
```
## Expected Result
Defeault pipeline should run with global config keys overwritten.
## Actual Result
ValidationError is raised.
```
Traceback (most recent call last):
File "/usr/local/bin/kedro", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py", line 205, in main
cli_collection = KedroCLI(project_path=Path.cwd())
File "/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py", line 114, in __init__
self._metadata = bootstrap_project(project_path)
File "/usr/local/lib/python3.9/site-packages/kedro/framework/startup.py", line 155, in bootstrap_project
configure_project(metadata.package_name)
File "/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py", line 166, in configure_project
settings.configure(settings_module)
File "/usr/local/lib/python3.9/site-packages/dynaconf/base.py", line 182, in configure
self._wrapped = Settings(settings_module=settings_module, **kwargs)
File "/usr/local/lib/python3.9/site-packages/dynaconf/base.py", line 235, in __init__
self.validators.validate(
File "/usr/local/lib/python3.9/site-packages/dynaconf/validator.py", line 417, in validate
validator.validate(self.settings, only=only, exclude=exclude)
File "/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py", line 35, in validate
raise ValidationError(
dynaconf.validator.ValidationError: Invalid value `kedro.config.templated_config.TemplatedConfigLoader` received for setting `CONFIG_LOADER_CLASS`. It must be a subclass of `kedro.config.config.ConfigLoader`.
```
## Your Environment
* Kedro version used (`pip show kedro` or `kedro -V`): `0.18.0`
* Python version used (`python -V`): `3.9.10`
* Operating system and version: `macOS 12.3`
</issue>
<code>
[start of kedro/framework/project/__init__.py]
1 """``kedro.framework.project`` module provides utitlity to
2 configure a Kedro project and access its settings."""
3 # pylint: disable=redefined-outer-name,unused-argument,global-statement
4 import importlib
5 import logging.config
6 import operator
7 from collections.abc import MutableMapping
8 from typing import Any, Dict, Optional
9
10 from dynaconf import LazySettings
11 from dynaconf.validator import ValidationError, Validator
12
13 from kedro.pipeline import Pipeline
14
15
16 def _get_default_class(class_import_path):
17 module, _, class_name = class_import_path.rpartition(".")
18
19 def validator_func(settings, validators):
20 return getattr(importlib.import_module(module), class_name)
21
22 return validator_func
23
24
25 class _IsSubclassValidator(Validator):
26 """A validator to check if the supplied setting value is a subclass of the default class"""
27
28 def validate(self, settings, *args, **kwargs):
29 super().validate(settings, *args, **kwargs)
30
31 default_class = self.default(settings, self)
32 for name in self.names:
33 setting_value = getattr(settings, name)
34 if not issubclass(setting_value, default_class):
35 raise ValidationError(
36 f"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` "
37 f"received for setting `{name}`. It must be a subclass of "
38 f"`{default_class.__module__}.{default_class.__qualname__}`."
39 )
40
41
42 class _ProjectSettings(LazySettings):
43 """Define all settings available for users to configure in Kedro,
44 along with their validation rules and default values.
45 Use Dynaconf's LazySettings as base.
46 """
47
48 _CONF_SOURCE = Validator("CONF_SOURCE", default="conf")
49 _HOOKS = Validator("HOOKS", default=tuple())
50 _CONTEXT_CLASS = _IsSubclassValidator(
51 "CONTEXT_CLASS",
52 default=_get_default_class("kedro.framework.context.KedroContext"),
53 )
54 _SESSION_STORE_CLASS = _IsSubclassValidator(
55 "SESSION_STORE_CLASS",
56 default=_get_default_class("kedro.framework.session.session.BaseSessionStore"),
57 )
58 _SESSION_STORE_ARGS = Validator("SESSION_STORE_ARGS", default={})
59 _DISABLE_HOOKS_FOR_PLUGINS = Validator("DISABLE_HOOKS_FOR_PLUGINS", default=tuple())
60 _CONFIG_LOADER_CLASS = _IsSubclassValidator(
61 "CONFIG_LOADER_CLASS", default=_get_default_class("kedro.config.ConfigLoader")
62 )
63 _CONFIG_LOADER_ARGS = Validator("CONFIG_LOADER_ARGS", default={})
64 _DATA_CATALOG_CLASS = _IsSubclassValidator(
65 "DATA_CATALOG_CLASS", default=_get_default_class("kedro.io.DataCatalog")
66 )
67
68 def __init__(self, *args, **kwargs):
69
70 kwargs.update(
71 validators=[
72 self._CONF_SOURCE,
73 self._HOOKS,
74 self._CONTEXT_CLASS,
75 self._SESSION_STORE_CLASS,
76 self._SESSION_STORE_ARGS,
77 self._DISABLE_HOOKS_FOR_PLUGINS,
78 self._CONFIG_LOADER_CLASS,
79 self._CONFIG_LOADER_ARGS,
80 self._DATA_CATALOG_CLASS,
81 ]
82 )
83 super().__init__(*args, **kwargs)
84
85
86 def _load_data_wrapper(func):
87 """Wrap a method in _ProjectPipelines so that data is loaded on first access.
88 Taking inspiration from dynaconf.utils.functional.new_method_proxy
89 """
90 # pylint: disable=protected-access
91 def inner(self, *args, **kwargs):
92 self._load_data()
93 return func(self._content, *args, **kwargs)
94
95 return inner
96
97
98 class _ProjectPipelines(MutableMapping):
99 """A read-only lazy dictionary-like object to hold the project pipelines.
100 On configure it will store the pipelines module.
101 On first data access, e.g. through __getitem__, it will load the registered pipelines and merge
102 them with pipelines defined from hooks.
103 """
104
105 def __init__(self) -> None:
106 self._pipelines_module: Optional[str] = None
107 self._is_data_loaded = False
108 self._content: Dict[str, Pipeline] = {}
109
110 @staticmethod
111 def _get_pipelines_registry_callable(pipelines_module: str):
112 module_obj = importlib.import_module(pipelines_module)
113 register_pipelines = getattr(module_obj, "register_pipelines")
114 return register_pipelines
115
116 def _load_data(self):
117 """Lazily read pipelines defined in the pipelines registry module"""
118
119 # If the pipelines dictionary has not been configured with a pipelines module
120 # or if data has been loaded
121 if self._pipelines_module is None or self._is_data_loaded:
122 return
123
124 register_pipelines = self._get_pipelines_registry_callable(
125 self._pipelines_module
126 )
127 project_pipelines = register_pipelines()
128
129 self._content = project_pipelines
130 self._is_data_loaded = True
131
132 def configure(self, pipelines_module: Optional[str] = None) -> None:
133 """Configure the pipelines_module to load the pipelines dictionary.
134 Reset the data loading state so that after every `configure` call,
135 data are reloaded.
136 """
137 self._pipelines_module = pipelines_module
138 self._is_data_loaded = False
139 self._content = {}
140
141 # Dict-like interface
142 __getitem__ = _load_data_wrapper(operator.getitem)
143 __setitem__ = _load_data_wrapper(operator.setitem)
144 __delitem__ = _load_data_wrapper(operator.delitem)
145 __iter__ = _load_data_wrapper(iter)
146 __len__ = _load_data_wrapper(len)
147
148 # Presentation methods
149 __repr__ = _load_data_wrapper(repr)
150 __str__ = _load_data_wrapper(str)
151
152
153 PACKAGE_NAME = None
154 LOGGING = None
155
156 settings = _ProjectSettings()
157
158 pipelines = _ProjectPipelines()
159
160
161 def configure_project(package_name: str):
162 """Configure a Kedro project by populating its settings with values
163 defined in user's settings.py and pipeline_registry.py.
164 """
165 settings_module = f"{package_name}.settings"
166 settings.configure(settings_module)
167
168 pipelines_module = f"{package_name}.pipeline_registry"
169 pipelines.configure(pipelines_module)
170
171 # Once the project is successfully configured once, store PACKAGE_NAME as a
172 # global variable to make it easily accessible. This is used by validate_settings()
173 # below, and also by ParallelRunner on Windows, as package_name is required every
174 # time a new subprocess is spawned.
175 global PACKAGE_NAME
176 PACKAGE_NAME = package_name
177
178
179 def configure_logging(logging_config: Dict[str, Any]) -> None:
180 """Configure logging to make it available as a global variable."""
181 logging.config.dictConfig(logging_config)
182 global LOGGING
183 LOGGING = logging_config
184
185
186 def validate_settings():
187 """Eagerly validate that the settings module is importable. This is desirable to
188 surface any syntax or import errors early. In particular, without eagerly importing
189 the settings module, dynaconf would silence any import error (e.g. missing
190 dependency, missing/mislabelled pipeline), and users would instead get a cryptic
191 error message ``Expected an instance of `ConfigLoader`, got `NoneType` instead``.
192 More info on the dynaconf issue: https://github.com/rochacbruno/dynaconf/issues/460
193 """
194 importlib.import_module(f"{PACKAGE_NAME}.settings")
195
[end of kedro/framework/project/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kedro/framework/project/__init__.py b/kedro/framework/project/__init__.py
--- a/kedro/framework/project/__init__.py
+++ b/kedro/framework/project/__init__.py
@@ -39,6 +39,35 @@
)
+class _HasSharedParentClassValidator(Validator):
+ """A validator to check that the parent of the default class is an ancestor of
+ the settings value."""
+
+ def validate(self, settings, *args, **kwargs):
+ super().validate(settings, *args, **kwargs)
+
+ default_class = self.default(settings, self)
+ for name in self.names:
+ setting_value = getattr(settings, name)
+ # In the case of ConfigLoader, default_class.mro() will be:
+ # [kedro.config.config.ConfigLoader,
+ # kedro.config.abstract_config.AbstractConfigLoader,
+ # abc.ABC,
+ # object]
+ # We pick out the direct parent and check if it's in any of the ancestors of
+ # the supplied setting_value. This assumes that the direct parent is
+ # the abstract class that must be inherited from.
+ # A more general check just for a shared ancestor would be:
+ # set(default_class.mro()) & set(setting_value.mro()) - {abc.ABC, object}
+ default_class_parent = default_class.mro()[1]
+ if default_class_parent not in setting_value.mro():
+ raise ValidationError(
+ f"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` "
+ f"received for setting `{name}`. It must be a subclass of "
+ f"`{default_class_parent.__module__}.{default_class_parent.__qualname__}`."
+ )
+
+
class _ProjectSettings(LazySettings):
"""Define all settings available for users to configure in Kedro,
along with their validation rules and default values.
@@ -57,7 +86,7 @@
)
_SESSION_STORE_ARGS = Validator("SESSION_STORE_ARGS", default={})
_DISABLE_HOOKS_FOR_PLUGINS = Validator("DISABLE_HOOKS_FOR_PLUGINS", default=tuple())
- _CONFIG_LOADER_CLASS = _IsSubclassValidator(
+ _CONFIG_LOADER_CLASS = _HasSharedParentClassValidator(
"CONFIG_LOADER_CLASS", default=_get_default_class("kedro.config.ConfigLoader")
)
_CONFIG_LOADER_ARGS = Validator("CONFIG_LOADER_ARGS", default={})
|
{"golden_diff": "diff --git a/kedro/framework/project/__init__.py b/kedro/framework/project/__init__.py\n--- a/kedro/framework/project/__init__.py\n+++ b/kedro/framework/project/__init__.py\n@@ -39,6 +39,35 @@\n )\n \n \n+class _HasSharedParentClassValidator(Validator):\n+ \"\"\"A validator to check that the parent of the default class is an ancestor of\n+ the settings value.\"\"\"\n+\n+ def validate(self, settings, *args, **kwargs):\n+ super().validate(settings, *args, **kwargs)\n+\n+ default_class = self.default(settings, self)\n+ for name in self.names:\n+ setting_value = getattr(settings, name)\n+ # In the case of ConfigLoader, default_class.mro() will be:\n+ # [kedro.config.config.ConfigLoader,\n+ # kedro.config.abstract_config.AbstractConfigLoader,\n+ # abc.ABC,\n+ # object]\n+ # We pick out the direct parent and check if it's in any of the ancestors of\n+ # the supplied setting_value. This assumes that the direct parent is\n+ # the abstract class that must be inherited from.\n+ # A more general check just for a shared ancestor would be:\n+ # set(default_class.mro()) & set(setting_value.mro()) - {abc.ABC, object}\n+ default_class_parent = default_class.mro()[1]\n+ if default_class_parent not in setting_value.mro():\n+ raise ValidationError(\n+ f\"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` \"\n+ f\"received for setting `{name}`. It must be a subclass of \"\n+ f\"`{default_class_parent.__module__}.{default_class_parent.__qualname__}`.\"\n+ )\n+\n+\n class _ProjectSettings(LazySettings):\n \"\"\"Define all settings available for users to configure in Kedro,\n along with their validation rules and default values.\n@@ -57,7 +86,7 @@\n )\n _SESSION_STORE_ARGS = Validator(\"SESSION_STORE_ARGS\", default={})\n _DISABLE_HOOKS_FOR_PLUGINS = Validator(\"DISABLE_HOOKS_FOR_PLUGINS\", default=tuple())\n- _CONFIG_LOADER_CLASS = _IsSubclassValidator(\n+ _CONFIG_LOADER_CLASS = _HasSharedParentClassValidator(\n \"CONFIG_LOADER_CLASS\", default=_get_default_class(\"kedro.config.ConfigLoader\")\n )\n _CONFIG_LOADER_ARGS = Validator(\"CONFIG_LOADER_ARGS\", default={})\n", "issue": "Project templating using TemplatedConfigLoader raises ValidationError in 0.18.0\n## Description\r\nWhen following the guide for [Template configuration](https://kedro.readthedocs.io/en/stable/kedro_project_setup/configuration.html#template-configuration), Kedro 0.18.0 raises ValidationError as `kedro.config.templated_config.TemplatedConfigLoader` is not recognized as a subclass of `kedro.config.config.ConfigLoader`.\r\n\r\n## Context\r\nThis bug hinders me from using the `TemplatedConfigLoader`, according to the documentation.\r\n\r\n## Steps to Reproduce\r\n1. Create new Kedro project\r\n```\r\nkedro new --starter=pandas-iris\r\ncd new-kedro-project\r\n```\r\n2. Create and activate venv. Install requirements.\r\n```\r\npython -m venv my_venv\r\nsource my_venv/bin/activate\r\npip install -r src/requirements.txt\r\n```\r\n3. Open the `settings.py` file.\r\n```\r\nnano src/new_kedro_project/settings.py\r\n```\r\nUncomment lines 28,29,31,32,33 as suggested in the guide. Expected config:\r\n```\r\n# Class that manages how configuration is loaded.\r\nfrom kedro.config import TemplatedConfigLoader\r\nCONFIG_LOADER_CLASS = TemplatedConfigLoader\r\n# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.\r\nCONFIG_LOADER_ARGS = {\r\n \"globals_pattern\": \"*globals.yml\",\r\n}\r\n```\r\n4. 
Run the default pipeline\r\n```\r\nkedro run\r\n```\r\n\r\n## Expected Result\r\nDefeault pipeline should run with global config keys overwritten.\r\n\r\n## Actual Result\r\nValidationError is raised.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/kedro\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py\", line 205, in main\r\n cli_collection = KedroCLI(project_path=Path.cwd())\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py\", line 114, in __init__\r\n self._metadata = bootstrap_project(project_path)\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/startup.py\", line 155, in bootstrap_project\r\n configure_project(metadata.package_name)\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py\", line 166, in configure_project\r\n settings.configure(settings_module)\r\n File \"/usr/local/lib/python3.9/site-packages/dynaconf/base.py\", line 182, in configure\r\n self._wrapped = Settings(settings_module=settings_module, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/dynaconf/base.py\", line 235, in __init__\r\n self.validators.validate(\r\n File \"/usr/local/lib/python3.9/site-packages/dynaconf/validator.py\", line 417, in validate\r\n validator.validate(self.settings, only=only, exclude=exclude)\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py\", line 35, in validate\r\n raise ValidationError(\r\ndynaconf.validator.ValidationError: Invalid value `kedro.config.templated_config.TemplatedConfigLoader` received for setting `CONFIG_LOADER_CLASS`. It must be a subclass of `kedro.config.config.ConfigLoader`.\r\n```\r\n\r\n## Your Environment\r\n* Kedro version used (`pip show kedro` or `kedro -V`): `0.18.0`\r\n* Python version used (`python -V`): `3.9.10`\r\n* Operating system and version: `macOS 12.3`\r\n\n", "before_files": [{"content": "\"\"\"``kedro.framework.project`` module provides utitlity to\nconfigure a Kedro project and access its settings.\"\"\"\n# pylint: disable=redefined-outer-name,unused-argument,global-statement\nimport importlib\nimport logging.config\nimport operator\nfrom collections.abc import MutableMapping\nfrom typing import Any, Dict, Optional\n\nfrom dynaconf import LazySettings\nfrom dynaconf.validator import ValidationError, Validator\n\nfrom kedro.pipeline import Pipeline\n\n\ndef _get_default_class(class_import_path):\n module, _, class_name = class_import_path.rpartition(\".\")\n\n def validator_func(settings, validators):\n return getattr(importlib.import_module(module), class_name)\n\n return validator_func\n\n\nclass _IsSubclassValidator(Validator):\n \"\"\"A validator to check if the supplied setting value is a subclass of the default class\"\"\"\n\n def validate(self, settings, *args, **kwargs):\n super().validate(settings, *args, **kwargs)\n\n default_class = self.default(settings, self)\n for name in self.names:\n setting_value = getattr(settings, name)\n if not issubclass(setting_value, default_class):\n raise ValidationError(\n f\"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` \"\n f\"received for setting `{name}`. 
It must be a subclass of \"\n f\"`{default_class.__module__}.{default_class.__qualname__}`.\"\n )\n\n\nclass _ProjectSettings(LazySettings):\n \"\"\"Define all settings available for users to configure in Kedro,\n along with their validation rules and default values.\n Use Dynaconf's LazySettings as base.\n \"\"\"\n\n _CONF_SOURCE = Validator(\"CONF_SOURCE\", default=\"conf\")\n _HOOKS = Validator(\"HOOKS\", default=tuple())\n _CONTEXT_CLASS = _IsSubclassValidator(\n \"CONTEXT_CLASS\",\n default=_get_default_class(\"kedro.framework.context.KedroContext\"),\n )\n _SESSION_STORE_CLASS = _IsSubclassValidator(\n \"SESSION_STORE_CLASS\",\n default=_get_default_class(\"kedro.framework.session.session.BaseSessionStore\"),\n )\n _SESSION_STORE_ARGS = Validator(\"SESSION_STORE_ARGS\", default={})\n _DISABLE_HOOKS_FOR_PLUGINS = Validator(\"DISABLE_HOOKS_FOR_PLUGINS\", default=tuple())\n _CONFIG_LOADER_CLASS = _IsSubclassValidator(\n \"CONFIG_LOADER_CLASS\", default=_get_default_class(\"kedro.config.ConfigLoader\")\n )\n _CONFIG_LOADER_ARGS = Validator(\"CONFIG_LOADER_ARGS\", default={})\n _DATA_CATALOG_CLASS = _IsSubclassValidator(\n \"DATA_CATALOG_CLASS\", default=_get_default_class(\"kedro.io.DataCatalog\")\n )\n\n def __init__(self, *args, **kwargs):\n\n kwargs.update(\n validators=[\n self._CONF_SOURCE,\n self._HOOKS,\n self._CONTEXT_CLASS,\n self._SESSION_STORE_CLASS,\n self._SESSION_STORE_ARGS,\n self._DISABLE_HOOKS_FOR_PLUGINS,\n self._CONFIG_LOADER_CLASS,\n self._CONFIG_LOADER_ARGS,\n self._DATA_CATALOG_CLASS,\n ]\n )\n super().__init__(*args, **kwargs)\n\n\ndef _load_data_wrapper(func):\n \"\"\"Wrap a method in _ProjectPipelines so that data is loaded on first access.\n Taking inspiration from dynaconf.utils.functional.new_method_proxy\n \"\"\"\n # pylint: disable=protected-access\n def inner(self, *args, **kwargs):\n self._load_data()\n return func(self._content, *args, **kwargs)\n\n return inner\n\n\nclass _ProjectPipelines(MutableMapping):\n \"\"\"A read-only lazy dictionary-like object to hold the project pipelines.\n On configure it will store the pipelines module.\n On first data access, e.g. 
through __getitem__, it will load the registered pipelines and merge\n them with pipelines defined from hooks.\n \"\"\"\n\n def __init__(self) -> None:\n self._pipelines_module: Optional[str] = None\n self._is_data_loaded = False\n self._content: Dict[str, Pipeline] = {}\n\n @staticmethod\n def _get_pipelines_registry_callable(pipelines_module: str):\n module_obj = importlib.import_module(pipelines_module)\n register_pipelines = getattr(module_obj, \"register_pipelines\")\n return register_pipelines\n\n def _load_data(self):\n \"\"\"Lazily read pipelines defined in the pipelines registry module\"\"\"\n\n # If the pipelines dictionary has not been configured with a pipelines module\n # or if data has been loaded\n if self._pipelines_module is None or self._is_data_loaded:\n return\n\n register_pipelines = self._get_pipelines_registry_callable(\n self._pipelines_module\n )\n project_pipelines = register_pipelines()\n\n self._content = project_pipelines\n self._is_data_loaded = True\n\n def configure(self, pipelines_module: Optional[str] = None) -> None:\n \"\"\"Configure the pipelines_module to load the pipelines dictionary.\n Reset the data loading state so that after every `configure` call,\n data are reloaded.\n \"\"\"\n self._pipelines_module = pipelines_module\n self._is_data_loaded = False\n self._content = {}\n\n # Dict-like interface\n __getitem__ = _load_data_wrapper(operator.getitem)\n __setitem__ = _load_data_wrapper(operator.setitem)\n __delitem__ = _load_data_wrapper(operator.delitem)\n __iter__ = _load_data_wrapper(iter)\n __len__ = _load_data_wrapper(len)\n\n # Presentation methods\n __repr__ = _load_data_wrapper(repr)\n __str__ = _load_data_wrapper(str)\n\n\nPACKAGE_NAME = None\nLOGGING = None\n\nsettings = _ProjectSettings()\n\npipelines = _ProjectPipelines()\n\n\ndef configure_project(package_name: str):\n \"\"\"Configure a Kedro project by populating its settings with values\n defined in user's settings.py and pipeline_registry.py.\n \"\"\"\n settings_module = f\"{package_name}.settings\"\n settings.configure(settings_module)\n\n pipelines_module = f\"{package_name}.pipeline_registry\"\n pipelines.configure(pipelines_module)\n\n # Once the project is successfully configured once, store PACKAGE_NAME as a\n # global variable to make it easily accessible. This is used by validate_settings()\n # below, and also by ParallelRunner on Windows, as package_name is required every\n # time a new subprocess is spawned.\n global PACKAGE_NAME\n PACKAGE_NAME = package_name\n\n\ndef configure_logging(logging_config: Dict[str, Any]) -> None:\n \"\"\"Configure logging to make it available as a global variable.\"\"\"\n logging.config.dictConfig(logging_config)\n global LOGGING\n LOGGING = logging_config\n\n\ndef validate_settings():\n \"\"\"Eagerly validate that the settings module is importable. This is desirable to\n surface any syntax or import errors early. In particular, without eagerly importing\n the settings module, dynaconf would silence any import error (e.g. missing\n dependency, missing/mislabelled pipeline), and users would instead get a cryptic\n error message ``Expected an instance of `ConfigLoader`, got `NoneType` instead``.\n More info on the dynaconf issue: https://github.com/rochacbruno/dynaconf/issues/460\n \"\"\"\n importlib.import_module(f\"{PACKAGE_NAME}.settings\")\n", "path": "kedro/framework/project/__init__.py"}]}
| 3,418 | 546 |
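Illustrative sketch (not part of the recorded example above): the kedro golden diff replaces a strict `issubclass` check with a shared-parent check. The classes below are stand-ins that mirror kedro's 0.18 hierarchy rather than imports of the real library; they show why `TemplatedConfigLoader` failed the old validator but passes the new one.

```python
# Stand-in classes mimicking kedro.config in 0.18: both loaders inherit the
# same abstract base, but TemplatedConfigLoader is not a ConfigLoader subclass.
class AbstractConfigLoader: ...
class ConfigLoader(AbstractConfigLoader): ...           # validator's default class
class TemplatedConfigLoader(AbstractConfigLoader): ...  # user-supplied setting

default_class, setting_value = ConfigLoader, TemplatedConfigLoader

# Old check: rejects the templated loader, producing the reported ValidationError.
print(issubclass(setting_value, default_class))      # False

# New check from the golden diff: accept anything sharing the default's direct parent.
default_class_parent = default_class.mro()[1]         # AbstractConfigLoader
print(default_class_parent in setting_value.mro())    # True
```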
gh_patches_debug_31210
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-lint-480
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
E202 (no leading zero for octal permission) False positive
# Issue Type
- Bug report
# Ansible and Ansible Lint details
- ansible-lint installation method: Ansible Galaxy use of ansible-lint
# Desired Behaviour
ansible-lint provides correct reason to following the lint
# Actual Behaviour (Bug report only)
Please give some details of what is actually happening.
Include a [minimum complete verifiable example](http://stackoverflow.com/help/mcve)
with:
- playbook
- output of running ansible-lint
- if you're getting a stack trace, output of
`ansible-playbook --syntax-check playbook`
"Numeric file permissions without leading zero can behave in unexpected ways. See http://docs.ansible.com/ansible/file_module.html" - https://docs.ansible.com/ansible-lint/rules/default_rules.html
./galaxy/downloads/tmp2REDBx/tasks/main-tasks.yml:4: [E202] Octal file permissions must contain leading zero
```
# Lets create the configuration first...
# avoid locking ourself out.
- name: Create firewalld config directories
file:
dest: "{{ item }}"
state: directory
mode: "750"
with_items:
- /etc/firewalld
- /etc/firewalld/zones
notify: firewalld__reload
```
It is not correct that "750" is mis-interpreted. "750" is a string, and is correctly interpreted as the octal permission number `750` i.e. `rwxr-x---`.
Personally I have been using the string "750", rather than remember that YAML supports octal numeric literals using a leading zero.
</issue>
<code>
[start of lib/ansiblelint/rules/OctalPermissionsRule.py]
1 # Copyright (c) 2013-2014 Will Thames <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 # THE SOFTWARE.
20
21 from ansiblelint import AnsibleLintRule
22 import re
23 import six
24
25
26 class OctalPermissionsRule(AnsibleLintRule):
27 id = '202'
28 shortdesc = 'Octal file permissions must contain leading zero'
29 description = (
30 'Numeric file permissions without leading zero can behave '
31 'in unexpected ways. See '
32 'http://docs.ansible.com/ansible/file_module.html'
33 )
34 severity = 'VERY_HIGH'
35 tags = ['formatting', 'ANSIBLE0009']
36 version_added = 'historic'
37
38 _modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',
39 'replace', 'synchronize', 'template', 'unarchive']
40
41 mode_regex = re.compile(r'^\s*[0-9]+\s*$')
42 valid_mode_regex = re.compile(r'^\s*0[0-7]{3,4}\s*$')
43
44 def is_invalid_permission(self, mode):
45 # sensible file permission modes don't
46 # have write bit set when read bit is
47 # not set and don't have execute bit set
48 # when user execute bit is not set.
49 # also, user permissions are more generous than
50 # group permissions and user and group permissions
51 # are more generous than world permissions
52
53 other_write_without_read = (mode % 8 and mode % 8 < 4 and
54 not (mode % 8 == 1 and (mode >> 6) % 2 == 1))
55 group_write_without_read = ((mode >> 3) % 8 and (mode >> 3) % 8 < 4 and
56 not ((mode >> 3) % 8 == 1 and (mode >> 6) % 2 == 1))
57 user_write_without_read = ((mode >> 6) % 8 and (mode >> 6) % 8 < 4 and
58 not (mode >> 6) % 8 == 1)
59 other_more_generous_than_group = mode % 8 > (mode >> 3) % 8
60 other_more_generous_than_user = mode % 8 > (mode >> 6) % 8
61 group_more_generous_than_user = (mode >> 3) % 8 > (mode >> 6) % 8
62
63 return (other_write_without_read or
64 group_write_without_read or
65 user_write_without_read or
66 other_more_generous_than_group or
67 other_more_generous_than_user or
68 group_more_generous_than_user)
69
70 def matchtask(self, file, task):
71 if task["action"]["__ansible_module__"] in self._modules:
72 mode = task['action'].get('mode', None)
73 if isinstance(mode, six.string_types) and self.mode_regex.match(mode):
74 return not self.valid_mode_regex.match(mode)
75 if isinstance(mode, int):
76 return self.is_invalid_permission(mode)
77
[end of lib/ansiblelint/rules/OctalPermissionsRule.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py
--- a/lib/ansiblelint/rules/OctalPermissionsRule.py
+++ b/lib/ansiblelint/rules/OctalPermissionsRule.py
@@ -19,13 +19,12 @@
# THE SOFTWARE.
from ansiblelint import AnsibleLintRule
-import re
import six
class OctalPermissionsRule(AnsibleLintRule):
id = '202'
- shortdesc = 'Octal file permissions must contain leading zero'
+ shortdesc = 'Octal file permissions must contain leading zero or be a string'
description = (
'Numeric file permissions without leading zero can behave '
'in unexpected ways. See '
@@ -38,9 +37,6 @@
_modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',
'replace', 'synchronize', 'template', 'unarchive']
- mode_regex = re.compile(r'^\s*[0-9]+\s*$')
- valid_mode_regex = re.compile(r'^\s*0[0-7]{3,4}\s*$')
-
def is_invalid_permission(self, mode):
# sensible file permission modes don't
# have write bit set when read bit is
@@ -70,7 +66,9 @@
def matchtask(self, file, task):
if task["action"]["__ansible_module__"] in self._modules:
mode = task['action'].get('mode', None)
- if isinstance(mode, six.string_types) and self.mode_regex.match(mode):
- return not self.valid_mode_regex.match(mode)
+
+ if isinstance(mode, six.string_types):
+ return False
+
if isinstance(mode, int):
return self.is_invalid_permission(mode)
|
{"golden_diff": "diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py\n--- a/lib/ansiblelint/rules/OctalPermissionsRule.py\n+++ b/lib/ansiblelint/rules/OctalPermissionsRule.py\n@@ -19,13 +19,12 @@\n # THE SOFTWARE.\n \n from ansiblelint import AnsibleLintRule\n-import re\n import six\n \n \n class OctalPermissionsRule(AnsibleLintRule):\n id = '202'\n- shortdesc = 'Octal file permissions must contain leading zero'\n+ shortdesc = 'Octal file permissions must contain leading zero or be a string'\n description = (\n 'Numeric file permissions without leading zero can behave '\n 'in unexpected ways. See '\n@@ -38,9 +37,6 @@\n _modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',\n 'replace', 'synchronize', 'template', 'unarchive']\n \n- mode_regex = re.compile(r'^\\s*[0-9]+\\s*$')\n- valid_mode_regex = re.compile(r'^\\s*0[0-7]{3,4}\\s*$')\n-\n def is_invalid_permission(self, mode):\n # sensible file permission modes don't\n # have write bit set when read bit is\n@@ -70,7 +66,9 @@\n def matchtask(self, file, task):\n if task[\"action\"][\"__ansible_module__\"] in self._modules:\n mode = task['action'].get('mode', None)\n- if isinstance(mode, six.string_types) and self.mode_regex.match(mode):\n- return not self.valid_mode_regex.match(mode)\n+\n+ if isinstance(mode, six.string_types):\n+ return False\n+\n if isinstance(mode, int):\n return self.is_invalid_permission(mode)\n", "issue": "E202 (no leading zero for octal permission) False positive\n# Issue Type\r\n- Bug report\r\n\r\n# Ansible and Ansible Lint details\r\n\r\n- ansible-lint installation method: Ansible Galaxy use of ansible-lint\r\n\r\n# Desired Behaviour\r\n\r\nansible-lint provides correct reason to following the lint\r\n\r\n# Actual Behaviour (Bug report only)\r\n\r\nPlease give some details of what is actually happening.\r\nInclude a [minimum complete verifiable example](http://stackoverflow.com/help/mcve)\r\nwith:\r\n- playbook\r\n- output of running ansible-lint\r\n- if you're getting a stack trace, output of\r\n `ansible-playbook --syntax-check playbook`\r\n\r\n\"Numeric file permissions without leading zero can behave in unexpected ways. See http://docs.ansible.com/ansible/file_module.html\" - https://docs.ansible.com/ansible-lint/rules/default_rules.html\r\n\r\n./galaxy/downloads/tmp2REDBx/tasks/main-tasks.yml:4: [E202] Octal file permissions must contain leading zero\r\n\r\n```\r\n# Lets create the configuration first...\r\n# avoid locking ourself out.\r\n\r\n- name: Create firewalld config directories\r\n file:\r\n dest: \"{{ item }}\"\r\n state: directory\r\n mode: \"750\"\r\n with_items:\r\n - /etc/firewalld\r\n - /etc/firewalld/zones\r\n notify: firewalld__reload\r\n```\r\n\r\nIt is not correct that \"750\" is mis-interpreted. \"750\" is a string, and is correctly interpreted as the octal permission number `750` i.e. 
`rwxr-x---`.\r\n\r\nPersonally I have been using the string \"750\", rather than remember that YAML supports octal numeric literals using a leading zero.\n", "before_files": [{"content": "# Copyright (c) 2013-2014 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nimport re\nimport six\n\n\nclass OctalPermissionsRule(AnsibleLintRule):\n id = '202'\n shortdesc = 'Octal file permissions must contain leading zero'\n description = (\n 'Numeric file permissions without leading zero can behave '\n 'in unexpected ways. See '\n 'http://docs.ansible.com/ansible/file_module.html'\n )\n severity = 'VERY_HIGH'\n tags = ['formatting', 'ANSIBLE0009']\n version_added = 'historic'\n\n _modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',\n 'replace', 'synchronize', 'template', 'unarchive']\n\n mode_regex = re.compile(r'^\\s*[0-9]+\\s*$')\n valid_mode_regex = re.compile(r'^\\s*0[0-7]{3,4}\\s*$')\n\n def is_invalid_permission(self, mode):\n # sensible file permission modes don't\n # have write bit set when read bit is\n # not set and don't have execute bit set\n # when user execute bit is not set.\n # also, user permissions are more generous than\n # group permissions and user and group permissions\n # are more generous than world permissions\n\n other_write_without_read = (mode % 8 and mode % 8 < 4 and\n not (mode % 8 == 1 and (mode >> 6) % 2 == 1))\n group_write_without_read = ((mode >> 3) % 8 and (mode >> 3) % 8 < 4 and\n not ((mode >> 3) % 8 == 1 and (mode >> 6) % 2 == 1))\n user_write_without_read = ((mode >> 6) % 8 and (mode >> 6) % 8 < 4 and\n not (mode >> 6) % 8 == 1)\n other_more_generous_than_group = mode % 8 > (mode >> 3) % 8\n other_more_generous_than_user = mode % 8 > (mode >> 6) % 8\n group_more_generous_than_user = (mode >> 3) % 8 > (mode >> 6) % 8\n\n return (other_write_without_read or\n group_write_without_read or\n user_write_without_read or\n other_more_generous_than_group or\n other_more_generous_than_user or\n group_more_generous_than_user)\n\n def matchtask(self, file, task):\n if task[\"action\"][\"__ansible_module__\"] in self._modules:\n mode = task['action'].get('mode', None)\n if isinstance(mode, six.string_types) and self.mode_regex.match(mode):\n return not self.valid_mode_regex.match(mode)\n if isinstance(mode, int):\n return self.is_invalid_permission(mode)\n", "path": "lib/ansiblelint/rules/OctalPermissionsRule.py"}]}
| 1,942 | 398 |
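Illustrative sketch (not part of the recorded example above): the ansible-lint change makes rule E202 skip string modes entirely. The snippet below shows the mismatch the rule guards against, assuming (as the issue states) that Ansible's file module parses a digit string such as "750" as octal, whereas an unquoted YAML `750` arrives as the decimal integer 750.

```python
bare = 750      # unquoted YAML mode: a decimal int
quoted = "750"  # the quoted mode used in the issue's playbook

print(oct(bare))            # '0o1356' -- not the rwxr-x--- the author intended
print(oct(int(quoted, 8)))  # '0o750'  -- the intended permission, so strings
                            # are now passed through without a lint warning
```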
gh_patches_debug_32269
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-491
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add mypy plugin for strawberry.union
We have a function to create union types and also add name and description that are used when generating the GraphQL schema[1].
Unfortunately MyPy complains when using the annotation, since it doesn't recognise it as a type, here's an example:
```python
ExampleUnion = strawberry.union(
"ExampleUnion",
types=(
ABC,
CDE,
),
)
@strawberry.mutation
def abc(phone_number: str) -> ExampleUnion:
# ...
```
Error:
```
Variable "api.mutations.abc.ExampleUnion" is not valid as a type
```
[1] This is why we can't use typing.Union directly, as we can't pass arguments to it
</issue>
<code>
[start of strawberry/ext/mypy_plugin.py]
1 from typing import Callable, Optional
2
3 from mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin
4 from mypy.plugins import dataclasses
5 from mypy.types import Type
6
7
8 def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
9 type_name = ctx.type.args[0]
10 type_ = ctx.api.analyze_type(type_name)
11
12 return type_
13
14
15 def private_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
16 type_name = ctx.type.args[0]
17 type_ = ctx.api.analyze_type(type_name)
18
19 return type_
20
21
22 class StrawberryPlugin(Plugin):
23 def get_type_analyze_hook(self, fullname: str):
24 if fullname == "strawberry.lazy_type.LazyType":
25 return lazy_type_analyze_callback
26
27 if any(
28 name in fullname
29 for name in {"strawberry.private.Private", "strawberry.Private"}
30 ):
31 return private_type_analyze_callback
32
33 return None
34
35 def get_class_decorator_hook(
36 self, fullname: str
37 ) -> Optional[Callable[[ClassDefContext], None]]:
38 if any(
39 strawberry_decorator in fullname
40 for strawberry_decorator in {
41 "strawberry.type",
42 "strawberry.federation.type",
43 "strawberry.input",
44 "strawberry.interface",
45 }
46 ):
47 return dataclasses.dataclass_class_maker_callback
48 return None
49
50
51 def plugin(version: str):
52 return StrawberryPlugin
53
[end of strawberry/ext/mypy_plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/strawberry/ext/mypy_plugin.py b/strawberry/ext/mypy_plugin.py
--- a/strawberry/ext/mypy_plugin.py
+++ b/strawberry/ext/mypy_plugin.py
@@ -1,8 +1,23 @@
from typing import Callable, Optional
-from mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin
+from mypy.nodes import (
+ GDEF,
+ Expression,
+ IndexExpr,
+ NameExpr,
+ SymbolTableNode,
+ TupleExpr,
+ TypeAlias,
+)
+from mypy.plugin import (
+ AnalyzeTypeContext,
+ ClassDefContext,
+ DynamicClassDefContext,
+ Plugin,
+ SemanticAnalyzerPluginInterface,
+)
from mypy.plugins import dataclasses
-from mypy.types import Type
+from mypy.types import Type, UnionType
def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
@@ -19,7 +34,48 @@
return type_
+def _get_type_for_expr(expr: Expression, api: SemanticAnalyzerPluginInterface):
+ if isinstance(expr, NameExpr):
+ return api.named_type(expr.name)
+
+ if isinstance(expr, IndexExpr):
+ type_ = _get_type_for_expr(expr.base, api)
+ type_.args = [_get_type_for_expr(expr.index, api)]
+
+ return type_
+
+ raise ValueError(f"Unsupported expression f{type(expr)}")
+
+
+def union_hook(ctx: DynamicClassDefContext) -> None:
+ types = ctx.call.args[1]
+
+ if isinstance(types, TupleExpr):
+ type_ = UnionType(tuple(_get_type_for_expr(x, ctx.api) for x in types.items))
+
+ type_alias = TypeAlias(
+ type_,
+ fullname=ctx.api.qualified_name(ctx.name),
+ line=ctx.call.line,
+ column=ctx.call.column,
+ )
+
+ ctx.api.add_symbol_table_node(
+ ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)
+ )
+
+
class StrawberryPlugin(Plugin):
+ def get_dynamic_class_hook(
+ self, fullname: str
+ ) -> Optional[Callable[[DynamicClassDefContext], None]]:
+ # TODO: investigate why we need this instead of `strawberry.union.union` on CI
+ # we have the same issue in the other hooks
+ if "strawberry.union" in fullname:
+ return union_hook
+
+ return None
+
def get_type_analyze_hook(self, fullname: str):
if fullname == "strawberry.lazy_type.LazyType":
return lazy_type_analyze_callback
|
{"golden_diff": "diff --git a/strawberry/ext/mypy_plugin.py b/strawberry/ext/mypy_plugin.py\n--- a/strawberry/ext/mypy_plugin.py\n+++ b/strawberry/ext/mypy_plugin.py\n@@ -1,8 +1,23 @@\n from typing import Callable, Optional\n \n-from mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin\n+from mypy.nodes import (\n+ GDEF,\n+ Expression,\n+ IndexExpr,\n+ NameExpr,\n+ SymbolTableNode,\n+ TupleExpr,\n+ TypeAlias,\n+)\n+from mypy.plugin import (\n+ AnalyzeTypeContext,\n+ ClassDefContext,\n+ DynamicClassDefContext,\n+ Plugin,\n+ SemanticAnalyzerPluginInterface,\n+)\n from mypy.plugins import dataclasses\n-from mypy.types import Type\n+from mypy.types import Type, UnionType\n \n \n def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n@@ -19,7 +34,48 @@\n return type_\n \n \n+def _get_type_for_expr(expr: Expression, api: SemanticAnalyzerPluginInterface):\n+ if isinstance(expr, NameExpr):\n+ return api.named_type(expr.name)\n+\n+ if isinstance(expr, IndexExpr):\n+ type_ = _get_type_for_expr(expr.base, api)\n+ type_.args = [_get_type_for_expr(expr.index, api)]\n+\n+ return type_\n+\n+ raise ValueError(f\"Unsupported expression f{type(expr)}\")\n+\n+\n+def union_hook(ctx: DynamicClassDefContext) -> None:\n+ types = ctx.call.args[1]\n+\n+ if isinstance(types, TupleExpr):\n+ type_ = UnionType(tuple(_get_type_for_expr(x, ctx.api) for x in types.items))\n+\n+ type_alias = TypeAlias(\n+ type_,\n+ fullname=ctx.api.qualified_name(ctx.name),\n+ line=ctx.call.line,\n+ column=ctx.call.column,\n+ )\n+\n+ ctx.api.add_symbol_table_node(\n+ ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)\n+ )\n+\n+\n class StrawberryPlugin(Plugin):\n+ def get_dynamic_class_hook(\n+ self, fullname: str\n+ ) -> Optional[Callable[[DynamicClassDefContext], None]]:\n+ # TODO: investigate why we need this instead of `strawberry.union.union` on CI\n+ # we have the same issue in the other hooks\n+ if \"strawberry.union\" in fullname:\n+ return union_hook\n+\n+ return None\n+\n def get_type_analyze_hook(self, fullname: str):\n if fullname == \"strawberry.lazy_type.LazyType\":\n return lazy_type_analyze_callback\n", "issue": "Add mypy plugin for strawberry.union\nWe have a function to create union types and also add name and description that are used when generating the GraphQL schema[1].\r\n\r\nUnfortunately MyPy complains when using the annotation, since it doesn't recognise it as a type, here's an example:\r\n\r\n```python\r\nExampleUnion = strawberry.union(\r\n \"ExampleUnion\",\r\n types=(\r\n ABC,\r\n CDE,\r\n ),\r\n)\r\n\r\[email protected]\r\ndef abc(phone_number: str) -> ExampleUnion:\r\n # ...\r\n```\r\n\r\nError:\r\n\r\n```\r\nVariable \"api.mutations.abc.ExampleUnion\" is not valid as a type\r\n```\r\n\r\n[1] This is why we can't use typing.Union directly, as we can't pass arguments to it\n", "before_files": [{"content": "from typing import Callable, Optional\n\nfrom mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin\nfrom mypy.plugins import dataclasses\nfrom mypy.types import Type\n\n\ndef lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\ndef private_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\nclass StrawberryPlugin(Plugin):\n def get_type_analyze_hook(self, fullname: str):\n if fullname == \"strawberry.lazy_type.LazyType\":\n return lazy_type_analyze_callback\n\n 
if any(\n name in fullname\n for name in {\"strawberry.private.Private\", \"strawberry.Private\"}\n ):\n return private_type_analyze_callback\n\n return None\n\n def get_class_decorator_hook(\n self, fullname: str\n ) -> Optional[Callable[[ClassDefContext], None]]:\n if any(\n strawberry_decorator in fullname\n for strawberry_decorator in {\n \"strawberry.type\",\n \"strawberry.federation.type\",\n \"strawberry.input\",\n \"strawberry.interface\",\n }\n ):\n return dataclasses.dataclass_class_maker_callback\n return None\n\n\ndef plugin(version: str):\n return StrawberryPlugin\n", "path": "strawberry/ext/mypy_plugin.py"}]}
| 1,110 | 594 |
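
The record above patches Strawberry's mypy plugin so that the result of `strawberry.union(...)` is usable as a return annotation. As a rough, trimmed-down sketch of the same idea — grounded in that golden diff, but handling only plain class names, whereas the real patch also resolves subscripted generics via `IndexExpr` — the dynamic-class hook looks roughly like this:

```python
from typing import Callable, Optional

from mypy.nodes import GDEF, NameExpr, SymbolTableNode, TupleExpr, TypeAlias
from mypy.plugin import DynamicClassDefContext, Plugin
from mypy.types import UnionType


def union_hook(ctx: DynamicClassDefContext) -> None:
    types = ctx.call.args[1]  # the `types` argument of strawberry.union(name, types=(...))
    if isinstance(types, TupleExpr):
        members = [ctx.api.named_type(x.name) for x in types.items if isinstance(x, NameExpr)]
        type_alias = TypeAlias(
            UnionType(tuple(members)),
            fullname=ctx.api.qualified_name(ctx.name),
            line=ctx.call.line,
            column=ctx.call.column,
        )
        # Register the alias so `-> ExampleUnion` becomes a valid annotation for mypy.
        ctx.api.add_symbol_table_node(
            ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)
        )


class StrawberryPlugin(Plugin):
    def get_dynamic_class_hook(
        self, fullname: str
    ) -> Optional[Callable[[DynamicClassDefContext], None]]:
        if "strawberry.union" in fullname:
            return union_hook
        return None


def plugin(version: str):
    return StrawberryPlugin
```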
gh_patches_debug_4568 | rasdani/github-patches | git_diff | medtagger__MedTagger-466 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add information about Dataset to Tasks endpoint
## Current Behavior
Tasks endpoint returns list of all Tasks **without** information about their Datasets.
## Expected Behavior
Each Task returned by this endpoint should also contain information about its Dataset.
## Tasks to do
- [ ] Add Dataset key to the Task returned by Tasks endpoint.
## Additional comment
Adding a Dataset key makes sense only if there is also an endpoint that returns all Datasets available in MedTagger.
</issue>
<code>
[start of backend/medtagger/api/tasks/serializers.py]
1 """Module responsible for storage of serializers used in Tasks endpoints."""
2 from flask_restplus import fields
3
4 from medtagger.api import api
5 from medtagger.definitions import LabelTool
6
7 out__label_tag = api.model('Label Tag model', {
8 'key': fields.String(),
9 'name': fields.String(),
10 'actions_ids': fields.List(fields.Integer(),
11 attribute=lambda label_tag: [action.id for action in label_tag.actions]),
12 'tools': fields.List(fields.String(), description='Available tools for Label Tag',
13 enum=[tool.name for tool in LabelTool],
14 attribute=lambda label_tag: [tool.name for tool in label_tag.tools]),
15 })
16
17 in__label_tag = api.model('Label Tag model', {
18 'key': fields.String(),
19 'name': fields.String(),
20 'actions_ids': fields.List(fields.Integer()),
21 'tools': fields.List(fields.String(), description='Available tools for Label Tag',
22 enum=[tool.name for tool in LabelTool]),
23 })
24
25 out__task = api.model('Task model', {
26 'key': fields.String(),
27 'name': fields.String(),
28 'image_path': fields.String(),
29 'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),
30 })
31
32 in__task = api.model('New Task model', {
33 'key': fields.String(),
34 'name': fields.String(),
35 'image_path': fields.String(),
36 'datasets_keys': fields.List(fields.String()),
37 'tags': fields.List(fields.Nested(in__label_tag), attribute='available_tags'),
38 })
39
[end of backend/medtagger/api/tasks/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/medtagger/api/tasks/serializers.py b/backend/medtagger/api/tasks/serializers.py
--- a/backend/medtagger/api/tasks/serializers.py
+++ b/backend/medtagger/api/tasks/serializers.py
@@ -27,6 +27,7 @@
'name': fields.String(),
'image_path': fields.String(),
'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),
+ 'datasets_keys': fields.List(fields.String(), attribute=lambda task: [dataset.key for dataset in task.datasets]),
})
in__task = api.model('New Task model', {
|
{"golden_diff": "diff --git a/backend/medtagger/api/tasks/serializers.py b/backend/medtagger/api/tasks/serializers.py\n--- a/backend/medtagger/api/tasks/serializers.py\n+++ b/backend/medtagger/api/tasks/serializers.py\n@@ -27,6 +27,7 @@\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),\n+ 'datasets_keys': fields.List(fields.String(), attribute=lambda task: [dataset.key for dataset in task.datasets]),\n })\n \n in__task = api.model('New Task model', {\n", "issue": "Add information about Dataset to Tasks endpoint\n## Current Behavior\r\n\r\nTasks endpoint returns list of all Tasks **without** information about their Datasets.\r\n\r\n## Expected Behavior\r\n\r\nEach Task returned by this endpoint should also contain information about its Dataset.\r\n\r\n## Tasks to do\r\n\r\n- [ ] Add Dataset key to the Task returned by Tasks endpoint.\r\n\r\n## Additional comment\r\n\r\nDataset key will be fine only if there will be an endpoint that will return all Datasets available in MedTagger.\n", "before_files": [{"content": "\"\"\"Module responsible for storage of serializers used in Tasks endpoints.\"\"\"\nfrom flask_restplus import fields\n\nfrom medtagger.api import api\nfrom medtagger.definitions import LabelTool\n\nout__label_tag = api.model('Label Tag model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'actions_ids': fields.List(fields.Integer(),\n attribute=lambda label_tag: [action.id for action in label_tag.actions]),\n 'tools': fields.List(fields.String(), description='Available tools for Label Tag',\n enum=[tool.name for tool in LabelTool],\n attribute=lambda label_tag: [tool.name for tool in label_tag.tools]),\n})\n\nin__label_tag = api.model('Label Tag model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'actions_ids': fields.List(fields.Integer()),\n 'tools': fields.List(fields.String(), description='Available tools for Label Tag',\n enum=[tool.name for tool in LabelTool]),\n})\n\nout__task = api.model('Task model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),\n})\n\nin__task = api.model('New Task model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'datasets_keys': fields.List(fields.String()),\n 'tags': fields.List(fields.Nested(in__label_tag), attribute='available_tags'),\n})\n", "path": "backend/medtagger/api/tasks/serializers.py"}]}
| 1,032 | 140 |
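
To make the effect of the one-line MedTagger fix above concrete, here is a stand-alone illustration of the `attribute=lambda ...` pattern it relies on. The `Namespace`, the fake `Task`/`Dataset` tuples, and the key values are invented for the demo; only the `datasets_keys` field mirrors the patch.

```python
from collections import namedtuple

from flask_restplus import Namespace, fields, marshal

api = Namespace("demo")
Dataset = namedtuple("Dataset", "key")
Task = namedtuple("Task", "key name image_path datasets")

out__task = api.model("Task model", {
    "key": fields.String(),
    "name": fields.String(),
    "image_path": fields.String(),
    # the field added by the patch: expose the keys of the Task's Datasets
    "datasets_keys": fields.List(
        fields.String(),
        attribute=lambda task: [dataset.key for dataset in task.datasets]),
})

task = Task("mark_nodules", "Mark nodules", "img.svg", [Dataset("lung_ct")])
print(marshal(task, out__task))  # ... 'datasets_keys': ['lung_ct'] ...
```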
gh_patches_debug_36259 | rasdani/github-patches | git_diff | piskvorky__gensim-1833 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BM25 : Incorrect scoring function
https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/summarization/bm25.py
**Instead of "len(document)" it should be the length of the _index_ document of the corpus.**
```
def get_score(self, document, index, average_idf):
# in this line it should be the length of the index document in the corpus
score += (idf * self.f[index][word] * (PARAM_K1 + 1)
/ (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))
```
</issue>
<code>
[start of gensim/summarization/bm25.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
5
6 """This module contains function of computing rank scores for documents in
7 corpus and helper class `BM25` used in calculations. Original alhorithm
8 descibed in [1]_, also you may check Wikipedia page [2]_.
9
10
11 .. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,
12 http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf
13 .. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25
14
15
16
17 Examples
18 --------
19 >>> from gensim.summarization.bm25 import get_bm25_weights
20 >>> corpus = [
21 ... ["black", "cat", "white", "cat"],
22 ... ["cat", "outer", "space"],
23 ... ["wag", "dog"]
24 ... ]
25 >>> result = get_bm25_weights(corpus)
26
27
28 Data:
29 -----
30 .. data:: PARAM_K1 - Free smoothing parameter for BM25.
31 .. data:: PARAM_B - Free smoothing parameter for BM25.
32 .. data:: EPSILON - Constant used for negative idf of document in corpus.
33
34 """
35
36
37 import math
38 from six import iteritems
39 from six.moves import xrange
40
41
42 PARAM_K1 = 1.5
43 PARAM_B = 0.75
44 EPSILON = 0.25
45
46
47 class BM25(object):
48 """Implementation of Best Matching 25 ranking function.
49
50 Attributes
51 ----------
52 corpus_size : int
53 Size of corpus (number of documents).
54 avgdl : float
55 Average length of document in `corpus`.
56 corpus : list of list of str
57 Corpus of documents.
58 f : list of dicts of int
59 Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.
60 df : dict
61 Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
62 idf : dict
63 Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
64
65 """
66
67 def __init__(self, corpus):
68 """
69 Parameters
70 ----------
71 corpus : list of list of str
72 Given corpus.
73
74 """
75 self.corpus_size = len(corpus)
76 self.avgdl = sum(float(len(x)) for x in corpus) / self.corpus_size
77 self.corpus = corpus
78 self.f = []
79 self.df = {}
80 self.idf = {}
81 self.initialize()
82
83 def initialize(self):
84 """Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies."""
85 for document in self.corpus:
86 frequencies = {}
87 for word in document:
88 if word not in frequencies:
89 frequencies[word] = 0
90 frequencies[word] += 1
91 self.f.append(frequencies)
92
93 for word, freq in iteritems(frequencies):
94 if word not in self.df:
95 self.df[word] = 0
96 self.df[word] += 1
97
98 for word, freq in iteritems(self.df):
99 self.idf[word] = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)
100
101 def get_score(self, document, index, average_idf):
102 """Computes BM25 score of given `document` in relation to item of corpus selected by `index`.
103
104 Parameters
105 ----------
106 document : list of str
107 Document to be scored.
108 index : int
109 Index of document in corpus selected to score with `document`.
110 average_idf : float
111 Average idf in corpus.
112
113 Returns
114 -------
115 float
116 BM25 score.
117
118 """
119 score = 0
120 for word in document:
121 if word not in self.f[index]:
122 continue
123 idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf
124 score += (idf * self.f[index][word] * (PARAM_K1 + 1)
125 / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))
126 return score
127
128 def get_scores(self, document, average_idf):
129 """Computes and returns BM25 scores of given `document` in relation to
130 every item in corpus.
131
132 Parameters
133 ----------
134 document : list of str
135 Document to be scored.
136 average_idf : float
137 Average idf in corpus.
138
139 Returns
140 -------
141 list of float
142 BM25 scores.
143
144 """
145 scores = []
146 for index in xrange(self.corpus_size):
147 score = self.get_score(document, index, average_idf)
148 scores.append(score)
149 return scores
150
151
152 def get_bm25_weights(corpus):
153 """Returns BM25 scores (weights) of documents in corpus.
154 Each document has to be weighted with every document in given corpus.
155
156 Parameters
157 ----------
158 corpus : list of list of str
159 Corpus of documents.
160
161 Returns
162 -------
163 list of list of float
164 BM25 scores.
165
166 Examples
167 --------
168 >>> from gensim.summarization.bm25 import get_bm25_weights
169 >>> corpus = [
170 ... ["black", "cat", "white", "cat"],
171 ... ["cat", "outer", "space"],
172 ... ["wag", "dog"]
173 ... ]
174 >>> result = get_bm25_weights(corpus)
175
176 """
177 bm25 = BM25(corpus)
178 average_idf = sum(float(val) for val in bm25.idf.values()) / len(bm25.idf)
179
180 weights = []
181 for doc in corpus:
182 scores = bm25.get_scores(doc, average_idf)
183 weights.append(scores)
184
185 return weights
186
[end of gensim/summarization/bm25.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gensim/summarization/bm25.py b/gensim/summarization/bm25.py
--- a/gensim/summarization/bm25.py
+++ b/gensim/summarization/bm25.py
@@ -4,7 +4,7 @@
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
"""This module contains function of computing rank scores for documents in
-corpus and helper class `BM25` used in calculations. Original alhorithm
+corpus and helper class `BM25` used in calculations. Original algorithm
descibed in [1]_, also you may check Wikipedia page [2]_.
@@ -61,7 +61,8 @@
Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
idf : dict
Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
-
+ doc_len : list of int
+ List of document lengths.
"""
def __init__(self, corpus):
@@ -78,12 +79,14 @@
self.f = []
self.df = {}
self.idf = {}
+ self.doc_len = []
self.initialize()
def initialize(self):
"""Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies."""
for document in self.corpus:
frequencies = {}
+ self.doc_len.append(len(document))
for word in document:
if word not in frequencies:
frequencies[word] = 0
@@ -122,7 +125,7 @@
continue
idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf
score += (idf * self.f[index][word] * (PARAM_K1 + 1)
- / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))
+ / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))
return score
def get_scores(self, document, average_idf):
|
{"golden_diff": "diff --git a/gensim/summarization/bm25.py b/gensim/summarization/bm25.py\n--- a/gensim/summarization/bm25.py\n+++ b/gensim/summarization/bm25.py\n@@ -4,7 +4,7 @@\n # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n \n \"\"\"This module contains function of computing rank scores for documents in\n-corpus and helper class `BM25` used in calculations. Original alhorithm\n+corpus and helper class `BM25` used in calculations. Original algorithm\n descibed in [1]_, also you may check Wikipedia page [2]_.\n \n \n@@ -61,7 +61,8 @@\n Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n idf : dict\n Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n-\n+ doc_len : list of int\n+ List of document lengths.\n \"\"\"\n \n def __init__(self, corpus):\n@@ -78,12 +79,14 @@\n self.f = []\n self.df = {}\n self.idf = {}\n+ self.doc_len = []\n self.initialize()\n \n def initialize(self):\n \"\"\"Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies.\"\"\"\n for document in self.corpus:\n frequencies = {}\n+ self.doc_len.append(len(document))\n for word in document:\n if word not in frequencies:\n frequencies[word] = 0\n@@ -122,7 +125,7 @@\n continue\n idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\n- / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\n+ / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))\n return score\n \n def get_scores(self, document, average_idf):\n", "issue": "BM25 : Incorrect scoring function\nhttps://github.com/RaRe-Technologies/gensim/blob/develop/gensim/summarization/bm25.py\r\n**Instead of \"len(document)\" it should be the length of the _index_ document of the corpus.**\r\n \r\n```\r\n def get_score(self, document, index, average_idf): \r\n # in this line it should be the length of the index document in the corpus\r\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\r\n / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\r\n```\r\n\nBM25 : Incorrect scoring function\nhttps://github.com/RaRe-Technologies/gensim/blob/develop/gensim/summarization/bm25.py\r\n**Instead of \"len(document)\" it should be the length of the _index_ document of the corpus.**\r\n \r\n```\r\n def get_score(self, document, index, average_idf): \r\n # in this line it should be the length of the index document in the corpus\r\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\r\n / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"This module contains function of computing rank scores for documents in\ncorpus and helper class `BM25` used in calculations. Original alhorithm\ndescibed in [1]_, also you may check Wikipedia page [2]_.\n\n\n.. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,\n http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf\n.. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25\n\n\n\nExamples\n--------\n>>> from gensim.summarization.bm25 import get_bm25_weights\n>>> corpus = [\n... 
[\"black\", \"cat\", \"white\", \"cat\"],\n... [\"cat\", \"outer\", \"space\"],\n... [\"wag\", \"dog\"]\n... ]\n>>> result = get_bm25_weights(corpus)\n\n\nData:\n-----\n.. data:: PARAM_K1 - Free smoothing parameter for BM25.\n.. data:: PARAM_B - Free smoothing parameter for BM25.\n.. data:: EPSILON - Constant used for negative idf of document in corpus.\n\n\"\"\"\n\n\nimport math\nfrom six import iteritems\nfrom six.moves import xrange\n\n\nPARAM_K1 = 1.5\nPARAM_B = 0.75\nEPSILON = 0.25\n\n\nclass BM25(object):\n \"\"\"Implementation of Best Matching 25 ranking function.\n\n Attributes\n ----------\n corpus_size : int\n Size of corpus (number of documents).\n avgdl : float\n Average length of document in `corpus`.\n corpus : list of list of str\n Corpus of documents.\n f : list of dicts of int\n Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.\n df : dict\n Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n idf : dict\n Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n\n \"\"\"\n\n def __init__(self, corpus):\n \"\"\"\n Parameters\n ----------\n corpus : list of list of str\n Given corpus.\n\n \"\"\"\n self.corpus_size = len(corpus)\n self.avgdl = sum(float(len(x)) for x in corpus) / self.corpus_size\n self.corpus = corpus\n self.f = []\n self.df = {}\n self.idf = {}\n self.initialize()\n\n def initialize(self):\n \"\"\"Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies.\"\"\"\n for document in self.corpus:\n frequencies = {}\n for word in document:\n if word not in frequencies:\n frequencies[word] = 0\n frequencies[word] += 1\n self.f.append(frequencies)\n\n for word, freq in iteritems(frequencies):\n if word not in self.df:\n self.df[word] = 0\n self.df[word] += 1\n\n for word, freq in iteritems(self.df):\n self.idf[word] = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)\n\n def get_score(self, document, index, average_idf):\n \"\"\"Computes BM25 score of given `document` in relation to item of corpus selected by `index`.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n index : int\n Index of document in corpus selected to score with `document`.\n average_idf : float\n Average idf in corpus.\n\n Returns\n -------\n float\n BM25 score.\n\n \"\"\"\n score = 0\n for word in document:\n if word not in self.f[index]:\n continue\n idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\n / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\n return score\n\n def get_scores(self, document, average_idf):\n \"\"\"Computes and returns BM25 scores of given `document` in relation to\n every item in corpus.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n average_idf : float\n Average idf in corpus.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n scores = []\n for index in xrange(self.corpus_size):\n score = self.get_score(document, index, average_idf)\n scores.append(score)\n return scores\n\n\ndef get_bm25_weights(corpus):\n \"\"\"Returns BM25 scores (weights) of documents in corpus.\n Each document has to be weighted with every document in given corpus.\n\n Parameters\n ----------\n corpus : list of list of str\n Corpus of documents.\n\n Returns\n -------\n list of list of 
float\n BM25 scores.\n\n Examples\n --------\n >>> from gensim.summarization.bm25 import get_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = get_bm25_weights(corpus)\n\n \"\"\"\n bm25 = BM25(corpus)\n average_idf = sum(float(val) for val in bm25.idf.values()) / len(bm25.idf)\n\n weights = []\n for doc in corpus:\n scores = bm25.get_scores(doc, average_idf)\n weights.append(scores)\n\n return weights\n", "path": "gensim/summarization/bm25.py"}]}
| 2,627 | 504 |
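
The crux of the gensim patch above is that the length normalisation must use the indexed document, not the query passed to `get_score`. A condensed, self-contained re-implementation of just that arithmetic (toy corpus taken from the module docstring; this is not gensim code) makes the difference visible:

```python
import math

PARAM_K1, PARAM_B, EPSILON = 1.5, 0.75, 0.25

corpus = [["black", "cat", "white", "cat"], ["cat", "outer", "space"], ["wag", "dog"]]
doc_len = [len(doc) for doc in corpus]          # what the patch precomputes
avgdl = sum(doc_len) / len(corpus)
f = [{w: doc.count(w) for w in doc} for doc in corpus]
df = {}
for freqs in f:
    for w in freqs:
        df[w] = df.get(w, 0) + 1
idf = {w: math.log(len(corpus) - n + 0.5) - math.log(n + 0.5) for w, n in df.items()}
average_idf = sum(idf.values()) / len(idf)


def get_score(query, index):
    score = 0.0
    for word in query:
        if word not in f[index]:
            continue
        word_idf = idf[word] if idf[word] >= 0 else EPSILON * average_idf
        score += (word_idf * f[index][word] * (PARAM_K1 + 1)
                  / (f[index][word] + PARAM_K1
                     * (1 - PARAM_B + PARAM_B * doc_len[index] / avgdl)))
        # ^ doc_len[index], not len(query): the query's length is irrelevant here
    return score


print([round(get_score(["cat"], i), 3) for i in range(len(corpus))])
```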
gh_patches_debug_721 | rasdani/github-patches | git_diff | networkx__networkx-3958 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Misleading description in the doc
In this page
https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.structuralholes.effective_size.html
The description of *Return* is "Dictionary with nodes as keys and the constraint on the node as values."
But this is effective size. I think it should be "Dictionary with nodes as keys and the **effective size of** the node as values."
</issue>
<code>
[start of networkx/algorithms/structuralholes.py]
1 """Functions for computing measures of structural holes."""
2
3 import networkx as nx
4
5 __all__ = ['constraint', 'local_constraint', 'effective_size']
6
7
8 def mutual_weight(G, u, v, weight=None):
9 """Returns the sum of the weights of the edge from `u` to `v` and
10 the edge from `v` to `u` in `G`.
11
12 `weight` is the edge data key that represents the edge weight. If
13 the specified key is `None` or is not in the edge data for an edge,
14 that edge is assumed to have weight 1.
15
16 Pre-conditions: `u` and `v` must both be in `G`.
17
18 """
19 try:
20 a_uv = G[u][v].get(weight, 1)
21 except KeyError:
22 a_uv = 0
23 try:
24 a_vu = G[v][u].get(weight, 1)
25 except KeyError:
26 a_vu = 0
27 return a_uv + a_vu
28
29
30 def normalized_mutual_weight(G, u, v, norm=sum, weight=None):
31 """Returns normalized mutual weight of the edges from `u` to `v`
32 with respect to the mutual weights of the neighbors of `u` in `G`.
33
34 `norm` specifies how the normalization factor is computed. It must
35 be a function that takes a single argument and returns a number.
36 The argument will be an iterable of mutual weights
37 of pairs ``(u, w)``, where ``w`` ranges over each (in- and
38 out-)neighbor of ``u``. Commons values for `normalization` are
39 ``sum`` and ``max``.
40
41 `weight` can be ``None`` or a string, if None, all edge weights
42 are considered equal. Otherwise holds the name of the edge
43 attribute used as weight.
44
45 """
46 scale = norm(mutual_weight(G, u, w, weight)
47 for w in set(nx.all_neighbors(G, u)))
48 return 0 if scale == 0 else mutual_weight(G, u, v, weight) / scale
49
50
51 def effective_size(G, nodes=None, weight=None):
52 r"""Returns the effective size of all nodes in the graph ``G``.
53
54 The *effective size* of a node's ego network is based on the concept
55 of redundancy. A person's ego network has redundancy to the extent
56 that her contacts are connected to each other as well. The
57 nonredundant part of a person's relationships it's the effective
58 size of her ego network [1]_. Formally, the effective size of a
59 node $u$, denoted $e(u)$, is defined by
60
61 .. math::
62
63 e(u) = \sum_{v \in N(u) \setminus \{u\}}
64 \left(1 - \sum_{w \in N(v)} p_{uw} m_{vw}\right)
65
66 where $N(u)$ is the set of neighbors of $u$ and $p_{uw}$ is the
67 normalized mutual weight of the (directed or undirected) edges
68 joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. And $m_{vw}$
69 is the mutual weight of $v$ and $w$ divided by $v$ highest mutual
70 weight with any of its neighbors. The *mutual weight* of $u$ and $v$
71 is the sum of the weights of edges joining them (edge weights are
72 assumed to be one if the graph is unweighted).
73
74 For the case of unweighted and undirected graphs, Borgatti proposed
75 a simplified formula to compute effective size [2]_
76
77 .. math::
78
79 e(u) = n - \frac{2t}{n}
80
81 where `t` is the number of ties in the ego network (not including
82 ties to ego) and `n` is the number of nodes (excluding ego).
83
84 Parameters
85 ----------
86 G : NetworkX graph
87 The graph containing ``v``. Directed graphs are treated like
88 undirected graphs when computing neighbors of ``v``.
89
90 nodes : container, optional
91 Container of nodes in the graph ``G`` to compute the effective size.
92 If None, the effective size of every node is computed.
93
94 weight : None or string, optional
95 If None, all edge weights are considered equal.
96 Otherwise holds the name of the edge attribute used as weight.
97
98 Returns
99 -------
100 dict
101 Dictionary with nodes as keys and the constraint on the node as values.
102
103 Notes
104 -----
105 Burt also defined the related concept of *efficiency* of a node's ego
106 network, which is its effective size divided by the degree of that
107 node [1]_. So you can easily compute efficiency:
108
109 >>> G = nx.DiGraph()
110 >>> G.add_edges_from([(0, 1), (0, 2), (1, 0), (2, 1)])
111 >>> esize = nx.effective_size(G)
112 >>> efficiency = {n: v / G.degree(n) for n, v in esize.items()}
113
114 See also
115 --------
116 constraint
117
118 References
119 ----------
120 .. [1] Burt, Ronald S.
121 *Structural Holes: The Social Structure of Competition.*
122 Cambridge: Harvard University Press, 1995.
123
124 .. [2] Borgatti, S.
125 "Structural Holes: Unpacking Burt's Redundancy Measures"
126 CONNECTIONS 20(1):35-38.
127 http://www.analytictech.com/connections/v20(1)/holes.htm
128
129 """
130 def redundancy(G, u, v, weight=None):
131 nmw = normalized_mutual_weight
132 r = sum(nmw(G, u, w, weight=weight) * nmw(G, v, w, norm=max, weight=weight)
133 for w in set(nx.all_neighbors(G, u)))
134 return 1 - r
135 effective_size = {}
136 if nodes is None:
137 nodes = G
138 # Use Borgatti's simplified formula for unweighted and undirected graphs
139 if not G.is_directed() and weight is None:
140 for v in nodes:
141 # Effective size is not defined for isolated nodes
142 if len(G[v]) == 0:
143 effective_size[v] = float('nan')
144 continue
145 E = nx.ego_graph(G, v, center=False, undirected=True)
146 effective_size[v] = len(E) - (2 * E.size()) / len(E)
147 else:
148 for v in nodes:
149 # Effective size is not defined for isolated nodes
150 if len(G[v]) == 0:
151 effective_size[v] = float('nan')
152 continue
153 effective_size[v] = sum(redundancy(G, v, u, weight)
154 for u in set(nx.all_neighbors(G, v)))
155 return effective_size
156
157
158 def constraint(G, nodes=None, weight=None):
159 r"""Returns the constraint on all nodes in the graph ``G``.
160
161 The *constraint* is a measure of the extent to which a node *v* is
162 invested in those nodes that are themselves invested in the
163 neighbors of *v*. Formally, the *constraint on v*, denoted `c(v)`,
164 is defined by
165
166 .. math::
167
168 c(v) = \sum_{w \in N(v) \setminus \{v\}} \ell(v, w)
169
170 where `N(v)` is the subset of the neighbors of `v` that are either
171 predecessors or successors of `v` and `\ell(v, w)` is the local
172 constraint on `v` with respect to `w` [1]_. For the definition of local
173 constraint, see :func:`local_constraint`.
174
175 Parameters
176 ----------
177 G : NetworkX graph
178 The graph containing ``v``. This can be either directed or undirected.
179
180 nodes : container, optional
181 Container of nodes in the graph ``G`` to compute the constraint. If
182 None, the constraint of every node is computed.
183
184 weight : None or string, optional
185 If None, all edge weights are considered equal.
186 Otherwise holds the name of the edge attribute used as weight.
187
188 Returns
189 -------
190 dict
191 Dictionary with nodes as keys and the constraint on the node as values.
192
193 See also
194 --------
195 local_constraint
196
197 References
198 ----------
199 .. [1] Burt, Ronald S.
200 "Structural holes and good ideas".
201 American Journal of Sociology (110): 349–399.
202
203 """
204 if nodes is None:
205 nodes = G
206 constraint = {}
207 for v in nodes:
208 # Constraint is not defined for isolated nodes
209 if len(G[v]) == 0:
210 constraint[v] = float('nan')
211 continue
212 constraint[v] = sum(local_constraint(G, v, n, weight)
213 for n in set(nx.all_neighbors(G, v)))
214 return constraint
215
216
217 def local_constraint(G, u, v, weight=None):
218 r"""Returns the local constraint on the node ``u`` with respect to
219 the node ``v`` in the graph ``G``.
220
221 Formally, the *local constraint on u with respect to v*, denoted
222 $\ell(v)$, is defined by
223
224 .. math::
225
226 \ell(u, v) = \left(p_{uv} + \sum_{w \in N(v)} p_{uw} p{wv}\right)^2,
227
228 where $N(v)$ is the set of neighbors of $v$ and $p_{uv}$ is the
229 normalized mutual weight of the (directed or undirected) edges
230 joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. The *mutual
231 weight* of $u$ and $v$ is the sum of the weights of edges joining
232 them (edge weights are assumed to be one if the graph is
233 unweighted).
234
235 Parameters
236 ----------
237 G : NetworkX graph
238 The graph containing ``u`` and ``v``. This can be either
239 directed or undirected.
240
241 u : node
242 A node in the graph ``G``.
243
244 v : node
245 A node in the graph ``G``.
246
247 weight : None or string, optional
248 If None, all edge weights are considered equal.
249 Otherwise holds the name of the edge attribute used as weight.
250
251 Returns
252 -------
253 float
254 The constraint of the node ``v`` in the graph ``G``.
255
256 See also
257 --------
258 constraint
259
260 References
261 ----------
262 .. [1] Burt, Ronald S.
263 "Structural holes and good ideas".
264 American Journal of Sociology (110): 349–399.
265
266 """
267 nmw = normalized_mutual_weight
268 direct = nmw(G, u, v, weight=weight)
269 indirect = sum(nmw(G, u, w, weight=weight) * nmw(G, w, v, weight=weight)
270 for w in set(nx.all_neighbors(G, u)))
271 return (direct + indirect) ** 2
272
[end of networkx/algorithms/structuralholes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/algorithms/structuralholes.py b/networkx/algorithms/structuralholes.py
--- a/networkx/algorithms/structuralholes.py
+++ b/networkx/algorithms/structuralholes.py
@@ -98,7 +98,7 @@
Returns
-------
dict
- Dictionary with nodes as keys and the constraint on the node as values.
+ Dictionary with nodes as keys and the effective size of the node as values.
Notes
-----
|
{"golden_diff": "diff --git a/networkx/algorithms/structuralholes.py b/networkx/algorithms/structuralholes.py\n--- a/networkx/algorithms/structuralholes.py\n+++ b/networkx/algorithms/structuralholes.py\n@@ -98,7 +98,7 @@\n Returns\n -------\n dict\n- Dictionary with nodes as keys and the constraint on the node as values.\n+ Dictionary with nodes as keys and the effective size of the node as values.\n \n Notes\n -----\n", "issue": "Misleading description in the doc\nIn this page\r\nhttps://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.structuralholes.effective_size.html\r\nThe description of *Return* is \"Dictionary with nodes as keys and the constraint on the node as values.\"\r\nBut this is effective size. I think it should be \"Dictionary with nodes as keys and the **effective size of** the node as values.\"\n", "before_files": [{"content": "\"\"\"Functions for computing measures of structural holes.\"\"\"\n\nimport networkx as nx\n\n__all__ = ['constraint', 'local_constraint', 'effective_size']\n\n\ndef mutual_weight(G, u, v, weight=None):\n \"\"\"Returns the sum of the weights of the edge from `u` to `v` and\n the edge from `v` to `u` in `G`.\n\n `weight` is the edge data key that represents the edge weight. If\n the specified key is `None` or is not in the edge data for an edge,\n that edge is assumed to have weight 1.\n\n Pre-conditions: `u` and `v` must both be in `G`.\n\n \"\"\"\n try:\n a_uv = G[u][v].get(weight, 1)\n except KeyError:\n a_uv = 0\n try:\n a_vu = G[v][u].get(weight, 1)\n except KeyError:\n a_vu = 0\n return a_uv + a_vu\n\n\ndef normalized_mutual_weight(G, u, v, norm=sum, weight=None):\n \"\"\"Returns normalized mutual weight of the edges from `u` to `v`\n with respect to the mutual weights of the neighbors of `u` in `G`.\n\n `norm` specifies how the normalization factor is computed. It must\n be a function that takes a single argument and returns a number.\n The argument will be an iterable of mutual weights\n of pairs ``(u, w)``, where ``w`` ranges over each (in- and\n out-)neighbor of ``u``. Commons values for `normalization` are\n ``sum`` and ``max``.\n\n `weight` can be ``None`` or a string, if None, all edge weights\n are considered equal. Otherwise holds the name of the edge\n attribute used as weight.\n\n \"\"\"\n scale = norm(mutual_weight(G, u, w, weight)\n for w in set(nx.all_neighbors(G, u)))\n return 0 if scale == 0 else mutual_weight(G, u, v, weight) / scale\n\n\ndef effective_size(G, nodes=None, weight=None):\n r\"\"\"Returns the effective size of all nodes in the graph ``G``.\n\n The *effective size* of a node's ego network is based on the concept\n of redundancy. A person's ego network has redundancy to the extent\n that her contacts are connected to each other as well. The\n nonredundant part of a person's relationships it's the effective\n size of her ego network [1]_. Formally, the effective size of a\n node $u$, denoted $e(u)$, is defined by\n\n .. math::\n\n e(u) = \\sum_{v \\in N(u) \\setminus \\{u\\}}\n \\left(1 - \\sum_{w \\in N(v)} p_{uw} m_{vw}\\right)\n\n where $N(u)$ is the set of neighbors of $u$ and $p_{uw}$ is the\n normalized mutual weight of the (directed or undirected) edges\n joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. And $m_{vw}$\n is the mutual weight of $v$ and $w$ divided by $v$ highest mutual\n weight with any of its neighbors. 
The *mutual weight* of $u$ and $v$\n is the sum of the weights of edges joining them (edge weights are\n assumed to be one if the graph is unweighted).\n\n For the case of unweighted and undirected graphs, Borgatti proposed\n a simplified formula to compute effective size [2]_\n\n .. math::\n\n e(u) = n - \\frac{2t}{n}\n\n where `t` is the number of ties in the ego network (not including\n ties to ego) and `n` is the number of nodes (excluding ego).\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``v``. Directed graphs are treated like\n undirected graphs when computing neighbors of ``v``.\n\n nodes : container, optional\n Container of nodes in the graph ``G`` to compute the effective size.\n If None, the effective size of every node is computed.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n dict\n Dictionary with nodes as keys and the constraint on the node as values.\n\n Notes\n -----\n Burt also defined the related concept of *efficiency* of a node's ego\n network, which is its effective size divided by the degree of that\n node [1]_. So you can easily compute efficiency:\n\n >>> G = nx.DiGraph()\n >>> G.add_edges_from([(0, 1), (0, 2), (1, 0), (2, 1)])\n >>> esize = nx.effective_size(G)\n >>> efficiency = {n: v / G.degree(n) for n, v in esize.items()}\n\n See also\n --------\n constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n *Structural Holes: The Social Structure of Competition.*\n Cambridge: Harvard University Press, 1995.\n\n .. [2] Borgatti, S.\n \"Structural Holes: Unpacking Burt's Redundancy Measures\"\n CONNECTIONS 20(1):35-38.\n http://www.analytictech.com/connections/v20(1)/holes.htm\n\n \"\"\"\n def redundancy(G, u, v, weight=None):\n nmw = normalized_mutual_weight\n r = sum(nmw(G, u, w, weight=weight) * nmw(G, v, w, norm=max, weight=weight)\n for w in set(nx.all_neighbors(G, u)))\n return 1 - r\n effective_size = {}\n if nodes is None:\n nodes = G\n # Use Borgatti's simplified formula for unweighted and undirected graphs\n if not G.is_directed() and weight is None:\n for v in nodes:\n # Effective size is not defined for isolated nodes\n if len(G[v]) == 0:\n effective_size[v] = float('nan')\n continue\n E = nx.ego_graph(G, v, center=False, undirected=True)\n effective_size[v] = len(E) - (2 * E.size()) / len(E)\n else:\n for v in nodes:\n # Effective size is not defined for isolated nodes\n if len(G[v]) == 0:\n effective_size[v] = float('nan')\n continue\n effective_size[v] = sum(redundancy(G, v, u, weight)\n for u in set(nx.all_neighbors(G, v)))\n return effective_size\n\n\ndef constraint(G, nodes=None, weight=None):\n r\"\"\"Returns the constraint on all nodes in the graph ``G``.\n\n The *constraint* is a measure of the extent to which a node *v* is\n invested in those nodes that are themselves invested in the\n neighbors of *v*. Formally, the *constraint on v*, denoted `c(v)`,\n is defined by\n\n .. math::\n\n c(v) = \\sum_{w \\in N(v) \\setminus \\{v\\}} \\ell(v, w)\n\n where `N(v)` is the subset of the neighbors of `v` that are either\n predecessors or successors of `v` and `\\ell(v, w)` is the local\n constraint on `v` with respect to `w` [1]_. For the definition of local\n constraint, see :func:`local_constraint`.\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``v``. 
This can be either directed or undirected.\n\n nodes : container, optional\n Container of nodes in the graph ``G`` to compute the constraint. If\n None, the constraint of every node is computed.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n dict\n Dictionary with nodes as keys and the constraint on the node as values.\n\n See also\n --------\n local_constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n \"Structural holes and good ideas\".\n American Journal of Sociology (110): 349\u2013399.\n\n \"\"\"\n if nodes is None:\n nodes = G\n constraint = {}\n for v in nodes:\n # Constraint is not defined for isolated nodes\n if len(G[v]) == 0:\n constraint[v] = float('nan')\n continue\n constraint[v] = sum(local_constraint(G, v, n, weight)\n for n in set(nx.all_neighbors(G, v)))\n return constraint\n\n\ndef local_constraint(G, u, v, weight=None):\n r\"\"\"Returns the local constraint on the node ``u`` with respect to\n the node ``v`` in the graph ``G``.\n\n Formally, the *local constraint on u with respect to v*, denoted\n $\\ell(v)$, is defined by\n\n .. math::\n\n \\ell(u, v) = \\left(p_{uv} + \\sum_{w \\in N(v)} p_{uw} p{wv}\\right)^2,\n\n where $N(v)$ is the set of neighbors of $v$ and $p_{uv}$ is the\n normalized mutual weight of the (directed or undirected) edges\n joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. The *mutual\n weight* of $u$ and $v$ is the sum of the weights of edges joining\n them (edge weights are assumed to be one if the graph is\n unweighted).\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``u`` and ``v``. This can be either\n directed or undirected.\n\n u : node\n A node in the graph ``G``.\n\n v : node\n A node in the graph ``G``.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n float\n The constraint of the node ``v`` in the graph ``G``.\n\n See also\n --------\n constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n \"Structural holes and good ideas\".\n American Journal of Sociology (110): 349\u2013399.\n\n \"\"\"\n nmw = normalized_mutual_weight\n direct = nmw(G, u, v, weight=weight)\n indirect = sum(nmw(G, u, w, weight=weight) * nmw(G, w, v, weight=weight)\n for w in set(nx.all_neighbors(G, u)))\n return (direct + indirect) ** 2\n", "path": "networkx/algorithms/structuralholes.py"}]}
| 3,806 | 103 |
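
The NetworkX change above only corrects the docstring, so behaviour is untouched. The quick check below (lifted from the docstring's own example) shows what the corrected return description refers to: a mapping from node to its effective size, from which efficiency follows directly.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(0, 1), (0, 2), (1, 0), (2, 1)])

esize = nx.effective_size(G)        # {node: effective size}, per the fixed docstring
efficiency = {n: v / G.degree(n) for n, v in esize.items()}
print(esize)
print(efficiency)
```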
gh_patches_debug_29120 | rasdani/github-patches | git_diff | OCA__social-262 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[10.0] mail_sendgrid, Error: No module named simplejson
> File ".../addons/oca-social/mail_sendgrid/controllers/json_request.py", line 4, in <module>
> import simplejson
> ImportError: No module named simplejson
I put `simplejson` into file requirements.txt and it solved this issue.
</issue>
<code>
[start of mail_sendgrid/controllers/json_request.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2016-2017 Compassion CH (http://www.compassion.ch)
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
4 import simplejson
5
6 from odoo.http import JsonRequest, Root, Response
7
8 # Monkeypatch type of request rooter to use RESTJsonRequest
9 old_get_request = Root.get_request
10
11
12 def get_request(self, httprequest):
13 if (httprequest.mimetype == "application/json" and
14 httprequest.environ['PATH_INFO'].startswith('/mail')):
15 return RESTJsonRequest(httprequest)
16 return old_get_request(self, httprequest)
17
18
19 Root.get_request = get_request
20
21
22 class RESTJsonRequest(JsonRequest):
23 """ Special RestJson Handler to enable receiving lists in JSON
24 body
25 """
26 def __init__(self, *args):
27 try:
28 super(RESTJsonRequest, self).__init__(*args)
29 except AttributeError:
30 # The JSON may contain a list
31 self.params = dict()
32 self.context = dict(self.session.context)
33
34 def _json_response(self, result=None, error=None):
35 response = {}
36 if error is not None:
37 response['error'] = error
38 if result is not None:
39 response['result'] = result
40
41 mime = 'application/json'
42 body = simplejson.dumps(response)
43
44 return Response(
45 body, headers=[('Content-Type', mime),
46 ('Content-Length', len(body))])
47
[end of mail_sendgrid/controllers/json_request.py]
[start of mail_sendgrid/__manifest__.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015-2017 Compassion CH (http://www.compassion.ch)
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
4 {
5 'name': 'SendGrid',
6 'version': '10.0.1.0.0',
7 'category': 'Social Network',
8 'author': 'Compassion CH, Odoo Community Association (OCA)',
9 'license': 'AGPL-3',
10 'website': 'https://github.com/OCA/social',
11 'depends': ['mail_tracking'],
12 'data': [
13 'security/ir.model.access.csv',
14 'views/sendgrid_email_view.xml',
15 'views/sendgrid_template_view.xml',
16 'views/mail_compose_message_view.xml',
17 'views/email_template_view.xml',
18 ],
19 'demo': [],
20 'installable': True,
21 'auto_install': False,
22 'external_dependencies': {
23 'python': ['sendgrid'],
24 },
25 }
26
[end of mail_sendgrid/__manifest__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mail_sendgrid/__manifest__.py b/mail_sendgrid/__manifest__.py
--- a/mail_sendgrid/__manifest__.py
+++ b/mail_sendgrid/__manifest__.py
@@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
-# Copyright 2015-2017 Compassion CH (http://www.compassion.ch)
+# Copyright 2015-2018 Compassion CH (http://www.compassion.ch)
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
{
'name': 'SendGrid',
- 'version': '10.0.1.0.0',
+ 'version': '10.0.1.0.1',
'category': 'Social Network',
'author': 'Compassion CH, Odoo Community Association (OCA)',
'license': 'AGPL-3',
@@ -20,6 +20,6 @@
'installable': True,
'auto_install': False,
'external_dependencies': {
- 'python': ['sendgrid'],
+ 'python': ['sendgrid', 'simplejson'],
},
}
diff --git a/mail_sendgrid/controllers/json_request.py b/mail_sendgrid/controllers/json_request.py
--- a/mail_sendgrid/controllers/json_request.py
+++ b/mail_sendgrid/controllers/json_request.py
@@ -1,12 +1,18 @@
# -*- coding: utf-8 -*-
# Copyright 2016-2017 Compassion CH (http://www.compassion.ch)
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
-import simplejson
-
+import logging
from odoo.http import JsonRequest, Root, Response
# Monkeypatch type of request rooter to use RESTJsonRequest
old_get_request = Root.get_request
+_logger = logging.getLogger(__name__)
+
+try:
+ import simplejson
+except ImportError:
+ _logger.error("Please install simplejson tu use mail_sendgrid module")
+ _logger.debug("ImportError details:", exc_info=True)
def get_request(self, httprequest):
|
{"golden_diff": "diff --git a/mail_sendgrid/__manifest__.py b/mail_sendgrid/__manifest__.py\n--- a/mail_sendgrid/__manifest__.py\n+++ b/mail_sendgrid/__manifest__.py\n@@ -1,9 +1,9 @@\n # -*- coding: utf-8 -*-\n-# Copyright 2015-2017 Compassion CH (http://www.compassion.ch)\n+# Copyright 2015-2018 Compassion CH (http://www.compassion.ch)\n # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n {\n 'name': 'SendGrid',\n- 'version': '10.0.1.0.0',\n+ 'version': '10.0.1.0.1',\n 'category': 'Social Network',\n 'author': 'Compassion CH, Odoo Community Association (OCA)',\n 'license': 'AGPL-3',\n@@ -20,6 +20,6 @@\n 'installable': True,\n 'auto_install': False,\n 'external_dependencies': {\n- 'python': ['sendgrid'],\n+ 'python': ['sendgrid', 'simplejson'],\n },\n }\ndiff --git a/mail_sendgrid/controllers/json_request.py b/mail_sendgrid/controllers/json_request.py\n--- a/mail_sendgrid/controllers/json_request.py\n+++ b/mail_sendgrid/controllers/json_request.py\n@@ -1,12 +1,18 @@\n # -*- coding: utf-8 -*-\n # Copyright 2016-2017 Compassion CH (http://www.compassion.ch)\n # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n-import simplejson\n-\n+import logging\n from odoo.http import JsonRequest, Root, Response\n \n # Monkeypatch type of request rooter to use RESTJsonRequest\n old_get_request = Root.get_request\n+_logger = logging.getLogger(__name__)\n+\n+try:\n+ import simplejson\n+except ImportError:\n+ _logger.error(\"Please install simplejson tu use mail_sendgrid module\")\n+ _logger.debug(\"ImportError details:\", exc_info=True)\n \n \n def get_request(self, httprequest):\n", "issue": "[10.0] mail_sendrid, Error: No module named simplejson\n> File \".../addons/oca-social/mail_sendgrid/controllers/json_request.py\", line 4, in <module>\r\n> import simplejson\r\n> ImportError: No module named simplejson\r\n\r\nI put `simplejson` into file requirements.txt and it solved this issue.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2016-2017 Compassion CH (http://www.compassion.ch)\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\nimport simplejson\n\nfrom odoo.http import JsonRequest, Root, Response\n\n# Monkeypatch type of request rooter to use RESTJsonRequest\nold_get_request = Root.get_request\n\n\ndef get_request(self, httprequest):\n if (httprequest.mimetype == \"application/json\" and\n httprequest.environ['PATH_INFO'].startswith('/mail')):\n return RESTJsonRequest(httprequest)\n return old_get_request(self, httprequest)\n\n\nRoot.get_request = get_request\n\n\nclass RESTJsonRequest(JsonRequest):\n \"\"\" Special RestJson Handler to enable receiving lists in JSON\n body\n \"\"\"\n def __init__(self, *args):\n try:\n super(RESTJsonRequest, self).__init__(*args)\n except AttributeError:\n # The JSON may contain a list\n self.params = dict()\n self.context = dict(self.session.context)\n\n def _json_response(self, result=None, error=None):\n response = {}\n if error is not None:\n response['error'] = error\n if result is not None:\n response['result'] = result\n\n mime = 'application/json'\n body = simplejson.dumps(response)\n\n return Response(\n body, headers=[('Content-Type', mime),\n ('Content-Length', len(body))])\n", "path": "mail_sendgrid/controllers/json_request.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2017 Compassion CH (http://www.compassion.ch)\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n{\n 'name': 'SendGrid',\n 'version': '10.0.1.0.0',\n 'category': 'Social Network',\n 'author': 
'Compassion CH, Odoo Community Association (OCA)',\n 'license': 'AGPL-3',\n 'website': 'https://github.com/OCA/social',\n 'depends': ['mail_tracking'],\n 'data': [\n 'security/ir.model.access.csv',\n 'views/sendgrid_email_view.xml',\n 'views/sendgrid_template_view.xml',\n 'views/mail_compose_message_view.xml',\n 'views/email_template_view.xml',\n ],\n 'demo': [],\n 'installable': True,\n 'auto_install': False,\n 'external_dependencies': {\n 'python': ['sendgrid'],\n },\n}\n", "path": "mail_sendgrid/__manifest__.py"}]}
| 1,307 | 471 |
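
The pattern behind the OCA fix above, shown on its own: an Odoo module should not crash at import time when an optional Python dependency is missing, so the import is guarded and logged (and the dependency is declared under `external_dependencies` in the manifest). This is a generic sketch, not the module's code.

```python
import logging

_logger = logging.getLogger(__name__)

try:
    import simplejson
except ImportError:
    # Keep the module importable; the missing external dependency is reported instead.
    _logger.error("Please install simplejson to use the mail_sendgrid module")
    _logger.debug("ImportError details:", exc_info=True)
```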
gh_patches_debug_27142 | rasdani/github-patches | git_diff | Netflix__lemur-302 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Chain Certificate is not exporting
Hi Team,
While extracting a .p12 file that was exported from Lemur, we cannot find the chain certificate inside it. Could you please let us know whether we need to perform any additional step to download the .p12 together with the chain?
Thanks,
Akash John
</issue>
<code>
[start of lemur/plugins/lemur_openssl/plugin.py]
1 """
2 .. module: lemur.plugins.lemur_openssl.plugin
3 :platform: Unix
4 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
5 :license: Apache, see LICENSE for more details.
6
7 .. moduleauthor:: Kevin Glisson <[email protected]>
8 """
9 import subprocess
10
11 from flask import current_app
12
13 from lemur.utils import mktempfile, mktemppath
14 from lemur.plugins.bases import ExportPlugin
15 from lemur.plugins import lemur_openssl as openssl
16 from lemur.common.utils import get_psuedo_random_string
17
18
19 def run_process(command):
20 """
21 Runs a given command with pOpen and wraps some
22 error handling around it.
23 :param command:
24 :return:
25 """
26 p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
27 current_app.logger.debug(command)
28 stdout, stderr = p.communicate()
29
30 if p.returncode != 0:
31 current_app.logger.debug(" ".join(command))
32 current_app.logger.error(stderr)
33 raise Exception(stderr)
34
35
36 def create_pkcs12(cert, p12_tmp, key, alias, passphrase):
37 """
38 Creates a pkcs12 formated file.
39 :param cert:
40 :param jks_tmp:
41 :param key:
42 :param alias:
43 :param passphrase:
44 """
45 with mktempfile() as key_tmp:
46 with open(key_tmp, 'w') as f:
47 f.write(key)
48
49 # Create PKCS12 keystore from private key and public certificate
50 with mktempfile() as cert_tmp:
51 with open(cert_tmp, 'w') as f:
52 f.write(cert)
53
54 run_process([
55 "openssl",
56 "pkcs12",
57 "-export",
58 "-name", alias,
59 "-in", cert_tmp,
60 "-inkey", key_tmp,
61 "-out", p12_tmp,
62 "-password", "pass:{}".format(passphrase)
63 ])
64
65
66 class OpenSSLExportPlugin(ExportPlugin):
67 title = 'OpenSSL'
68 slug = 'openssl-export'
69 description = 'Is a loose interface to openssl and support various formats'
70 version = openssl.VERSION
71
72 author = 'Kevin Glisson'
73 author_url = 'https://github.com/netflix/lemur'
74
75 options = [
76 {
77 'name': 'type',
78 'type': 'select',
79 'required': True,
80 'available': ['PKCS12 (.p12)'],
81 'helpMessage': 'Choose the format you wish to export',
82 },
83 {
84 'name': 'passphrase',
85 'type': 'str',
86 'required': False,
87 'helpMessage': 'If no passphrase is given one will be generated for you, we highly recommend this. Minimum length is 8.',
88 'validation': ''
89 },
90 {
91 'name': 'alias',
92 'type': 'str',
93 'required': False,
94 'helpMessage': 'Enter the alias you wish to use for the keystore.',
95 }
96 ]
97
98 def export(self, body, chain, key, options, **kwargs):
99 """
100 Generates a Java Keystore or Truststore
101
102 :param key:
103 :param chain:
104 :param body:
105 :param options:
106 :param kwargs:
107 """
108 if self.get_option('passphrase', options):
109 passphrase = self.get_option('passphrase', options)
110 else:
111 passphrase = get_psuedo_random_string()
112
113 if self.get_option('alias', options):
114 alias = self.get_option('alias', options)
115 else:
116 alias = "blah"
117
118 type = self.get_option('type', options)
119
120 with mktemppath() as output_tmp:
121 if type == 'PKCS12 (.p12)':
122 create_pkcs12(body, output_tmp, key, alias, passphrase)
123 extension = "p12"
124 else:
125 raise Exception("Unable to export, unsupported type: {0}".format(type))
126
127 with open(output_tmp, 'rb') as f:
128 raw = f.read()
129
130 return extension, passphrase, raw
131
[end of lemur/plugins/lemur_openssl/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lemur/plugins/lemur_openssl/plugin.py b/lemur/plugins/lemur_openssl/plugin.py
--- a/lemur/plugins/lemur_openssl/plugin.py
+++ b/lemur/plugins/lemur_openssl/plugin.py
@@ -33,11 +33,12 @@
raise Exception(stderr)
-def create_pkcs12(cert, p12_tmp, key, alias, passphrase):
+def create_pkcs12(cert, chain, p12_tmp, key, alias, passphrase):
"""
Creates a pkcs12 formated file.
:param cert:
- :param jks_tmp:
+ :param chain:
+ :param p12_tmp:
:param key:
:param alias:
:param passphrase:
@@ -49,7 +50,7 @@
# Create PKCS12 keystore from private key and public certificate
with mktempfile() as cert_tmp:
with open(cert_tmp, 'w') as f:
- f.write(cert)
+ f.writelines([cert + "\n", chain + "\n"])
run_process([
"openssl",
@@ -119,7 +120,7 @@
with mktemppath() as output_tmp:
if type == 'PKCS12 (.p12)':
- create_pkcs12(body, output_tmp, key, alias, passphrase)
+ create_pkcs12(body, chain, output_tmp, key, alias, passphrase)
extension = "p12"
else:
raise Exception("Unable to export, unsupported type: {0}".format(type))
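To make the effect of the patch concrete, here is a minimal standalone sketch (placeholder PEM bodies and file names, not the plugin's real temp-file handling) of what the patched `create_pkcs12` now hands to `openssl pkcs12 -export`: every certificate written into the `-in` file, leaf plus chain, ends up in the resulting `.p12`.

```python
cert = "-----BEGIN CERTIFICATE-----\n<leaf>\n-----END CERTIFICATE-----"
chain = "-----BEGIN CERTIFICATE-----\n<intermediate>\n-----END CERTIFICATE-----"

with open("cert_tmp.pem", "w") as f:
    # Same shape as the patched call in the diff above.
    f.writelines([cert + "\n", chain + "\n"])

command = [
    "openssl", "pkcs12", "-export",
    "-name", "alias",
    "-in", "cert_tmp.pem",      # now contains the leaf *and* the chain
    "-inkey", "key_tmp.pem",
    "-out", "bundle.p12",
    "-password", "pass:changeit",
]
# subprocess.run(command, check=True) would then produce a bundle whose
# certificate count matches the number of PEM blocks written above.
```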
|
{"golden_diff": "diff --git a/lemur/plugins/lemur_openssl/plugin.py b/lemur/plugins/lemur_openssl/plugin.py\n--- a/lemur/plugins/lemur_openssl/plugin.py\n+++ b/lemur/plugins/lemur_openssl/plugin.py\n@@ -33,11 +33,12 @@\n raise Exception(stderr)\n \n \n-def create_pkcs12(cert, p12_tmp, key, alias, passphrase):\n+def create_pkcs12(cert, chain, p12_tmp, key, alias, passphrase):\n \"\"\"\n Creates a pkcs12 formated file.\n :param cert:\n- :param jks_tmp:\n+ :param chain:\n+ :param p12_tmp:\n :param key:\n :param alias:\n :param passphrase:\n@@ -49,7 +50,7 @@\n # Create PKCS12 keystore from private key and public certificate\n with mktempfile() as cert_tmp:\n with open(cert_tmp, 'w') as f:\n- f.write(cert)\n+ f.writelines([cert + \"\\n\", chain + \"\\n\"])\n \n run_process([\n \"openssl\",\n@@ -119,7 +120,7 @@\n \n with mktemppath() as output_tmp:\n if type == 'PKCS12 (.p12)':\n- create_pkcs12(body, output_tmp, key, alias, passphrase)\n+ create_pkcs12(body, chain, output_tmp, key, alias, passphrase)\n extension = \"p12\"\n else:\n raise Exception(\"Unable to export, unsupported type: {0}\".format(type))\n", "issue": "Chain Certificate is not exporting\nHi Team,\n\nWhile extracting .p12 formatted file (which was exported from Lemur) we cannot find the Chain file on the same. Could you please let us know if we need to perform any additional step to download the .p12 along with the chain. \n\nThanks,\nAkash John\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.plugins.lemur_openssl.plugin\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nimport subprocess\n\nfrom flask import current_app\n\nfrom lemur.utils import mktempfile, mktemppath\nfrom lemur.plugins.bases import ExportPlugin\nfrom lemur.plugins import lemur_openssl as openssl\nfrom lemur.common.utils import get_psuedo_random_string\n\n\ndef run_process(command):\n \"\"\"\n Runs a given command with pOpen and wraps some\n error handling around it.\n :param command:\n :return:\n \"\"\"\n p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n current_app.logger.debug(command)\n stdout, stderr = p.communicate()\n\n if p.returncode != 0:\n current_app.logger.debug(\" \".join(command))\n current_app.logger.error(stderr)\n raise Exception(stderr)\n\n\ndef create_pkcs12(cert, p12_tmp, key, alias, passphrase):\n \"\"\"\n Creates a pkcs12 formated file.\n :param cert:\n :param jks_tmp:\n :param key:\n :param alias:\n :param passphrase:\n \"\"\"\n with mktempfile() as key_tmp:\n with open(key_tmp, 'w') as f:\n f.write(key)\n\n # Create PKCS12 keystore from private key and public certificate\n with mktempfile() as cert_tmp:\n with open(cert_tmp, 'w') as f:\n f.write(cert)\n\n run_process([\n \"openssl\",\n \"pkcs12\",\n \"-export\",\n \"-name\", alias,\n \"-in\", cert_tmp,\n \"-inkey\", key_tmp,\n \"-out\", p12_tmp,\n \"-password\", \"pass:{}\".format(passphrase)\n ])\n\n\nclass OpenSSLExportPlugin(ExportPlugin):\n title = 'OpenSSL'\n slug = 'openssl-export'\n description = 'Is a loose interface to openssl and support various formats'\n version = openssl.VERSION\n\n author = 'Kevin Glisson'\n author_url = 'https://github.com/netflix/lemur'\n\n options = [\n {\n 'name': 'type',\n 'type': 'select',\n 'required': True,\n 'available': ['PKCS12 (.p12)'],\n 'helpMessage': 'Choose the format you wish to export',\n },\n {\n 'name': 'passphrase',\n 'type': 'str',\n 'required': 
False,\n 'helpMessage': 'If no passphrase is given one will be generated for you, we highly recommend this. Minimum length is 8.',\n 'validation': ''\n },\n {\n 'name': 'alias',\n 'type': 'str',\n 'required': False,\n 'helpMessage': 'Enter the alias you wish to use for the keystore.',\n }\n ]\n\n def export(self, body, chain, key, options, **kwargs):\n \"\"\"\n Generates a Java Keystore or Truststore\n\n :param key:\n :param chain:\n :param body:\n :param options:\n :param kwargs:\n \"\"\"\n if self.get_option('passphrase', options):\n passphrase = self.get_option('passphrase', options)\n else:\n passphrase = get_psuedo_random_string()\n\n if self.get_option('alias', options):\n alias = self.get_option('alias', options)\n else:\n alias = \"blah\"\n\n type = self.get_option('type', options)\n\n with mktemppath() as output_tmp:\n if type == 'PKCS12 (.p12)':\n create_pkcs12(body, output_tmp, key, alias, passphrase)\n extension = \"p12\"\n else:\n raise Exception(\"Unable to export, unsupported type: {0}\".format(type))\n\n with open(output_tmp, 'rb') as f:\n raw = f.read()\n\n return extension, passphrase, raw\n", "path": "lemur/plugins/lemur_openssl/plugin.py"}]}
| 1,797 | 365 |
gh_patches_debug_4433
|
rasdani/github-patches
|
git_diff
|
liberapay__liberapay.com-1314
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug during payment modification
I try to donate to this account: https://liberapay.com/lowtechlab/ after a previous promise on the twitter account.
lowtechlab created its account on the plateform but if I see that I donate to it, the actual number of mecene is still 0.
I tried to change the amount since I know that with the new system we need to do a payment in advance but I had a 500 error with the following error code: `fc677c3e6c54434ba77bcf1eac628a18`
</issue>
<code>
[start of liberapay/utils/currencies.py]
1 from __future__ import absolute_import, division, print_function, unicode_literals
2
3 from collections import defaultdict, OrderedDict
4 from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_UP
5 from numbers import Number
6 import operator
7
8 from mangopay.exceptions import CurrencyMismatch
9 from mangopay.utils import Money
10 import requests
11 import xmltodict
12
13 from liberapay.constants import CURRENCIES, D_CENT, D_ZERO
14 from liberapay.website import website
15
16
17 def _convert(self, c, rounding=ROUND_HALF_UP):
18 if self.currency == c:
19 return self
20 if 'EUR' in (self.currency, c):
21 rate = website.currency_exchange_rates[(self.currency, c)]
22 else:
23 rate = (
24 website.currency_exchange_rates[(self.currency, 'EUR')] *
25 website.currency_exchange_rates[('EUR', c)]
26 )
27 amount = self.amount * rate
28 return Money(amount, c, rounding=rounding)
29
30 def _sum(cls, amounts, currency):
31 a = Money.ZEROS[currency].amount
32 for m in amounts:
33 if m.currency != currency:
34 raise CurrencyMismatch(m.currency, currency, 'sum')
35 a += m.amount
36 return cls(a, currency)
37
38 def _Money_init(self, amount=Decimal('0'), currency=None, rounding=None):
39 if not isinstance(amount, Decimal):
40 amount = Decimal(str(amount))
41 # Why `str(amount)`? Because:
42 # >>> Decimal(0.23)
43 # Decimal('0.2300000000000000099920072216264088638126850128173828125')
44 # >>> Decimal(str(0.23))
45 # Decimal('0.23')
46 if rounding is not None:
47 minimum = Money.MINIMUMS[currency].amount
48 amount = amount.quantize(minimum, rounding=rounding)
49 self.amount = amount
50 self.currency = currency
51
52 def _Money_eq(self, other):
53 if isinstance(other, self.__class__):
54 return self.__dict__ == other.__dict__
55 if isinstance(other, (Decimal, Number)):
56 return self.amount == other
57 if isinstance(other, MoneyBasket):
58 return other.__eq__(self)
59 return False
60
61 def _Money_round(self, rounding=ROUND_HALF_UP):
62 return Money(self.amount, self.currency, rounding=rounding)
63
64 class _Minimums(defaultdict):
65 def __missing__(self, currency):
66 exponent = website.db.one("SELECT get_currency_exponent(%s)", (currency,))
67 minimum = Money((D_CENT if exponent == 2 else Decimal(10) ** (-exponent)), currency)
68 self[currency] = minimum
69 return minimum
70
71 class _Zeros(defaultdict):
72 def __missing__(self, currency):
73 minimum = Money.MINIMUMS[currency].amount
74 zero = Money((D_ZERO if minimum is D_CENT else minimum - minimum), currency)
75 self[currency] = zero
76 return zero
77
78
79 Money.__init__ = _Money_init
80 Money.__nonzero__ = Money.__bool__
81 Money.__eq__ = _Money_eq
82 Money.__iter__ = lambda m: iter((m.amount, m.currency))
83 Money.__repr__ = lambda m: '<Money "%s">' % m
84 Money.__str__ = lambda m: '%(amount)s %(currency)s' % m.__dict__
85 Money.__unicode__ = Money.__str__
86 Money.convert = _convert
87 Money.minimum = lambda m: Money.MINIMUMS[m.currency]
88 Money.MINIMUMS = _Minimums()
89 Money.round = _Money_round
90 Money.round_down = lambda m: m.round(ROUND_DOWN)
91 Money.round_up = lambda m: m.round(ROUND_UP)
92 Money.sum = classmethod(_sum)
93 Money.zero = lambda m: Money.ZEROS[m.currency]
94 Money.ZEROS = _Zeros()
95
96
97 class MoneyBasket(object):
98
99 def __init__(self, *args, **decimals):
100 self.amounts = OrderedDict(
101 (currency, decimals.get(currency, Money.ZEROS[currency].amount))
102 for currency in CURRENCIES
103 )
104 for arg in args:
105 if isinstance(arg, Money):
106 self.amounts[arg.currency] += arg.amount
107 else:
108 for m in arg:
109 self.amounts[m.currency] += m.amount
110
111 def __getitem__(self, currency):
112 return Money(self.amounts[currency], currency)
113
114 def __iter__(self):
115 return (Money(amount, currency) for currency, amount in self.amounts.items())
116
117 def __eq__(self, other):
118 if isinstance(other, self.__class__):
119 return self.amounts == other.amounts
120 elif isinstance(other, Money):
121 return self.amounts == MoneyBasket(other).amounts
122 elif other == 0:
123 return all(v == 0 for v in self.amounts.values())
124 return False
125
126 def __ne__(self, other):
127 return not self.__eq__(other)
128
129 def _compare(self, op, other):
130 if isinstance(other, self.__class__):
131 return all(op(a, b) for a, b in zip(self.amounts.values(), other.amounts.values()))
132 elif isinstance(other, Money):
133 return op(self.amounts[other.currency], other.amount)
134 elif other == 0:
135 return any(op(v, 0) for v in self.amounts.values())
136 else:
137 raise TypeError(
138 "can't compare %r and %r" % (self.__class__, other.__class__)
139 )
140
141 def __ge__(self, other):
142 return self._compare(operator.ge, other)
143
144 def __gt__(self, other):
145 return self._compare(operator.gt, other)
146
147 def __add__(self, other):
148 if other is 0:
149 return self
150 r = self.__class__(**self.amounts)
151 if isinstance(other, self.__class__):
152 for currency, amount in other.amounts.items():
153 if currency in r.amounts:
154 r.amounts[currency] += amount
155 else:
156 r.amounts[currency] = amount
157 elif isinstance(other, Money):
158 currency = other.currency
159 if currency in r.amounts:
160 r.amounts[currency] += other.amount
161 else:
162 r.amounts[currency] = other.amount
163 elif other == 0:
164 return r
165 else:
166 raise TypeError(other)
167 return r
168
169 def __radd__(self, other):
170 return self.__add__(other)
171
172 def __sub__(self, other):
173 if other is 0:
174 return self
175 r = self.__class__(**self.amounts)
176 if isinstance(other, self.__class__):
177 for currency, v in other.amounts.items():
178 if currency in r.amounts:
179 r.amounts[currency] -= v
180 else:
181 r.amounts[currency] = -v
182 elif isinstance(other, Money):
183 currency = other.currency
184 if currency in r.amounts:
185 r.amounts[currency] -= other.amount
186 else:
187 r.amounts[currency] = -other.amount
188 else:
189 raise TypeError(other)
190 return r
191
192 def __repr__(self):
193 return '%s[%s]' % (
194 self.__class__.__name__,
195 ', '.join('%s %s' % (a, c) for c, a in self.amounts.items() if a)
196 )
197
198 def __bool__(self):
199 return any(v for v in self.amounts.values())
200
201 __nonzero__ = __bool__
202
203 def __setstate__(self, state):
204 """Backward-compatible unpickling
205
206 The original version of `MoneyBasket` stored `Money` objects in its
207 `__dict__`, whereas the current version stores `Decimal`s in the
208 `amounts` attribute.
209 """
210 if 'amounts' in state:
211 self.__dict__ = state
212 else:
213 self.amounts = {m.currency: m.amount for m in state.values()}
214
215 @property
216 def currencies_present(self):
217 return [k for k, v in self.amounts.items() if v > 0]
218
219 def fuzzy_sum(self, currency, rounding=ROUND_UP):
220 a = Money.ZEROS[currency].amount
221 fuzzy = False
222 for m in self:
223 if m.currency == currency:
224 a += m.amount
225 elif m.amount:
226 a += m.convert(currency, rounding=None).amount
227 fuzzy = True
228 r = Money(a, currency, rounding=rounding)
229 r.fuzzy = fuzzy
230 return r
231
232
233 def fetch_currency_exchange_rates(db):
234 currencies = set(db.one("SELECT array_to_json(enum_range(NULL::currency))"))
235 r = requests.get('https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml')
236 rates = xmltodict.parse(r.text)['gesmes:Envelope']['Cube']['Cube']['Cube']
237 for fx in rates:
238 currency = fx['@currency']
239 if currency not in currencies:
240 continue
241 db.run("""
242 INSERT INTO currency_exchange_rates
243 (source_currency, target_currency, rate)
244 VALUES ('EUR', %(target)s, %(rate)s)
245 , (%(target)s, 'EUR', 1 / %(rate)s)
246 ON CONFLICT (source_currency, target_currency) DO UPDATE
247 SET rate = excluded.rate
248 """, dict(target=currency, rate=Decimal(fx['@rate'])))
249
250
251 def get_currency_exchange_rates(db):
252 r = {(r[0], r[1]): r[2] for r in db.all("SELECT * FROM currency_exchange_rates")}
253 if r:
254 return r
255 fetch_currency_exchange_rates(db)
256 return get_currency_exchange_rates(db)
257
[end of liberapay/utils/currencies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/liberapay/utils/currencies.py b/liberapay/utils/currencies.py
--- a/liberapay/utils/currencies.py
+++ b/liberapay/utils/currencies.py
@@ -51,7 +51,7 @@
def _Money_eq(self, other):
if isinstance(other, self.__class__):
- return self.__dict__ == other.__dict__
+ return self.amount == other.amount and self.currency == other.currency
if isinstance(other, (Decimal, Number)):
return self.amount == other
if isinstance(other, MoneyBasket):
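The traceback behind error code `fc677c3e6c54434ba77bcf1eac628a18` is not shown in this record, but the code above suggests a plausible trigger for the comparison change: `fuzzy_sum()` attaches an extra `fuzzy` attribute to a `Money` instance, so the old `__dict__` comparison could report two equal amounts as unequal. A stripped-down illustration:

```python
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def __eq__(self, other):  # old behaviour: compare the whole __dict__
        return self.__dict__ == other.__dict__

a = Money(10, "EUR")
b = Money(10, "EUR")
b.fuzzy = False               # what fuzzy_sum() does to its result

print(a == b)                                              # False with the old check
print(a.amount == b.amount and a.currency == b.currency)   # True with the patched check
```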
|
{"golden_diff": "diff --git a/liberapay/utils/currencies.py b/liberapay/utils/currencies.py\n--- a/liberapay/utils/currencies.py\n+++ b/liberapay/utils/currencies.py\n@@ -51,7 +51,7 @@\n \n def _Money_eq(self, other):\n if isinstance(other, self.__class__):\n- return self.__dict__ == other.__dict__\n+ return self.amount == other.amount and self.currency == other.currency\n if isinstance(other, (Decimal, Number)):\n return self.amount == other\n if isinstance(other, MoneyBasket):\n", "issue": "Bug during payment modification\nI try to donate to this account: https://liberapay.com/lowtechlab/ after a previous promise on the twitter account.\r\n\r\nlowtechlab created its account on the plateform but if I see that I donate to it, the actual number of mecene is still 0.\r\n\r\nI tried to change the amount since I know that with the new system we need to do a payment in advance but I had a 500 error with the following error code: `fc677c3e6c54434ba77bcf1eac628a18`\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import defaultdict, OrderedDict\nfrom decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_UP\nfrom numbers import Number\nimport operator\n\nfrom mangopay.exceptions import CurrencyMismatch\nfrom mangopay.utils import Money\nimport requests\nimport xmltodict\n\nfrom liberapay.constants import CURRENCIES, D_CENT, D_ZERO\nfrom liberapay.website import website\n\n\ndef _convert(self, c, rounding=ROUND_HALF_UP):\n if self.currency == c:\n return self\n if 'EUR' in (self.currency, c):\n rate = website.currency_exchange_rates[(self.currency, c)]\n else:\n rate = (\n website.currency_exchange_rates[(self.currency, 'EUR')] *\n website.currency_exchange_rates[('EUR', c)]\n )\n amount = self.amount * rate\n return Money(amount, c, rounding=rounding)\n\ndef _sum(cls, amounts, currency):\n a = Money.ZEROS[currency].amount\n for m in amounts:\n if m.currency != currency:\n raise CurrencyMismatch(m.currency, currency, 'sum')\n a += m.amount\n return cls(a, currency)\n\ndef _Money_init(self, amount=Decimal('0'), currency=None, rounding=None):\n if not isinstance(amount, Decimal):\n amount = Decimal(str(amount))\n # Why `str(amount)`? 
Because:\n # >>> Decimal(0.23)\n # Decimal('0.2300000000000000099920072216264088638126850128173828125')\n # >>> Decimal(str(0.23))\n # Decimal('0.23')\n if rounding is not None:\n minimum = Money.MINIMUMS[currency].amount\n amount = amount.quantize(minimum, rounding=rounding)\n self.amount = amount\n self.currency = currency\n\ndef _Money_eq(self, other):\n if isinstance(other, self.__class__):\n return self.__dict__ == other.__dict__\n if isinstance(other, (Decimal, Number)):\n return self.amount == other\n if isinstance(other, MoneyBasket):\n return other.__eq__(self)\n return False\n\ndef _Money_round(self, rounding=ROUND_HALF_UP):\n return Money(self.amount, self.currency, rounding=rounding)\n\nclass _Minimums(defaultdict):\n def __missing__(self, currency):\n exponent = website.db.one(\"SELECT get_currency_exponent(%s)\", (currency,))\n minimum = Money((D_CENT if exponent == 2 else Decimal(10) ** (-exponent)), currency)\n self[currency] = minimum\n return minimum\n\nclass _Zeros(defaultdict):\n def __missing__(self, currency):\n minimum = Money.MINIMUMS[currency].amount\n zero = Money((D_ZERO if minimum is D_CENT else minimum - minimum), currency)\n self[currency] = zero\n return zero\n\n\nMoney.__init__ = _Money_init\nMoney.__nonzero__ = Money.__bool__\nMoney.__eq__ = _Money_eq\nMoney.__iter__ = lambda m: iter((m.amount, m.currency))\nMoney.__repr__ = lambda m: '<Money \"%s\">' % m\nMoney.__str__ = lambda m: '%(amount)s %(currency)s' % m.__dict__\nMoney.__unicode__ = Money.__str__\nMoney.convert = _convert\nMoney.minimum = lambda m: Money.MINIMUMS[m.currency]\nMoney.MINIMUMS = _Minimums()\nMoney.round = _Money_round\nMoney.round_down = lambda m: m.round(ROUND_DOWN)\nMoney.round_up = lambda m: m.round(ROUND_UP)\nMoney.sum = classmethod(_sum)\nMoney.zero = lambda m: Money.ZEROS[m.currency]\nMoney.ZEROS = _Zeros()\n\n\nclass MoneyBasket(object):\n\n def __init__(self, *args, **decimals):\n self.amounts = OrderedDict(\n (currency, decimals.get(currency, Money.ZEROS[currency].amount))\n for currency in CURRENCIES\n )\n for arg in args:\n if isinstance(arg, Money):\n self.amounts[arg.currency] += arg.amount\n else:\n for m in arg:\n self.amounts[m.currency] += m.amount\n\n def __getitem__(self, currency):\n return Money(self.amounts[currency], currency)\n\n def __iter__(self):\n return (Money(amount, currency) for currency, amount in self.amounts.items())\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self.amounts == other.amounts\n elif isinstance(other, Money):\n return self.amounts == MoneyBasket(other).amounts\n elif other == 0:\n return all(v == 0 for v in self.amounts.values())\n return False\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _compare(self, op, other):\n if isinstance(other, self.__class__):\n return all(op(a, b) for a, b in zip(self.amounts.values(), other.amounts.values()))\n elif isinstance(other, Money):\n return op(self.amounts[other.currency], other.amount)\n elif other == 0:\n return any(op(v, 0) for v in self.amounts.values())\n else:\n raise TypeError(\n \"can't compare %r and %r\" % (self.__class__, other.__class__)\n )\n\n def __ge__(self, other):\n return self._compare(operator.ge, other)\n\n def __gt__(self, other):\n return self._compare(operator.gt, other)\n\n def __add__(self, other):\n if other is 0:\n return self\n r = self.__class__(**self.amounts)\n if isinstance(other, self.__class__):\n for currency, amount in other.amounts.items():\n if currency in r.amounts:\n r.amounts[currency] += amount\n 
else:\n r.amounts[currency] = amount\n elif isinstance(other, Money):\n currency = other.currency\n if currency in r.amounts:\n r.amounts[currency] += other.amount\n else:\n r.amounts[currency] = other.amount\n elif other == 0:\n return r\n else:\n raise TypeError(other)\n return r\n\n def __radd__(self, other):\n return self.__add__(other)\n\n def __sub__(self, other):\n if other is 0:\n return self\n r = self.__class__(**self.amounts)\n if isinstance(other, self.__class__):\n for currency, v in other.amounts.items():\n if currency in r.amounts:\n r.amounts[currency] -= v\n else:\n r.amounts[currency] = -v\n elif isinstance(other, Money):\n currency = other.currency\n if currency in r.amounts:\n r.amounts[currency] -= other.amount\n else:\n r.amounts[currency] = -other.amount\n else:\n raise TypeError(other)\n return r\n\n def __repr__(self):\n return '%s[%s]' % (\n self.__class__.__name__,\n ', '.join('%s %s' % (a, c) for c, a in self.amounts.items() if a)\n )\n\n def __bool__(self):\n return any(v for v in self.amounts.values())\n\n __nonzero__ = __bool__\n\n def __setstate__(self, state):\n \"\"\"Backward-compatible unpickling\n\n The original version of `MoneyBasket` stored `Money` objects in its\n `__dict__`, whereas the current version stores `Decimal`s in the\n `amounts` attribute.\n \"\"\"\n if 'amounts' in state:\n self.__dict__ = state\n else:\n self.amounts = {m.currency: m.amount for m in state.values()}\n\n @property\n def currencies_present(self):\n return [k for k, v in self.amounts.items() if v > 0]\n\n def fuzzy_sum(self, currency, rounding=ROUND_UP):\n a = Money.ZEROS[currency].amount\n fuzzy = False\n for m in self:\n if m.currency == currency:\n a += m.amount\n elif m.amount:\n a += m.convert(currency, rounding=None).amount\n fuzzy = True\n r = Money(a, currency, rounding=rounding)\n r.fuzzy = fuzzy\n return r\n\n\ndef fetch_currency_exchange_rates(db):\n currencies = set(db.one(\"SELECT array_to_json(enum_range(NULL::currency))\"))\n r = requests.get('https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml')\n rates = xmltodict.parse(r.text)['gesmes:Envelope']['Cube']['Cube']['Cube']\n for fx in rates:\n currency = fx['@currency']\n if currency not in currencies:\n continue\n db.run(\"\"\"\n INSERT INTO currency_exchange_rates\n (source_currency, target_currency, rate)\n VALUES ('EUR', %(target)s, %(rate)s)\n , (%(target)s, 'EUR', 1 / %(rate)s)\n ON CONFLICT (source_currency, target_currency) DO UPDATE\n SET rate = excluded.rate\n \"\"\", dict(target=currency, rate=Decimal(fx['@rate'])))\n\n\ndef get_currency_exchange_rates(db):\n r = {(r[0], r[1]): r[2] for r in db.all(\"SELECT * FROM currency_exchange_rates\")}\n if r:\n return r\n fetch_currency_exchange_rates(db)\n return get_currency_exchange_rates(db)\n", "path": "liberapay/utils/currencies.py"}]}
| 3,472 | 127 |
gh_patches_debug_1332
|
rasdani/github-patches
|
git_diff
|
SeldonIO__MLServer-1064
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
decode_args with tuple return value
I'm confused about how to use `decode_args()` when the model returns a tuple of, let's say, a numpy array.
If I have an inference function with the following signature
```python
import numpy as np
from mlserver.codecs.decorator import decode_args
def predict(input: np.ndarray) -> tuple[np.ndarray]:
# simple identity model
return (input, )
decode_args(predict)
```
I receive a
```
CodecNotFound: Codec not found for output field with type tuple[numpy.ndarray]
```
error message.
I think the issue lies in the following line that is used to unwrap the tuple signature and convert it to a list of types.
https://github.com/SeldonIO/MLServer/blob/9d7045cc3c50a14201733865ca0c3e8480ebd942/mlserver/codecs/decorator.py#L37
However the line checks if the type hint is a tuple instance, which is not true in the above case: `isinstance(tuple[np.ndarray], tuple) == False)`
To make `decode_args` work with a tuple return signature, I have to change the function signature to
```python
def predict(input: np.ndarray) -> tuple([np.ndarray]):
```
which is an actual tuple instance and not a type hint. This looks like a bug to me.
</issue>
<code>
[start of mlserver/codecs/decorator.py]
1 from functools import wraps, partial
2 from typing import (
3 Any,
4 Callable,
5 Coroutine,
6 Dict,
7 List,
8 Optional,
9 Union,
10 Type,
11 Tuple,
12 get_origin,
13 get_args,
14 get_type_hints,
15 TYPE_CHECKING,
16 )
17
18
19 from ..types import InferenceRequest, InferenceResponse, ResponseOutput
20
21 from .base import RequestCodec, InputCodec, find_input_codec, find_request_codec
22 from .errors import InputsNotFound, OutputNotFound, CodecNotFound
23 from .utils import Codec
24
25 if TYPE_CHECKING:
26 from ..model import MLModel
27
28 PredictFunc = Callable[
29 ["MLModel", InferenceRequest], Coroutine[Any, Any, InferenceResponse]
30 ]
31
32
33 def _as_list(a: Optional[Union[Any, Tuple[Any]]]) -> List[Any]:
34 if a is None:
35 return []
36
37 if isinstance(a, tuple):
38 # Split into components
39 return list(a)
40
41 # Otherwise, assume it's a single element
42 return [a]
43
44
45 def _is_codec_type(c: Codec, t: Type) -> bool:
46 if issubclass(c, t): # type: ignore
47 return True
48
49 if isinstance(c, t):
50 return True
51
52 return False
53
54
55 _is_input_codec = partial(_is_codec_type, t=InputCodec)
56 _is_request_codec = partial(_is_codec_type, t=RequestCodec)
57
58
59 def _is_optional(t: Type) -> bool:
60 origin = get_origin(t)
61 if origin == Optional:
62 return True
63
64 if origin == Union:
65 # Cover case where Optional[a] is reported as Union[a, None]
66 args = get_args(t)
67 if len(args) == 2 and type(None) in args:
68 return True
69
70 return False
71
72
73 def _unwrap_optional(t: Type) -> Type:
74 args = get_args(t)
75 for arg in args:
76 if not isinstance(arg, type(None)):
77 return arg
78
79 return t
80
81
82 class SignatureCodec(RequestCodec):
83 """
84 Internal codec that knows how to map type hints to codecs.
85 """
86
87 # TODO: Should this receive the whole class as argument?
88 def __init__(self, predict: Callable):
89 self._predict = predict
90 self._input_codecs, self._output_codecs = self._get_codecs(predict)
91
92 def _get_codecs(self, pred: Callable) -> Tuple[Dict[str, Codec], List[Codec]]:
93 self._input_hints = self._get_type_hints(pred)
94 self._output_hints = _as_list(self._input_hints.pop("return", None))
95
96 input_codecs = {}
97 for name, type_hint in self._input_hints.items():
98 codec = self._find_codec(name=name, type_hint=type_hint, is_input=True)
99 # TODO: Consider metadata as well! (needs to be done at runtime)
100 input_codecs[name] = codec
101
102 output_codecs = []
103 for type_hint in self._output_hints:
104 # Try either as an input or as a request codec
105 codec = self._find_codec(name=None, type_hint=type_hint, is_input=False)
106 output_codecs.append(codec)
107
108 return input_codecs, output_codecs
109
110 def _get_type_hints(self, pred: Callable) -> Dict[str, Type]:
111 type_hints = get_type_hints(pred)
112 # For us, `typing.Optional` is just syntactic sugar, so let's ensure we
113 # unwrap it
114 for name, hint in type_hints.items():
115 if _is_optional(hint):
116 type_hints[name] = _unwrap_optional(hint)
117
118 return type_hints
119
120 def _find_codec(
121 self, name: Optional[str], type_hint: Type, is_input: bool = False
122 ) -> Codec:
123 codec = find_input_codec(type_hint=type_hint)
124 if codec is not None:
125 return codec
126
127 codec = find_request_codec(type_hint=type_hint) # type: ignore
128 if codec is not None:
129 return codec
130
131 raise CodecNotFound(name=name, payload_type=str(type_hint), is_input=is_input)
132
133 def decode_request( # type: ignore
134 self, request: InferenceRequest
135 ) -> Dict[str, Any]:
136 inputs = {}
137 extra_request_inputs = []
138 for request_input in request.inputs:
139 input_name = request_input.name
140 if input_name not in self._input_codecs:
141 # Aggregate extra request inputs to check later, as they could
142 # be part of aggregated request codecs (e.g. like dataframes)
143 extra_request_inputs.append(request_input)
144 continue
145
146 # Ensure matching codec is an input codec
147 codec = self._input_codecs[input_name]
148 if not _is_input_codec(codec):
149 raise CodecNotFound(name=input_name, is_input=True)
150
151 inputs[input_name] = codec.decode_input(request_input) # type: ignore
152
153 if extra_request_inputs:
154 request_codec = self._get_request_codec()
155 if not request_codec:
156 # If there are no request codecs that can aggregate all
157 # remaining inputs, raise an error
158 raise InputsNotFound(extra_request_inputs, self._input_codecs)
159
160 # We create a fake request built from the extra request inputs
161 name, codec = request_codec
162 extra_inputs = InferenceRequest(inputs=extra_request_inputs)
163 inputs[name] = codec.decode_request(extra_inputs)
164
165 return inputs
166
167 def _get_request_codec(self) -> Optional[Tuple[str, RequestCodec]]:
168 for name, codec in self._input_codecs.items():
169 if _is_request_codec(codec):
170 return name, codec # type: ignore
171
172 return None
173
174 def encode_response( # type: ignore
175 self, model_name: str, payload: Any, model_version: Optional[str] = None
176 ) -> InferenceResponse:
177 payloads = _as_list(payload)
178 outputs = []
179 for idx, payload in enumerate(payloads):
180 outputs += self._encode_outputs(idx, payload)
181
182 return InferenceResponse(
183 model_name=model_name, model_version=model_version, outputs=outputs
184 )
185
186 def _encode_outputs(self, idx: int, payload: Any) -> List[ResponseOutput]:
187 output_type = type(payload)
188 if idx >= len(self._output_codecs):
189 raise OutputNotFound(idx, output_type, self._output_hints)
190
191 # TODO: Fallback to encode_by_payload?
192 codec = self._output_codecs[idx]
193 if not codec.can_encode(payload):
194 raise OutputNotFound(idx, output_type, self._output_hints)
195
196 if _is_input_codec(codec):
197 # TODO: Check model metadata for output names
198 output_name = f"output-{idx}"
199 response_output = codec.encode_output( # type: ignore
200 name=output_name, payload=payload
201 )
202 return [response_output]
203
204 if _is_request_codec(codec):
205 # NOTE: We will ignore `model_name` and only grab the outputs
206 response = codec.encode_response( # type: ignore
207 model_name="", payload=payload
208 )
209 return response.outputs
210
211 return []
212
213
214 def decode_args(predict: Callable) -> PredictFunc:
215 codec = SignatureCodec(predict)
216
217 @wraps(predict)
218 async def _f(self: "MLModel", request: InferenceRequest) -> InferenceResponse:
219 inputs = codec.decode_request(request=request)
220
221 outputs = await predict(self, **inputs)
222
223 return codec.encode_response(
224 model_name=self.name, payload=outputs, model_version=self.version
225 )
226
227 return _f
228
[end of mlserver/codecs/decorator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mlserver/codecs/decorator.py b/mlserver/codecs/decorator.py
--- a/mlserver/codecs/decorator.py
+++ b/mlserver/codecs/decorator.py
@@ -38,6 +38,10 @@
# Split into components
return list(a)
+ if get_origin(a) is tuple:
+ # Convert type arguments into list
+ return list(get_args(a))
+
# Otherwise, assume it's a single element
return [a]
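A standalone sketch of why the new branch is needed (written with `typing.Tuple`, which behaves like the built-in `tuple[...]` generic from Python 3.9 onwards): a parameterized tuple type hint is not a tuple instance, so the old `isinstance` check never fired, whereas `get_origin`/`get_args` unwrap it as intended.

```python
from typing import Tuple, get_args, get_origin

import numpy as np

hint = Tuple[np.ndarray]               # the return annotation from the issue

print(isinstance(hint, tuple))         # False -> old code treated it as a single element
print(get_origin(hint) is tuple)       # True  -> the added branch matches
print(list(get_args(hint)))            # [<class 'numpy.ndarray'>] -> one codec per component
```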
|
{"golden_diff": "diff --git a/mlserver/codecs/decorator.py b/mlserver/codecs/decorator.py\n--- a/mlserver/codecs/decorator.py\n+++ b/mlserver/codecs/decorator.py\n@@ -38,6 +38,10 @@\n # Split into components\n return list(a)\n \n+ if get_origin(a) is tuple:\n+ # Convert type arguments into list\n+ return list(get_args(a))\n+\n # Otherwise, assume it's a single element\n return [a]\n", "issue": "decode_args with tuple return value\nI'm confused about how to use `decode_args()` when the model returns a tuple of, let's say, a numpy array.\r\n\r\nIf I have an inference function with the following signature\r\n\r\n```python\r\nimport numpy as np\r\nfrom mlserver.codecs.decorator import decode_args\r\n\r\ndef predict(input: np.ndarray) -> tuple[np.ndarray]:\r\n # simple identity model\r\n return (input, )\r\n\r\ndecode_args(predict)\r\n```\r\nI receive a\r\n\r\n```\r\nCodecNotFound: Codec not found for output field with type tuple[numpy.ndarray]\r\n```\r\n\r\nerror message.\r\n\r\nI think the issue lies in the following line that is used to unwrap the tuple signature and convert it to a list of types.\r\n\r\nhttps://github.com/SeldonIO/MLServer/blob/9d7045cc3c50a14201733865ca0c3e8480ebd942/mlserver/codecs/decorator.py#L37\r\n\r\nHowever the line checks if the type hint is a tuple instance, which is not true in the above case: `isinstance(tuple[np.ndarray], tuple) == False)`\r\n\r\nTo make `decode_args` work with a tuple return signature, I have to change the function signature to\r\n\r\n```python\r\ndef predict(input: np.ndarray) -> tuple([np.ndarray]):\r\n```\r\nwhich is an actual tuple instance and not a type hint. This looks like a bug to me. \n", "before_files": [{"content": "from functools import wraps, partial\nfrom typing import (\n Any,\n Callable,\n Coroutine,\n Dict,\n List,\n Optional,\n Union,\n Type,\n Tuple,\n get_origin,\n get_args,\n get_type_hints,\n TYPE_CHECKING,\n)\n\n\nfrom ..types import InferenceRequest, InferenceResponse, ResponseOutput\n\nfrom .base import RequestCodec, InputCodec, find_input_codec, find_request_codec\nfrom .errors import InputsNotFound, OutputNotFound, CodecNotFound\nfrom .utils import Codec\n\nif TYPE_CHECKING:\n from ..model import MLModel\n\nPredictFunc = Callable[\n [\"MLModel\", InferenceRequest], Coroutine[Any, Any, InferenceResponse]\n]\n\n\ndef _as_list(a: Optional[Union[Any, Tuple[Any]]]) -> List[Any]:\n if a is None:\n return []\n\n if isinstance(a, tuple):\n # Split into components\n return list(a)\n\n # Otherwise, assume it's a single element\n return [a]\n\n\ndef _is_codec_type(c: Codec, t: Type) -> bool:\n if issubclass(c, t): # type: ignore\n return True\n\n if isinstance(c, t):\n return True\n\n return False\n\n\n_is_input_codec = partial(_is_codec_type, t=InputCodec)\n_is_request_codec = partial(_is_codec_type, t=RequestCodec)\n\n\ndef _is_optional(t: Type) -> bool:\n origin = get_origin(t)\n if origin == Optional:\n return True\n\n if origin == Union:\n # Cover case where Optional[a] is reported as Union[a, None]\n args = get_args(t)\n if len(args) == 2 and type(None) in args:\n return True\n\n return False\n\n\ndef _unwrap_optional(t: Type) -> Type:\n args = get_args(t)\n for arg in args:\n if not isinstance(arg, type(None)):\n return arg\n\n return t\n\n\nclass SignatureCodec(RequestCodec):\n \"\"\"\n Internal codec that knows how to map type hints to codecs.\n \"\"\"\n\n # TODO: Should this receive the whole class as argument?\n def __init__(self, predict: Callable):\n self._predict = predict\n self._input_codecs, 
self._output_codecs = self._get_codecs(predict)\n\n def _get_codecs(self, pred: Callable) -> Tuple[Dict[str, Codec], List[Codec]]:\n self._input_hints = self._get_type_hints(pred)\n self._output_hints = _as_list(self._input_hints.pop(\"return\", None))\n\n input_codecs = {}\n for name, type_hint in self._input_hints.items():\n codec = self._find_codec(name=name, type_hint=type_hint, is_input=True)\n # TODO: Consider metadata as well! (needs to be done at runtime)\n input_codecs[name] = codec\n\n output_codecs = []\n for type_hint in self._output_hints:\n # Try either as an input or as a request codec\n codec = self._find_codec(name=None, type_hint=type_hint, is_input=False)\n output_codecs.append(codec)\n\n return input_codecs, output_codecs\n\n def _get_type_hints(self, pred: Callable) -> Dict[str, Type]:\n type_hints = get_type_hints(pred)\n # For us, `typing.Optional` is just syntactic sugar, so let's ensure we\n # unwrap it\n for name, hint in type_hints.items():\n if _is_optional(hint):\n type_hints[name] = _unwrap_optional(hint)\n\n return type_hints\n\n def _find_codec(\n self, name: Optional[str], type_hint: Type, is_input: bool = False\n ) -> Codec:\n codec = find_input_codec(type_hint=type_hint)\n if codec is not None:\n return codec\n\n codec = find_request_codec(type_hint=type_hint) # type: ignore\n if codec is not None:\n return codec\n\n raise CodecNotFound(name=name, payload_type=str(type_hint), is_input=is_input)\n\n def decode_request( # type: ignore\n self, request: InferenceRequest\n ) -> Dict[str, Any]:\n inputs = {}\n extra_request_inputs = []\n for request_input in request.inputs:\n input_name = request_input.name\n if input_name not in self._input_codecs:\n # Aggregate extra request inputs to check later, as they could\n # be part of aggregated request codecs (e.g. 
like dataframes)\n extra_request_inputs.append(request_input)\n continue\n\n # Ensure matching codec is an input codec\n codec = self._input_codecs[input_name]\n if not _is_input_codec(codec):\n raise CodecNotFound(name=input_name, is_input=True)\n\n inputs[input_name] = codec.decode_input(request_input) # type: ignore\n\n if extra_request_inputs:\n request_codec = self._get_request_codec()\n if not request_codec:\n # If there are no request codecs that can aggregate all\n # remaining inputs, raise an error\n raise InputsNotFound(extra_request_inputs, self._input_codecs)\n\n # We create a fake request built from the extra request inputs\n name, codec = request_codec\n extra_inputs = InferenceRequest(inputs=extra_request_inputs)\n inputs[name] = codec.decode_request(extra_inputs)\n\n return inputs\n\n def _get_request_codec(self) -> Optional[Tuple[str, RequestCodec]]:\n for name, codec in self._input_codecs.items():\n if _is_request_codec(codec):\n return name, codec # type: ignore\n\n return None\n\n def encode_response( # type: ignore\n self, model_name: str, payload: Any, model_version: Optional[str] = None\n ) -> InferenceResponse:\n payloads = _as_list(payload)\n outputs = []\n for idx, payload in enumerate(payloads):\n outputs += self._encode_outputs(idx, payload)\n\n return InferenceResponse(\n model_name=model_name, model_version=model_version, outputs=outputs\n )\n\n def _encode_outputs(self, idx: int, payload: Any) -> List[ResponseOutput]:\n output_type = type(payload)\n if idx >= len(self._output_codecs):\n raise OutputNotFound(idx, output_type, self._output_hints)\n\n # TODO: Fallback to encode_by_payload?\n codec = self._output_codecs[idx]\n if not codec.can_encode(payload):\n raise OutputNotFound(idx, output_type, self._output_hints)\n\n if _is_input_codec(codec):\n # TODO: Check model metadata for output names\n output_name = f\"output-{idx}\"\n response_output = codec.encode_output( # type: ignore\n name=output_name, payload=payload\n )\n return [response_output]\n\n if _is_request_codec(codec):\n # NOTE: We will ignore `model_name` and only grab the outputs\n response = codec.encode_response( # type: ignore\n model_name=\"\", payload=payload\n )\n return response.outputs\n\n return []\n\n\ndef decode_args(predict: Callable) -> PredictFunc:\n codec = SignatureCodec(predict)\n\n @wraps(predict)\n async def _f(self: \"MLModel\", request: InferenceRequest) -> InferenceResponse:\n inputs = codec.decode_request(request=request)\n\n outputs = await predict(self, **inputs)\n\n return codec.encode_response(\n model_name=self.name, payload=outputs, model_version=self.version\n )\n\n return _f\n", "path": "mlserver/codecs/decorator.py"}]}
| 3,080 | 113 |
gh_patches_debug_21335
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-5189
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CKV_AZURE_6] AKS API Server White Tests Failing with Correct Code
**Describe the issue**
[CKV_AZURE_6](https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSApiServerAuthorizedIpRanges.py)
This check should trigger when an API Server whitelist IP isn't found in the TF code.
**Examples**
Please share an example code sample (in the IaC of your choice) + the expected outcomes.
Sample (Pre 3.39.0 Provider Version):
```
resource "azurerm_kubernetes_cluster" "aks_k2" {
name = var.cluster_name
location = azurerm_resource_group.rg_aks.location
resource_group_name = azurerm_resource_group.rg_aks.name
sku_tier = var.sku_tier
dns_prefix = var.dns_name
api_server_authorized_ip_ranges = [my_ip_list]
}
```
Sample (Post 3.39.0):
```
resource "azurerm_kubernetes_cluster" "aks_k2" {
name = var.cluster_name
location = azurerm_resource_group.rg_aks.location
resource_group_name = azurerm_resource_group.rg_aks.name
sku_tier = var.sku_tier
dns_prefix = var.dns_name
api_server_access_profile {
authorized_ip_ranges = [my_ip_list]
}
}
```
Both have expected outcome of passing this test, as we list 4 IP's for whitelisting.
We are failing tests

**Version (please complete the following information):**
- Checkov Version: checkov-2.3.272
</issue>
<code>
[start of checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py]
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.common.models.consts import ANY_VALUE
6 from checkov.common.models.enums import CheckCategories, CheckResult
7 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
8
9
10 class AKSApiServerAuthorizedIpRanges(BaseResourceValueCheck):
11 def __init__(self) -> None:
12 name = "Ensure AKS has an API Server Authorized IP Ranges enabled"
13 id = "CKV_AZURE_6"
14 supported_resources = ("azurerm_kubernetes_cluster",)
15 categories = (CheckCategories.KUBERNETES,)
16 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
17
18 def get_inspected_key(self) -> str:
19 return "api_server_authorized_ip_ranges/[0]"
20
21 def get_expected_value(self) -> Any:
22 return ANY_VALUE
23
24 def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
25 # can't be set for private cluster
26 private_cluster_enabled = conf.get("private_cluster_enabled", [False])[0]
27 if private_cluster_enabled:
28 return CheckResult.PASSED
29 return super().scan_resource_conf(conf)
30
31
32 check = AKSApiServerAuthorizedIpRanges()
33
[end of checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py
--- a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py
+++ b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py
@@ -16,7 +16,7 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def get_inspected_key(self) -> str:
- return "api_server_authorized_ip_ranges/[0]"
+ return "api_server_access_profile/[0]/authorized_ip_ranges/[0]"
def get_expected_value(self) -> Any:
return ANY_VALUE
@@ -26,6 +26,12 @@
private_cluster_enabled = conf.get("private_cluster_enabled", [False])[0]
if private_cluster_enabled:
return CheckResult.PASSED
+
+ # provider version <=3.38.0
+ api_server = conf.get("api_server_authorized_ip_ranges")
+ if api_server and isinstance(api_server, list) and api_server[0]:
+ return CheckResult.PASSED
+
return super().scan_resource_conf(conf)
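As a rough sketch of what the two Terraform layouts from the issue look like once parsed into the `conf` dict (the exact dict-of-lists nesting is an assumption based on how the check already reads `private_cluster_enabled`), the updated inspected key walks the new nested block while the added fallback still accepts the pre-3.39.0 argument:

```python
conf_pre_3_39 = {   # api_server_authorized_ip_ranges = [...]
    "api_server_authorized_ip_ranges": [["1.2.3.4/32", "5.6.7.8/32"]],
}
conf_post_3_39 = {  # api_server_access_profile { authorized_ip_ranges = [...] }
    "api_server_access_profile": [
        {"authorized_ip_ranges": [["1.2.3.4/32", "5.6.7.8/32"]]}
    ],
}

# New inspected key: "api_server_access_profile/[0]/authorized_ip_ranges/[0]"
nested = conf_post_3_39["api_server_access_profile"][0]["authorized_ip_ranges"][0]
print(nested)       # ['1.2.3.4/32', '5.6.7.8/32'] -> PASSED

# Fallback for provider <= 3.38.0, mirroring the added scan_resource_conf code:
api_server = conf_pre_3_39.get("api_server_authorized_ip_ranges")
print(bool(api_server and isinstance(api_server, list) and api_server[0]))  # True -> PASSED
```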
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py\n--- a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py\n+++ b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py\n@@ -16,7 +16,7 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def get_inspected_key(self) -> str:\n- return \"api_server_authorized_ip_ranges/[0]\"\n+ return \"api_server_access_profile/[0]/authorized_ip_ranges/[0]\"\n \n def get_expected_value(self) -> Any:\n return ANY_VALUE\n@@ -26,6 +26,12 @@\n private_cluster_enabled = conf.get(\"private_cluster_enabled\", [False])[0]\n if private_cluster_enabled:\n return CheckResult.PASSED\n+\n+ # provider version <=3.38.0\n+ api_server = conf.get(\"api_server_authorized_ip_ranges\")\n+ if api_server and isinstance(api_server, list) and api_server[0]:\n+ return CheckResult.PASSED\n+\n return super().scan_resource_conf(conf)\n", "issue": "[CKV_AZURE_6] AKS API Server White Tests Failing with Correct Code\n**Describe the issue**\r\n[CKV_AZURE_6](https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSApiServerAuthorizedIpRanges.py)\r\n\r\nThis check should trigger when an API Server whitelist IP isn't found in the TF code. \r\n\r\n**Examples**\r\nPlease share an example code sample (in the IaC of your choice) + the expected outcomes.\r\n\r\nSample (Pre 3.39.0 Provider Version):\r\n```\r\nresource \"azurerm_kubernetes_cluster\" \"aks_k2\" {\r\n name = var.cluster_name\r\n location = azurerm_resource_group.rg_aks.location\r\n resource_group_name = azurerm_resource_group.rg_aks.name\r\n sku_tier = var.sku_tier\r\n dns_prefix = var.dns_name\r\n api_server_authorized_ip_ranges = [my_ip_list]\r\n}\r\n```\r\nSample (Post 3.39.0):\r\n```\r\nresource \"azurerm_kubernetes_cluster\" \"aks_k2\" {\r\n name = var.cluster_name\r\n location = azurerm_resource_group.rg_aks.location\r\n resource_group_name = azurerm_resource_group.rg_aks.name\r\n sku_tier = var.sku_tier\r\n dns_prefix = var.dns_name\r\n api_server_access_profile {\r\n authorized_ip_ranges = [my_ip_list]\r\n }\r\n}\r\n```\r\n\r\nBoth have expected outcome of passing this test, as we list 4 IP's for whitelisting.\r\nWe are failing tests\r\n\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version: checkov-2.3.272\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AKSApiServerAuthorizedIpRanges(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure AKS has an API Server Authorized IP Ranges enabled\"\n id = \"CKV_AZURE_6\"\n supported_resources = (\"azurerm_kubernetes_cluster\",)\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"api_server_authorized_ip_ranges/[0]\"\n\n def get_expected_value(self) -> Any:\n return ANY_VALUE\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n # can't be set for private cluster\n private_cluster_enabled = conf.get(\"private_cluster_enabled\", [False])[0]\n if 
private_cluster_enabled:\n return CheckResult.PASSED\n return super().scan_resource_conf(conf)\n\n\ncheck = AKSApiServerAuthorizedIpRanges()\n", "path": "checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py"}]}
| 1,317 | 281 |
gh_patches_debug_48523
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6488
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support Node v16
Currently building the Meltano UI with Node 16 results in a crash while building libsass. [That library is deprecated](https://sass-lang.com/blog/libsass-is-deprecated), so we should switch to using Dart-sass instead.
CC @alexmarple
</issue>
<code>
[start of scripts/alembic_freeze.py]
1 #!/usr/bin/env python3
2
3 """Script to freeze the Meltano database - executed by the Makefile."""
4
5 from __future__ import annotations
6
7 from alembic.script import ScriptDirectory
8
9 from meltano.migrations import LOCK_PATH, MIGRATION_DIR
10
11 scripts = ScriptDirectory(str(MIGRATION_DIR))
12
13 with LOCK_PATH.open("w") as lock:
14 HEAD = scripts.get_current_head()
15 lock.write(HEAD)
16
17 print(f"Meltano database frozen at {HEAD}.")
18
[end of scripts/alembic_freeze.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/alembic_freeze.py b/scripts/alembic_freeze.py
--- a/scripts/alembic_freeze.py
+++ b/scripts/alembic_freeze.py
@@ -1,6 +1,6 @@
#!/usr/bin/env python3
-"""Script to freeze the Meltano database - executed by the Makefile."""
+"""Script to freeze the Meltano database - executed by GitHub CI."""
from __future__ import annotations
|
{"golden_diff": "diff --git a/scripts/alembic_freeze.py b/scripts/alembic_freeze.py\n--- a/scripts/alembic_freeze.py\n+++ b/scripts/alembic_freeze.py\n@@ -1,6 +1,6 @@\n #!/usr/bin/env python3\n \n-\"\"\"Script to freeze the Meltano database - executed by the Makefile.\"\"\"\n+\"\"\"Script to freeze the Meltano database - executed by GitHub CI.\"\"\"\n \n from __future__ import annotations\n", "issue": "Support Node v16\nCurrently building the Meltano UI with Node 16 results in a crash while building libsass. [That library is deprecated](https://sass-lang.com/blog/libsass-is-deprecated), so we should switch to using Dart-sass instead.\r\n\r\nCC @alexmarple \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Script to freeze the Meltano database - executed by the Makefile.\"\"\"\n\nfrom __future__ import annotations\n\nfrom alembic.script import ScriptDirectory\n\nfrom meltano.migrations import LOCK_PATH, MIGRATION_DIR\n\nscripts = ScriptDirectory(str(MIGRATION_DIR))\n\nwith LOCK_PATH.open(\"w\") as lock:\n HEAD = scripts.get_current_head()\n lock.write(HEAD)\n\nprint(f\"Meltano database frozen at {HEAD}.\")\n", "path": "scripts/alembic_freeze.py"}]}
| 737 | 103 |
gh_patches_debug_25821
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-439
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC]add new software in Wiki
#### Describe the issue linked to the documentation
I cannot find the LoudML in the Related Software in wiki page.https://github.com/alan-turing-institute/sktime/wiki/related-software
<!--
Tell us about the confusion introduced in the documentation.
-->
Loud ML is an open source time series inference engine built on top of TensorFlow. It's useful to forecast data, detect outliers, and automate your process using future knowledge. Here is the link: https://github.com/regel/loudml
#### Suggest a potential alternative/fix
Could you add it to the list?
<!--
Tell us how we could improve the documentation in this regard.
-->
</issue>
<code>
[start of docs/source/conf.py]
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Configuration file for the Sphinx documentation builder.
5 #
6 # This file does only contain a selection of the most common options. For a
7 # full list see the documentation:
8 # http://www.sphinx-doc.org/en/master/config
9
10 import os
11 import sys
12
13 import sktime
14
15 # -- Path setup --------------------------------------------------------------
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20
21 # When we build the docs on readthedocs, we build the package and want to
22 # use the built files in order for sphinx to be able to properly read the
23 # Cython files. Hence, we do not add the source code path to the system
24 # path.
25 ON_READTHEDOCS = os.environ.get("READTHEDOCS") == "True"
26 if not ON_READTHEDOCS:
27 sys.path.insert(0, os.path.abspath("../.."))
28
29 # -- Project information -----------------------------------------------------
30 PROJECT = "sktime"
31 COPYRIGHT = "2019 - 2020 (BSD-3-Clause License)"
32 AUTHOR = "sktime developers"
33
34 # The full version, including alpha/beta/rc tags
35 CURRENT_VERSION = f"v{sktime.__version__}"
36
37 # If on readthedocs, and we're building the latest version, update tag to generate
38 # correct links in notebooks
39 if ON_READTHEDOCS:
40 READTHEDOCS_VERSION = os.environ.get("READTHEDOCS_VERSION")
41 if READTHEDOCS_VERSION == "latest":
42 CURRENT_VERSION = "master"
43
44 # -- General configuration ---------------------------------------------------
45
46 # Add any Sphinx extension module names here, as strings. They can be
47 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
48 # ones.
49 extensions = [
50 "sphinx.ext.autodoc",
51 "sphinx.ext.autosummary",
52 "sphinx.ext.intersphinx",
53 "sphinx.ext.autosectionlabel",
54 "sphinx.ext.todo",
55 "sphinx.ext.mathjax",
56 # 'sphinx.ext.viewcode', # link to auto-generated source code files (rst)
57 "sphinx.ext.githubpages",
58 "sphinx.ext.linkcode", # link to GitHub source code via linkcode_resolve()
59 "sphinx.ext.napoleon",
60 "nbsphinx", # integrates example notebooks
61 "m2r2", # markdown rendering
62 ]
63
64 # Add any paths that contain templates here, relative to this directory.
65 templates_path = ["_templates"]
66
67 # The suffix(es) of source filenames.
68 # You can specify multiple suffix as a list of string:
69 source_suffix = {
70 ".rst": "restructuredtext",
71 ".md": "markdown",
72 }
73
74 # The master toctree document.
75 master_doc = "index"
76
77 # The language for content autogenerated by Sphinx. Refer to documentation
78 # for a list of supported languages.
79 #
80 # This is also used if you do content translation via gettext catalogs.
81 # Usually you set "language" from the command line for these cases.
82 language = None
83
84 # List of patterns, relative to source directory, that match files and
85 # directories to ignore when looking for source files.
86 # This pattern also affects html_static_path and html_extra_path.
87 exclude_patterns = ["_build", ".ipynb_checkpoints", "Thumbs.db", ".DS_Store"]
88
89 # The name of the Pygments (syntax highlighting) style to use.
90 pygments_style = "sphinx"
91
92 # see http://stackoverflow.com/q/12206334/562769
93 numpydoc_show_class_members = True
94 numpydoc_class_members_toctree = False
95
96 # generate autosummary even if no references
97 autosummary_generate = True
98 autodoc_default_flags = ["members", "inherited-members"]
99
100
101 def linkcode_resolve(domain, info):
102 def find_source():
103 # try to find the file and line number, based on code from numpy:
104 # https://github.com/numpy/numpy/blob/master/doc/source/conf.py#L286
105 obj = sys.modules[info["module"]]
106 for part in info["fullname"].split("."):
107 obj = getattr(obj, part)
108 import inspect
109 import os
110
111 fn = inspect.getsourcefile(obj)
112 fn = os.path.relpath(fn, start=os.path.dirname(sktime.__file__))
113 source, lineno = inspect.getsourcelines(obj)
114 return fn, lineno, lineno + len(source) - 1
115
116 if domain != "py" or not info["module"]:
117 return None
118 try:
119 filename = "sktime/%s#L%d-L%d" % find_source()
120 except Exception:
121 filename = info["module"].replace(".", "/") + ".py"
122 return "https://github.com/alan-turing-institute/sktime/blob/%s/%s" % (
123 CURRENT_VERSION,
124 filename,
125 )
126
127
128 # -- Options for HTML output -------------------------------------------------
129
130 # The theme to use for HTML and HTML Help pages. See the documentation for
131 # a list of builtin themes.
132
133 html_theme = "sphinx_rtd_theme"
134 # html_theme = 'bootstrap'
135
136 # Theme options are theme-specific and customize the look and feel of a theme
137 # further. For a list of options available for each theme, see the
138 # documentation.
139
140 html_theme_options = {
141 "prev_next_buttons_location": None,
142 }
143
144 html_favicon = "images/sktime-favicon.ico"
145
146 # Add any paths that contain custom static files (such as style sheets) here,
147 # relative to this directory. They are copied after the builtin static files,
148 # so a file named "default.css" will overwrite the builtin "default.css".
149 html_static_path = ["_static"]
150
151 # Custom sidebar templates, must be a dictionary that maps document names
152 # to template names.
153 #
154 # The default sidebars (for documents that don't match any pattern) are
155 # defined by theme itself. Builtin themes are using these templates by
156 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
157 # 'searchbox.html']``.
158 #
159 # html_sidebars = {}
160
161 html_show_sourcelink = False
162
163 # -- Options for HTMLHelp output ---------------------------------------------
164
165 # Output file base name for HTML help builder.
166 htmlhelp_basename = "sktimedoc"
167
168 # -- Options for LaTeX output ------------------------------------------------
169
170 latex_elements = {
171 # The paper size ('letterpaper' or 'a4paper').
172 # 'papersize': 'letterpaper',
173 # The font size ('10pt', '11pt' or '12pt').
174 # 'pointsize': '10pt',
175 # Additional stuff for the LaTeX preamble.
176 # 'preamble': '',
177 # Latex figure (float) alignment
178 # 'figure_align': 'htbp',
179 }
180
181 # Grouping the document tree into LaTeX files. List of tuples
182 # (source start file, target name, title,
183 # author, documentclass [howto, manual, or own class]).
184 latex_documents = [
185 (master_doc, "sktime.tex", "sktime Documentation", "sktime developers", "manual"),
186 ]
187
188 # -- Options for manual page output ------------------------------------------
189
190 # One entry per manual page. List of tuples
191 # (source start file, name, description, authors, manual section).
192 man_pages = [(master_doc, "sktime", "sktime Documentation", [AUTHOR], 1)]
193
194 # -- Options for Texinfo output ----------------------------------------------
195
196 # Grouping the document tree into Texinfo files. List of tuples
197 # (source start file, target name, title, author,
198 # dir menu entry, description, category)
199 texinfo_documents = [
200 (
201 master_doc,
202 "sktime",
203 "sktime Documentation",
204 AUTHOR,
205 "sktime",
206 "One line description of project.",
207 "Miscellaneous",
208 ),
209 ]
210
211
212 def setup(app):
213 def adds(pth):
214 print("Adding stylesheet: %s" % pth) # noqa: T001
215 app.add_css_file(pth)
216
217 adds("fields.css") # for parameters, etc.
218
219
220 # -- Extension configuration -------------------------------------------------
221
222 # -- Options for nbsphinx extension ---------------------------------------
223 nbsphinx_execute = "always" # whether or not to run notebooks
224 nbsphinx_allow_errors = False # False
225 nbsphinx_timeout = 600 # time out in secs, set to -1 to disable timeout
226
227 # add Binder launch buttom at the top
228 CURRENT_FILE = "{{ env.doc2path( env.docname, base=None) }}"
229
230 # make sure Binder points to latest stable release, not master
231 BINDER_URL = f"https://mybinder.org/v2/gh/alan-turing-institute/sktime/{CURRENT_VERSION}?filepath={CURRENT_FILE}" # noqa
232 nbsphinx_prolog = f"""
233 .. |binder| image:: https://mybinder.org/badge_logo.svg
234 .. _Binder: {BINDER_URL}
235
236 |Binder|_
237 """
238
239 # add link to original notebook at the bottom
240 NOTEBOOK_URL = f"https://github.com/alan-turing-institute/sktime/tree/{CURRENT_VERSION}/{CURRENT_FILE}" # noqa
241 nbsphinx_epilog = f"""
242 ----
243
244 Generated by nbsphinx_. The Jupyter notebook can be found here_.
245
246 .. _here: {NOTEBOOK_URL}
247 .. _nbsphinx: https://nbsphinx.readthedocs.io/
248 """
249
250 # -- Options for intersphinx extension ---------------------------------------
251
252 # Example configuration for intersphinx: refer to the Python standard library.
253 intersphinx_mapping = {"https://docs.python.org/": None}
254
255 # -- Options for _todo extension ----------------------------------------------
256 todo_include_todos = False
257
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -27,9 +27,9 @@
sys.path.insert(0, os.path.abspath("../.."))
# -- Project information -----------------------------------------------------
-PROJECT = "sktime"
-COPYRIGHT = "2019 - 2020 (BSD-3-Clause License)"
-AUTHOR = "sktime developers"
+project = "sktime"
+copyright = "2019 - 2020 (BSD-3-Clause License)"
+author = "sktime developers"
# The full version, including alpha/beta/rc tags
CURRENT_VERSION = f"v{sktime.__version__}"
@@ -189,7 +189,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
-man_pages = [(master_doc, "sktime", "sktime Documentation", [AUTHOR], 1)]
+man_pages = [(master_doc, "sktime", "sktime Documentation", [author], 1)]
# -- Options for Texinfo output ----------------------------------------------
@@ -201,7 +201,7 @@
master_doc,
"sktime",
"sktime Documentation",
- AUTHOR,
+ author,
"sktime",
"One line description of project.",
"Miscellaneous",
|
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -27,9 +27,9 @@\n sys.path.insert(0, os.path.abspath(\"../..\"))\n \n # -- Project information -----------------------------------------------------\n-PROJECT = \"sktime\"\n-COPYRIGHT = \"2019 - 2020 (BSD-3-Clause License)\"\n-AUTHOR = \"sktime developers\"\n+project = \"sktime\"\n+copyright = \"2019 - 2020 (BSD-3-Clause License)\"\n+author = \"sktime developers\"\n \n # The full version, including alpha/beta/rc tags\n CURRENT_VERSION = f\"v{sktime.__version__}\"\n@@ -189,7 +189,7 @@\n \n # One entry per manual page. List of tuples\n # (source start file, name, description, authors, manual section).\n-man_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [AUTHOR], 1)]\n+man_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [author], 1)]\n \n # -- Options for Texinfo output ----------------------------------------------\n \n@@ -201,7 +201,7 @@\n master_doc,\n \"sktime\",\n \"sktime Documentation\",\n- AUTHOR,\n+ author,\n \"sktime\",\n \"One line description of project.\",\n \"Miscellaneous\",\n", "issue": "[DOC]add new software in Wiki\n#### Describe the issue linked to the documentation\r\nI cannot find the LoudML in the Related Software in wiki page.https://github.com/alan-turing-institute/sktime/wiki/related-software\r\n<!--\r\nTell us about the confusion introduced in the documentation.\r\n-->\r\nLoud ML is an open source time series inference engine built on top of TensorFlow. It's useful to forecast data, detect outliers, and automate your process using future knowledge. here is link\uff1a https://github.com/regel/loudml\r\n#### Suggest a potential alternative/fix\r\ncould you add it into the list?\r\n<!--\r\nTell us how we could improve the documentation in this regard.\r\n-->\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\nimport os\nimport sys\n\nimport sktime\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n# When we build the docs on readthedocs, we build the package and want to\n# use the built files in order for sphinx to be able to properly read the\n# Cython files. 
Hence, we do not add the source code path to the system\n# path.\nON_READTHEDOCS = os.environ.get(\"READTHEDOCS\") == \"True\"\nif not ON_READTHEDOCS:\n sys.path.insert(0, os.path.abspath(\"../..\"))\n\n# -- Project information -----------------------------------------------------\nPROJECT = \"sktime\"\nCOPYRIGHT = \"2019 - 2020 (BSD-3-Clause License)\"\nAUTHOR = \"sktime developers\"\n\n# The full version, including alpha/beta/rc tags\nCURRENT_VERSION = f\"v{sktime.__version__}\"\n\n# If on readthedocs, and we're building the latest version, update tag to generate\n# correct links in notebooks\nif ON_READTHEDOCS:\n READTHEDOCS_VERSION = os.environ.get(\"READTHEDOCS_VERSION\")\n if READTHEDOCS_VERSION == \"latest\":\n CURRENT_VERSION = \"master\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.mathjax\",\n # 'sphinx.ext.viewcode', # link to auto-generated source code files (rst)\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.linkcode\", # link to GitHub source code via linkcode_resolve()\n \"sphinx.ext.napoleon\",\n \"nbsphinx\", # integrates example notebooks\n \"m2r2\", # markdown rendering\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \".ipynb_checkpoints\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# see http://stackoverflow.com/q/12206334/562769\nnumpydoc_show_class_members = True\nnumpydoc_class_members_toctree = False\n\n# generate autosummary even if no references\nautosummary_generate = True\nautodoc_default_flags = [\"members\", \"inherited-members\"]\n\n\ndef linkcode_resolve(domain, info):\n def find_source():\n # try to find the file and line number, based on code from numpy:\n # https://github.com/numpy/numpy/blob/master/doc/source/conf.py#L286\n obj = sys.modules[info[\"module\"]]\n for part in info[\"fullname\"].split(\".\"):\n obj = getattr(obj, part)\n import inspect\n import os\n\n fn = inspect.getsourcefile(obj)\n fn = os.path.relpath(fn, start=os.path.dirname(sktime.__file__))\n source, lineno = inspect.getsourcelines(obj)\n return fn, lineno, lineno + len(source) - 1\n\n if domain != \"py\" or not info[\"module\"]:\n return None\n try:\n filename = \"sktime/%s#L%d-L%d\" % find_source()\n except Exception:\n filename = info[\"module\"].replace(\".\", \"/\") + \".py\"\n return \"https://github.com/alan-turing-institute/sktime/blob/%s/%s\" % (\n CURRENT_VERSION,\n filename,\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n\nhtml_theme = \"sphinx_rtd_theme\"\n# html_theme = 'bootstrap'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n\nhtml_theme_options = {\n \"prev_next_buttons_location\": None,\n}\n\nhtml_favicon = \"images/sktime-favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\nhtml_show_sourcelink = False\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"sktimedoc\"\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"sktime.tex\", \"sktime Documentation\", \"sktime developers\", \"manual\"),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [AUTHOR], 1)]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"sktime\",\n \"sktime Documentation\",\n AUTHOR,\n \"sktime\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\ndef setup(app):\n def adds(pth):\n print(\"Adding stylesheet: %s\" % pth) # noqa: T001\n app.add_css_file(pth)\n\n adds(\"fields.css\") # for parameters, etc.\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for nbsphinx extension ---------------------------------------\nnbsphinx_execute = \"always\" # whether or not to run notebooks\nnbsphinx_allow_errors = False # False\nnbsphinx_timeout = 600 # time out in secs, set to -1 to disable timeout\n\n# add Binder launch buttom at the top\nCURRENT_FILE = \"{{ env.doc2path( env.docname, base=None) }}\"\n\n# make sure Binder points to latest stable release, not master\nBINDER_URL = f\"https://mybinder.org/v2/gh/alan-turing-institute/sktime/{CURRENT_VERSION}?filepath={CURRENT_FILE}\" # noqa\nnbsphinx_prolog = f\"\"\"\n.. |binder| image:: https://mybinder.org/badge_logo.svg\n.. _Binder: {BINDER_URL}\n\n|Binder|_\n\"\"\"\n\n# add link to original notebook at the bottom\nNOTEBOOK_URL = f\"https://github.com/alan-turing-institute/sktime/tree/{CURRENT_VERSION}/{CURRENT_FILE}\" # noqa\nnbsphinx_epilog = f\"\"\"\n----\n\nGenerated by nbsphinx_. The Jupyter notebook can be found here_.\n\n.. _here: {NOTEBOOK_URL}\n.. _nbsphinx: https://nbsphinx.readthedocs.io/\n\"\"\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/\": None}\n\n# -- Options for _todo extension ----------------------------------------------\ntodo_include_todos = False\n", "path": "docs/source/conf.py"}]}
| 3,478 | 313 |
gh_patches_debug_2696
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-7080
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-2225] [Bug] Suddenly getting ModuleNotFoundError: No module named 'pytz'
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
I am installing dbt-bigquery with meltano (which installs it in an isolated *venv*).
Today when invoking `dbt deps` using `meltano invoke dbt-bigquery:deps` I am getting a stacktrace with
ModuleNotFoundError: No module named 'pytz'
### Expected Behavior
`pytz` should be found. I have noted that it is not included in the requirements. So while it's strange that it suddenly started failing, maybe it was more of an accident that it ever worked in the first place?
### Steps To Reproduce
With versions specified as
dbt-core~=1.3.0
dbt-bigquery~=1.3.0
invoking `dbt deps` should not throw a ModuleNotFoundError
### Relevant log output
```shell
Traceback (most recent call last):
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/bin/dbt", line 5, in <module>
from dbt.main import main
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/main.py", line 24, in <module>
import dbt.task.build as build_task
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/build.py", line 1, in <module>
from .run import RunTask, ModelRunner as run_model_runner
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/run.py", line 8, in <module>
from .compile import CompileRunner, CompileTask
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/compile.py", line 4, in <module>
from .runnable import GraphRunnableTask
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/runnable.py", line 11, in <module>
from .printer import (
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/printer.py", line 22, in <module>
from dbt.tracking import InvocationProcessor
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/tracking.py", line 25, in <module>
import pytz
ModuleNotFoundError: No module named 'pytz'
```
### Environment
```markdown
- OS: Linux (fresh docker container inside virtual environment)
- Python: 3.9
- dbt: 1.3.1 (~=1.3.0)
```
### Which database adapter are you using with dbt?
other (mention it in "Additional Context")
### Additional Context
_No response_
</issue>
<code>
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 7, 2):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.7.2 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.2.4"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": [
47 "dbt = dbt.main:main",
48 ],
49 },
50 install_requires=[
51 "Jinja2==2.11.3",
52 "MarkupSafe>=0.23,<2.1",
53 "agate>=1.6,<1.6.4",
54 "click>=7.0,<9",
55 "colorama>=0.3.9,<0.4.6",
56 "hologram>=0.0.14,<=0.0.15",
57 "isodate>=0.6,<0.7",
58 "logbook>=1.5,<1.6",
59 "mashumaro==2.9",
60 "minimal-snowplow-tracker==0.0.2",
61 "networkx>=2.3,<2.8.1;python_version<'3.8'",
62 "networkx>=2.3,<3;python_version>='3.8'",
63 "packaging>=20.9,<22.0",
64 "sqlparse>=0.2.3,<0.5",
65 "dbt-extractor~=0.4.1",
66 "typing-extensions>=3.7.4",
67 "werkzeug>=1,<3",
68 # the following are all to match snowflake-connector-python
69 "requests<3.0.0",
70 "idna>=2.5,<4",
71 "cffi>=1.9,<2.0.0",
72 ],
73 zip_safe=False,
74 classifiers=[
75 "Development Status :: 5 - Production/Stable",
76 "License :: OSI Approved :: Apache Software License",
77 "Operating System :: Microsoft :: Windows",
78 "Operating System :: MacOS :: MacOS X",
79 "Operating System :: POSIX :: Linux",
80 "Programming Language :: Python :: 3.7",
81 "Programming Language :: Python :: 3.8",
82 "Programming Language :: Python :: 3.9",
83 "Programming Language :: Python :: 3.10",
84 ],
85 python_requires=">=3.7.2",
86 )
87
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -65,6 +65,7 @@
"dbt-extractor~=0.4.1",
"typing-extensions>=3.7.4",
"werkzeug>=1,<3",
+ "pytz>=2015.7",
# the following are all to match snowflake-connector-python
"requests<3.0.0",
"idna>=2.5,<4",
|
{"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -65,6 +65,7 @@\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n+ \"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n", "issue": "[CT-2225] [Bug] Suddenly getting ModuleNotFoundError: No module named 'pytz'\n### Is this a new bug in dbt-core?\n\n- [X] I believe this is a new bug in dbt-core\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\n\n### Current Behavior\n\nI am installing dbt-bigquery with meltano (which installs it in a isolated *venv*).\r\n\r\nToday when invoking `dbt deps` using `meltano invoke dbt-bigquery:deps` I am getting a stacktrace with \r\n\r\n ModuleNotFoundError: No module named 'pytz'\r\n\r\n\n\n### Expected Behavior\n\n`pytz` should be found. I have noted that it is not included in the requirements. So while it's strange that it suddenly started failing, maybe it was more of an accident that it ever worked in the first place?\n\n### Steps To Reproduce\n\nWith versions specified as\r\n\r\n dbt-core~=1.3.0\r\n dbt-bigquery~=1.3.0\r\n\r\ninvoking `dbt deps` should not throw a ModuleNotFoundError\n\n### Relevant log output\n\n```shell\nTraceback (most recent call last):\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/bin/dbt\", line 5, in <module>\r\n from dbt.main import main\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/main.py\", line 24, in <module>\r\n import dbt.task.build as build_task\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/build.py\", line 1, in <module>\r\n from .run import RunTask, ModelRunner as run_model_runner\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/run.py\", line 8, in <module>\r\n from .compile import CompileRunner, CompileTask\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/compile.py\", line 4, in <module>\r\n from .runnable import GraphRunnableTask\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/runnable.py\", line 11, in <module>\r\n from .printer import (\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/printer.py\", line 22, in <module>\r\n from dbt.tracking import InvocationProcessor\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/tracking.py\", line 25, in <module>\r\n import pytz\r\nModuleNotFoundError: No module named 'pytz'\n```\n\n\n### Environment\n\n```markdown\n- OS: Linux (fresh docker container inside virtual environment)\r\n- Python: 3.9\r\n- dbt: 1.3.1 (~=1.3.0)\n```\n\n\n### Which database adapter are you using with dbt?\n\nother (mention it in \"Additional Context\")\n\n### Additional Context\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or 
higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.2.4\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n install_requires=[\n \"Jinja2==2.11.3\",\n \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro==2.9\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}]}
| 2,214 | 118 |
gh_patches_debug_21129
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1626
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
images.build failing on container build output
The ImageCollection.build() method parses the `docker build` output to extract the name of the image being built by looking for the string "Successfully built {sha256}". Some commands in a Dockerfile can also produce this string, causing the build() method to return early with an incorrect image name.
Example output from a `RUN pip install cffi` command:
```
...
Building wheels for collected packages: cffi, pycparser
Running setup.py bdist_wheel for cffi
Stored in directory: /root/.cache/pip/wheels/ea/df/5c/7d11cd4eec7b94be3b0ec4ed0076e0f38846a22c7200576149
Running setup.py bdist_wheel for pycparser
Stored in directory: /root/.cache/pip/wheels/bc/07/7e/cd81c0b23417ebaed9eb2584c00bd539871d5309a7a2324953
Successfully built cffi pycparser
...
```
A partial fix is to make the regex used to search for the string more restrictive by adding start and end anchors. This can still result in some false positives.
A more complete fix only checks the last line of output for the "Success..." string.
PR in progress.
```
docker==2.3.0
docker-pycreds==0.2.1
Python 2.7.5
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-16.el7.x86_64
Go version: go1.7.4
Git commit: 3a094bd/1.12.6
Built: Tue Mar 21 13:30:59 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.1-ce
API version: 1.27
Package version:
Go version: go1.7.5
Git commit: c6d412e
Built: Fri Mar 24 00:00:50 2017
OS/Arch: linux/amd64
```
</issue>
<code>
[start of docker/models/images.py]
1 import re
2
3 import six
4
5 from ..api import APIClient
6 from ..errors import BuildError
7 from ..utils.json_stream import json_stream
8 from .resource import Collection, Model
9
10
11 class Image(Model):
12 """
13 An image on the server.
14 """
15 def __repr__(self):
16 return "<%s: '%s'>" % (self.__class__.__name__, "', '".join(self.tags))
17
18 @property
19 def labels(self):
20 """
21 The labels of an image as dictionary.
22 """
23 result = self.attrs['Config'].get('Labels')
24 return result or {}
25
26 @property
27 def short_id(self):
28 """
29 The ID of the image truncated to 10 characters, plus the ``sha256:``
30 prefix.
31 """
32 if self.id.startswith('sha256:'):
33 return self.id[:17]
34 return self.id[:10]
35
36 @property
37 def tags(self):
38 """
39 The image's tags.
40 """
41 tags = self.attrs.get('RepoTags')
42 if tags is None:
43 tags = []
44 return [tag for tag in tags if tag != '<none>:<none>']
45
46 def history(self):
47 """
48 Show the history of an image.
49
50 Returns:
51 (str): The history of the image.
52
53 Raises:
54 :py:class:`docker.errors.APIError`
55 If the server returns an error.
56 """
57 return self.client.api.history(self.id)
58
59 def save(self):
60 """
61 Get a tarball of an image. Similar to the ``docker save`` command.
62
63 Returns:
64 (urllib3.response.HTTPResponse object): The response from the
65 daemon.
66
67 Raises:
68 :py:class:`docker.errors.APIError`
69 If the server returns an error.
70
71 Example:
72
73 >>> image = cli.images.get("fedora:latest")
74 >>> resp = image.save()
75 >>> f = open('/tmp/fedora-latest.tar', 'w')
76 >>> for chunk in resp.stream():
77 >>> f.write(chunk)
78 >>> f.close()
79 """
80 return self.client.api.get_image(self.id)
81
82 def tag(self, repository, tag=None, **kwargs):
83 """
84 Tag this image into a repository. Similar to the ``docker tag``
85 command.
86
87 Args:
88 repository (str): The repository to set for the tag
89 tag (str): The tag name
90 force (bool): Force
91
92 Raises:
93 :py:class:`docker.errors.APIError`
94 If the server returns an error.
95
96 Returns:
97 (bool): ``True`` if successful
98 """
99 self.client.api.tag(self.id, repository, tag=tag, **kwargs)
100
101
102 class ImageCollection(Collection):
103 model = Image
104
105 def build(self, **kwargs):
106 """
107 Build an image and return it. Similar to the ``docker build``
108 command. Either ``path`` or ``fileobj`` must be set.
109
110 If you have a tar file for the Docker build context (including a
111 Dockerfile) already, pass a readable file-like object to ``fileobj``
112 and also pass ``custom_context=True``. If the stream is compressed
113 also, set ``encoding`` to the correct value (e.g ``gzip``).
114
115 If you want to get the raw output of the build, use the
116 :py:meth:`~docker.api.build.BuildApiMixin.build` method in the
117 low-level API.
118
119 Args:
120 path (str): Path to the directory containing the Dockerfile
121 fileobj: A file object to use as the Dockerfile. (Or a file-like
122 object)
123 tag (str): A tag to add to the final image
124 quiet (bool): Whether to return the status
125 nocache (bool): Don't use the cache when set to ``True``
126 rm (bool): Remove intermediate containers. The ``docker build``
127 command now defaults to ``--rm=true``, but we have kept the old
128 default of `False` to preserve backward compatibility
129 stream (bool): *Deprecated for API version > 1.8 (always True)*.
130 Return a blocking generator you can iterate over to retrieve
131 build output as it happens
132 timeout (int): HTTP timeout
133 custom_context (bool): Optional if using ``fileobj``
134 encoding (str): The encoding for a stream. Set to ``gzip`` for
135 compressing
136 pull (bool): Downloads any updates to the FROM image in Dockerfiles
137 forcerm (bool): Always remove intermediate containers, even after
138 unsuccessful builds
139 dockerfile (str): path within the build context to the Dockerfile
140 buildargs (dict): A dictionary of build arguments
141 container_limits (dict): A dictionary of limits applied to each
142 container created by the build process. Valid keys:
143
144 - memory (int): set memory limit for build
145 - memswap (int): Total memory (memory + swap), -1 to disable
146 swap
147 - cpushares (int): CPU shares (relative weight)
148 - cpusetcpus (str): CPUs in which to allow execution, e.g.,
149 ``"0-3"``, ``"0,1"``
150 decode (bool): If set to ``True``, the returned stream will be
151 decoded into dicts on the fly. Default ``False``.
152 cache_from (list): A list of images used for build cache
153 resolution.
154 target (str): Name of the build-stage to build in a multi-stage
155 Dockerfile.
156
157 Returns:
158 (:py:class:`Image`): The built image.
159
160 Raises:
161 :py:class:`docker.errors.BuildError`
162 If there is an error during the build.
163 :py:class:`docker.errors.APIError`
164 If the server returns any other error.
165 ``TypeError``
166 If neither ``path`` nor ``fileobj`` is specified.
167 """
168 resp = self.client.api.build(**kwargs)
169 if isinstance(resp, six.string_types):
170 return self.get(resp)
171 last_event = None
172 for chunk in json_stream(resp):
173 if 'error' in chunk:
174 raise BuildError(chunk['error'])
175 if 'stream' in chunk:
176 match = re.search(
177 r'(Successfully built |sha256:)([0-9a-f]+)',
178 chunk['stream']
179 )
180 if match:
181 image_id = match.group(2)
182 return self.get(image_id)
183 last_event = chunk
184
185 raise BuildError(last_event or 'Unknown')
186
187 def get(self, name):
188 """
189 Gets an image.
190
191 Args:
192 name (str): The name of the image.
193
194 Returns:
195 (:py:class:`Image`): The image.
196
197 Raises:
198 :py:class:`docker.errors.ImageNotFound`
199 If the image does not exist.
200 :py:class:`docker.errors.APIError`
201 If the server returns an error.
202 """
203 return self.prepare_model(self.client.api.inspect_image(name))
204
205 def list(self, name=None, all=False, filters=None):
206 """
207 List images on the server.
208
209 Args:
210 name (str): Only show images belonging to the repository ``name``
211 all (bool): Show intermediate image layers. By default, these are
212 filtered out.
213 filters (dict): Filters to be processed on the image list.
214 Available filters:
215 - ``dangling`` (bool)
216 - ``label`` (str): format either ``key`` or ``key=value``
217
218 Returns:
219 (list of :py:class:`Image`): The images.
220
221 Raises:
222 :py:class:`docker.errors.APIError`
223 If the server returns an error.
224 """
225 resp = self.client.api.images(name=name, all=all, filters=filters)
226 return [self.prepare_model(r) for r in resp]
227
228 def load(self, data):
229 """
230 Load an image that was previously saved using
231 :py:meth:`~docker.models.images.Image.save` (or ``docker save``).
232 Similar to ``docker load``.
233
234 Args:
235 data (binary): Image data to be loaded.
236
237 Raises:
238 :py:class:`docker.errors.APIError`
239 If the server returns an error.
240 """
241 return self.client.api.load_image(data)
242
243 def pull(self, name, tag=None, **kwargs):
244 """
245 Pull an image of the given name and return it. Similar to the
246 ``docker pull`` command.
247
248 If you want to get the raw pull output, use the
249 :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the
250 low-level API.
251
252 Args:
253 repository (str): The repository to pull
254 tag (str): The tag to pull
255 insecure_registry (bool): Use an insecure registry
256 auth_config (dict): Override the credentials that
257 :py:meth:`~docker.client.DockerClient.login` has set for
258 this request. ``auth_config`` should contain the ``username``
259 and ``password`` keys to be valid.
260
261 Returns:
262 (:py:class:`Image`): The image that has been pulled.
263
264 Raises:
265 :py:class:`docker.errors.APIError`
266 If the server returns an error.
267
268 Example:
269
270 >>> image = client.images.pull('busybox')
271 """
272 self.client.api.pull(name, tag=tag, **kwargs)
273 return self.get('{0}:{1}'.format(name, tag) if tag else name)
274
275 def push(self, repository, tag=None, **kwargs):
276 return self.client.api.push(repository, tag=tag, **kwargs)
277 push.__doc__ = APIClient.push.__doc__
278
279 def remove(self, *args, **kwargs):
280 self.client.api.remove_image(*args, **kwargs)
281 remove.__doc__ = APIClient.remove_image.__doc__
282
283 def search(self, *args, **kwargs):
284 return self.client.api.search(*args, **kwargs)
285 search.__doc__ = APIClient.search.__doc__
286
287 def prune(self, filters=None):
288 return self.client.api.prune_images(filters=filters)
289 prune.__doc__ = APIClient.prune_images.__doc__
290
[end of docker/models/images.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docker/models/images.py b/docker/models/images.py
--- a/docker/models/images.py
+++ b/docker/models/images.py
@@ -169,19 +169,20 @@
if isinstance(resp, six.string_types):
return self.get(resp)
last_event = None
+ image_id = None
for chunk in json_stream(resp):
if 'error' in chunk:
raise BuildError(chunk['error'])
if 'stream' in chunk:
match = re.search(
- r'(Successfully built |sha256:)([0-9a-f]+)',
+ r'(^Successfully built |sha256:)([0-9a-f]+)$',
chunk['stream']
)
if match:
image_id = match.group(2)
- return self.get(image_id)
last_event = chunk
-
+ if image_id:
+ return self.get(image_id)
raise BuildError(last_event or 'Unknown')
def get(self, name):
|
{"golden_diff": "diff --git a/docker/models/images.py b/docker/models/images.py\n--- a/docker/models/images.py\n+++ b/docker/models/images.py\n@@ -169,19 +169,20 @@\n if isinstance(resp, six.string_types):\n return self.get(resp)\n last_event = None\n+ image_id = None\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n if 'stream' in chunk:\n match = re.search(\n- r'(Successfully built |sha256:)([0-9a-f]+)',\n+ r'(^Successfully built |sha256:)([0-9a-f]+)$',\n chunk['stream']\n )\n if match:\n image_id = match.group(2)\n- return self.get(image_id)\n last_event = chunk\n-\n+ if image_id:\n+ return self.get(image_id)\n raise BuildError(last_event or 'Unknown')\n \n def get(self, name):\n", "issue": "images.build failing on container build output\nThe ImageCollection.build() method parses the `docker build` output to extract the name of the image being built by looking for the string \"Successfully built {sha256}\". Some commands in a Dockerfile can also produce this string causing the build() method to return early with an incorrect image name.\r\n\r\nExample output from a `RUN pip install cffi` command:\r\n```\r\n ...\r\nBuilding wheels for collected packages: cffi, pycparser\r\n Running setup.py bdist_wheel for cffi\r\n Stored in directory: /root/.cache/pip/wheels/ea/df/5c/7d11cd4eec7b94be3b0ec4ed0076e0f38846a22c7200576149\r\n Running setup.py bdist_wheel for pycparser\r\n Stored in directory: /root/.cache/pip/wheels/bc/07/7e/cd81c0b23417ebaed9eb2584c00bd539871d5309a7a2324953\r\nSuccessfully built cffi pycparser\r\n...\r\n```\r\n\r\nA partial fix is to make the regex used to search for the string more restrictive by adding start and end anchors. This can still result in some false positives.\r\nA more complete fix only checks the last line of output for the \"Success...\" string.\r\nPR in progress.\r\n\r\n```\r\ndocker==2.3.0\r\ndocker-pycreds==0.2.1\r\nPython 2.7.5\r\nClient:\r\n Version: 1.12.6\r\n API version: 1.24\r\n Package version: docker-common-1.12.6-16.el7.x86_64\r\n Go version: go1.7.4\r\n Git commit: 3a094bd/1.12.6\r\n Built: Tue Mar 21 13:30:59 2017\r\n OS/Arch: linux/amd64\r\n\r\nServer:\r\n Version: 17.03.1-ce\r\n API version: 1.27\r\n Package version: \r\n Go version: go1.7.5\r\n Git commit: c6d412e\r\n Built: Fri Mar 24 00:00:50 2017\r\n OS/Arch: linux/amd64\r\n```\n", "before_files": [{"content": "import re\n\nimport six\n\nfrom ..api import APIClient\nfrom ..errors import BuildError\nfrom ..utils.json_stream import json_stream\nfrom .resource import Collection, Model\n\n\nclass Image(Model):\n \"\"\"\n An image on the server.\n \"\"\"\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, \"', '\".join(self.tags))\n\n @property\n def labels(self):\n \"\"\"\n The labels of an image as dictionary.\n \"\"\"\n result = self.attrs['Config'].get('Labels')\n return result or {}\n\n @property\n def short_id(self):\n \"\"\"\n The ID of the image truncated to 10 characters, plus the ``sha256:``\n prefix.\n \"\"\"\n if self.id.startswith('sha256:'):\n return self.id[:17]\n return self.id[:10]\n\n @property\n def tags(self):\n \"\"\"\n The image's tags.\n \"\"\"\n tags = self.attrs.get('RepoTags')\n if tags is None:\n tags = []\n return [tag for tag in tags if tag != '<none>:<none>']\n\n def history(self):\n \"\"\"\n Show the history of an image.\n\n Returns:\n (str): The history of the image.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return 
self.client.api.history(self.id)\n\n def save(self):\n \"\"\"\n Get a tarball of an image. Similar to the ``docker save`` command.\n\n Returns:\n (urllib3.response.HTTPResponse object): The response from the\n daemon.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = cli.images.get(\"fedora:latest\")\n >>> resp = image.save()\n >>> f = open('/tmp/fedora-latest.tar', 'w')\n >>> for chunk in resp.stream():\n >>> f.write(chunk)\n >>> f.close()\n \"\"\"\n return self.client.api.get_image(self.id)\n\n def tag(self, repository, tag=None, **kwargs):\n \"\"\"\n Tag this image into a repository. Similar to the ``docker tag``\n command.\n\n Args:\n repository (str): The repository to set for the tag\n tag (str): The tag name\n force (bool): Force\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n\n\nclass ImageCollection(Collection):\n model = Image\n\n def build(self, **kwargs):\n \"\"\"\n Build an image and return it. Similar to the ``docker build``\n command. Either ``path`` or ``fileobj`` must be set.\n\n If you have a tar file for the Docker build context (including a\n Dockerfile) already, pass a readable file-like object to ``fileobj``\n and also pass ``custom_context=True``. If the stream is compressed\n also, set ``encoding`` to the correct value (e.g ``gzip``).\n\n If you want to get the raw output of the build, use the\n :py:meth:`~docker.api.build.BuildApiMixin.build` method in the\n low-level API.\n\n Args:\n path (str): Path to the directory containing the Dockerfile\n fileobj: A file object to use as the Dockerfile. (Or a file-like\n object)\n tag (str): A tag to add to the final image\n quiet (bool): Whether to return the status\n nocache (bool): Don't use the cache when set to ``True``\n rm (bool): Remove intermediate containers. The ``docker build``\n command now defaults to ``--rm=true``, but we have kept the old\n default of `False` to preserve backward compatibility\n stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve\n build output as it happens\n timeout (int): HTTP timeout\n custom_context (bool): Optional if using ``fileobj``\n encoding (str): The encoding for a stream. Set to ``gzip`` for\n compressing\n pull (bool): Downloads any updates to the FROM image in Dockerfiles\n forcerm (bool): Always remove intermediate containers, even after\n unsuccessful builds\n dockerfile (str): path within the build context to the Dockerfile\n buildargs (dict): A dictionary of build arguments\n container_limits (dict): A dictionary of limits applied to each\n container created by the build process. Valid keys:\n\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable\n swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g.,\n ``\"0-3\"``, ``\"0,1\"``\n decode (bool): If set to ``True``, the returned stream will be\n decoded into dicts on the fly. 
Default ``False``.\n cache_from (list): A list of images used for build cache\n resolution.\n target (str): Name of the build-stage to build in a multi-stage\n Dockerfile.\n\n Returns:\n (:py:class:`Image`): The built image.\n\n Raises:\n :py:class:`docker.errors.BuildError`\n If there is an error during the build.\n :py:class:`docker.errors.APIError`\n If the server returns any other error.\n ``TypeError``\n If neither ``path`` nor ``fileobj`` is specified.\n \"\"\"\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n last_event = None\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n if 'stream' in chunk:\n match = re.search(\n r'(Successfully built |sha256:)([0-9a-f]+)',\n chunk['stream']\n )\n if match:\n image_id = match.group(2)\n return self.get(image_id)\n last_event = chunk\n\n raise BuildError(last_event or 'Unknown')\n\n def get(self, name):\n \"\"\"\n Gets an image.\n\n Args:\n name (str): The name of the image.\n\n Returns:\n (:py:class:`Image`): The image.\n\n Raises:\n :py:class:`docker.errors.ImageNotFound`\n If the image does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_image(name))\n\n def list(self, name=None, all=False, filters=None):\n \"\"\"\n List images on the server.\n\n Args:\n name (str): Only show images belonging to the repository ``name``\n all (bool): Show intermediate image layers. By default, these are\n filtered out.\n filters (dict): Filters to be processed on the image list.\n Available filters:\n - ``dangling`` (bool)\n - ``label`` (str): format either ``key`` or ``key=value``\n\n Returns:\n (list of :py:class:`Image`): The images.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.images(name=name, all=all, filters=filters)\n return [self.prepare_model(r) for r in resp]\n\n def load(self, data):\n \"\"\"\n Load an image that was previously saved using\n :py:meth:`~docker.models.images.Image.save` (or ``docker save``).\n Similar to ``docker load``.\n\n Args:\n data (binary): Image data to be loaded.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.load_image(data)\n\n def pull(self, name, tag=None, **kwargs):\n \"\"\"\n Pull an image of the given name and return it. Similar to the\n ``docker pull`` command.\n\n If you want to get the raw pull output, use the\n :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the\n low-level API.\n\n Args:\n repository (str): The repository to pull\n tag (str): The tag to pull\n insecure_registry (bool): Use an insecure registry\n auth_config (dict): Override the credentials that\n :py:meth:`~docker.client.DockerClient.login` has set for\n this request. 
``auth_config`` should contain the ``username``\n and ``password`` keys to be valid.\n\n Returns:\n (:py:class:`Image`): The image that has been pulled.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = client.images.pull('busybox')\n \"\"\"\n self.client.api.pull(name, tag=tag, **kwargs)\n return self.get('{0}:{1}'.format(name, tag) if tag else name)\n\n def push(self, repository, tag=None, **kwargs):\n return self.client.api.push(repository, tag=tag, **kwargs)\n push.__doc__ = APIClient.push.__doc__\n\n def remove(self, *args, **kwargs):\n self.client.api.remove_image(*args, **kwargs)\n remove.__doc__ = APIClient.remove_image.__doc__\n\n def search(self, *args, **kwargs):\n return self.client.api.search(*args, **kwargs)\n search.__doc__ = APIClient.search.__doc__\n\n def prune(self, filters=None):\n return self.client.api.prune_images(filters=filters)\n prune.__doc__ = APIClient.prune_images.__doc__\n", "path": "docker/models/images.py"}]}
| 4,025 | 221 |
gh_patches_debug_1373
|
rasdani/github-patches
|
git_diff
|
ros__ros_comm-2007
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rospy import *
Hi,
Doing
```python
from rospy import *
```
raises the following exception:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'rospy' has no attribute 'NodeProxy'
```
After some investigation, `NodeProxy` no longer seems to exist in the codebase. Simply removing it from the exports should do the trick.
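For reference, a minimal sketch (using a throwaway module, not rospy itself) of why a stale name in `__all__` only fails on a star-import:

```python
# fake_pkg/__init__.py -- hypothetical stand-in for rospy's __init__.py
def spin():
    pass

__all__ = ['spin', 'NodeProxy']   # 'NodeProxy' is listed but never defined

# elsewhere:
#   import fake_pkg          -> works, the stale name is never resolved
#   from fake_pkg import *   -> AttributeError: module 'fake_pkg' has no
#                                attribute 'NodeProxy'
```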
</issue>
<code>
[start of clients/rospy/src/rospy/__init__.py]
1 # Software License Agreement (BSD License)
2 #
3 # Copyright (c) 2008, Willow Garage, Inc.
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions
8 # are met:
9 #
10 # * Redistributions of source code must retain the above copyright
11 # notice, this list of conditions and the following disclaimer.
12 # * Redistributions in binary form must reproduce the above
13 # copyright notice, this list of conditions and the following
14 # disclaimer in the documentation and/or other materials provided
15 # with the distribution.
16 # * Neither the name of Willow Garage, Inc. nor the names of its
17 # contributors may be used to endorse or promote products derived
18 # from this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
21 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
22 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
23 # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
24 # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
25 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
26 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
27 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
29 # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
30 # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
31 # POSSIBILITY OF SUCH DAMAGE.
32 #
33 # Copyright (c) 2008, Willow Garage, Inc.
34 # Revision $Id$
35
36 """
37 ROS client library for Python.
38 See U{http://ros.org/wiki/rospy}
39 @author: Ken Conley (kwc)
40 """
41
42 # import symbols into rospy namespace
43 # NOTE: there are much better ways to configure python module
44 # dictionaries, but the rospy codebase isn't quite in shape for that
45 # yet
46
47 from std_msgs.msg import Header
48
49 from .client import spin, myargv, init_node, \
50 get_published_topics, \
51 wait_for_message, \
52 get_master, \
53 on_shutdown, \
54 get_param, get_param_cached, get_param_names, set_param, delete_param, has_param, search_param,\
55 DEBUG, INFO, WARN, ERROR, FATAL
56 from .timer import sleep, Rate, Timer
57 from .core import is_shutdown, signal_shutdown, \
58 get_node_uri, get_ros_root, \
59 logdebug, logwarn, loginfo, logout, logerr, logfatal, \
60 logdebug_throttle, logwarn_throttle, loginfo_throttle, logerr_throttle, logfatal_throttle, \
61 logdebug_throttle_identical, logwarn_throttle_identical, loginfo_throttle_identical, logerr_throttle_identical, logfatal_throttle_identical, \
62 logdebug_once, logwarn_once, loginfo_once, logerr_once, logfatal_once, \
63 parse_rosrpc_uri
64 from .exceptions import *
65 from .msg import AnyMsg
66 from .msproxy import MasterProxy
67 from .names import get_name, get_caller_id, get_namespace, resolve_name, remap_name
68 from .rostime import Time, Duration, get_rostime, get_time
69 from .service import ServiceException
70
71 # - use tcp ros implementation of services
72 from .impl.tcpros_service import Service, ServiceProxy, wait_for_service
73 from .topics import Message, SubscribeListener, Publisher, Subscriber
74
75 ## \defgroup validators Validators
76 ## \defgroup clientapi Client API
77
78 __all__ = [
79 'Header',
80 'spin',
81 'myargv',
82 'init_node',
83 'get_master',
84 'get_published_topics',
85 'wait_for_service',
86 'on_shutdown',
87 'get_param',
88 'get_param_cached',
89 'get_param_names',
90 'set_param',
91 'delete_param',
92 'has_param',
93 'search_param',
94 'sleep',
95 'Rate',
96 'DEBUG',
97 'INFO',
98 'WARN',
99 'ERROR',
100 'FATAL',
101 'is_shutdown',
102 'signal_shutdown',
103 'get_node_uri',
104 'get_ros_root',
105 'logdebug',
106 'logwarn', 'loginfo',
107 'logout', 'logerr', 'logfatal',
108 'logdebug_throttle',
109 'logwarn_throttle', 'loginfo_throttle',
110 'logerr_throttle', 'logfatal_throttle',
111 'logdebug_once',
112 'logwarn_once', 'loginfo_once',
113 'logerr_once', 'logfatal_once',
114 'parse_rosrpc_uri',
115 'MasterProxy',
116 'NodeProxy',
117 'ROSException',
118 'ROSSerializationException',
119 'ROSInitException',
120 'ROSInterruptException',
121 'ROSInternalException',
122 'TransportException',
123 'TransportTerminated',
124 'TransportInitError',
125 'AnyMsg', 'Message',
126 'get_name',
127 'get_caller_id',
128 'get_namespace',
129 'resolve_name',
130 'remap_name',
131 'Time', 'Duration', 'get_rostime', 'get_time',
132 'ServiceException',
133 'Service', 'ServiceProxy',
134 'SubscribeListener', 'Publisher', 'Subscriber',
135 ]
136
[end of clients/rospy/src/rospy/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/clients/rospy/src/rospy/__init__.py b/clients/rospy/src/rospy/__init__.py
--- a/clients/rospy/src/rospy/__init__.py
+++ b/clients/rospy/src/rospy/__init__.py
@@ -113,7 +113,6 @@
'logerr_once', 'logfatal_once',
'parse_rosrpc_uri',
'MasterProxy',
- 'NodeProxy',
'ROSException',
'ROSSerializationException',
'ROSInitException',
|
{"golden_diff": "diff --git a/clients/rospy/src/rospy/__init__.py b/clients/rospy/src/rospy/__init__.py\n--- a/clients/rospy/src/rospy/__init__.py\n+++ b/clients/rospy/src/rospy/__init__.py\n@@ -113,7 +113,6 @@\n 'logerr_once', 'logfatal_once',\n 'parse_rosrpc_uri',\n 'MasterProxy',\n- 'NodeProxy', \n 'ROSException',\n 'ROSSerializationException',\n 'ROSInitException',\n", "issue": "Rospy import *\nHi, \r\n\r\nDoing \r\n```python\r\nfrom rospy import *\r\n```\r\nraises the following exception : \r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nAttributeError: module 'rospy' has no attribute 'NodeProxy'\r\n```\r\nAfter some investigations, `NodeProxy` doesn't seem to exist anymore in the codebase. Simply removing it from the exports should do the trick.\r\n\n", "before_files": [{"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# Revision $Id$\n\n\"\"\"\nROS client library for Python.\nSee U{http://ros.org/wiki/rospy}\n@author: Ken Conley (kwc)\n\"\"\"\n\n# import symbols into rospy namespace\n# NOTE: there are much better ways to configure python module\n# dictionaries, but the rospy codebase isn't quite in shape for that\n# yet\n\nfrom std_msgs.msg import Header\n\nfrom .client import spin, myargv, init_node, \\\n get_published_topics, \\\n wait_for_message, \\\n get_master, \\\n on_shutdown, \\\n get_param, get_param_cached, get_param_names, set_param, delete_param, has_param, search_param,\\\n DEBUG, INFO, WARN, ERROR, FATAL\nfrom .timer import sleep, Rate, Timer\nfrom .core import is_shutdown, signal_shutdown, \\\n get_node_uri, get_ros_root, \\\n logdebug, logwarn, loginfo, logout, logerr, logfatal, \\\n logdebug_throttle, logwarn_throttle, loginfo_throttle, logerr_throttle, logfatal_throttle, \\\n logdebug_throttle_identical, logwarn_throttle_identical, loginfo_throttle_identical, logerr_throttle_identical, logfatal_throttle_identical, \\\n logdebug_once, logwarn_once, loginfo_once, logerr_once, logfatal_once, \\\n parse_rosrpc_uri\nfrom .exceptions import *\nfrom .msg import AnyMsg\nfrom .msproxy import MasterProxy\nfrom .names import get_name, get_caller_id, get_namespace, resolve_name, remap_name\nfrom .rostime import Time, Duration, get_rostime, get_time\nfrom .service import ServiceException\n\n# - use tcp ros implementation of services\nfrom .impl.tcpros_service import Service, ServiceProxy, wait_for_service\nfrom .topics import Message, SubscribeListener, Publisher, Subscriber\n\n## \\defgroup validators Validators\n## \\defgroup clientapi Client API\n\n__all__ = [\n 'Header',\n 'spin',\n 'myargv',\n 'init_node',\n 'get_master',\n 'get_published_topics',\n 'wait_for_service',\n 'on_shutdown',\n 'get_param',\n 'get_param_cached',\n 'get_param_names',\n 'set_param',\n 'delete_param',\n 'has_param',\n 'search_param',\n 'sleep',\n 'Rate',\n 'DEBUG',\n 'INFO',\n 'WARN',\n 'ERROR',\n 'FATAL',\n 'is_shutdown',\n 'signal_shutdown',\n 'get_node_uri',\n 'get_ros_root',\n 'logdebug',\n 'logwarn', 'loginfo',\n 'logout', 'logerr', 'logfatal',\n 'logdebug_throttle',\n 'logwarn_throttle', 'loginfo_throttle',\n 'logerr_throttle', 'logfatal_throttle',\n 'logdebug_once',\n 'logwarn_once', 'loginfo_once',\n 'logerr_once', 'logfatal_once',\n 'parse_rosrpc_uri',\n 'MasterProxy',\n 'NodeProxy', \n 'ROSException',\n 'ROSSerializationException',\n 'ROSInitException',\n 'ROSInterruptException',\n 'ROSInternalException',\n 'TransportException',\n 'TransportTerminated',\n 'TransportInitError',\n 'AnyMsg', 'Message',\n 'get_name',\n 'get_caller_id',\n 'get_namespace',\n 'resolve_name',\n 'remap_name',\n 'Time', 'Duration', 'get_rostime', 'get_time',\n 'ServiceException', \n 'Service', 'ServiceProxy',\n 'SubscribeListener', 'Publisher', 'Subscriber',\n ]\n", "path": "clients/rospy/src/rospy/__init__.py"}]}
| 2,078 | 125 |
gh_patches_debug_2776
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-1505
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
EIT data fails with wcsaxes
The wcs information in the EIT header is not being identified as celestial axes by wcslib (inside astropy.wcs). This means that wcs is not detecting them as angular axes properly, so setting the arcsecond formatting fails.
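As a rough sketch (the file name and figure setup here are placeholders), the angular formatting only works once the coordinate types are forced by hand on the WCSAxes coordinates:

```python
import matplotlib.pyplot as plt
import sunpy.map

smap = sunpy.map.Map("eit_195.fits")      # any EIT FITS file; path is hypothetical
ax = plt.gca(projection=smap.wcs)         # wcsaxes-backed axes

x, y = ax.coords[0], ax.coords[1]
# wcslib does not mark the EIT axes as celestial, so tell wcsaxes explicitly
if x.coord_type != 'longitude':
    x.set_coord_type('longitude', coord_wrap=180.)
if y.coord_type != 'latitude':
    y.set_coord_type('latitude')

# now the 's.s' (arcsecond) formatter no longer fails
x.set_major_formatter('s.s')
y.set_major_formatter('s.s')
```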
</issue>
<code>
[start of sunpy/visualization/wcsaxes_compat.py]
1 # -*- coding: utf-8 -*-
2 """
3 Helpers and Functions to make WCSAxes work in SunPy
4 """
5 import warnings
6
7 import matplotlib.pyplot as plt
8
9 import astropy.units as u
10
11 try:
12 import wcsaxes
13 HAVE_WCSAXES = True
14
15 except ImportError:
16 HAVE_WCSAXES = False
17 warnings.warn("SunPy plotting is improved by installing the WCSAxes module: http://wcsaxes.readthedocs.org/en/latest/index.html")
18
19 FORCE_NO_WCSAXES = False
20
21 __all__ = ['HAVE_WCSAXES', 'is_wcsaxes', 'FORCE_NO_WCSAXES']
22
23 def is_wcsaxes(axes):
24 """
25 Test a matplotlib Axes object to see if it is an instance of WCSAxes
26
27 Parameters
28 ----------
29 axes : matplotlib Axes Object
30 Axes to test
31
32 Returns
33 -------
34 result : bool
35 Result of the test
36 """
37
38 if HAVE_WCSAXES and not FORCE_NO_WCSAXES:
39 return isinstance(axes, wcsaxes.WCSAxes)
40 else:
41 return False
42
43
44 def gca_wcs(wcs, fig=None):
45 """
46 Get the current axes, and return a WCSAxes if possible
47 """
48
49 if not fig:
50 fig = plt.gcf()
51
52 if not len(fig.get_axes()):
53 if HAVE_WCSAXES and not FORCE_NO_WCSAXES:
54 ax = plt.gca(projection=wcs)
55 else:
56 ax = plt.gca()
57
58 else:
59 ax = plt.gca()
60
61 return ax
62
63 def get_world_transform(axes):
64 if is_wcsaxes(axes):
65 transform = axes.get_transform('world')
66 else:
67 transform = axes.transData
68
69 return transform
70
71 def default_wcs_grid(axes):
72 """
73 Apply some default wcsaxes grid formatting
74 """
75 if not isinstance(axes, wcsaxes.WCSAxes):
76 raise TypeError("This axes is not a WCSAxes")
77
78 x = axes.coords[0]
79 y = axes.coords[1]
80
81 x.set_ticks(color='white')
82 y.set_ticks(color='white')
83
84 x.set_ticks_position('bl')
85 y.set_ticks_position('bl')
86
87 x.set_major_formatter('s.s')
88 y.set_major_formatter('s.s')
89
90 axes.coords.grid(color='white', alpha=0.6)
91
92 def wcsaxes_heliographic_overlay(axes):
93 """
94 Draw a heliographic overlay using wcsaxes
95 """
96 overlay = axes.get_coords_overlay('heliographicstonyhurst')
97
98 lon = overlay[0]
99 lat = overlay[1]
100
101 lon.coord_wrap = 180
102 lon.set_major_formatter('dd')
103
104 lon.set_axislabel('Solar Longitude')
105 lat.set_axislabel('Solar Latitude')
106
107 lon.set_ticks_position('tr')
108 lat.set_ticks_position('tr')
109
110 lon.set_ticks(spacing=10. * u.deg, color='white')
111 lat.set_ticks(spacing=10. * u.deg, color='white')
112
113 overlay.grid(color='white', alpha=0.5)
114
115 return overlay
116
[end of sunpy/visualization/wcsaxes_compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py
--- a/sunpy/visualization/wcsaxes_compat.py
+++ b/sunpy/visualization/wcsaxes_compat.py
@@ -84,6 +84,11 @@
x.set_ticks_position('bl')
y.set_ticks_position('bl')
+ if x.coord_type != 'longitude':
+ x.set_coord_type('longitude', coord_wrap=180.)
+ if y.coord_type != 'latitude':
+ y.set_coord_type('latitude')
+
x.set_major_formatter('s.s')
y.set_major_formatter('s.s')
|
{"golden_diff": "diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py\n--- a/sunpy/visualization/wcsaxes_compat.py\n+++ b/sunpy/visualization/wcsaxes_compat.py\n@@ -84,6 +84,11 @@\n x.set_ticks_position('bl')\n y.set_ticks_position('bl')\n \n+ if x.coord_type != 'longitude':\n+ x.set_coord_type('longitude', coord_wrap=180.)\n+ if y.coord_type != 'latitude':\n+ y.set_coord_type('latitude')\n+\n x.set_major_formatter('s.s')\n y.set_major_formatter('s.s')\n", "issue": "EIT data fails with wcsaxes\nThe wcs information in the EIT header is not being identified as celestial axes by wcslib (inside astropy.wcs). This means that wcs is not detecting them as angular axes properly so therefore the set to arcsecond formatting is failing.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nHelpers and Functions to make WCSAxes work in SunPy\n\"\"\"\nimport warnings\n\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\n\ntry:\n import wcsaxes\n HAVE_WCSAXES = True\n\nexcept ImportError:\n HAVE_WCSAXES = False\n warnings.warn(\"SunPy plotting is improved by installing the WCSAxes module: http://wcsaxes.readthedocs.org/en/latest/index.html\")\n\nFORCE_NO_WCSAXES = False\n\n__all__ = ['HAVE_WCSAXES', 'is_wcsaxes', 'FORCE_NO_WCSAXES']\n\ndef is_wcsaxes(axes):\n \"\"\"\n Test a matplotlib Axes object to see if it is an instance of WCSAxes\n\n Parameters\n ----------\n axes : matplotlib Axes Object\n Axes to test\n\n Returns\n -------\n result : bool\n Result of the test\n \"\"\"\n\n if HAVE_WCSAXES and not FORCE_NO_WCSAXES:\n return isinstance(axes, wcsaxes.WCSAxes)\n else:\n return False\n\n\ndef gca_wcs(wcs, fig=None):\n \"\"\"\n Get the current axes, and return a WCSAxes if possible\n \"\"\"\n\n if not fig:\n fig = plt.gcf()\n\n if not len(fig.get_axes()):\n if HAVE_WCSAXES and not FORCE_NO_WCSAXES:\n ax = plt.gca(projection=wcs)\n else:\n ax = plt.gca()\n\n else:\n ax = plt.gca()\n\n return ax\n\ndef get_world_transform(axes):\n if is_wcsaxes(axes):\n transform = axes.get_transform('world')\n else:\n transform = axes.transData\n\n return transform\n\ndef default_wcs_grid(axes):\n \"\"\"\n Apply some default wcsaxes grid formatting\n \"\"\"\n if not isinstance(axes, wcsaxes.WCSAxes):\n raise TypeError(\"This axes is not a WCSAxes\")\n\n x = axes.coords[0]\n y = axes.coords[1]\n\n x.set_ticks(color='white')\n y.set_ticks(color='white')\n\n x.set_ticks_position('bl')\n y.set_ticks_position('bl')\n\n x.set_major_formatter('s.s')\n y.set_major_formatter('s.s')\n\n axes.coords.grid(color='white', alpha=0.6)\n\ndef wcsaxes_heliographic_overlay(axes):\n \"\"\"\n Draw a heliographic overlay using wcsaxes\n \"\"\"\n overlay = axes.get_coords_overlay('heliographicstonyhurst')\n\n lon = overlay[0]\n lat = overlay[1]\n\n lon.coord_wrap = 180\n lon.set_major_formatter('dd')\n\n lon.set_axislabel('Solar Longitude')\n lat.set_axislabel('Solar Latitude')\n\n lon.set_ticks_position('tr')\n lat.set_ticks_position('tr')\n\n lon.set_ticks(spacing=10. * u.deg, color='white')\n lat.set_ticks(spacing=10. * u.deg, color='white')\n\n overlay.grid(color='white', alpha=0.5)\n\n return overlay\n", "path": "sunpy/visualization/wcsaxes_compat.py"}]}
| 1,518 | 146 |
gh_patches_debug_41320
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-1528
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEAT] MixedDataTypes to display a few examples of each type
Current display:

Desired - to see a few unique examples from each type. Currently one has to inspect the column manually in order to get them, e.g.:

We would want the check to display this by itself.
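Roughly, the manual inspection boils down to something like this sketch (the column values are hypothetical; the `is_float` helper mirrors the check's own logic):

```python
import pandas as pd

# hypothetical mixed column standing in for the inspected dataframe column
col = pd.Series(["1", "2.5", "foo", "bar", "3", "baz"]).dropna()

def is_float(x) -> bool:
    try:
        float(x)
        return True
    except ValueError:
        return False

numeric_mask = col.apply(is_float)
number_examples = list(col[numeric_mask].unique()[:3])   # e.g. ['1', '2.5', '3']
string_examples = list(col[~numeric_mask].unique()[:3])  # e.g. ['foo', 'bar', 'baz']
print(number_examples, string_examples)
```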
</issue>
<code>
[start of deepchecks/tabular/checks/data_integrity/mixed_data_types.py]
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """module contains Mixed Types check."""
12 from typing import List, Tuple, Union
13
14 import numpy as np
15 import pandas as pd
16
17 from deepchecks.core import CheckResult, ConditionCategory, ConditionResult
18 from deepchecks.tabular import Context, SingleDatasetCheck
19 from deepchecks.utils.dataframes import select_from_dataframe
20 from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df
21 from deepchecks.utils.strings import format_percent, is_string_column
22 from deepchecks.utils.typing import Hashable
23
24 __all__ = ['MixedDataTypes']
25
26
27 class MixedDataTypes(SingleDatasetCheck):
28 """Detect columns which contain a mix of numerical and string values.
29
30 Parameters
31 ----------
32 columns : Union[Hashable, List[Hashable]] , default: None
33 Columns to check, if none are given checks all columns
34 except ignored ones.
35 ignore_columns : Union[Hashable, List[Hashable]] , default: None
36 Columns to ignore, if none given checks based on columns
37 variable.
38 n_top_columns : int , optional
39 amount of columns to show ordered by feature importance (date, index, label are first)
40 """
41
42 def __init__(
43 self,
44 columns: Union[Hashable, List[Hashable], None] = None,
45 ignore_columns: Union[Hashable, List[Hashable], None] = None,
46 n_top_columns: int = 10,
47 **kwargs
48 ):
49 super().__init__(**kwargs)
50 self.columns = columns
51 self.ignore_columns = ignore_columns
52 self.n_top_columns = n_top_columns
53
54 def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:
55 """Run check.
56
57 Returns
58 -------
59 CheckResult
60 value is a dict where the key is the column name as key and the value is the ratio 'strings' and 'numbers'
61 for any column with mixed data types.
62 numbers will also include hidden numbers in string representation.
63 """
64 if dataset_type == 'train':
65 dataset = context.train
66 else:
67 dataset = context.test
68 features_importance = context.features_importance
69
70 df = select_from_dataframe(dataset.data, self.columns, self.ignore_columns)
71
72 # Result value: { Column Name: {string: pct, numbers: pct}}
73 display_dict = {}
74 result_dict = {}
75
76 for column_name in df.columns:
77 column_data = df[column_name].dropna()
78 mix = self._get_data_mix(column_data)
79 result_dict[column_name] = mix
80 if mix:
81 # Format percents for display
82 display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}
83
84 if display_dict:
85 df_graph = pd.DataFrame.from_dict(display_dict)
86 df_graph = column_importance_sorter_df(df_graph.T, dataset, features_importance,
87 self.n_top_columns).T
88 display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]
89 else:
90 display = None
91
92 return CheckResult(result_dict, display=display)
93
94 @classmethod
95 def _get_data_mix(cls, column_data: pd.Series) -> dict:
96 if is_string_column(column_data):
97 return cls._check_mixed_percentage(column_data)
98 return {}
99
100 @classmethod
101 def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:
102 total_rows = column_data.count()
103
104 def is_float(x) -> bool:
105 try:
106 float(x)
107 return True
108 except ValueError:
109 return False
110
111 nums = sum(column_data.apply(is_float))
112 if nums in (total_rows, 0):
113 return {}
114
115 # Then we've got a mix
116 nums_pct = nums / total_rows
117 strs_pct = (np.abs(nums - total_rows)) / total_rows
118
119 return {'strings': strs_pct, 'numbers': nums_pct}
120
121 def add_condition_rare_type_ratio_not_in_range(self, ratio_range: Tuple[float, float] = (0.01, 0.1)):
122 """Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the "danger zone".
123
124 The "danger zone" represents the following logic - if the rarer data type is, for example, 30% of the data,
125 than the column is presumably supposed to contain both numbers and string values. If the rarer data type is,
126 for example, less than 1% of the data, than it's presumably a contamination, but a negligible one. In the range
127 between, there is a real chance that the rarer data type may represent a problem to model training and
128 inference.
129
130 Parameters
131 ----------
132 ratio_range : Tuple[float, float] , default: (0.01 , 0.1)
133 The range between which the ratio of rarer data type in the column is
134 considered a problem.
135 """
136 def condition(result):
137 no_mix_columns = []
138 failing_columns = []
139 for col, ratios in result.items():
140 # Columns without a mix contains empty dict for ratios
141 if not ratios:
142 no_mix_columns.append(col)
143 continue
144 rarer_ratio = min(ratios['strings'], ratios['numbers'])
145 if ratio_range[0] < rarer_ratio < ratio_range[1]:
146 failing_columns.append(col)
147 if failing_columns:
148 details = f'Found {len(failing_columns)} out of {len(result)} columns with non-negligible quantities ' \
149 f'of samples with a different data type from the majority of samples: {failing_columns}'
150 return ConditionResult(ConditionCategory.WARN, details)
151 details = f'{len(result)} columns passed: found {len(result) - len(no_mix_columns)} columns with ' \
152 f'negligible types mix, and {len(no_mix_columns)} columns without any types mix'
153 return ConditionResult(ConditionCategory.PASS, details)
154
155 name = f'Rare data types in column are either more than {format_percent(ratio_range[1])} or less ' \
156 f'than {format_percent(ratio_range[0])} of the data'
157 return self.add_condition(name, condition)
158
[end of deepchecks/tabular/checks/data_integrity/mixed_data_types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py
--- a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py
+++ b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py
@@ -18,7 +18,7 @@
from deepchecks.tabular import Context, SingleDatasetCheck
from deepchecks.utils.dataframes import select_from_dataframe
from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df
-from deepchecks.utils.strings import format_percent, is_string_column
+from deepchecks.utils.strings import format_list, format_number, format_percent, get_ellipsis, is_string_column
from deepchecks.utils.typing import Hashable
__all__ = ['MixedDataTypes']
@@ -79,7 +79,13 @@
result_dict[column_name] = mix
if mix:
# Format percents for display
- display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}
+ formated_mix = {}
+ formated_mix['Strings'] = format_percent(mix['strings'])
+ formated_mix['Numbers'] = format_percent(mix['numbers'])
+ formated_mix['Strings examples'] = [get_ellipsis(strr, 15) for strr in mix['strings_examples']]
+ formated_mix['Numbers examples'] = '[' + format_list([format_number(float(num))
+ for num in mix['numbers_examples']]) + ']'
+ display_dict[column_name] = formated_mix
if display_dict:
df_graph = pd.DataFrame.from_dict(display_dict)
@@ -91,21 +97,26 @@
return CheckResult(result_dict, display=display)
- @classmethod
- def _get_data_mix(cls, column_data: pd.Series) -> dict:
+ def _get_data_mix(self, column_data: pd.Series) -> dict:
if is_string_column(column_data):
- return cls._check_mixed_percentage(column_data)
+ return self._check_mixed_percentage(column_data)
return {}
- @classmethod
- def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:
+ def _check_mixed_percentage(self, column_data: pd.Series) -> dict:
total_rows = column_data.count()
+ numbers_in_col = set()
+ strings_in_col = set()
+
def is_float(x) -> bool:
try:
float(x)
+ if len(numbers_in_col) < 3:
+ numbers_in_col.add(x)
return True
except ValueError:
+ if len(strings_in_col) < 3:
+ strings_in_col.add(x)
return False
nums = sum(column_data.apply(is_float))
@@ -116,7 +127,8 @@
nums_pct = nums / total_rows
strs_pct = (np.abs(nums - total_rows)) / total_rows
- return {'strings': strs_pct, 'numbers': nums_pct}
+ return {'strings': strs_pct, 'numbers': nums_pct,
+ 'strings_examples': strings_in_col, 'numbers_examples': numbers_in_col}
def add_condition_rare_type_ratio_not_in_range(self, ratio_range: Tuple[float, float] = (0.01, 0.1)):
"""Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the "danger zone".
|
{"golden_diff": "diff --git a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py\n--- a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py\n+++ b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py\n@@ -18,7 +18,7 @@\n from deepchecks.tabular import Context, SingleDatasetCheck\n from deepchecks.utils.dataframes import select_from_dataframe\n from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\n-from deepchecks.utils.strings import format_percent, is_string_column\n+from deepchecks.utils.strings import format_list, format_number, format_percent, get_ellipsis, is_string_column\n from deepchecks.utils.typing import Hashable\n \n __all__ = ['MixedDataTypes']\n@@ -79,7 +79,13 @@\n result_dict[column_name] = mix\n if mix:\n # Format percents for display\n- display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}\n+ formated_mix = {}\n+ formated_mix['Strings'] = format_percent(mix['strings'])\n+ formated_mix['Numbers'] = format_percent(mix['numbers'])\n+ formated_mix['Strings examples'] = [get_ellipsis(strr, 15) for strr in mix['strings_examples']]\n+ formated_mix['Numbers examples'] = '[' + format_list([format_number(float(num))\n+ for num in mix['numbers_examples']]) + ']'\n+ display_dict[column_name] = formated_mix\n \n if display_dict:\n df_graph = pd.DataFrame.from_dict(display_dict)\n@@ -91,21 +97,26 @@\n \n return CheckResult(result_dict, display=display)\n \n- @classmethod\n- def _get_data_mix(cls, column_data: pd.Series) -> dict:\n+ def _get_data_mix(self, column_data: pd.Series) -> dict:\n if is_string_column(column_data):\n- return cls._check_mixed_percentage(column_data)\n+ return self._check_mixed_percentage(column_data)\n return {}\n \n- @classmethod\n- def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:\n+ def _check_mixed_percentage(self, column_data: pd.Series) -> dict:\n total_rows = column_data.count()\n \n+ numbers_in_col = set()\n+ strings_in_col = set()\n+\n def is_float(x) -> bool:\n try:\n float(x)\n+ if len(numbers_in_col) < 3:\n+ numbers_in_col.add(x)\n return True\n except ValueError:\n+ if len(strings_in_col) < 3:\n+ strings_in_col.add(x)\n return False\n \n nums = sum(column_data.apply(is_float))\n@@ -116,7 +127,8 @@\n nums_pct = nums / total_rows\n strs_pct = (np.abs(nums - total_rows)) / total_rows\n \n- return {'strings': strs_pct, 'numbers': nums_pct}\n+ return {'strings': strs_pct, 'numbers': nums_pct,\n+ 'strings_examples': strings_in_col, 'numbers_examples': numbers_in_col}\n \n def add_condition_rare_type_ratio_not_in_range(self, ratio_range: Tuple[float, float] = (0.01, 0.1)):\n \"\"\"Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the \"danger zone\".\n", "issue": "[FEAT] MixedDataTypes to display a few examples of each type\nCurrent display:\r\n\r\n\r\nDesired - to see a few unique examples from each type. currently have to inspect column in order to do it, e.g.:\r\n\r\nwould want the check to display it by itself\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"module contains Mixed Types check.\"\"\"\nfrom typing import List, Tuple, Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom deepchecks.core import CheckResult, ConditionCategory, ConditionResult\nfrom deepchecks.tabular import Context, SingleDatasetCheck\nfrom deepchecks.utils.dataframes import select_from_dataframe\nfrom deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\nfrom deepchecks.utils.strings import format_percent, is_string_column\nfrom deepchecks.utils.typing import Hashable\n\n__all__ = ['MixedDataTypes']\n\n\nclass MixedDataTypes(SingleDatasetCheck):\n \"\"\"Detect columns which contain a mix of numerical and string values.\n\n Parameters\n ----------\n columns : Union[Hashable, List[Hashable]] , default: None\n Columns to check, if none are given checks all columns\n except ignored ones.\n ignore_columns : Union[Hashable, List[Hashable]] , default: None\n Columns to ignore, if none given checks based on columns\n variable.\n n_top_columns : int , optional\n amount of columns to show ordered by feature importance (date, index, label are first)\n \"\"\"\n\n def __init__(\n self,\n columns: Union[Hashable, List[Hashable], None] = None,\n ignore_columns: Union[Hashable, List[Hashable], None] = None,\n n_top_columns: int = 10,\n **kwargs\n ):\n super().__init__(**kwargs)\n self.columns = columns\n self.ignore_columns = ignore_columns\n self.n_top_columns = n_top_columns\n\n def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:\n \"\"\"Run check.\n\n Returns\n -------\n CheckResult\n value is a dict where the key is the column name as key and the value is the ratio 'strings' and 'numbers'\n for any column with mixed data types.\n numbers will also include hidden numbers in string representation.\n \"\"\"\n if dataset_type == 'train':\n dataset = context.train\n else:\n dataset = context.test\n features_importance = context.features_importance\n\n df = select_from_dataframe(dataset.data, self.columns, self.ignore_columns)\n\n # Result value: { Column Name: {string: pct, numbers: pct}}\n display_dict = {}\n result_dict = {}\n\n for column_name in df.columns:\n column_data = df[column_name].dropna()\n mix = self._get_data_mix(column_data)\n result_dict[column_name] = mix\n if mix:\n # Format percents for display\n display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}\n\n if display_dict:\n df_graph = pd.DataFrame.from_dict(display_dict)\n df_graph = column_importance_sorter_df(df_graph.T, dataset, features_importance,\n self.n_top_columns).T\n display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]\n else:\n display = None\n\n return CheckResult(result_dict, display=display)\n\n @classmethod\n def _get_data_mix(cls, column_data: pd.Series) -> dict:\n if is_string_column(column_data):\n return cls._check_mixed_percentage(column_data)\n return {}\n\n @classmethod\n def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:\n total_rows = column_data.count()\n\n def is_float(x) -> bool:\n try:\n float(x)\n return True\n except ValueError:\n return False\n\n nums = sum(column_data.apply(is_float))\n if nums in (total_rows, 0):\n return {}\n\n # Then we've got a mix\n nums_pct = nums / total_rows\n strs_pct = (np.abs(nums - total_rows)) / total_rows\n\n return {'strings': strs_pct, 'numbers': nums_pct}\n\n def add_condition_rare_type_ratio_not_in_range(self, 
ratio_range: Tuple[float, float] = (0.01, 0.1)):\n \"\"\"Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the \"danger zone\".\n\n The \"danger zone\" represents the following logic - if the rarer data type is, for example, 30% of the data,\n than the column is presumably supposed to contain both numbers and string values. If the rarer data type is,\n for example, less than 1% of the data, than it's presumably a contamination, but a negligible one. In the range\n between, there is a real chance that the rarer data type may represent a problem to model training and\n inference.\n\n Parameters\n ----------\n ratio_range : Tuple[float, float] , default: (0.01 , 0.1)\n The range between which the ratio of rarer data type in the column is\n considered a problem.\n \"\"\"\n def condition(result):\n no_mix_columns = []\n failing_columns = []\n for col, ratios in result.items():\n # Columns without a mix contains empty dict for ratios\n if not ratios:\n no_mix_columns.append(col)\n continue\n rarer_ratio = min(ratios['strings'], ratios['numbers'])\n if ratio_range[0] < rarer_ratio < ratio_range[1]:\n failing_columns.append(col)\n if failing_columns:\n details = f'Found {len(failing_columns)} out of {len(result)} columns with non-negligible quantities ' \\\n f'of samples with a different data type from the majority of samples: {failing_columns}'\n return ConditionResult(ConditionCategory.WARN, details)\n details = f'{len(result)} columns passed: found {len(result) - len(no_mix_columns)} columns with ' \\\n f'negligible types mix, and {len(no_mix_columns)} columns without any types mix'\n return ConditionResult(ConditionCategory.PASS, details)\n\n name = f'Rare data types in column are either more than {format_percent(ratio_range[1])} or less ' \\\n f'than {format_percent(ratio_range[0])} of the data'\n return self.add_condition(name, condition)\n", "path": "deepchecks/tabular/checks/data_integrity/mixed_data_types.py"}]}
| 2,533 | 765 |
gh_patches_debug_29580
|
rasdani/github-patches
|
git_diff
|
mit-ll-responsible-ai__hydra-zen-175
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inconsistent static analysis via pyright
```python
x = [1, 2, 3]
make_config(a=[1, 2, 3])
make_config(a=x) # pyright marks this as invalid based on our annotations
```
This seems like a pyright issue, but we might consider revising our annotations to accommodate it.
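For context, a stripped-down illustration of the variance behaviour involved (the names here are hypothetical, not the real hydra-zen annotations):

```python
from typing import List, Sequence, Union

Prim = Union[bool, None, int, float, str]

def takes_list(a: List[Prim]) -> None: ...
def takes_seq(a: Sequence[Prim]) -> None: ...

x = [1, 2, 3]          # inferred as List[int]
takes_list([1, 2, 3])  # OK: the literal is inferred directly against List[Prim]
takes_list(x)          # flagged: List is invariant, so List[int] is not List[Prim]
takes_seq(x)           # OK: Sequence is covariant in its element type
```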
</issue>
<code>
[start of src/hydra_zen/typing/_implementations.py]
1 # Copyright (c) 2021 Massachusetts Institute of Technology
2 # SPDX-License-Identifier: MIT
3
4 from dataclasses import Field
5 from enum import Enum
6 from pathlib import Path
7 from typing import (
8 Any,
9 Callable,
10 Counter,
11 Deque,
12 Dict,
13 FrozenSet,
14 Generic,
15 List,
16 NewType,
17 Set,
18 Tuple,
19 TypeVar,
20 Union,
21 )
22
23 from omegaconf import DictConfig, ListConfig
24 from typing_extensions import Protocol, runtime_checkable
25
26 __all__ = [
27 "Just",
28 "Builds",
29 "PartialBuilds",
30 "Partial",
31 "Importable",
32 "SupportedPrimitive",
33 ]
34
35
36 _T = TypeVar("_T", covariant=True)
37
38
39 class Partial(Generic[_T]):
40 func: Callable[..., _T]
41 args: Tuple[Any, ...]
42 keywords: Dict[str, Any]
43
44 def __init__(
45 self, func: Callable[..., _T], *args: Any, **kwargs: Any
46 ) -> None: # pragma: no cover
47 ...
48
49 def __call__(self, *args: Any, **kwargs: Any) -> _T: # pragma: no cover
50 ...
51
52
53 InterpStr = NewType("InterpStr", str)
54
55 Importable = TypeVar("Importable")
56
57
58 class _DataClass(Protocol): # pragma: no cover
59 # doesn't provide __init__, __getattribute__, etc.
60 __dataclass_fields__: Dict[str, Field]
61
62
63 class DataClass(_DataClass, Protocol): # pragma: no cover
64 def __init__(self, *args, **kwargs) -> None:
65 ...
66
67 def __getattribute__(self, name: str) -> Any:
68 ...
69
70 def __setattr__(self, name: str, value: Any) -> None:
71 ...
72
73
74 @runtime_checkable
75 class Builds(DataClass, Protocol[_T]): # pragma: no cover
76
77 _target_: str
78
79
80 @runtime_checkable
81 class Just(Builds, Protocol[_T]): # pragma: no cover
82 path: str # interpolated string for importing obj
83 _target_: str = "hydra_zen.funcs.get_obj"
84
85
86 @runtime_checkable
87 class PartialBuilds(Builds, Protocol[_T]): # pragma: no cover
88 _target_: str = "hydra_zen.funcs.zen_processing"
89 _zen_target: str
90 _zen_partial: bool = True
91
92
93 @runtime_checkable
94 class HasTarget(Protocol): # pragma: no cover
95 _target_: str
96
97
98 @runtime_checkable
99 class HasPartialTarget(Protocol): # pragma: no cover
100 _zen_partial: bool = True
101
102
103 _HydraPrimitive = Union[
104 bool,
105 None,
106 int,
107 float,
108 str,
109 ]
110
111 _SupportedPrimitive = Union[
112 _HydraPrimitive,
113 ListConfig,
114 DictConfig,
115 type,
116 Callable,
117 Enum,
118 _DataClass,
119 complex,
120 Path,
121 range,
122 ]
123
124 SupportedPrimitive = Union[
125 _SupportedPrimitive,
126 Dict[_HydraPrimitive, "SupportedPrimitive"],
127 Counter[_HydraPrimitive],
128 Set["SupportedPrimitive"],
129 FrozenSet["SupportedPrimitive"],
130 Deque["SupportedPrimitive"],
131 List["SupportedPrimitive"],
132 Tuple["SupportedPrimitive", ...],
133 ]
134
[end of src/hydra_zen/typing/_implementations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/hydra_zen/typing/_implementations.py b/src/hydra_zen/typing/_implementations.py
--- a/src/hydra_zen/typing/_implementations.py
+++ b/src/hydra_zen/typing/_implementations.py
@@ -7,21 +7,19 @@
from typing import (
Any,
Callable,
- Counter,
- Deque,
Dict,
FrozenSet,
Generic,
- List,
+ Mapping,
NewType,
- Set,
+ Sequence,
Tuple,
TypeVar,
Union,
)
from omegaconf import DictConfig, ListConfig
-from typing_extensions import Protocol, runtime_checkable
+from typing_extensions import Protocol, TypedDict, runtime_checkable
__all__ = [
"Just",
@@ -33,6 +31,10 @@
]
+class EmptyDict(TypedDict):
+ pass
+
+
_T = TypeVar("_T", covariant=True)
@@ -119,15 +121,21 @@
complex,
Path,
range,
+ set,
+ EmptyDict, # not covered by Mapping[..., ...]
]
SupportedPrimitive = Union[
_SupportedPrimitive,
- Dict[_HydraPrimitive, "SupportedPrimitive"],
- Counter[_HydraPrimitive],
- Set["SupportedPrimitive"],
FrozenSet["SupportedPrimitive"],
- Deque["SupportedPrimitive"],
- List["SupportedPrimitive"],
- Tuple["SupportedPrimitive", ...],
+ # Even thought this is redundant with Sequence, it seems to
+ # be needed for pyright to do proper checking of tuple contents
+ Tuple["SupportedPrimitive"],
+ # Mutable generic containers need to be invariant, so
+ # we have to settle for Sequence/Mapping. While this
+ # is overly permissive in terms of sequence-type, it
+ # at least affords quality checking of sequence content
+ Sequence["SupportedPrimitive"],
+ # Mapping is covariant only in value
+ Mapping[Any, "SupportedPrimitive"],
]
|
{"golden_diff": "diff --git a/src/hydra_zen/typing/_implementations.py b/src/hydra_zen/typing/_implementations.py\n--- a/src/hydra_zen/typing/_implementations.py\n+++ b/src/hydra_zen/typing/_implementations.py\n@@ -7,21 +7,19 @@\n from typing import (\n Any,\n Callable,\n- Counter,\n- Deque,\n Dict,\n FrozenSet,\n Generic,\n- List,\n+ Mapping,\n NewType,\n- Set,\n+ Sequence,\n Tuple,\n TypeVar,\n Union,\n )\n \n from omegaconf import DictConfig, ListConfig\n-from typing_extensions import Protocol, runtime_checkable\n+from typing_extensions import Protocol, TypedDict, runtime_checkable\n \n __all__ = [\n \"Just\",\n@@ -33,6 +31,10 @@\n ]\n \n \n+class EmptyDict(TypedDict):\n+ pass\n+\n+\n _T = TypeVar(\"_T\", covariant=True)\n \n \n@@ -119,15 +121,21 @@\n complex,\n Path,\n range,\n+ set,\n+ EmptyDict, # not covered by Mapping[..., ...]\n ]\n \n SupportedPrimitive = Union[\n _SupportedPrimitive,\n- Dict[_HydraPrimitive, \"SupportedPrimitive\"],\n- Counter[_HydraPrimitive],\n- Set[\"SupportedPrimitive\"],\n FrozenSet[\"SupportedPrimitive\"],\n- Deque[\"SupportedPrimitive\"],\n- List[\"SupportedPrimitive\"],\n- Tuple[\"SupportedPrimitive\", ...],\n+ # Even thought this is redundant with Sequence, it seems to\n+ # be needed for pyright to do proper checking of tuple contents\n+ Tuple[\"SupportedPrimitive\"],\n+ # Mutable generic containers need to be invariant, so\n+ # we have to settle for Sequence/Mapping. While this\n+ # is overly permissive in terms of sequence-type, it\n+ # at least affords quality checking of sequence content\n+ Sequence[\"SupportedPrimitive\"],\n+ # Mapping is covariant only in value\n+ Mapping[Any, \"SupportedPrimitive\"],\n ]\n", "issue": "Inconsistent static analysis via pyright\n```python\r\nx = [1, 2, 3]\r\nmake_config(a=[1, 2, 3])\r\nmake_config(a=x) # pyright marks this as invalid based on our annotations\r\n```\r\n\r\nThis seems like a pyright issue, but we might consider revising annotations to accommodate\n", "before_files": [{"content": "# Copyright (c) 2021 Massachusetts Institute of Technology\n# SPDX-License-Identifier: MIT\n\nfrom dataclasses import Field\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import (\n Any,\n Callable,\n Counter,\n Deque,\n Dict,\n FrozenSet,\n Generic,\n List,\n NewType,\n Set,\n Tuple,\n TypeVar,\n Union,\n)\n\nfrom omegaconf import DictConfig, ListConfig\nfrom typing_extensions import Protocol, runtime_checkable\n\n__all__ = [\n \"Just\",\n \"Builds\",\n \"PartialBuilds\",\n \"Partial\",\n \"Importable\",\n \"SupportedPrimitive\",\n]\n\n\n_T = TypeVar(\"_T\", covariant=True)\n\n\nclass Partial(Generic[_T]):\n func: Callable[..., _T]\n args: Tuple[Any, ...]\n keywords: Dict[str, Any]\n\n def __init__(\n self, func: Callable[..., _T], *args: Any, **kwargs: Any\n ) -> None: # pragma: no cover\n ...\n\n def __call__(self, *args: Any, **kwargs: Any) -> _T: # pragma: no cover\n ...\n\n\nInterpStr = NewType(\"InterpStr\", str)\n\nImportable = TypeVar(\"Importable\")\n\n\nclass _DataClass(Protocol): # pragma: no cover\n # doesn't provide __init__, __getattribute__, etc.\n __dataclass_fields__: Dict[str, Field]\n\n\nclass DataClass(_DataClass, Protocol): # pragma: no cover\n def __init__(self, *args, **kwargs) -> None:\n ...\n\n def __getattribute__(self, name: str) -> Any:\n ...\n\n def __setattr__(self, name: str, value: Any) -> None:\n ...\n\n\n@runtime_checkable\nclass Builds(DataClass, Protocol[_T]): # pragma: no cover\n\n _target_: str\n\n\n@runtime_checkable\nclass Just(Builds, Protocol[_T]): # pragma: no cover\n 
path: str # interpolated string for importing obj\n _target_: str = \"hydra_zen.funcs.get_obj\"\n\n\n@runtime_checkable\nclass PartialBuilds(Builds, Protocol[_T]): # pragma: no cover\n _target_: str = \"hydra_zen.funcs.zen_processing\"\n _zen_target: str\n _zen_partial: bool = True\n\n\n@runtime_checkable\nclass HasTarget(Protocol): # pragma: no cover\n _target_: str\n\n\n@runtime_checkable\nclass HasPartialTarget(Protocol): # pragma: no cover\n _zen_partial: bool = True\n\n\n_HydraPrimitive = Union[\n bool,\n None,\n int,\n float,\n str,\n]\n\n_SupportedPrimitive = Union[\n _HydraPrimitive,\n ListConfig,\n DictConfig,\n type,\n Callable,\n Enum,\n _DataClass,\n complex,\n Path,\n range,\n]\n\nSupportedPrimitive = Union[\n _SupportedPrimitive,\n Dict[_HydraPrimitive, \"SupportedPrimitive\"],\n Counter[_HydraPrimitive],\n Set[\"SupportedPrimitive\"],\n FrozenSet[\"SupportedPrimitive\"],\n Deque[\"SupportedPrimitive\"],\n List[\"SupportedPrimitive\"],\n Tuple[\"SupportedPrimitive\", ...],\n]\n", "path": "src/hydra_zen/typing/_implementations.py"}]}
| 1,647 | 453 |
gh_patches_debug_44429
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-408
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove `activation` argument from AveragePrecision and ROC_AUC
I propose to remove the `activation` argument from `AveragePrecision` and `ROC_AUC`:
https://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/contrib/metrics/roc_auc.py#L23-L28
to be consistent with `Accuracy` and friends, where `output_transform` is used to apply the binarization/activation:
https://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/metrics/accuracy.py#L81-L93
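Concretely, the `output_transform`-based usage the proposal implies would look roughly like this sketch (assuming the usual `(y_pred, y)` engine output):

```python
import torch
from ignite.contrib.metrics import ROC_AUC

def sigmoid_output_transform(output):
    y_pred, y = output
    return torch.sigmoid(y_pred), y

# instead of ROC_AUC(activation=torch.sigmoid)
roc_auc = ROC_AUC(output_transform=sigmoid_output_transform)
```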
</issue>
<code>
[start of ignite/contrib/metrics/roc_auc.py]
1 from functools import partial
2 from ignite.metrics import EpochMetric
3
4
5 def roc_auc_compute_fn(y_preds, y_targets, activation=None):
6 try:
7 from sklearn.metrics import roc_auc_score
8 except ImportError:
9 raise RuntimeError("This contrib module requires sklearn to be installed.")
10
11 y_true = y_targets.numpy()
12 if activation is not None:
13 y_preds = activation(y_preds)
14 y_pred = y_preds.numpy()
15 return roc_auc_score(y_true, y_pred)
16
17
18 class ROC_AUC(EpochMetric):
19 """Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC)
20 accumulating predictions and the ground-truth during an epoch and applying
21 `sklearn.metrics.roc_auc_score <http://scikit-learn.org/stable/modules/generated/
22 sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .
23
24 Args:
25 activation (callable, optional): optional function to apply on prediction tensors,
26 e.g. `activation=torch.sigmoid` to transform logits.
27 output_transform (callable, optional): a callable that is used to transform the
28 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
29 form expected by the metric. This can be useful if, for example, you have a multi-output model and
30 you want to compute the metric with respect to one of the outputs.
31
32 """
33 def __init__(self, activation=None, output_transform=lambda x: x):
34 super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),
35 output_transform=output_transform)
36
[end of ignite/contrib/metrics/roc_auc.py]
[start of ignite/contrib/metrics/average_precision.py]
1 from functools import partial
2 from ignite.metrics import EpochMetric
3
4
5 def average_precision_compute_fn(y_preds, y_targets, activation=None):
6 try:
7 from sklearn.metrics import average_precision_score
8 except ImportError:
9 raise RuntimeError("This contrib module requires sklearn to be installed.")
10
11 y_true = y_targets.numpy()
12 if activation is not None:
13 y_preds = activation(y_preds)
14 y_pred = y_preds.numpy()
15 return average_precision_score(y_true, y_pred)
16
17
18 class AveragePrecision(EpochMetric):
19 """Computes Average Precision accumulating predictions and the ground-truth during an epoch
20 and applying `sklearn.metrics.average_precision_score <http://scikit-learn.org/stable/modules/generated/
21 sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .
22
23 Args:
24 activation (callable, optional): optional function to apply on prediction tensors,
25 e.g. `activation=torch.sigmoid` to transform logits.
26 output_transform (callable, optional): a callable that is used to transform the
27 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
28 form expected by the metric. This can be useful if, for example, you have a multi-output model and
29 you want to compute the metric with respect to one of the outputs.
30
31 """
32 def __init__(self, activation=None, output_transform=lambda x: x):
33 super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),
34 output_transform=output_transform)
35
[end of ignite/contrib/metrics/average_precision.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ignite/contrib/metrics/average_precision.py b/ignite/contrib/metrics/average_precision.py
--- a/ignite/contrib/metrics/average_precision.py
+++ b/ignite/contrib/metrics/average_precision.py
@@ -1,16 +1,13 @@
-from functools import partial
from ignite.metrics import EpochMetric
-def average_precision_compute_fn(y_preds, y_targets, activation=None):
+def average_precision_compute_fn(y_preds, y_targets):
try:
from sklearn.metrics import average_precision_score
except ImportError:
raise RuntimeError("This contrib module requires sklearn to be installed.")
y_true = y_targets.numpy()
- if activation is not None:
- y_preds = activation(y_preds)
y_pred = y_preds.numpy()
return average_precision_score(y_true, y_pred)
@@ -21,14 +18,23 @@
sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .
Args:
- activation (callable, optional): optional function to apply on prediction tensors,
- e.g. `activation=torch.sigmoid` to transform logits.
output_transform (callable, optional): a callable that is used to transform the
:class:`~ignite.engine.Engine`'s `process_function`'s output into the
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
+ AveragePrecision expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or
+ confidence values. To apply an activation to y_pred, use output_transform as shown below:
+
+ .. code-block:: python
+
+ def activated_output_transform(output):
+ y_pred, y = output
+ y_pred = torch.softmax(y_pred)
+ return y_pred, y
+
+ avg_precision = AveragePrecision(activated_output_transform)
+
"""
def __init__(self, activation=None, output_transform=lambda x: x):
- super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),
- output_transform=output_transform)
+ super(AveragePrecision, self).__init__(average_precision_compute_fn, output_transform=output_transform)
diff --git a/ignite/contrib/metrics/roc_auc.py b/ignite/contrib/metrics/roc_auc.py
--- a/ignite/contrib/metrics/roc_auc.py
+++ b/ignite/contrib/metrics/roc_auc.py
@@ -1,16 +1,13 @@
-from functools import partial
from ignite.metrics import EpochMetric
-def roc_auc_compute_fn(y_preds, y_targets, activation=None):
+def roc_auc_compute_fn(y_preds, y_targets):
try:
from sklearn.metrics import roc_auc_score
except ImportError:
raise RuntimeError("This contrib module requires sklearn to be installed.")
y_true = y_targets.numpy()
- if activation is not None:
- y_preds = activation(y_preds)
y_pred = y_preds.numpy()
return roc_auc_score(y_true, y_pred)
@@ -22,14 +19,23 @@
sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .
Args:
- activation (callable, optional): optional function to apply on prediction tensors,
- e.g. `activation=torch.sigmoid` to transform logits.
output_transform (callable, optional): a callable that is used to transform the
:class:`~ignite.engine.Engine`'s `process_function`'s output into the
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
+ ROC_AUC expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or confidence
+ values. To apply an activation to y_pred, use output_transform as shown below:
+
+ .. code-block:: python
+
+ def activated_output_transform(output):
+ y_pred, y = output
+ y_pred = torch.sigmoid(y_pred)
+ return y_pred, y
+
+ roc_auc = ROC_AUC(activated_output_transform)
+
"""
- def __init__(self, activation=None, output_transform=lambda x: x):
- super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),
- output_transform=output_transform)
+ def __init__(self, output_transform=lambda x: x):
+ super(ROC_AUC, self).__init__(roc_auc_compute_fn, output_transform=output_transform)
|
{"golden_diff": "diff --git a/ignite/contrib/metrics/average_precision.py b/ignite/contrib/metrics/average_precision.py\n--- a/ignite/contrib/metrics/average_precision.py\n+++ b/ignite/contrib/metrics/average_precision.py\n@@ -1,16 +1,13 @@\n-from functools import partial\n from ignite.metrics import EpochMetric\n \n \n-def average_precision_compute_fn(y_preds, y_targets, activation=None):\n+def average_precision_compute_fn(y_preds, y_targets):\n try:\n from sklearn.metrics import average_precision_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n \n y_true = y_targets.numpy()\n- if activation is not None:\n- y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return average_precision_score(y_true, y_pred)\n \n@@ -21,14 +18,23 @@\n sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .\n \n Args:\n- activation (callable, optional): optional function to apply on prediction tensors,\n- e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n \n+ AveragePrecision expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or\n+ confidence values. To apply an activation to y_pred, use output_transform as shown below:\n+\n+ .. code-block:: python\n+\n+ def activated_output_transform(output):\n+ y_pred, y = output\n+ y_pred = torch.softmax(y_pred)\n+ return y_pred, y\n+\n+ avg_precision = AveragePrecision(activated_output_transform)\n+\n \"\"\"\n def __init__(self, activation=None, output_transform=lambda x: x):\n- super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),\n- output_transform=output_transform)\n+ super(AveragePrecision, self).__init__(average_precision_compute_fn, output_transform=output_transform)\ndiff --git a/ignite/contrib/metrics/roc_auc.py b/ignite/contrib/metrics/roc_auc.py\n--- a/ignite/contrib/metrics/roc_auc.py\n+++ b/ignite/contrib/metrics/roc_auc.py\n@@ -1,16 +1,13 @@\n-from functools import partial\n from ignite.metrics import EpochMetric\n \n \n-def roc_auc_compute_fn(y_preds, y_targets, activation=None):\n+def roc_auc_compute_fn(y_preds, y_targets):\n try:\n from sklearn.metrics import roc_auc_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n \n y_true = y_targets.numpy()\n- if activation is not None:\n- y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return roc_auc_score(y_true, y_pred)\n \n@@ -22,14 +19,23 @@\n sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .\n \n Args:\n- activation (callable, optional): optional function to apply on prediction tensors,\n- e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n \n+ ROC_AUC expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or confidence\n+ values. 
To apply an activation to y_pred, use output_transform as shown below:\n+\n+ .. code-block:: python\n+\n+ def activated_output_transform(output):\n+ y_pred, y = output\n+ y_pred = torch.sigmoid(y_pred)\n+ return y_pred, y\n+\n+ roc_auc = ROC_AUC(activated_output_transform)\n+\n \"\"\"\n- def __init__(self, activation=None, output_transform=lambda x: x):\n- super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),\n- output_transform=output_transform)\n+ def __init__(self, output_transform=lambda x: x):\n+ super(ROC_AUC, self).__init__(roc_auc_compute_fn, output_transform=output_transform)\n", "issue": "Remove `activation` argument from AveragePrecision and ROC_AUC\nI propose to remove `activation` argument from `AveragePrecision` and `ROC_AUC`:\r\nhttps://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/contrib/metrics/roc_auc.py#L23-L28\r\nto be coherent with `Accuracy` et friends and the usage of `output_transform` to apply the binarization/activation :\r\nhttps://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/metrics/accuracy.py#L81-L93\r\n\r\n\n", "before_files": [{"content": "from functools import partial\nfrom ignite.metrics import EpochMetric\n\n\ndef roc_auc_compute_fn(y_preds, y_targets, activation=None):\n try:\n from sklearn.metrics import roc_auc_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n\n y_true = y_targets.numpy()\n if activation is not None:\n y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return roc_auc_score(y_true, y_pred)\n\n\nclass ROC_AUC(EpochMetric):\n \"\"\"Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC)\n accumulating predictions and the ground-truth during an epoch and applying\n `sklearn.metrics.roc_auc_score <http://scikit-learn.org/stable/modules/generated/\n sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .\n\n Args:\n activation (callable, optional): optional function to apply on prediction tensors,\n e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. 
This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n def __init__(self, activation=None, output_transform=lambda x: x):\n super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),\n output_transform=output_transform)\n", "path": "ignite/contrib/metrics/roc_auc.py"}, {"content": "from functools import partial\nfrom ignite.metrics import EpochMetric\n\n\ndef average_precision_compute_fn(y_preds, y_targets, activation=None):\n try:\n from sklearn.metrics import average_precision_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n\n y_true = y_targets.numpy()\n if activation is not None:\n y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return average_precision_score(y_true, y_pred)\n\n\nclass AveragePrecision(EpochMetric):\n \"\"\"Computes Average Precision accumulating predictions and the ground-truth during an epoch\n and applying `sklearn.metrics.average_precision_score <http://scikit-learn.org/stable/modules/generated/\n sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .\n\n Args:\n activation (callable, optional): optional function to apply on prediction tensors,\n e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n def __init__(self, activation=None, output_transform=lambda x: x):\n super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),\n output_transform=output_transform)\n", "path": "ignite/contrib/metrics/average_precision.py"}]}
| 1,539 | 1,012 |
gh_patches_debug_23849
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-6943
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Key error when adding a policy statement to an SNS Topic
### Describe the bug
When adding a statement to (or describing) an SNS topic, a `KeyError` is thrown when attempting to retrieve the `Sid` of a policy statement.
### What did you expect to happen?
A policy statement should be described and added without error
### Cloud Provider
Amazon Web Services (AWS)
### Cloud Custodian version and dependency information
```shell
Custodian: 0.9.13
Python: 3.8.11 (default, Jun 29 2021, 03:17:05)
[Clang 12.0.0 (clang-1200.0.32.29)]
Platform: posix.uname_result(sysname='Darwin', nodename='f8ffc250c2ee', release='20.6.0', version='Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64', machine='x86_64')
Using venv: True
Docker: False
Installed:
PyJWT==1.7.1
PyYAML==5.4.1
adal==1.2.7
appdirs==1.4.4
applicationinsights==0.11.10
apscheduler==3.7.0
argcomplete==1.12.3
attrs==21.2.0
azure-common==1.1.27
azure-core==1.15.0
azure-cosmos==3.2.0
azure-cosmosdb-nspkg==2.0.2
azure-cosmosdb-table==1.0.6
azure-functions==1.7.2
azure-graphrbac==0.61.1
azure-identity==1.6.0
azure-keyvault==4.1.0
azure-keyvault-certificates==4.3.0
azure-keyvault-keys==4.4.0
azure-keyvault-secrets==4.3.0
azure-mgmt-apimanagement==1.0.0
azure-mgmt-applicationinsights==1.0.0
azure-mgmt-authorization==1.0.0
azure-mgmt-batch==15.0.0
azure-mgmt-cdn==10.0.0
azure-mgmt-cognitiveservices==11.0.0
azure-mgmt-compute==19.0.0
azure-mgmt-containerinstance==7.0.0
azure-mgmt-containerregistry==8.0.0b1
azure-mgmt-containerservice==15.1.0
azure-mgmt-core==1.2.2
azure-mgmt-cosmosdb==6.4.0
azure-mgmt-costmanagement==1.0.0
azure-mgmt-databricks==1.0.0b1
azure-mgmt-datafactory==1.1.0
azure-mgmt-datalake-store==1.0.0
azure-mgmt-dns==8.0.0b1
azure-mgmt-eventgrid==8.0.0
azure-mgmt-eventhub==8.0.0
azure-mgmt-frontdoor==1.0.0
azure-mgmt-hdinsight==7.0.0
azure-mgmt-iothub==1.0.0
azure-mgmt-keyvault==8.0.0
azure-mgmt-logic==9.0.0
azure-mgmt-managementgroups==1.0.0b1
azure-mgmt-monitor==2.0.0
azure-mgmt-msi==1.0.0
azure-mgmt-network==17.1.0
azure-mgmt-policyinsights==1.0.0
azure-mgmt-rdbms==8.1.0
azure-mgmt-redis==12.0.0
azure-mgmt-resource==16.1.0
azure-mgmt-resourcegraph==7.0.0
azure-mgmt-search==8.0.0
azure-mgmt-servicefabric==1.0.0
azure-mgmt-sql==1.0.0
azure-mgmt-storage==17.1.0
azure-mgmt-subscription==1.0.0
azure-mgmt-trafficmanager==0.51.0
azure-mgmt-web==2.0.0
azure-nspkg==3.0.2
azure-storage-blob==12.8.1
azure-storage-common==2.1.0
azure-storage-file==2.1.0
azure-storage-file-share==12.5.0
azure-storage-queue==12.1.6
boto3==1.17.102
botocore==1.20.102
c7n==0.9.13
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.5
chardet==4.0.0
click==7.1.2
cryptography==3.4.7
decorator==5.0.9
distlib==0.3.2
dogpile.cache==1.1.3
google-api-core==1.30.0
google-api-python-client==1.12.8
google-auth==1.32.0
google-auth-httplib2==0.1.0
google-cloud-core==1.7.1
google-cloud-logging==1.15.1
google-cloud-monitoring==0.34.0
google-cloud-storage==1.40.0
google-crc32c==1.1.2
google-resumable-media==1.3.1
googleapis-common-protos==1.53.0
httplib2==0.19.1
idna==2.10
importlib-metadata==4.6.0
iso8601==0.1.14
isodate==0.6.0
jmespath==0.10.0
jsonpatch==1.32
jsonpointer==2.1
jsonschema==3.2.0
keystoneauth1==4.3.1
kubernetes==10.0.1
mock==4.0.3
msal==1.12.0
msal-extensions==0.3.0
msrest==0.6.21
msrestazure==0.6.4
munch==2.5.0
netaddr==0.7.20
netifaces==0.11.0
oauthlib==3.1.1
openstacksdk==0.52.0
os-service-types==1.7.0
packaging==20.9
pbr==5.6.0
portalocker==1.7.1
protobuf==3.17.3
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.1
pytz==2021.1
pyyaml==5.4.1
ratelimiter==1.2.0.post0
requests==2.25.1
requests-oauthlib==1.3.0
requestsexceptions==1.4.0
retrying==1.3.3
rsa==4.7.2
s3transfer==0.4.2
setuptools==56.0.0
six==1.16.0
stevedore==3.3.0
tabulate==0.8.9
typing-extensions==3.10.0.0
tzlocal==2.1
uritemplate==3.0.1
urllib3==1.26.6
websocket-client==1.1.0
zipp==3.5.0
```
### Policy
```shell
- name: sns-encrypt-transit
resource: aws.sns
description: All SNS topics must be encrypted in transit
filters:
- not:
- type: has-statement
statements:
- Action:
- sns:GetTopicAttributes
- sns:SetTopicAttributes
- sns:AddPermission
- sns:RemovePermission
- sns:DeleteTopic
- sns:Subscribe
- sns:ListSubscriptionsByTopic
- sns:Publish
- sns:Receive
Condition:
Bool:
aws:SecureTransport: 'false'
Effect: Deny
Principal: '*'
actions:
- type: modify-policy
add-statements:
- Action:
- sns:GetTopicAttributes
- sns:SetTopicAttributes
- sns:AddPermission
- sns:RemovePermission
- sns:DeleteTopic
- sns:Subscribe
- sns:ListSubscriptionsByTopic
- sns:Publish
- sns:Receive
Condition:
Bool:
aws:SecureTransport: 'false'
Effect: Deny
Principal: '*'
Resource: '*'
Sid: DenyNotEncryptedInTrans
- type: notify
action_desc: 'ACTION Taken: Custodian has added the policy statement to enable encryption in transit.'
cc:
- [email protected]
from: [email protected]
template: default.html
to:
- [email protected]
- resource-owner
transport:
type: sns
topic: arn:aws:sns:us-east-1:0123456789:mailer
violation_desc: The following SNS Topic(s) do not have encryption in transit enabled.
```
### Relevant log/traceback output
```shell
2021-10-08 12:58:33,582 - custodian.output - ERROR - Error while executing policy
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/c7n/policy.py", line 322, in run
results = a.process(resources)
File "/usr/local/lib/python3.7/dist-packages/c7n/resources/sns.py", line 330, in process
new_policy, added = self.add_statements(new_policy)
File "/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py", line 100, in add_statements
current = {s['Sid']: s for s in policy_statements}
File "/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py", line 100, in
current = {s['Sid']: s for s in policy_statements}
KeyError: 'Sid'
```
### Extra information or context
_No response_
</issue>
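A minimal standalone sketch (plain Python, not Cloud Custodian code) reproduces the failure mode above and shows one hedged way to key statements that lack a `Sid`, e.g. by falling back to a content checksum:

```python
import hashlib
import json

# Key policy statements by Sid when present, otherwise by a checksum of
# their content, so statements without a Sid no longer raise KeyError.
def statement_key(statement):
    if "Sid" in statement:
        return statement["Sid"]
    return hashlib.sha224(
        json.dumps(statement, sort_keys=True).encode("utf8")
    ).hexdigest()

statements = [
    {"Effect": "Deny", "Principal": "*", "Action": "sns:Publish"},  # no Sid
    {"Sid": "DenyNotEncryptedInTrans", "Effect": "Deny", "Action": "sns:Publish"},
]
current = {statement_key(s): s for s in statements}  # no KeyError for missing 'Sid'
```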
<code>
[start of c7n/actions/policy.py]
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from .core import BaseAction
5 from c7n import utils
6
7
8 class RemovePolicyBase(BaseAction):
9
10 schema = utils.type_schema(
11 'remove-statements',
12 required=['statement_ids'],
13 statement_ids={'oneOf': [
14 {'enum': ['matched', "*"]},
15 {'type': 'array', 'items': {'type': 'string'}}]})
16
17 def process_policy(self, policy, resource, matched_key):
18 statements = policy.get('Statement', [])
19 resource_statements = resource.get(matched_key, ())
20
21 return remove_statements(
22 self.data['statement_ids'], statements, resource_statements)
23
24
25 def remove_statements(match_ids, statements, matched=()):
26 found = []
27 for s in list(statements):
28 s_found = False
29 if match_ids == '*':
30 s_found = True
31 elif match_ids == 'matched':
32 if s in matched:
33 s_found = True
34 elif 'Sid' in s and s['Sid'] in match_ids:
35 s_found = True
36 if s_found:
37 found.append(s)
38 statements.remove(s)
39 if not found:
40 return None, found
41 return statements, found
42
43
44 class ModifyPolicyBase(BaseAction):
45 """Action to modify resource IAM policy statements.
46
47 Applies to all resources with embedded IAM Policies.
48
49 :example:
50
51 .. code-block:: yaml
52
53 policies:
54 - name: sns-yank-cross-account
55 resource: sns
56 filters:
57 - type: cross-account
58 actions:
59 - type: modify-policy
60 add-statements: [{
61 "Sid": "ReplaceWithMe",
62 "Effect": "Allow",
63 "Principal": "*",
64 "Action": ["SNS:GetTopicAttributes"],
65 "Resource": topic_arn,
66 }]
67 remove-statements: '*'
68 """
69
70 schema_alias = True
71 schema = utils.type_schema(
72 'modify-policy',
73 **{
74 'add-statements': {
75 'type': 'array',
76 'items': {'$ref': '#/definitions/iam-statement'},
77 },
78 'remove-statements': {
79 'type': ['array', 'string'],
80 'oneOf': [
81 {'enum': ['matched', '*']},
82 {'type': 'array', 'items': {'type': 'string'}}
83 ],
84 }
85 }
86 )
87
88 def __init__(self, data=None, manager=None):
89 if manager is not None:
90 config_args = {
91 'account_id': manager.config.account_id,
92 'region': manager.config.region
93 }
94 self.data = utils.format_string_values(data, **config_args)
95 else:
96 self.data = utils.format_string_values(data)
97 self.manager = manager
98
99 def add_statements(self, policy_statements):
100 current = {s['Sid']: s for s in policy_statements}
101 additional = {s['Sid']: s for s in self.data.get('add-statements', [])}
102 current.update(additional)
103 return list(current.values()), bool(additional)
104
105 def remove_statements(self, policy_statements, resource, matched_key):
106 statement_ids = self.data.get('remove-statements', [])
107 found = []
108 if len(statement_ids) == 0:
109 return policy_statements, found
110 resource_statements = resource.get(matched_key, ())
111 return remove_statements(
112 statement_ids, policy_statements, resource_statements)
113
[end of c7n/actions/policy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/c7n/actions/policy.py b/c7n/actions/policy.py
--- a/c7n/actions/policy.py
+++ b/c7n/actions/policy.py
@@ -1,5 +1,6 @@
# Copyright The Cloud Custodian Authors.
# SPDX-License-Identifier: Apache-2.0
+import hashlib
from .core import BaseAction
from c7n import utils
@@ -41,6 +42,11 @@
return statements, found
+def statement_id(s):
+ # for statements without a sid, use a checksum for identity
+ return hashlib.sha224(utils.dumps(s, indent=0).encode('utf8')).hexdigest()
+
+
class ModifyPolicyBase(BaseAction):
"""Action to modify resource IAM policy statements.
@@ -97,8 +103,8 @@
self.manager = manager
def add_statements(self, policy_statements):
- current = {s['Sid']: s for s in policy_statements}
- additional = {s['Sid']: s for s in self.data.get('add-statements', [])}
+ current = {s.get('Sid', statement_id(s)): s for s in policy_statements}
+ additional = {s.get('Sid', statement_id(s)): s for s in self.data.get('add-statements', [])}
current.update(additional)
return list(current.values()), bool(additional)
|
{"golden_diff": "diff --git a/c7n/actions/policy.py b/c7n/actions/policy.py\n--- a/c7n/actions/policy.py\n+++ b/c7n/actions/policy.py\n@@ -1,5 +1,6 @@\n # Copyright The Cloud Custodian Authors.\n # SPDX-License-Identifier: Apache-2.0\n+import hashlib\n \n from .core import BaseAction\n from c7n import utils\n@@ -41,6 +42,11 @@\n return statements, found\n \n \n+def statement_id(s):\n+ # for statements without a sid, use a checksum for identity\n+ return hashlib.sha224(utils.dumps(s, indent=0).encode('utf8')).hexdigest()\n+\n+\n class ModifyPolicyBase(BaseAction):\n \"\"\"Action to modify resource IAM policy statements.\n \n@@ -97,8 +103,8 @@\n self.manager = manager\n \n def add_statements(self, policy_statements):\n- current = {s['Sid']: s for s in policy_statements}\n- additional = {s['Sid']: s for s in self.data.get('add-statements', [])}\n+ current = {s.get('Sid', statement_id(s)): s for s in policy_statements}\n+ additional = {s.get('Sid', statement_id(s)): s for s in self.data.get('add-statements', [])}\n current.update(additional)\n return list(current.values()), bool(additional)\n", "issue": "Key error when adding a policy statement to an SNS Topic\n### Describe the bug\n\nWhen adding/describing a statement to an SNS topic a key error is thrown when attempting to retrieve the `Sid` of the policy.\n\n### What did you expect to happen?\n\nA policy statement should be described and added without error\n\n### Cloud Provider\n\nAmazon Web Services (AWS)\n\n### Cloud Custodian version and dependency information\n\n```shell\nCustodian: 0.9.13\r\nPython: 3.8.11 (default, Jun 29 2021, 03:17:05) \r\n [Clang 12.0.0 (clang-1200.0.32.29)]\r\nPlatform: posix.uname_result(sysname='Darwin', nodename='f8ffc250c2ee', release='20.6.0', version='Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64', machine='x86_64')\r\nUsing venv: True\r\nDocker: False\r\nInstalled: 
\r\n\r\nPyJWT==1.7.1\r\nPyYAML==5.4.1\r\nadal==1.2.7\r\nappdirs==1.4.4\r\napplicationinsights==0.11.10\r\napscheduler==3.7.0\r\nargcomplete==1.12.3\r\nattrs==21.2.0\r\nazure-common==1.1.27\r\nazure-core==1.15.0\r\nazure-cosmos==3.2.0\r\nazure-cosmosdb-nspkg==2.0.2\r\nazure-cosmosdb-table==1.0.6\r\nazure-functions==1.7.2\r\nazure-graphrbac==0.61.1\r\nazure-identity==1.6.0\r\nazure-keyvault==4.1.0\r\nazure-keyvault-certificates==4.3.0\r\nazure-keyvault-keys==4.4.0\r\nazure-keyvault-secrets==4.3.0\r\nazure-mgmt-apimanagement==1.0.0\r\nazure-mgmt-applicationinsights==1.0.0\r\nazure-mgmt-authorization==1.0.0\r\nazure-mgmt-batch==15.0.0\r\nazure-mgmt-cdn==10.0.0\r\nazure-mgmt-cognitiveservices==11.0.0\r\nazure-mgmt-compute==19.0.0\r\nazure-mgmt-containerinstance==7.0.0\r\nazure-mgmt-containerregistry==8.0.0b1\r\nazure-mgmt-containerservice==15.1.0\r\nazure-mgmt-core==1.2.2\r\nazure-mgmt-cosmosdb==6.4.0\r\nazure-mgmt-costmanagement==1.0.0\r\nazure-mgmt-databricks==1.0.0b1\r\nazure-mgmt-datafactory==1.1.0\r\nazure-mgmt-datalake-store==1.0.0\r\nazure-mgmt-dns==8.0.0b1\r\nazure-mgmt-eventgrid==8.0.0\r\nazure-mgmt-eventhub==8.0.0\r\nazure-mgmt-frontdoor==1.0.0\r\nazure-mgmt-hdinsight==7.0.0\r\nazure-mgmt-iothub==1.0.0\r\nazure-mgmt-keyvault==8.0.0\r\nazure-mgmt-logic==9.0.0\r\nazure-mgmt-managementgroups==1.0.0b1\r\nazure-mgmt-monitor==2.0.0\r\nazure-mgmt-msi==1.0.0\r\nazure-mgmt-network==17.1.0\r\nazure-mgmt-policyinsights==1.0.0\r\nazure-mgmt-rdbms==8.1.0\r\nazure-mgmt-redis==12.0.0\r\nazure-mgmt-resource==16.1.0\r\nazure-mgmt-resourcegraph==7.0.0\r\nazure-mgmt-search==8.0.0\r\nazure-mgmt-servicefabric==1.0.0\r\nazure-mgmt-sql==1.0.0\r\nazure-mgmt-storage==17.1.0\r\nazure-mgmt-subscription==1.0.0\r\nazure-mgmt-trafficmanager==0.51.0\r\nazure-mgmt-web==2.0.0\r\nazure-nspkg==3.0.2\r\nazure-storage-blob==12.8.1\r\nazure-storage-common==2.1.0\r\nazure-storage-file==2.1.0\r\nazure-storage-file-share==12.5.0\r\nazure-storage-queue==12.1.6\r\nboto3==1.17.102\r\nbotocore==1.20.102\r\nc7n==0.9.13\r\ncachetools==4.2.2\r\ncertifi==2021.5.30\r\ncffi==1.14.5\r\nchardet==4.0.0\r\nclick==7.1.2\r\ncryptography==3.4.7\r\ndecorator==5.0.9\r\ndistlib==0.3.2\r\ndogpile.cache==1.1.3\r\ngoogle-api-core==1.30.0\r\ngoogle-api-python-client==1.12.8\r\ngoogle-auth==1.32.0\r\ngoogle-auth-httplib2==0.1.0\r\ngoogle-cloud-core==1.7.1\r\ngoogle-cloud-logging==1.15.1\r\ngoogle-cloud-monitoring==0.34.0\r\ngoogle-cloud-storage==1.40.0\r\ngoogle-crc32c==1.1.2\r\ngoogle-resumable-media==1.3.1\r\ngoogleapis-common-protos==1.53.0\r\nhttplib2==0.19.1\r\nidna==2.10\r\nimportlib-metadata==4.6.0\r\niso8601==0.1.14\r\nisodate==0.6.0\r\njmespath==0.10.0\r\njsonpatch==1.32\r\njsonpointer==2.1\r\njsonschema==3.2.0\r\nkeystoneauth1==4.3.1\r\nkubernetes==10.0.1\r\nmock==4.0.3\r\nmsal==1.12.0\r\nmsal-extensions==0.3.0\r\nmsrest==0.6.21\r\nmsrestazure==0.6.4\r\nmunch==2.5.0\r\nnetaddr==0.7.20\r\nnetifaces==0.11.0\r\noauthlib==3.1.1\r\nopenstacksdk==0.52.0\r\nos-service-types==1.7.0\r\npackaging==20.9\r\npbr==5.6.0\r\nportalocker==1.7.1\r\nprotobuf==3.17.3\r\npyasn1==0.4.8\r\npyasn1-modules==0.2.8\r\npycparser==2.20\r\npyparsing==2.4.7\r\npyrsistent==0.18.0\r\npython-dateutil==2.8.1\r\npytz==2021.1\r\npyyaml==5.4.1\r\nratelimiter==1.2.0.post0\r\nrequests==2.25.1\r\nrequests-oauthlib==1.3.0\r\nrequestsexceptions==1.4.0\r\nretrying==1.3.3\r\nrsa==4.7.2\r\ns3transfer==0.4.2\r\nsetuptools==56.0.0\r\nsix==1.16.0\r\nstevedore==3.3.0\r\ntabulate==0.8.9\r\ntyping-extensions==3.10.0.0\r\ntzlocal==2.1\r\nuritemplate==3.0.1\r\nurllib3==1.26.6\r\nwebsocket-clie
nt==1.1.0\r\nzipp==3.5.0\n```\n\n\n### Policy\n\n```shell\n- name: sns-encrypt-transit\r\n resource: aws.sns\r\n description: All SNS topics must be encrypted in transit\r\n filters:\r\n - not:\r\n - type: has-statement\r\n statements:\r\n - Action:\r\n - sns:GetTopicAttributes\r\n - sns:SetTopicAttributes\r\n - sns:AddPermission\r\n - sns:RemovePermission\r\n - sns:DeleteTopic\r\n - sns:Subscribe\r\n - sns:ListSubscriptionsByTopic\r\n - sns:Publish\r\n - sns:Receive\r\n Condition:\r\n Bool:\r\n aws:SecureTransport: 'false'\r\n Effect: Deny\r\n Principal: '*'\r\n actions:\r\n - type: modify-policy\r\n add-statements:\r\n - Action:\r\n - sns:GetTopicAttributes\r\n - sns:SetTopicAttributes\r\n - sns:AddPermission\r\n - sns:RemovePermission\r\n - sns:DeleteTopic\r\n - sns:Subscribe\r\n - sns:ListSubscriptionsByTopic\r\n - sns:Publish\r\n - sns:Receive\r\n Condition:\r\n Bool:\r\n aws:SecureTransport: 'false'\r\n Effect: Deny\r\n Principal: '*'\r\n Resource: '*'\r\n Sid: DenyNotEncryptedInTrans\r\n - type: notify\r\n action_desc: 'ACTION Taken: Custodian has added the policy statement to enable encryption in transit.'\r\n cc:\r\n - [email protected]\r\n from: [email protected]\r\n template: default.html\r\n to:\r\n - [email protected]\r\n - resource-owner\r\n transport:\r\n type: sns\r\n topic: arn:aws:sns:us-east-1:0123456789:mailer\r\n violation_desc: The following SNS Topic(s) do not have encryption in transit enabled.\n```\n\n\n### Relevant log/traceback output\n\n```shell\n2021-10-08 12:58:33,582 - custodian.output - ERROR - Error while executing policy\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/policy.py\", line 322, in run\r\n results = a.process(resources)\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/resources/sns.py\", line 330, in process\r\n new_policy, added = self.add_statements(new_policy)\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py\", line 100, in add_statements\r\n current = {s['Sid']: s for s in policy_statements}\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py\", line 100, in \r\n current = {s['Sid']: s for s in policy_statements}\r\nKeyError: 'Sid'\n```\n\n\n### Extra information or context\n\n_No response_\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom .core import BaseAction\nfrom c7n import utils\n\n\nclass RemovePolicyBase(BaseAction):\n\n schema = utils.type_schema(\n 'remove-statements',\n required=['statement_ids'],\n statement_ids={'oneOf': [\n {'enum': ['matched', \"*\"]},\n {'type': 'array', 'items': {'type': 'string'}}]})\n\n def process_policy(self, policy, resource, matched_key):\n statements = policy.get('Statement', [])\n resource_statements = resource.get(matched_key, ())\n\n return remove_statements(\n self.data['statement_ids'], statements, resource_statements)\n\n\ndef remove_statements(match_ids, statements, matched=()):\n found = []\n for s in list(statements):\n s_found = False\n if match_ids == '*':\n s_found = True\n elif match_ids == 'matched':\n if s in matched:\n s_found = True\n elif 'Sid' in s and s['Sid'] in match_ids:\n s_found = True\n if s_found:\n found.append(s)\n statements.remove(s)\n if not found:\n return None, found\n return statements, found\n\n\nclass ModifyPolicyBase(BaseAction):\n \"\"\"Action to modify resource IAM policy statements.\n\n Applies to all resources with embedded IAM Policies.\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: sns-yank-cross-account\n resource: sns\n filters:\n - type: cross-account\n actions:\n - type: modify-policy\n add-statements: [{\n \"Sid\": \"ReplaceWithMe\",\n \"Effect\": \"Allow\",\n \"Principal\": \"*\",\n \"Action\": [\"SNS:GetTopicAttributes\"],\n \"Resource\": topic_arn,\n }]\n remove-statements: '*'\n \"\"\"\n\n schema_alias = True\n schema = utils.type_schema(\n 'modify-policy',\n **{\n 'add-statements': {\n 'type': 'array',\n 'items': {'$ref': '#/definitions/iam-statement'},\n },\n 'remove-statements': {\n 'type': ['array', 'string'],\n 'oneOf': [\n {'enum': ['matched', '*']},\n {'type': 'array', 'items': {'type': 'string'}}\n ],\n }\n }\n )\n\n def __init__(self, data=None, manager=None):\n if manager is not None:\n config_args = {\n 'account_id': manager.config.account_id,\n 'region': manager.config.region\n }\n self.data = utils.format_string_values(data, **config_args)\n else:\n self.data = utils.format_string_values(data)\n self.manager = manager\n\n def add_statements(self, policy_statements):\n current = {s['Sid']: s for s in policy_statements}\n additional = {s['Sid']: s for s in self.data.get('add-statements', [])}\n current.update(additional)\n return list(current.values()), bool(additional)\n\n def remove_statements(self, policy_statements, resource, matched_key):\n statement_ids = self.data.get('remove-statements', [])\n found = []\n if len(statement_ids) == 0:\n return policy_statements, found\n resource_statements = resource.get(matched_key, ())\n return remove_statements(\n statement_ids, policy_statements, resource_statements)\n", "path": "c7n/actions/policy.py"}]}
| 3,974 | 307 |
gh_patches_debug_21131
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-2023
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Search by topic broken?
Browsing packages, then choosing the Internet / WWW/HTTP / Browsers topics gives no results:
https://pypi.org/search/?q=&o=&c=Topic+%3A%3A+Internet+%3A%3A+WWW%2FHTTP+%3A%3A+Browsers
There should be at least the [mechanoid package](https://pypi.org/project/mechanoid/).
Using Firefox 50.1.0 on Ubuntu 16.04.
</issue>
<code>
[start of warehouse/cli/search/reindex.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import binascii
14 import os
15
16 import click
17
18 from elasticsearch.helpers import parallel_bulk
19 from sqlalchemy.orm import lazyload, joinedload, load_only
20
21 from warehouse.cli.search import search
22 from warehouse.db import Session
23 from warehouse.packaging.models import Release, Project
24 from warehouse.packaging.search import Project as ProjectDocType
25 from warehouse.search import get_index
26 from warehouse.utils.db import windowed_query
27
28
29 def _project_docs(db):
30 releases = (
31 db.query(Release)
32 .options(load_only(
33 "summary", "description", "author",
34 "author_email", "maintainer", "maintainer_email",
35 "home_page", "download_url", "keywords", "platform",
36 "created"))
37 .options(lazyload("*"),
38 (joinedload(Release.project)
39 .load_only("normalized_name", "name")
40 .joinedload(Project.releases)
41 .load_only("version", "is_prerelease")),
42 joinedload(Release._classifiers).load_only("classifier"))
43 .distinct(Release.name)
44 .order_by(Release.name, Release._pypi_ordering.desc())
45 )
46 for release in windowed_query(releases, Release.name, 1000):
47 p = ProjectDocType.from_db(release)
48 p.full_clean()
49 yield p.to_dict(include_meta=True)
50
51
52 @search.command()
53 @click.pass_obj
54 def reindex(config, **kwargs):
55 """
56 Recreate the Search Index.
57 """
58 client = config.registry["elasticsearch.client"]
59 db = Session(bind=config.registry["sqlalchemy.engine"])
60 number_of_replicas = config.registry.get("elasticsearch.replicas", 0)
61 refresh_interval = config.registry.get("elasticsearch.interval", "1s")
62
63 # We use a randomly named index so that we can do a zero downtime reindex.
64 # Essentially we'll use a randomly named index which we will use until all
65 # of the data has been reindexed, at which point we'll point an alias at
66 # our randomly named index, and then delete the old randomly named index.
67
68 # Create the new index and associate all of our doc types with it.
69 index_base = config.registry["elasticsearch.index"]
70 random_token = binascii.hexlify(os.urandom(5)).decode("ascii")
71 new_index_name = "{}-{}".format(index_base, random_token)
72 doc_types = config.registry.get("search.doc_types", set())
73
74 # Create the new index with zero replicas and index refreshes disabled
75 # while we are bulk indexing.
76 new_index = get_index(
77 new_index_name,
78 doc_types,
79 using=client,
80 shards=config.registry.get("elasticsearch.shards", 1),
81 replicas=0,
82 interval="-1",
83 )
84
85 # From this point on, if any error occurs, we want to be able to delete our
86 # in progress index.
87 try:
88 db.execute("SET statement_timeout = '600s'")
89
90 for _ in parallel_bulk(client, _project_docs(db)):
91 pass
92 except:
93 new_index.delete()
94 raise
95 finally:
96 db.rollback()
97 db.close()
98
99 # Now that we've finished indexing all of our data we can optimize it and
100 # update the replicas and refresh intervals.
101 client.indices.forcemerge(index=new_index_name)
102 client.indices.put_settings(
103 index=new_index_name,
104 body={
105 "index": {
106 "number_of_replicas": number_of_replicas,
107 "refresh_interval": refresh_interval,
108 }
109 }
110 )
111
112 # Point the alias at our new randomly named index and delete the old index.
113 if client.indices.exists_alias(name=index_base):
114 to_delete = set()
115 actions = []
116 for name in client.indices.get_alias(name=index_base):
117 to_delete.add(name)
118 actions.append({"remove": {"index": name, "alias": index_base}})
119 actions.append({"add": {"index": new_index_name, "alias": index_base}})
120 client.indices.update_aliases({"actions": actions})
121 client.indices.delete(",".join(to_delete))
122 else:
123 client.indices.put_alias(name=index_base, index=new_index_name)
124
[end of warehouse/cli/search/reindex.py]
</code>
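The command above constructs the new index via `get_index` but does not visibly create it before bulk indexing. A minimal, hypothetical sketch (placeholder names and settings, not Warehouse's actual code) of creating an index explicitly before `parallel_bulk` looks like:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import parallel_bulk

# Hypothetical, self-contained sketch: make sure the target index exists
# (with the intended settings) before bulk indexing into it.
client = Elasticsearch()          # assumes a local cluster for illustration
index_name = "warehouse-demo"     # placeholder index name

if not client.indices.exists(index=index_name):
    client.indices.create(
        index=index_name,
        body={"settings": {"number_of_replicas": 0, "refresh_interval": "-1"}},
    )

docs = ({"_index": index_name, "_id": i, "name": "pkg-%d" % i} for i in range(10))
for ok, result in parallel_bulk(client, docs):
    assert ok
```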
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/cli/search/reindex.py b/warehouse/cli/search/reindex.py
--- a/warehouse/cli/search/reindex.py
+++ b/warehouse/cli/search/reindex.py
@@ -70,6 +70,7 @@
random_token = binascii.hexlify(os.urandom(5)).decode("ascii")
new_index_name = "{}-{}".format(index_base, random_token)
doc_types = config.registry.get("search.doc_types", set())
+ shards = config.registry.get("elasticsearch.shards", 1)
# Create the new index with zero replicas and index refreshes disabled
# while we are bulk indexing.
@@ -77,10 +78,11 @@
new_index_name,
doc_types,
using=client,
- shards=config.registry.get("elasticsearch.shards", 1),
+ shards=shards,
replicas=0,
interval="-1",
)
+ new_index.create(wait_for_active_shards=shards)
# From this point on, if any error occurs, we want to be able to delete our
# in progress index.
|
{"golden_diff": "diff --git a/warehouse/cli/search/reindex.py b/warehouse/cli/search/reindex.py\n--- a/warehouse/cli/search/reindex.py\n+++ b/warehouse/cli/search/reindex.py\n@@ -70,6 +70,7 @@\n random_token = binascii.hexlify(os.urandom(5)).decode(\"ascii\")\n new_index_name = \"{}-{}\".format(index_base, random_token)\n doc_types = config.registry.get(\"search.doc_types\", set())\n+ shards = config.registry.get(\"elasticsearch.shards\", 1)\n \n # Create the new index with zero replicas and index refreshes disabled\n # while we are bulk indexing.\n@@ -77,10 +78,11 @@\n new_index_name,\n doc_types,\n using=client,\n- shards=config.registry.get(\"elasticsearch.shards\", 1),\n+ shards=shards,\n replicas=0,\n interval=\"-1\",\n )\n+ new_index.create(wait_for_active_shards=shards)\n \n # From this point on, if any error occurs, we want to be able to delete our\n # in progress index.\n", "issue": "Search by topic broken ?\nBrowsing packages, then choosing the Internet / WWW9HTTP / Browser topics gives no result:\r\n\r\nhttps://pypi.org/search/?q=&o=&c=Topic+%3A%3A+Internet+%3A%3A+WWW%2FHTTP+%3A%3A+Browsers\r\n\r\nThere should be at least the [mechanoid package](https://pypi.org/project/mechanoid/)\r\n\r\nUsing firefox 50.1:0 on Ubuntu 16.04\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport binascii\nimport os\n\nimport click\n\nfrom elasticsearch.helpers import parallel_bulk\nfrom sqlalchemy.orm import lazyload, joinedload, load_only\n\nfrom warehouse.cli.search import search\nfrom warehouse.db import Session\nfrom warehouse.packaging.models import Release, Project\nfrom warehouse.packaging.search import Project as ProjectDocType\nfrom warehouse.search import get_index\nfrom warehouse.utils.db import windowed_query\n\n\ndef _project_docs(db):\n releases = (\n db.query(Release)\n .options(load_only(\n \"summary\", \"description\", \"author\",\n \"author_email\", \"maintainer\", \"maintainer_email\",\n \"home_page\", \"download_url\", \"keywords\", \"platform\",\n \"created\"))\n .options(lazyload(\"*\"),\n (joinedload(Release.project)\n .load_only(\"normalized_name\", \"name\")\n .joinedload(Project.releases)\n .load_only(\"version\", \"is_prerelease\")),\n joinedload(Release._classifiers).load_only(\"classifier\"))\n .distinct(Release.name)\n .order_by(Release.name, Release._pypi_ordering.desc())\n )\n for release in windowed_query(releases, Release.name, 1000):\n p = ProjectDocType.from_db(release)\n p.full_clean()\n yield p.to_dict(include_meta=True)\n\n\[email protected]()\[email protected]_obj\ndef reindex(config, **kwargs):\n \"\"\"\n Recreate the Search Index.\n \"\"\"\n client = config.registry[\"elasticsearch.client\"]\n db = Session(bind=config.registry[\"sqlalchemy.engine\"])\n number_of_replicas = config.registry.get(\"elasticsearch.replicas\", 0)\n refresh_interval = config.registry.get(\"elasticsearch.interval\", \"1s\")\n\n # We use a randomly named index so that we can do a zero downtime reindex.\n # Essentially we'll use a randomly named 
index which we will use until all\n # of the data has been reindexed, at which point we'll point an alias at\n # our randomly named index, and then delete the old randomly named index.\n\n # Create the new index and associate all of our doc types with it.\n index_base = config.registry[\"elasticsearch.index\"]\n random_token = binascii.hexlify(os.urandom(5)).decode(\"ascii\")\n new_index_name = \"{}-{}\".format(index_base, random_token)\n doc_types = config.registry.get(\"search.doc_types\", set())\n\n # Create the new index with zero replicas and index refreshes disabled\n # while we are bulk indexing.\n new_index = get_index(\n new_index_name,\n doc_types,\n using=client,\n shards=config.registry.get(\"elasticsearch.shards\", 1),\n replicas=0,\n interval=\"-1\",\n )\n\n # From this point on, if any error occurs, we want to be able to delete our\n # in progress index.\n try:\n db.execute(\"SET statement_timeout = '600s'\")\n\n for _ in parallel_bulk(client, _project_docs(db)):\n pass\n except:\n new_index.delete()\n raise\n finally:\n db.rollback()\n db.close()\n\n # Now that we've finished indexing all of our data we can optimize it and\n # update the replicas and refresh intervals.\n client.indices.forcemerge(index=new_index_name)\n client.indices.put_settings(\n index=new_index_name,\n body={\n \"index\": {\n \"number_of_replicas\": number_of_replicas,\n \"refresh_interval\": refresh_interval,\n }\n }\n )\n\n # Point the alias at our new randomly named index and delete the old index.\n if client.indices.exists_alias(name=index_base):\n to_delete = set()\n actions = []\n for name in client.indices.get_alias(name=index_base):\n to_delete.add(name)\n actions.append({\"remove\": {\"index\": name, \"alias\": index_base}})\n actions.append({\"add\": {\"index\": new_index_name, \"alias\": index_base}})\n client.indices.update_aliases({\"actions\": actions})\n client.indices.delete(\",\".join(to_delete))\n else:\n client.indices.put_alias(name=index_base, index=new_index_name)\n", "path": "warehouse/cli/search/reindex.py"}]}
| 1,914 | 243 |
gh_patches_debug_29581
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2709
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Food API: AttributeError: 'Event' object has no attribute 'member_registration'
Sentry Issue: [CONCREXIT-KG](https://sentry.io/organizations/thalia/issues/3768057031/?referrer=github_integration)
```
AttributeError: 'Event' object has no attribute 'member_registration'
(11 additional frame(s) were not displayed)
...
File "rest_framework/serializers.py", line 253, in data
self._data = self.to_representation(self.instance)
File "rest_framework/serializers.py", line 522, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "rest_framework/serializers.py", line 522, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "rest_framework/fields.py", line 1838, in to_representation
return method(value)
File "events/api/v2/serializers/event.py", line 83, in _registration_status
if self.context["request"].member and len(instance.member_registration) > 0:
```
</issue>
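A hedged sketch (reusing the relation names that appear elsewhere in the app; this is illustrative, not necessarily the merged fix) of the prefetch the serializer appears to expect, attaching the requesting member's registrations to each event as `member_registration`:

```python
from django.db.models import Prefetch

from events.models.event_registration import EventRegistration
from pizzas.models import FoodEvent

# Expose the requesting member's registrations on each event under the
# attribute the serializer reads, so `member_registration` always exists.
def food_events_for(member):
    return FoodEvent.objects.prefetch_related(
        Prefetch(
            "event__eventregistration_set",
            queryset=EventRegistration.objects.filter(member=member),
            to_attr="member_registration",
        )
    )
```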
<code>
[start of website/pizzas/api/v2/views.py]
1 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
2 from rest_framework import filters as framework_filters
3 from rest_framework import status
4 from rest_framework.generics import (
5 CreateAPIView,
6 DestroyAPIView,
7 ListAPIView,
8 RetrieveAPIView,
9 UpdateAPIView,
10 get_object_or_404,
11 )
12 from rest_framework.response import Response
13
14 from payments.exceptions import PaymentError
15 from payments.services import delete_payment
16 from pizzas.api.v2 import filters
17 from pizzas.api.v2.serializers import (
18 FoodOrderCreateSerializer,
19 FoodOrderSerializer,
20 FoodOrderUpdateSerializer,
21 ProductSerializer,
22 )
23 from pizzas.api.v2.serializers.food_event import FoodEventSerializer
24 from pizzas.models import FoodEvent, FoodOrder, Product
25 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
26
27
28 class FoodEventListView(ListAPIView):
29 """Returns an overview of all food events."""
30
31 serializer_class = FoodEventSerializer
32 queryset = FoodEvent.objects.all()
33 filter_backends = (
34 framework_filters.OrderingFilter,
35 filters.FoodEventDateFilterBackend,
36 )
37 ordering_fields = ("start", "end")
38 permission_classes = [
39 IsAuthenticatedOrTokenHasScope,
40 ]
41 required_scopes = ["food:read"]
42
43
44 class FoodEventDetailView(RetrieveAPIView):
45 """Returns one single food event."""
46
47 serializer_class = FoodEventSerializer
48 queryset = FoodEvent.objects.all()
49 permission_classes = [
50 IsAuthenticatedOrTokenHasScope,
51 ]
52 required_scopes = ["food:read"]
53
54
55 class FoodEventProductsListView(ListAPIView):
56 """Returns an overview of all products."""
57
58 serializer_class = ProductSerializer
59 queryset = Product.available_products.all()
60 filter_backends = (framework_filters.SearchFilter,)
61 search_fields = ("name",)
62 permission_classes = [
63 IsAuthenticatedOrTokenHasScope,
64 ]
65 required_scopes = ["food:read"]
66
67
68 class FoodEventOrderDetailView(
69 RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView
70 ):
71 """Returns details of a food order."""
72
73 permission_classes = [
74 IsAuthenticatedOrTokenHasScopeForMethod,
75 ]
76 required_scopes_per_method = {
77 "GET": ["food:read"],
78 "POST": ["food:order"],
79 "PUT": ["food:order"],
80 "PATCH": ["food:order"],
81 "DELETE": ["food:order"],
82 }
83
84 def get_serializer_class(self):
85 if self.request.method.lower() == "get":
86 return FoodOrderSerializer
87 if self.request.method.lower() == "post":
88 return FoodOrderCreateSerializer
89 return FoodOrderUpdateSerializer
90
91 def get_queryset(self):
92 return FoodOrder.objects.filter(food_event=self.food_event)
93
94 def get_object(self):
95 queryset = self.filter_queryset(self.get_queryset())
96 obj = get_object_or_404(queryset, member=self.request.member)
97
98 # May raise a permission denied
99 self.check_object_permissions(self.request, obj)
100
101 return obj
102
103 def dispatch(self, request, *args, **kwargs):
104 self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get("pk"))
105 try:
106 return super().dispatch(request, *args, **kwargs)
107 except PaymentError as e:
108 return Response(
109 str(e),
110 status=status.HTTP_403_FORBIDDEN,
111 )
112
113 def update(self, request, *args, **kwargs):
114 instance = self.get_object()
115
116 if instance.payment:
117 delete_payment(instance, member=request.member, ignore_change_window=True)
118
119 super().update(request, *args, **kwargs)
120
121 return Response(
122 FoodOrderSerializer(instance, context=self.get_serializer_context()).data
123 )
124
125 def create(self, request, *args, **kwargs):
126 serializer = self.get_serializer(data=request.data)
127 serializer.is_valid(raise_exception=True)
128 self.perform_create(serializer)
129 return Response(
130 FoodOrderSerializer(
131 serializer.instance, context=self.get_serializer_context()
132 ).data,
133 status=status.HTTP_201_CREATED,
134 )
135
[end of website/pizzas/api/v2/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py
--- a/website/pizzas/api/v2/views.py
+++ b/website/pizzas/api/v2/views.py
@@ -1,3 +1,5 @@
+from django.db.models import Prefetch
+
from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
from rest_framework import filters as framework_filters
from rest_framework import status
@@ -11,6 +13,7 @@
)
from rest_framework.response import Response
+from events.models.event_registration import EventRegistration
from payments.exceptions import PaymentError
from payments.services import delete_payment
from pizzas.api.v2 import filters
@@ -45,12 +48,25 @@
"""Returns one single food event."""
serializer_class = FoodEventSerializer
- queryset = FoodEvent.objects.all()
permission_classes = [
IsAuthenticatedOrTokenHasScope,
]
required_scopes = ["food:read"]
+ def get_queryset(self):
+ events = FoodEvent.objects.all()
+ if self.request.member:
+ events = events.prefetch_related(
+ Prefetch(
+ "event__eventregistration_set",
+ to_attr="member_registration",
+ queryset=EventRegistration.objects.filter(
+ member=self.request.member
+ ).select_properties("queue_position"),
+ )
+ )
+ return events
+
class FoodEventProductsListView(ListAPIView):
"""Returns an overview of all products."""
|
{"golden_diff": "diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py\n--- a/website/pizzas/api/v2/views.py\n+++ b/website/pizzas/api/v2/views.py\n@@ -1,3 +1,5 @@\n+from django.db.models import Prefetch\n+\n from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\n from rest_framework import filters as framework_filters\n from rest_framework import status\n@@ -11,6 +13,7 @@\n )\n from rest_framework.response import Response\n \n+from events.models.event_registration import EventRegistration\n from payments.exceptions import PaymentError\n from payments.services import delete_payment\n from pizzas.api.v2 import filters\n@@ -45,12 +48,25 @@\n \"\"\"Returns one single food event.\"\"\"\n \n serializer_class = FoodEventSerializer\n- queryset = FoodEvent.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n \n+ def get_queryset(self):\n+ events = FoodEvent.objects.all()\n+ if self.request.member:\n+ events = events.prefetch_related(\n+ Prefetch(\n+ \"event__eventregistration_set\",\n+ to_attr=\"member_registration\",\n+ queryset=EventRegistration.objects.filter(\n+ member=self.request.member\n+ ).select_properties(\"queue_position\"),\n+ )\n+ )\n+ return events\n+\n \n class FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n", "issue": "Food API: AttributeError: 'Event' object has no attribute 'member_registration'\nSentry Issue: [CONCREXIT-KG](https://sentry.io/organizations/thalia/issues/3768057031/?referrer=github_integration)\n\n```\nAttributeError: 'Event' object has no attribute 'member_registration'\n(11 additional frame(s) were not displayed)\n...\n File \"rest_framework/serializers.py\", line 253, in data\n self._data = self.to_representation(self.instance)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/fields.py\", line 1838, in to_representation\n return method(value)\n File \"events/api/v2/serializers/event.py\", line 83, in _registration_status\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n```\n", "before_files": [{"content": "from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import filters as framework_filters\nfrom rest_framework import status\nfrom rest_framework.generics import (\n CreateAPIView,\n DestroyAPIView,\n ListAPIView,\n RetrieveAPIView,\n UpdateAPIView,\n get_object_or_404,\n)\nfrom rest_framework.response import Response\n\nfrom payments.exceptions import PaymentError\nfrom payments.services import delete_payment\nfrom pizzas.api.v2 import filters\nfrom pizzas.api.v2.serializers import (\n FoodOrderCreateSerializer,\n FoodOrderSerializer,\n FoodOrderUpdateSerializer,\n ProductSerializer,\n)\nfrom pizzas.api.v2.serializers.food_event import FoodEventSerializer\nfrom pizzas.models import FoodEvent, FoodOrder, Product\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass FoodEventListView(ListAPIView):\n \"\"\"Returns an overview of all food events.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n )\n ordering_fields 
= (\"start\", \"end\")\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventDetailView(RetrieveAPIView):\n \"\"\"Returns one single food event.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n\n serializer_class = ProductSerializer\n queryset = Product.available_products.all()\n filter_backends = (framework_filters.SearchFilter,)\n search_fields = (\"name\",)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventOrderDetailView(\n RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView\n):\n \"\"\"Returns details of a food order.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n ]\n required_scopes_per_method = {\n \"GET\": [\"food:read\"],\n \"POST\": [\"food:order\"],\n \"PUT\": [\"food:order\"],\n \"PATCH\": [\"food:order\"],\n \"DELETE\": [\"food:order\"],\n }\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"get\":\n return FoodOrderSerializer\n if self.request.method.lower() == \"post\":\n return FoodOrderCreateSerializer\n return FoodOrderUpdateSerializer\n\n def get_queryset(self):\n return FoodOrder.objects.filter(food_event=self.food_event)\n\n def get_object(self):\n queryset = self.filter_queryset(self.get_queryset())\n obj = get_object_or_404(queryset, member=self.request.member)\n\n # May raise a permission denied\n self.check_object_permissions(self.request, obj)\n\n return obj\n\n def dispatch(self, request, *args, **kwargs):\n self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get(\"pk\"))\n try:\n return super().dispatch(request, *args, **kwargs)\n except PaymentError as e:\n return Response(\n str(e),\n status=status.HTTP_403_FORBIDDEN,\n )\n\n def update(self, request, *args, **kwargs):\n instance = self.get_object()\n\n if instance.payment:\n delete_payment(instance, member=request.member, ignore_change_window=True)\n\n super().update(request, *args, **kwargs)\n\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return Response(\n FoodOrderSerializer(\n serializer.instance, context=self.get_serializer_context()\n ).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/pizzas/api/v2/views.py"}]}
| 1,954 | 325 |
gh_patches_debug_19964
|
rasdani/github-patches
|
git_diff
|
matrix-org__synapse-6151
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Uploading a too large file: wrong error code
When uploading a too large file, synapse responds with a 413 (which is the correct status code) but with errcode M_UNKNOWN. According to the [spec](https://matrix.org/docs/spec/client_server/latest#post-matrix-media-r0-upload), it should be "M_TOO_LARGE".
Received responseCode: 413
Received responseBody:
`{"errcode":"M_UNKNOWN","error":"Upload request body is too large"}`
</issue>
<code>
[start of synapse/rest/media/v1/upload_resource.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2014-2016 OpenMarket Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import logging
17
18 from twisted.web.server import NOT_DONE_YET
19
20 from synapse.api.errors import SynapseError
21 from synapse.http.server import (
22 DirectServeResource,
23 respond_with_json,
24 wrap_json_request_handler,
25 )
26 from synapse.http.servlet import parse_string
27
28 logger = logging.getLogger(__name__)
29
30
31 class UploadResource(DirectServeResource):
32 isLeaf = True
33
34 def __init__(self, hs, media_repo):
35 super().__init__()
36
37 self.media_repo = media_repo
38 self.filepaths = media_repo.filepaths
39 self.store = hs.get_datastore()
40 self.clock = hs.get_clock()
41 self.server_name = hs.hostname
42 self.auth = hs.get_auth()
43 self.max_upload_size = hs.config.max_upload_size
44 self.clock = hs.get_clock()
45
46 def render_OPTIONS(self, request):
47 respond_with_json(request, 200, {}, send_cors=True)
48 return NOT_DONE_YET
49
50 @wrap_json_request_handler
51 async def _async_render_POST(self, request):
52 requester = await self.auth.get_user_by_req(request)
53 # TODO: The checks here are a bit late. The content will have
54 # already been uploaded to a tmp file at this point
55 content_length = request.getHeader(b"Content-Length").decode("ascii")
56 if content_length is None:
57 raise SynapseError(msg="Request must specify a Content-Length", code=400)
58 if int(content_length) > self.max_upload_size:
59 raise SynapseError(msg="Upload request body is too large", code=413)
60
61 upload_name = parse_string(request, b"filename", encoding=None)
62 if upload_name:
63 try:
64 upload_name = upload_name.decode("utf8")
65 except UnicodeDecodeError:
66 raise SynapseError(
67 msg="Invalid UTF-8 filename parameter: %r" % (upload_name), code=400
68 )
69
70 headers = request.requestHeaders
71
72 if headers.hasHeader(b"Content-Type"):
73 media_type = headers.getRawHeaders(b"Content-Type")[0].decode("ascii")
74 else:
75 raise SynapseError(msg="Upload request missing 'Content-Type'", code=400)
76
77 # if headers.hasHeader(b"Content-Disposition"):
78 # disposition = headers.getRawHeaders(b"Content-Disposition")[0]
79 # TODO(markjh): parse content-dispostion
80
81 content_uri = await self.media_repo.create_content(
82 media_type, upload_name, request.content, content_length, requester.user
83 )
84
85 logger.info("Uploaded content with URI %r", content_uri)
86
87 respond_with_json(request, 200, {"content_uri": content_uri}, send_cors=True)
88
[end of synapse/rest/media/v1/upload_resource.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/synapse/rest/media/v1/upload_resource.py b/synapse/rest/media/v1/upload_resource.py
--- a/synapse/rest/media/v1/upload_resource.py
+++ b/synapse/rest/media/v1/upload_resource.py
@@ -17,7 +17,7 @@
from twisted.web.server import NOT_DONE_YET
-from synapse.api.errors import SynapseError
+from synapse.api.errors import Codes, SynapseError
from synapse.http.server import (
DirectServeResource,
respond_with_json,
@@ -56,7 +56,11 @@
if content_length is None:
raise SynapseError(msg="Request must specify a Content-Length", code=400)
if int(content_length) > self.max_upload_size:
- raise SynapseError(msg="Upload request body is too large", code=413)
+ raise SynapseError(
+ msg="Upload request body is too large",
+ code=413,
+ errcode=Codes.TOO_LARGE,
+ )
upload_name = parse_string(request, b"filename", encoding=None)
if upload_name:
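
For readers who want to see the effect of the change without running a homeserver, here is a minimal stand-in sketch of the size check and the JSON body a client would then receive. It is not Synapse's actual implementation; the `MAX_UPLOAD_SIZE` constant and the helper name are illustrative assumptions (in Synapse the limit comes from the homeserver config, seen as `hs.config.max_upload_size` in the file above).

```python
from typing import Optional

# Illustrative stand-in only: mirrors the patched behaviour of raising
# SynapseError(code=413, errcode=Codes.TOO_LARGE) shown in the diff above.
MAX_UPLOAD_SIZE = 50 * 1024 * 1024  # assumed limit; Synapse reads it from config


def check_upload_size(content_length: int) -> Optional[dict]:
    """Return the error body an oversized upload should produce, else None."""
    if content_length > MAX_UPLOAD_SIZE:
        # Spec-compliant errcode instead of the generic M_UNKNOWN
        return {"errcode": "M_TOO_LARGE", "error": "Upload request body is too large"}
    return None


if __name__ == "__main__":
    print(check_upload_size(MAX_UPLOAD_SIZE + 1))
    # -> {'errcode': 'M_TOO_LARGE', 'error': 'Upload request body is too large'}
```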
|
{"golden_diff": "diff --git a/synapse/rest/media/v1/upload_resource.py b/synapse/rest/media/v1/upload_resource.py\n--- a/synapse/rest/media/v1/upload_resource.py\n+++ b/synapse/rest/media/v1/upload_resource.py\n@@ -17,7 +17,7 @@\n \n from twisted.web.server import NOT_DONE_YET\n \n-from synapse.api.errors import SynapseError\n+from synapse.api.errors import Codes, SynapseError\n from synapse.http.server import (\n DirectServeResource,\n respond_with_json,\n@@ -56,7 +56,11 @@\n if content_length is None:\n raise SynapseError(msg=\"Request must specify a Content-Length\", code=400)\n if int(content_length) > self.max_upload_size:\n- raise SynapseError(msg=\"Upload request body is too large\", code=413)\n+ raise SynapseError(\n+ msg=\"Upload request body is too large\",\n+ code=413,\n+ errcode=Codes.TOO_LARGE,\n+ )\n \n upload_name = parse_string(request, b\"filename\", encoding=None)\n if upload_name:\n", "issue": "Uploading a too large file: wrong error code\nWhen uploading a too large file, synapse responds with a 413 (OK) but with errcode M_UNKNOWN. According to the [spec](https://matrix.org/docs/spec/client_server/latest#post-matrix-media-r0-upload), it should be \"M_TOO_LARGE\"\r\n\r\nReceived responseCode: 413\r\nReceived responseBody:\r\n`{\"errcode\":\"M_UNKNOWN\",\"error\":\"Upload request body is too large\"}`\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014-2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\n\nfrom twisted.web.server import NOT_DONE_YET\n\nfrom synapse.api.errors import SynapseError\nfrom synapse.http.server import (\n DirectServeResource,\n respond_with_json,\n wrap_json_request_handler,\n)\nfrom synapse.http.servlet import parse_string\n\nlogger = logging.getLogger(__name__)\n\n\nclass UploadResource(DirectServeResource):\n isLeaf = True\n\n def __init__(self, hs, media_repo):\n super().__init__()\n\n self.media_repo = media_repo\n self.filepaths = media_repo.filepaths\n self.store = hs.get_datastore()\n self.clock = hs.get_clock()\n self.server_name = hs.hostname\n self.auth = hs.get_auth()\n self.max_upload_size = hs.config.max_upload_size\n self.clock = hs.get_clock()\n\n def render_OPTIONS(self, request):\n respond_with_json(request, 200, {}, send_cors=True)\n return NOT_DONE_YET\n\n @wrap_json_request_handler\n async def _async_render_POST(self, request):\n requester = await self.auth.get_user_by_req(request)\n # TODO: The checks here are a bit late. 
The content will have\n # already been uploaded to a tmp file at this point\n content_length = request.getHeader(b\"Content-Length\").decode(\"ascii\")\n if content_length is None:\n raise SynapseError(msg=\"Request must specify a Content-Length\", code=400)\n if int(content_length) > self.max_upload_size:\n raise SynapseError(msg=\"Upload request body is too large\", code=413)\n\n upload_name = parse_string(request, b\"filename\", encoding=None)\n if upload_name:\n try:\n upload_name = upload_name.decode(\"utf8\")\n except UnicodeDecodeError:\n raise SynapseError(\n msg=\"Invalid UTF-8 filename parameter: %r\" % (upload_name), code=400\n )\n\n headers = request.requestHeaders\n\n if headers.hasHeader(b\"Content-Type\"):\n media_type = headers.getRawHeaders(b\"Content-Type\")[0].decode(\"ascii\")\n else:\n raise SynapseError(msg=\"Upload request missing 'Content-Type'\", code=400)\n\n # if headers.hasHeader(b\"Content-Disposition\"):\n # disposition = headers.getRawHeaders(b\"Content-Disposition\")[0]\n # TODO(markjh): parse content-dispostion\n\n content_uri = await self.media_repo.create_content(\n media_type, upload_name, request.content, content_length, requester.user\n )\n\n logger.info(\"Uploaded content with URI %r\", content_uri)\n\n respond_with_json(request, 200, {\"content_uri\": content_uri}, send_cors=True)\n", "path": "synapse/rest/media/v1/upload_resource.py"}]}
| 1,538 | 248 |
gh_patches_debug_5760
|
rasdani/github-patches
|
git_diff
|
NVIDIA__NVFlare-363
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Users are not warned when running poc command
</issue>
<code>
[start of nvflare/lighter/poc.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import argparse
16 import os
17 import pathlib
18 import shutil
19
20
21 def clone_client(num_clients: int):
22 current_path = os.getcwd()
23 poc_folder = os.path.join(current_path, "poc")
24 src_folder = os.path.join(poc_folder, "client")
25 for index in range(1, num_clients + 1):
26 dst_folder = os.path.join(poc_folder, f"site-{index}")
27 shutil.copytree(src_folder, dst_folder)
28 start_sh = open(os.path.join(dst_folder, "startup", "start.sh"), "rt")
29 content = start_sh.read()
30 start_sh.close()
31 content = content.replace("NNN", f"{index}")
32 with open(os.path.join(dst_folder, "startup", "start.sh"), "wt") as f:
33 f.write(content)
34 shutil.rmtree(src_folder)
35
36
37 def main():
38 parser = argparse.ArgumentParser()
39 parser.add_argument("-n", "--num_clients", type=int, default=1, help="number of client folders to create")
40
41 args = parser.parse_args()
42
43 file_dir_path = pathlib.Path(__file__).parent.absolute()
44 poc_zip_path = file_dir_path.parent / "poc.zip"
45 poc_folder_path = file_dir_path.parent / "poc"
46 answer = input("This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) ")
47 if answer.strip().upper() == "Y":
48 dest_poc_folder = os.path.join(os.getcwd(), "poc")
49 shutil.rmtree(dest_poc_folder, ignore_errors=True)
50 try:
51 shutil.unpack_archive(poc_zip_path)
52 except shutil.ReadError:
53 print(f"poc.zip not found at {poc_zip_path}, try to use template poc folder")
54 try:
55 shutil.copytree(poc_folder_path, dest_poc_folder)
56 except BaseException:
57 print(f"Unable to copy poc folder from {poc_folder_path}. Exit")
58 exit(1)
59 for root, dirs, files in os.walk(dest_poc_folder):
60 for file in files:
61 if file.endswith(".sh"):
62 os.chmod(os.path.join(root, file), 0o755)
63 clone_client(args.num_clients)
64 print("Successfully creating poc folder. Please read poc/Readme.rst for user guide.")
65
66
67 if __name__ == "__main__":
68 main()
69
[end of nvflare/lighter/poc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nvflare/lighter/poc.py b/nvflare/lighter/poc.py
--- a/nvflare/lighter/poc.py
+++ b/nvflare/lighter/poc.py
@@ -62,6 +62,7 @@
os.chmod(os.path.join(root, file), 0o755)
clone_client(args.num_clients)
print("Successfully creating poc folder. Please read poc/Readme.rst for user guide.")
+ print("\n\nWARNING:\n******* Files generated by this poc command are NOT intended for production environments.")
if __name__ == "__main__":
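
As a brief illustration of the intent of the patch (warn interactive users that the generated files are for local proof-of-concept use only), here is a self-contained sketch; the prompt text and function name are illustrative and not part of NVFlare's API.

```python
# Illustrative sketch only: mirrors the extra WARNING print added by the patch above.
def create_poc_with_warning() -> None:
    answer = input("This will delete the poc folder and create a new one. Proceed? (y/N) ")
    if answer.strip().upper() != "Y":
        return
    # ... the real command would unpack/copy the poc folder here ...
    print("Successfully created poc folder. Please read poc/Readme.rst for the user guide.")
    print("\nWARNING:\n******* Files generated by this poc command are NOT intended "
          "for production environments.")


if __name__ == "__main__":
    create_poc_with_warning()
```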
|
{"golden_diff": "diff --git a/nvflare/lighter/poc.py b/nvflare/lighter/poc.py\n--- a/nvflare/lighter/poc.py\n+++ b/nvflare/lighter/poc.py\n@@ -62,6 +62,7 @@\n os.chmod(os.path.join(root, file), 0o755)\n clone_client(args.num_clients)\n print(\"Successfully creating poc folder. Please read poc/Readme.rst for user guide.\")\n+ print(\"\\n\\nWARNING:\\n******* Files generated by this poc command are NOT intended for production environments.\")\n \n \n if __name__ == \"__main__\":\n", "issue": "Users are not warned when running poc command\n\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport os\nimport pathlib\nimport shutil\n\n\ndef clone_client(num_clients: int):\n current_path = os.getcwd()\n poc_folder = os.path.join(current_path, \"poc\")\n src_folder = os.path.join(poc_folder, \"client\")\n for index in range(1, num_clients + 1):\n dst_folder = os.path.join(poc_folder, f\"site-{index}\")\n shutil.copytree(src_folder, dst_folder)\n start_sh = open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"rt\")\n content = start_sh.read()\n start_sh.close()\n content = content.replace(\"NNN\", f\"{index}\")\n with open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"wt\") as f:\n f.write(content)\n shutil.rmtree(src_folder)\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-n\", \"--num_clients\", type=int, default=1, help=\"number of client folders to create\")\n\n args = parser.parse_args()\n\n file_dir_path = pathlib.Path(__file__).parent.absolute()\n poc_zip_path = file_dir_path.parent / \"poc.zip\"\n poc_folder_path = file_dir_path.parent / \"poc\"\n answer = input(\"This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) \")\n if answer.strip().upper() == \"Y\":\n dest_poc_folder = os.path.join(os.getcwd(), \"poc\")\n shutil.rmtree(dest_poc_folder, ignore_errors=True)\n try:\n shutil.unpack_archive(poc_zip_path)\n except shutil.ReadError:\n print(f\"poc.zip not found at {poc_zip_path}, try to use template poc folder\")\n try:\n shutil.copytree(poc_folder_path, dest_poc_folder)\n except BaseException:\n print(f\"Unable to copy poc folder from {poc_folder_path}. Exit\")\n exit(1)\n for root, dirs, files in os.walk(dest_poc_folder):\n for file in files:\n if file.endswith(\".sh\"):\n os.chmod(os.path.join(root, file), 0o755)\n clone_client(args.num_clients)\n print(\"Successfully creating poc folder. Please read poc/Readme.rst for user guide.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/lighter/poc.py"}]}
| 1,331 | 134 |
gh_patches_debug_14571
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-1811
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
wagtailsearch.views.frontend needs updating to use non-deprecated search method
Currently using Page.search, which is deprecated in Wagtail 1.2 - should be updated to use the QuerySet-based mechanism instead.
https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailsearch/views/frontend.py#L41
</issue>
<code>
[start of wagtail/wagtailsearch/views/frontend.py]
1 from django.conf import settings
2 from django.shortcuts import render
3 from django.http import JsonResponse
4 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
5
6 from wagtail.wagtailcore import models
7 from wagtail.wagtailsearch.models import Query
8
9
10 def search(
11 request,
12 template=None,
13 template_ajax=None,
14 results_per_page=10,
15 use_json=False,
16 json_attrs=['title', 'url'],
17 show_unpublished=False,
18 search_title_only=False,
19 extra_filters={},
20 path=None):
21
22 # Get default templates
23 if template is None:
24 if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE'):
25 template = settings.WAGTAILSEARCH_RESULTS_TEMPLATE
26 else:
27 template = 'wagtailsearch/search_results.html'
28
29 if template_ajax is None:
30 if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX'):
31 template_ajax = settings.WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX
32 else:
33 template_ajax = template
34
35 # Get query string and page from GET paramters
36 query_string = request.GET.get('q', '')
37 page = request.GET.get('page', request.GET.get('p', 1))
38
39 # Search
40 if query_string != '':
41 search_results = models.Page.search(
42 query_string,
43 show_unpublished=show_unpublished,
44 search_title_only=search_title_only,
45 extra_filters=extra_filters,
46 path=path if path else request.site.root_page.path
47 )
48
49 # Get query object
50 query = Query.get(query_string)
51
52 # Add hit
53 query.add_hit()
54
55 # Pagination
56 paginator = Paginator(search_results, results_per_page)
57 try:
58 search_results = paginator.page(page)
59 except PageNotAnInteger:
60 search_results = paginator.page(1)
61 except EmptyPage:
62 search_results = paginator.page(paginator.num_pages)
63 else:
64 query = None
65 search_results = None
66
67 if use_json:
68 # Return a json response
69 if search_results:
70 search_results_json = []
71 for result in search_results:
72 result_specific = result.specific
73
74 search_results_json.append(dict(
75 (attr, getattr(result_specific, attr))
76 for attr in json_attrs
77 if hasattr(result_specific, attr)
78 ))
79
80 return JsonResponse(search_results_json, safe=False)
81 else:
82 return JsonResponse([], safe=False)
83 else: # Render a template
84 if request.is_ajax() and template_ajax:
85 template = template_ajax
86
87 return render(request, template, dict(
88 query_string=query_string,
89 search_results=search_results,
90 is_ajax=request.is_ajax(),
91 query=query
92 ))
93
[end of wagtail/wagtailsearch/views/frontend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/wagtailsearch/views/frontend.py b/wagtail/wagtailsearch/views/frontend.py
--- a/wagtail/wagtailsearch/views/frontend.py
+++ b/wagtail/wagtailsearch/views/frontend.py
@@ -38,13 +38,18 @@
# Search
if query_string != '':
- search_results = models.Page.search(
- query_string,
- show_unpublished=show_unpublished,
- search_title_only=search_title_only,
- extra_filters=extra_filters,
- path=path if path else request.site.root_page.path
- )
+ pages = models.Page.objects.filter(path__startswith=(path or request.site.root_page.path))
+
+ if not show_unpublished:
+ pages = pages.live()
+
+ if extra_filters:
+ pages = pages.filter(**extra_filters)
+
+ if search_title_only:
+ search_results = pages.search(query_string, fields=['title'])
+ else:
+ search_results = pages.search(query_string)
# Get query object
query = Query.get(query_string)
|
{"golden_diff": "diff --git a/wagtail/wagtailsearch/views/frontend.py b/wagtail/wagtailsearch/views/frontend.py\n--- a/wagtail/wagtailsearch/views/frontend.py\n+++ b/wagtail/wagtailsearch/views/frontend.py\n@@ -38,13 +38,18 @@\n \n # Search\n if query_string != '':\n- search_results = models.Page.search(\n- query_string,\n- show_unpublished=show_unpublished,\n- search_title_only=search_title_only,\n- extra_filters=extra_filters,\n- path=path if path else request.site.root_page.path\n- )\n+ pages = models.Page.objects.filter(path__startswith=(path or request.site.root_page.path))\n+\n+ if not show_unpublished:\n+ pages = pages.live()\n+\n+ if extra_filters:\n+ pages = pages.filter(**extra_filters)\n+\n+ if search_title_only:\n+ search_results = pages.search(query_string, fields=['title'])\n+ else:\n+ search_results = pages.search(query_string)\n \n # Get query object\n query = Query.get(query_string)\n", "issue": "wagtailsearch.views.frontend needs updating to use non-deprecated search method\nCurrently using Page.search, which is deprecated in Wagtail 1.2 - should be updated to use the QuerySet-based mechanism instead.\n\nhttps://github.com/torchbox/wagtail/blob/master/wagtail/wagtailsearch/views/frontend.py#L41\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render\nfrom django.http import JsonResponse\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\n\nfrom wagtail.wagtailcore import models\nfrom wagtail.wagtailsearch.models import Query\n\n\ndef search(\n request,\n template=None,\n template_ajax=None,\n results_per_page=10,\n use_json=False,\n json_attrs=['title', 'url'],\n show_unpublished=False,\n search_title_only=False,\n extra_filters={},\n path=None):\n\n # Get default templates\n if template is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE'):\n template = settings.WAGTAILSEARCH_RESULTS_TEMPLATE\n else:\n template = 'wagtailsearch/search_results.html'\n\n if template_ajax is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX'):\n template_ajax = settings.WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX\n else:\n template_ajax = template\n\n # Get query string and page from GET paramters\n query_string = request.GET.get('q', '')\n page = request.GET.get('page', request.GET.get('p', 1))\n\n # Search\n if query_string != '':\n search_results = models.Page.search(\n query_string,\n show_unpublished=show_unpublished,\n search_title_only=search_title_only,\n extra_filters=extra_filters,\n path=path if path else request.site.root_page.path\n )\n\n # Get query object\n query = Query.get(query_string)\n\n # Add hit\n query.add_hit()\n\n # Pagination\n paginator = Paginator(search_results, results_per_page)\n try:\n search_results = paginator.page(page)\n except PageNotAnInteger:\n search_results = paginator.page(1)\n except EmptyPage:\n search_results = paginator.page(paginator.num_pages)\n else:\n query = None\n search_results = None\n\n if use_json:\n # Return a json response\n if search_results:\n search_results_json = []\n for result in search_results:\n result_specific = result.specific\n\n search_results_json.append(dict(\n (attr, getattr(result_specific, attr))\n for attr in json_attrs\n if hasattr(result_specific, attr)\n ))\n\n return JsonResponse(search_results_json, safe=False)\n else:\n return JsonResponse([], safe=False)\n else: # Render a template\n if request.is_ajax() and template_ajax:\n template = template_ajax\n\n return render(request, template, dict(\n 
query_string=query_string,\n search_results=search_results,\n is_ajax=request.is_ajax(),\n query=query\n ))\n", "path": "wagtail/wagtailsearch/views/frontend.py"}]}
| 1,363 | 238 |
gh_patches_debug_61667
|
rasdani/github-patches
|
git_diff
|
pallets__click-2714
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs wrongly links PRs and Issues to flask

Environment:
- Python version: N/A
- Click version: N/A
Docs wrongly links PRs and Issues to flask

Environment:
- Python version: N/A
- Click version: N/A
</issue>
<code>
[start of docs/conf.py]
1 from pallets_sphinx_themes import get_version
2 from pallets_sphinx_themes import ProjectLink
3
4 # Project --------------------------------------------------------------
5
6 project = "Click"
7 copyright = "2014 Pallets"
8 author = "Pallets"
9 release, version = get_version("Click")
10
11 # General --------------------------------------------------------------
12
13 default_role = "code"
14 extensions = [
15 "sphinx.ext.autodoc",
16 "sphinx.ext.extlinks",
17 "sphinx.ext.intersphinx",
18 "sphinx_tabs.tabs",
19 "sphinxcontrib.log_cabinet",
20 "pallets_sphinx_themes",
21 ]
22 autodoc_member_order = "bysource"
23 autodoc_typehints = "description"
24 autodoc_preserve_defaults = True
25 extlinks = {
26 "issue": ("https://github.com/pallets/flask/issues/%s", "#%s"),
27 "pr": ("https://github.com/pallets/flask/pull/%s", "#%s"),
28 }
29 intersphinx_mapping = {
30 "python": ("https://docs.python.org/3/", None),
31 }
32
33 # HTML -----------------------------------------------------------------
34
35 html_theme = "click"
36 html_theme_options = {"index_sidebar_logo": False}
37 html_context = {
38 "project_links": [
39 ProjectLink("Donate", "https://palletsprojects.com/donate"),
40 ProjectLink("PyPI Releases", "https://pypi.org/project/click/"),
41 ProjectLink("Source Code", "https://github.com/pallets/click/"),
42 ProjectLink("Issue Tracker", "https://github.com/pallets/click/issues/"),
43 ProjectLink("Chat", "https://discord.gg/pallets"),
44 ]
45 }
46 html_sidebars = {
47 "index": ["project.html", "localtoc.html", "searchbox.html", "ethicalads.html"],
48 "**": ["localtoc.html", "relations.html", "searchbox.html", "ethicalads.html"],
49 }
50 singlehtml_sidebars = {"index": ["project.html", "localtoc.html", "ethicalads.html"]}
51 html_static_path = ["_static"]
52 html_favicon = "_static/click-icon.png"
53 html_logo = "_static/click-logo-sidebar.png"
54 html_title = f"Click Documentation ({version})"
55 html_show_sourcelink = False
56
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -23,8 +23,8 @@
autodoc_typehints = "description"
autodoc_preserve_defaults = True
extlinks = {
- "issue": ("https://github.com/pallets/flask/issues/%s", "#%s"),
- "pr": ("https://github.com/pallets/flask/pull/%s", "#%s"),
+ "issue": ("https://github.com/pallets/click/issues/%s", "#%s"),
+ "pr": ("https://github.com/pallets/click/pull/%s", "#%s"),
}
intersphinx_mapping = {
"python": ("https://docs.python.org/3/", None),
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -23,8 +23,8 @@\n autodoc_typehints = \"description\"\n autodoc_preserve_defaults = True\n extlinks = {\n- \"issue\": (\"https://github.com/pallets/flask/issues/%s\", \"#%s\"),\n- \"pr\": (\"https://github.com/pallets/flask/pull/%s\", \"#%s\"),\n+ \"issue\": (\"https://github.com/pallets/click/issues/%s\", \"#%s\"),\n+ \"pr\": (\"https://github.com/pallets/click/pull/%s\", \"#%s\"),\n }\n intersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n", "issue": "Docs wrongly links PRs and Issues to flask\n\r\n\r\nEnvironment:\r\n\r\n- Python version: N/A\r\n- Click version: N/A\r\n\nDocs wrongly links PRs and Issues to flask\n\r\n\r\nEnvironment:\r\n\r\n- Python version: N/A\r\n- Click version: N/A\r\n\n", "before_files": [{"content": "from pallets_sphinx_themes import get_version\nfrom pallets_sphinx_themes import ProjectLink\n\n# Project --------------------------------------------------------------\n\nproject = \"Click\"\ncopyright = \"2014 Pallets\"\nauthor = \"Pallets\"\nrelease, version = get_version(\"Click\")\n\n# General --------------------------------------------------------------\n\ndefault_role = \"code\"\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.extlinks\",\n \"sphinx.ext.intersphinx\",\n \"sphinx_tabs.tabs\",\n \"sphinxcontrib.log_cabinet\",\n \"pallets_sphinx_themes\",\n]\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\nautodoc_preserve_defaults = True\nextlinks = {\n \"issue\": (\"https://github.com/pallets/flask/issues/%s\", \"#%s\"),\n \"pr\": (\"https://github.com/pallets/flask/pull/%s\", \"#%s\"),\n}\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n}\n\n# HTML -----------------------------------------------------------------\n\nhtml_theme = \"click\"\nhtml_theme_options = {\"index_sidebar_logo\": False}\nhtml_context = {\n \"project_links\": [\n ProjectLink(\"Donate\", \"https://palletsprojects.com/donate\"),\n ProjectLink(\"PyPI Releases\", \"https://pypi.org/project/click/\"),\n ProjectLink(\"Source Code\", \"https://github.com/pallets/click/\"),\n ProjectLink(\"Issue Tracker\", \"https://github.com/pallets/click/issues/\"),\n ProjectLink(\"Chat\", \"https://discord.gg/pallets\"),\n ]\n}\nhtml_sidebars = {\n \"index\": [\"project.html\", \"localtoc.html\", \"searchbox.html\", \"ethicalads.html\"],\n \"**\": [\"localtoc.html\", \"relations.html\", \"searchbox.html\", \"ethicalads.html\"],\n}\nsinglehtml_sidebars = {\"index\": [\"project.html\", \"localtoc.html\", \"ethicalads.html\"]}\nhtml_static_path = [\"_static\"]\nhtml_favicon = \"_static/click-icon.png\"\nhtml_logo = \"_static/click-logo-sidebar.png\"\nhtml_title = f\"Click Documentation ({version})\"\nhtml_show_sourcelink = False\n", "path": "docs/conf.py"}]}
| 1,272 | 173 |
gh_patches_debug_1188
|
rasdani/github-patches
|
git_diff
|
spack__spack-18268
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installation issue: dbus (missing libsm dependency)
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
I am trying to install visit, and I am hitting an error when it tries to install dbus. This appears
to be due to dbus depending on libSM (and through that libuuid), but not declaring that dependency in Spack. So in my build of visit, the libuuid dependency is picked up and set to use the spack installed libuuid via some other package visit depends on, but dbus ends up using the system installed libSM, and there is a mismatch between the two. But the dbus
package should not be linking against system libSM.
### Steps to reproduce the issue
I am trying to install visit, and I am hitting an error when it tries to install dbus. This can be reproduced with:
spack install [email protected]%[email protected] ^[email protected]
eventually aborts with
CCLD dbus-run-session
/lib/../lib64/libSM.so: undefined reference to `uuid_unparse_lower@UUID_1.0'
/lib/../lib64/libSM.so: undefined reference to `uuid_generate@UUID_1.0'
collect2: error: ld returned 1 exit status
The error appears to be caused by the attempt to link against the system /lib64/libSM.so.
### Information on your system
spack debug report
* **Spack:** 0.14.2
* **Python:** 2.7.16
* **Platform:** linux-rhel7-broadwell
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/5125717/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/5125718/spack-build-out.txt)
No maintainers for dbus
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [x ] I have uploaded the build log and environment files
- [ x] I have searched the issues of this repo and believe this is not a duplicate
</issue>
<code>
[start of var/spack/repos/builtin/packages/dbus/package.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Dbus(Package):
10 """D-Bus is a message bus system, a simple way for applications to
11 talk to one another. D-Bus supplies both a system daemon (for
12 events such new hardware device printer queue ) and a
13 per-user-login-session daemon (for general IPC needs among user
14 applications). Also, the message bus is built on top of a
15 general one-to-one message passing framework, which can be used
16 by any two applications to communicate directly (without going
17 through the message bus daemon)."""
18
19 homepage = "http://dbus.freedesktop.org/"
20 url = "http://dbus.freedesktop.org/releases/dbus/dbus-1.8.8.tar.gz"
21
22 version('1.12.8', sha256='e2dc99e7338303393b6663a98320aba6a63421bcdaaf571c8022f815e5896eb3')
23 version('1.11.2', sha256='5abc4c57686fa82669ad0039830788f9b03fdc4fff487f0ccf6c9d56ba2645c9')
24 version('1.9.0', sha256='38ebc695b5cbbd239e0f149aa5d5395f0051a0fec1b74f21ff2921b22a31c171')
25 version('1.8.8', sha256='dfab263649a979d0fff64a30cac374891a8e9940350e41f3bbd7679af32bd1fd')
26 version('1.8.6', sha256='eded83ca007b719f32761e60fd8b9ffd0f5796a4caf455b01b5a5ef740ebd23f')
27 version('1.8.4', sha256='3ef63dc8d0111042071ee7f7bafa0650c6ce2d7be957ef0b7ec269495a651ff8')
28 version('1.8.2', sha256='5689f7411165adc953f37974e276a3028db94447c76e8dd92efe910c6d3bae08')
29
30 depends_on('pkgconfig', type='build')
31 depends_on('expat')
32 depends_on('glib')
33
34 def install(self, spec, prefix):
35 configure(
36 "--prefix=%s" % prefix,
37 "--disable-systemd",
38 "--disable-launchd")
39 make()
40 make("install")
41
42 # dbus needs a machine id generated after install
43 dbus_uuidgen = Executable(join_path(prefix.bin, 'dbus-uuidgen'))
44 dbus_uuidgen('--ensure')
45
[end of var/spack/repos/builtin/packages/dbus/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/dbus/package.py b/var/spack/repos/builtin/packages/dbus/package.py
--- a/var/spack/repos/builtin/packages/dbus/package.py
+++ b/var/spack/repos/builtin/packages/dbus/package.py
@@ -30,6 +30,7 @@
depends_on('pkgconfig', type='build')
depends_on('expat')
depends_on('glib')
+ depends_on('libsm')
def install(self, spec, prefix):
configure(
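
A hedged sketch of the recipe after the fix is shown below, trimmed to the dependency declarations (the version() directives and install logic are elided). Declaring `libsm` lets Spack's concretizer supply a libSM that is consistent with the rest of the dependency DAG, avoiding the mismatch with the system /lib64/libSM.so described in the report.

```python
# Sketch of var/spack/repos/builtin/packages/dbus/package.py after the change
# (version() directives and install() omitted for brevity).
from spack import *


class Dbus(Package):
    """D-Bus message bus system."""

    homepage = "http://dbus.freedesktop.org/"
    url = "http://dbus.freedesktop.org/releases/dbus/dbus-1.8.8.tar.gz"

    depends_on('pkgconfig', type='build')
    depends_on('expat')
    depends_on('glib')
    depends_on('libsm')  # added: avoids linking the system libSM (and its libuuid)
```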
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/dbus/package.py b/var/spack/repos/builtin/packages/dbus/package.py\n--- a/var/spack/repos/builtin/packages/dbus/package.py\n+++ b/var/spack/repos/builtin/packages/dbus/package.py\n@@ -30,6 +30,7 @@\n depends_on('pkgconfig', type='build')\n depends_on('expat')\n depends_on('glib')\n+ depends_on('libsm')\n \n def install(self, spec, prefix):\n configure(\n", "issue": "Installation issue: dbus (missing libsm dependency)\n\r\n<!-- Thanks for taking the time to report this build failure. To proceed with the report please:\r\n\r\n1. Title the issue \"Installation issue: <name-of-the-package>\".\r\n2. Provide the information required below.\r\n\r\nWe encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->\r\n\r\nI am trying to install visit, and I am hitting an error when it tries to install dbus. This appears\r\nto be due to dbus depending on libSM (and through that libuuid), but not declaring that dependency in Spack. So in my build of visit, the libuuid dependency is picked up and set to use the spack installed libuuid via some other package visit depends on, but dbus ends up using the system installed libSM, and there is a mismatch between the two. But the dbus\r\npackage should not be linking against system libSM.\r\n\r\n### Steps to reproduce the issue\r\nI am trying to install visit, and I am hitting an error when it tries to install dbus. This appears\r\nto be \r\nspack install [email protected]%[email protected] ^[email protected]\r\neventually aborts with\r\n CCLD dbus-run-session\r\n/lib/../lib64/libSM.so: undefined reference to `uuid_unparse_lower@UUID_1.0'\r\n/lib/../lib64/libSM.so: undefined reference to `uuid_generate@UUID_1.0'\r\ncollect2: error: ld returned 1 exit status\r\n\r\nError appears due to the attempt to link the system /lib64/libSM.so\r\n\r\n### Information on your system\r\nspack debug report\r\n* **Spack:** 0.14.2\r\n* **Python:** 2.7.16\r\n* **Platform:** linux-rhel7-broadwell\r\n\r\n### Additional information\r\n[spack-build-env.txt](https://github.com/spack/spack/files/5125717/spack-build-env.txt)\r\n[spack-build-out.txt](https://github.com/spack/spack/files/5125718/spack-build-out.txt)\r\n\r\nNo maintainers for dbus\r\n\r\n### General information\r\n\r\n<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->\r\n- [x ] I have run `spack debug report` and reported the version of Spack/Python/Platform\r\n- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers\r\n- [x ] I have uploaded the build log and environment files\r\n- [ x] I have searched the issues of this repo and believe this is not a duplicate\r\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Dbus(Package):\n \"\"\"D-Bus is a message bus system, a simple way for applications to\n talk to one another. D-Bus supplies both a system daemon (for\n events such new hardware device printer queue ) and a\n per-user-login-session daemon (for general IPC needs among user\n applications). 
Also, the message bus is built on top of a\n general one-to-one message passing framework, which can be used\n by any two applications to communicate directly (without going\n through the message bus daemon).\"\"\"\n\n homepage = \"http://dbus.freedesktop.org/\"\n url = \"http://dbus.freedesktop.org/releases/dbus/dbus-1.8.8.tar.gz\"\n\n version('1.12.8', sha256='e2dc99e7338303393b6663a98320aba6a63421bcdaaf571c8022f815e5896eb3')\n version('1.11.2', sha256='5abc4c57686fa82669ad0039830788f9b03fdc4fff487f0ccf6c9d56ba2645c9')\n version('1.9.0', sha256='38ebc695b5cbbd239e0f149aa5d5395f0051a0fec1b74f21ff2921b22a31c171')\n version('1.8.8', sha256='dfab263649a979d0fff64a30cac374891a8e9940350e41f3bbd7679af32bd1fd')\n version('1.8.6', sha256='eded83ca007b719f32761e60fd8b9ffd0f5796a4caf455b01b5a5ef740ebd23f')\n version('1.8.4', sha256='3ef63dc8d0111042071ee7f7bafa0650c6ce2d7be957ef0b7ec269495a651ff8')\n version('1.8.2', sha256='5689f7411165adc953f37974e276a3028db94447c76e8dd92efe910c6d3bae08')\n\n depends_on('pkgconfig', type='build')\n depends_on('expat')\n depends_on('glib')\n\n def install(self, spec, prefix):\n configure(\n \"--prefix=%s\" % prefix,\n \"--disable-systemd\",\n \"--disable-launchd\")\n make()\n make(\"install\")\n\n # dbus needs a machine id generated after install\n dbus_uuidgen = Executable(join_path(prefix.bin, 'dbus-uuidgen'))\n dbus_uuidgen('--ensure')\n", "path": "var/spack/repos/builtin/packages/dbus/package.py"}]}
| 2,038 | 114 |
gh_patches_debug_8953
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-3600
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Guidelines not displayed when check fails
I am crafting some custom rules and am unable to get the guidelines section to print out when the rule fails.
When I run the out-of-the-box rules, the guideline prints. How do I get my custom rules to generate the guideline so that my users can access the URL?
```
metadata:
name: "Ensure instance profile is configured - https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop"
guidelines: "https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop"
id: "CKV_AWS_TF_EC2_2"
category: "IAM"
severity: "TBD"
scope:
provider: "aws"
definition:
cond_type: "attribute"
resource_types:
- "aws_instance"
attribute: "iam_instance_profile"
operator: "exists"
```
**Version (please complete the following information):**
- Checkov Version [2.0.1065]
</issue>
<code>
[start of checkov/common/checks_infra/checks_parser.py]
1 from __future__ import annotations
2
3 from typing import Dict, Any, List, Optional, Type, TYPE_CHECKING
4
5 from checkov.common.checks_infra.solvers import (
6 EqualsAttributeSolver,
7 NotEqualsAttributeSolver,
8 RegexMatchAttributeSolver,
9 NotRegexMatchAttributeSolver,
10 ExistsAttributeSolver,
11 AnyResourceSolver,
12 ContainsAttributeSolver,
13 NotExistsAttributeSolver,
14 WithinAttributeSolver,
15 NotContainsAttributeSolver,
16 StartingWithAttributeSolver,
17 NotStartingWithAttributeSolver,
18 EndingWithAttributeSolver,
19 NotEndingWithAttributeSolver,
20 AndSolver,
21 OrSolver,
22 NotSolver,
23 ConnectionExistsSolver,
24 ConnectionNotExistsSolver,
25 AndConnectionSolver,
26 OrConnectionSolver,
27 WithinFilterSolver,
28 GreaterThanAttributeSolver,
29 GreaterThanOrEqualAttributeSolver,
30 LessThanAttributeSolver,
31 LessThanOrEqualAttributeSolver,
32 SubsetAttributeSolver,
33 NotSubsetAttributeSolver,
34 IsEmptyAttributeSolver,
35 IsNotEmptyAttributeSolver,
36 LengthEqualsAttributeSolver,
37 LengthNotEqualsAttributeSolver,
38 LengthGreaterThanAttributeSolver,
39 LengthLessThanAttributeSolver,
40 LengthLessThanOrEqualAttributeSolver,
41 LengthGreaterThanOrEqualAttributeSolver,
42 IsTrueAttributeSolver,
43 IsFalseAttributeSolver,
44 IntersectsAttributeSolver,
45 NotIntersectsAttributeSolver
46 )
47 from checkov.common.checks_infra.solvers.connections_solvers.connection_one_exists_solver import \
48 ConnectionOneExistsSolver
49 from checkov.common.graph.checks_infra.base_check import BaseGraphCheck
50 from checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser
51 from checkov.common.graph.checks_infra.enums import SolverType
52 from checkov.common.graph.checks_infra.solvers.base_solver import BaseSolver
53 from checkov.common.util.type_forcers import force_list
54
55 if TYPE_CHECKING:
56 from checkov.common.checks_infra.solvers.attribute_solvers.base_attribute_solver import BaseAttributeSolver
57 from checkov.common.checks_infra.solvers.complex_solvers.base_complex_solver import BaseComplexSolver
58 from checkov.common.checks_infra.solvers.connections_solvers.base_connection_solver import BaseConnectionSolver
59 from checkov.common.checks_infra.solvers.connections_solvers.complex_connection_solver import ComplexConnectionSolver
60 from checkov.common.checks_infra.solvers.filter_solvers.base_filter_solver import BaseFilterSolver
61
62
63 operators_to_attributes_solver_classes: dict[str, Type[BaseAttributeSolver]] = {
64 "equals": EqualsAttributeSolver,
65 "not_equals": NotEqualsAttributeSolver,
66 "regex_match": RegexMatchAttributeSolver,
67 "not_regex_match": NotRegexMatchAttributeSolver,
68 "exists": ExistsAttributeSolver,
69 "any": AnyResourceSolver,
70 "contains": ContainsAttributeSolver,
71 "not_exists": NotExistsAttributeSolver,
72 "within": WithinAttributeSolver,
73 "not_contains": NotContainsAttributeSolver,
74 "starting_with": StartingWithAttributeSolver,
75 "not_starting_with": NotStartingWithAttributeSolver,
76 "ending_with": EndingWithAttributeSolver,
77 "not_ending_with": NotEndingWithAttributeSolver,
78 "greater_than": GreaterThanAttributeSolver,
79 "greater_than_or_equal": GreaterThanOrEqualAttributeSolver,
80 "less_than": LessThanAttributeSolver,
81 "less_than_or_equal": LessThanOrEqualAttributeSolver,
82 "subset": SubsetAttributeSolver,
83 "not_subset": NotSubsetAttributeSolver,
84 "is_empty": IsEmptyAttributeSolver,
85 "is_not_empty": IsNotEmptyAttributeSolver,
86 "length_equals": LengthEqualsAttributeSolver,
87 "length_not_equals": LengthNotEqualsAttributeSolver,
88 "length_greater_than": LengthGreaterThanAttributeSolver,
89 "length_greater_than_or_equal": LengthGreaterThanOrEqualAttributeSolver,
90 "length_less_than": LengthLessThanAttributeSolver,
91 "length_less_than_or_equal": LengthLessThanOrEqualAttributeSolver,
92 "is_true": IsTrueAttributeSolver,
93 "is_false": IsFalseAttributeSolver,
94 "intersects": IntersectsAttributeSolver,
95 "not_intersects": NotIntersectsAttributeSolver
96 }
97
98 operators_to_complex_solver_classes: dict[str, Type[BaseComplexSolver]] = {
99 "and": AndSolver,
100 "or": OrSolver,
101 "not": NotSolver,
102 }
103
104 operator_to_connection_solver_classes: dict[str, Type[BaseConnectionSolver]] = {
105 "exists": ConnectionExistsSolver,
106 "one_exists": ConnectionOneExistsSolver,
107 "not_exists": ConnectionNotExistsSolver
108 }
109
110 operator_to_complex_connection_solver_classes: dict[str, Type[ComplexConnectionSolver]] = {
111 "and": AndConnectionSolver,
112 "or": OrConnectionSolver,
113 }
114
115 operator_to_filter_solver_classes: dict[str, Type[BaseFilterSolver]] = {
116 "within": WithinFilterSolver,
117 }
118
119 condition_type_to_solver_type = {
120 "": SolverType.ATTRIBUTE,
121 "attribute": SolverType.ATTRIBUTE,
122 "connection": SolverType.CONNECTION,
123 "filter": SolverType.FILTER,
124 }
125
126 JSONPATH_PREFIX = "jsonpath_"
127
128
129 class NXGraphCheckParser(BaseGraphCheckParser):
130 def parse_raw_check(self, raw_check: Dict[str, Dict[str, Any]], **kwargs: Any) -> BaseGraphCheck:
131 policy_definition = raw_check.get("definition", {})
132 check = self._parse_raw_check(policy_definition, kwargs.get("resources_types"))
133 check.id = raw_check.get("metadata", {}).get("id", "")
134 check.name = raw_check.get("metadata", {}).get("name", "")
135 check.category = raw_check.get("metadata", {}).get("category", "")
136 check.frameworks = raw_check.get("metadata", {}).get("frameworks", [])
137 solver = self.get_check_solver(check)
138 check.set_solver(solver)
139
140 return check
141
142 def _parse_raw_check(self, raw_check: Dict[str, Any], resources_types: Optional[List[str]]) -> BaseGraphCheck:
143 check = BaseGraphCheck()
144 complex_operator = get_complex_operator(raw_check)
145 if complex_operator:
146 check.type = SolverType.COMPLEX
147 check.operator = complex_operator
148 sub_solvers = raw_check.get(complex_operator, [])
149
150 # this allows flexibility for specifying the child conditions, and makes "not" more intuitive by
151 # not requiring an actual list
152 if isinstance(sub_solvers, dict):
153 sub_solvers = [sub_solvers]
154
155 for sub_solver in sub_solvers:
156 check.sub_checks.append(self._parse_raw_check(sub_solver, resources_types))
157 resources_types_of_sub_solvers = [
158 force_list(q.resource_types) for q in check.sub_checks if q is not None and q.resource_types is not None
159 ]
160 check.resource_types = list(set(sum(resources_types_of_sub_solvers, [])))
161 if any(q.type in [SolverType.CONNECTION, SolverType.COMPLEX_CONNECTION] for q in check.sub_checks):
162 check.type = SolverType.COMPLEX_CONNECTION
163
164 else:
165 resource_type = raw_check.get("resource_types", [])
166 if (
167 not resource_type
168 or (isinstance(resource_type, str) and resource_type.lower() == "all")
169 or (isinstance(resource_type, list) and resource_type[0].lower() == "all")
170 ):
171 check.resource_types = resources_types or []
172 else:
173 check.resource_types = resource_type
174
175 connected_resources_type = raw_check.get("connected_resource_types", [])
176 if connected_resources_type == ["All"] or connected_resources_type == "all":
177 check.connected_resources_types = resources_types or []
178 else:
179 check.connected_resources_types = connected_resources_type
180
181 condition_type = raw_check.get("cond_type", "")
182 check.type = condition_type_to_solver_type.get(condition_type)
183 if condition_type == "":
184 check.operator = "any"
185 else:
186 check.operator = raw_check.get("operator", "")
187 check.attribute = raw_check.get("attribute")
188 check.attribute_value = raw_check.get("value")
189
190 return check
191
192 @staticmethod
193 def get_solver_type_method(check: BaseGraphCheck) -> Optional[BaseAttributeSolver]:
194 check.is_jsonpath_check = check.operator.startswith(JSONPATH_PREFIX)
195 if check.is_jsonpath_check:
196 solver = check.operator.replace(JSONPATH_PREFIX, '')
197 else:
198 solver = check.operator
199
200 return operators_to_attributes_solver_classes.get(solver, lambda *args: None)(
201 check.resource_types, check.attribute, check.attribute_value, check.is_jsonpath_check
202 )
203
204 def get_check_solver(self, check: BaseGraphCheck) -> BaseSolver:
205 sub_solvers: List[BaseSolver] = []
206 if check.sub_checks:
207 sub_solvers = []
208 for sub_solver in check.sub_checks:
209 sub_solvers.append(self.get_check_solver(sub_solver))
210
211 type_to_solver = {
212 SolverType.COMPLEX_CONNECTION: operator_to_complex_connection_solver_classes.get(
213 check.operator, lambda *args: None
214 )(sub_solvers, check.operator),
215 SolverType.COMPLEX: operators_to_complex_solver_classes.get(check.operator, lambda *args: None)(
216 sub_solvers, check.resource_types
217 ),
218 SolverType.ATTRIBUTE: self.get_solver_type_method(check),
219 SolverType.CONNECTION: operator_to_connection_solver_classes.get(check.operator, lambda *args: None)(
220 check.resource_types, check.connected_resources_types
221 ),
222 SolverType.FILTER: operator_to_filter_solver_classes.get(check.operator, lambda *args: None)(
223 check.resource_types, check.attribute, check.attribute_value
224 ),
225 }
226
227 solver = type_to_solver.get(check.type) # type:ignore[arg-type] # if not str will return None
228 if not solver:
229 raise NotImplementedError(f"solver type {check.type} with operator {check.operator} is not supported")
230 return solver
231
232
233 def get_complex_operator(raw_check: Dict[str, Any]) -> Optional[str]:
234 for operator in operators_to_complex_solver_classes.keys():
235 if raw_check.get(operator):
236 return operator
237 return None
238
[end of checkov/common/checks_infra/checks_parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/common/checks_infra/checks_parser.py b/checkov/common/checks_infra/checks_parser.py
--- a/checkov/common/checks_infra/checks_parser.py
+++ b/checkov/common/checks_infra/checks_parser.py
@@ -134,6 +134,7 @@
check.name = raw_check.get("metadata", {}).get("name", "")
check.category = raw_check.get("metadata", {}).get("category", "")
check.frameworks = raw_check.get("metadata", {}).get("frameworks", [])
+ check.guideline = raw_check.get("metadata", {}).get("guideline")
solver = self.get_check_solver(check)
check.set_solver(solver)
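
One detail worth highlighting: the patched line looks up the singular key `guideline` under `metadata`, while the example policy in the issue uses `guidelines`. Below is a plain-Python stand-in (no checkov import) showing how the patched lookup behaves on a raw check dict; whether other layers of checkov also accept the plural spelling is not shown by this patch.

```python
# Stand-in sketch of the patched lookup; values are taken from the issue's example.
raw_check = {
    "metadata": {
        "id": "CKV_AWS_TF_EC2_2",
        "name": "Ensure instance profile is configured",
        "category": "IAM",
        # singular key, matching raw_check.get("metadata", {}).get("guideline")
        "guideline": "https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop",
    },
    "definition": {
        "cond_type": "attribute",
        "resource_types": ["aws_instance"],
        "attribute": "iam_instance_profile",
        "operator": "exists",
    },
}

guideline = raw_check.get("metadata", {}).get("guideline")
print(guideline)  # -> https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop
```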
|
{"golden_diff": "diff --git a/checkov/common/checks_infra/checks_parser.py b/checkov/common/checks_infra/checks_parser.py\n--- a/checkov/common/checks_infra/checks_parser.py\n+++ b/checkov/common/checks_infra/checks_parser.py\n@@ -134,6 +134,7 @@\n check.name = raw_check.get(\"metadata\", {}).get(\"name\", \"\")\n check.category = raw_check.get(\"metadata\", {}).get(\"category\", \"\")\n check.frameworks = raw_check.get(\"metadata\", {}).get(\"frameworks\", [])\n+ check.guideline = raw_check.get(\"metadata\", {}).get(\"guideline\")\n solver = self.get_check_solver(check)\n check.set_solver(solver)\n", "issue": "Guidelines not displayed when check fails \nI am crafting some custom rules and am unable to get the guidelines section to print out when the rule fails. \r\nWhen I run the out-of-the box rules, guidelines prints. How do I get my custom rules to generate the guideline so that my users can access url? \r\n\r\n```\r\nmetadata:\r\n name: \"Ensure instance profile is configured - https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop\"\r\n guidelines: \"https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop\"\r\n id: \"CKV_AWS_TF_EC2_2\"\r\n category: \"IAM\"\r\n severity: \"TBD\"\r\nscope:\r\nprovider: \"aws\"\r\ndefinition:\r\n cond_type: \"attribute\"\r\n resource_types:\r\n - \"aws_instance\"\r\n attribute: \"iam_instance_profile\"\r\n operator: \"exists\"\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version [2.0.1065]\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Dict, Any, List, Optional, Type, TYPE_CHECKING\n\nfrom checkov.common.checks_infra.solvers import (\n EqualsAttributeSolver,\n NotEqualsAttributeSolver,\n RegexMatchAttributeSolver,\n NotRegexMatchAttributeSolver,\n ExistsAttributeSolver,\n AnyResourceSolver,\n ContainsAttributeSolver,\n NotExistsAttributeSolver,\n WithinAttributeSolver,\n NotContainsAttributeSolver,\n StartingWithAttributeSolver,\n NotStartingWithAttributeSolver,\n EndingWithAttributeSolver,\n NotEndingWithAttributeSolver,\n AndSolver,\n OrSolver,\n NotSolver,\n ConnectionExistsSolver,\n ConnectionNotExistsSolver,\n AndConnectionSolver,\n OrConnectionSolver,\n WithinFilterSolver,\n GreaterThanAttributeSolver,\n GreaterThanOrEqualAttributeSolver,\n LessThanAttributeSolver,\n LessThanOrEqualAttributeSolver,\n SubsetAttributeSolver,\n NotSubsetAttributeSolver,\n IsEmptyAttributeSolver,\n IsNotEmptyAttributeSolver,\n LengthEqualsAttributeSolver,\n LengthNotEqualsAttributeSolver,\n LengthGreaterThanAttributeSolver,\n LengthLessThanAttributeSolver,\n LengthLessThanOrEqualAttributeSolver,\n LengthGreaterThanOrEqualAttributeSolver,\n IsTrueAttributeSolver,\n IsFalseAttributeSolver,\n IntersectsAttributeSolver,\n NotIntersectsAttributeSolver\n)\nfrom checkov.common.checks_infra.solvers.connections_solvers.connection_one_exists_solver import \\\n ConnectionOneExistsSolver\nfrom checkov.common.graph.checks_infra.base_check import BaseGraphCheck\nfrom checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser\nfrom checkov.common.graph.checks_infra.enums import SolverType\nfrom checkov.common.graph.checks_infra.solvers.base_solver import BaseSolver\nfrom checkov.common.util.type_forcers import force_list\n\nif TYPE_CHECKING:\n from checkov.common.checks_infra.solvers.attribute_solvers.base_attribute_solver import BaseAttributeSolver\n from checkov.common.checks_infra.solvers.complex_solvers.base_complex_solver import 
BaseComplexSolver\n from checkov.common.checks_infra.solvers.connections_solvers.base_connection_solver import BaseConnectionSolver\n from checkov.common.checks_infra.solvers.connections_solvers.complex_connection_solver import ComplexConnectionSolver\n from checkov.common.checks_infra.solvers.filter_solvers.base_filter_solver import BaseFilterSolver\n\n\noperators_to_attributes_solver_classes: dict[str, Type[BaseAttributeSolver]] = {\n \"equals\": EqualsAttributeSolver,\n \"not_equals\": NotEqualsAttributeSolver,\n \"regex_match\": RegexMatchAttributeSolver,\n \"not_regex_match\": NotRegexMatchAttributeSolver,\n \"exists\": ExistsAttributeSolver,\n \"any\": AnyResourceSolver,\n \"contains\": ContainsAttributeSolver,\n \"not_exists\": NotExistsAttributeSolver,\n \"within\": WithinAttributeSolver,\n \"not_contains\": NotContainsAttributeSolver,\n \"starting_with\": StartingWithAttributeSolver,\n \"not_starting_with\": NotStartingWithAttributeSolver,\n \"ending_with\": EndingWithAttributeSolver,\n \"not_ending_with\": NotEndingWithAttributeSolver,\n \"greater_than\": GreaterThanAttributeSolver,\n \"greater_than_or_equal\": GreaterThanOrEqualAttributeSolver,\n \"less_than\": LessThanAttributeSolver,\n \"less_than_or_equal\": LessThanOrEqualAttributeSolver,\n \"subset\": SubsetAttributeSolver,\n \"not_subset\": NotSubsetAttributeSolver,\n \"is_empty\": IsEmptyAttributeSolver,\n \"is_not_empty\": IsNotEmptyAttributeSolver,\n \"length_equals\": LengthEqualsAttributeSolver,\n \"length_not_equals\": LengthNotEqualsAttributeSolver,\n \"length_greater_than\": LengthGreaterThanAttributeSolver,\n \"length_greater_than_or_equal\": LengthGreaterThanOrEqualAttributeSolver,\n \"length_less_than\": LengthLessThanAttributeSolver,\n \"length_less_than_or_equal\": LengthLessThanOrEqualAttributeSolver,\n \"is_true\": IsTrueAttributeSolver,\n \"is_false\": IsFalseAttributeSolver,\n \"intersects\": IntersectsAttributeSolver,\n \"not_intersects\": NotIntersectsAttributeSolver\n}\n\noperators_to_complex_solver_classes: dict[str, Type[BaseComplexSolver]] = {\n \"and\": AndSolver,\n \"or\": OrSolver,\n \"not\": NotSolver,\n}\n\noperator_to_connection_solver_classes: dict[str, Type[BaseConnectionSolver]] = {\n \"exists\": ConnectionExistsSolver,\n \"one_exists\": ConnectionOneExistsSolver,\n \"not_exists\": ConnectionNotExistsSolver\n}\n\noperator_to_complex_connection_solver_classes: dict[str, Type[ComplexConnectionSolver]] = {\n \"and\": AndConnectionSolver,\n \"or\": OrConnectionSolver,\n}\n\noperator_to_filter_solver_classes: dict[str, Type[BaseFilterSolver]] = {\n \"within\": WithinFilterSolver,\n}\n\ncondition_type_to_solver_type = {\n \"\": SolverType.ATTRIBUTE,\n \"attribute\": SolverType.ATTRIBUTE,\n \"connection\": SolverType.CONNECTION,\n \"filter\": SolverType.FILTER,\n}\n\nJSONPATH_PREFIX = \"jsonpath_\"\n\n\nclass NXGraphCheckParser(BaseGraphCheckParser):\n def parse_raw_check(self, raw_check: Dict[str, Dict[str, Any]], **kwargs: Any) -> BaseGraphCheck:\n policy_definition = raw_check.get(\"definition\", {})\n check = self._parse_raw_check(policy_definition, kwargs.get(\"resources_types\"))\n check.id = raw_check.get(\"metadata\", {}).get(\"id\", \"\")\n check.name = raw_check.get(\"metadata\", {}).get(\"name\", \"\")\n check.category = raw_check.get(\"metadata\", {}).get(\"category\", \"\")\n check.frameworks = raw_check.get(\"metadata\", {}).get(\"frameworks\", [])\n solver = self.get_check_solver(check)\n check.set_solver(solver)\n\n return check\n\n def _parse_raw_check(self, raw_check: Dict[str, 
Any], resources_types: Optional[List[str]]) -> BaseGraphCheck:\n check = BaseGraphCheck()\n complex_operator = get_complex_operator(raw_check)\n if complex_operator:\n check.type = SolverType.COMPLEX\n check.operator = complex_operator\n sub_solvers = raw_check.get(complex_operator, [])\n\n # this allows flexibility for specifying the child conditions, and makes \"not\" more intuitive by\n # not requiring an actual list\n if isinstance(sub_solvers, dict):\n sub_solvers = [sub_solvers]\n\n for sub_solver in sub_solvers:\n check.sub_checks.append(self._parse_raw_check(sub_solver, resources_types))\n resources_types_of_sub_solvers = [\n force_list(q.resource_types) for q in check.sub_checks if q is not None and q.resource_types is not None\n ]\n check.resource_types = list(set(sum(resources_types_of_sub_solvers, [])))\n if any(q.type in [SolverType.CONNECTION, SolverType.COMPLEX_CONNECTION] for q in check.sub_checks):\n check.type = SolverType.COMPLEX_CONNECTION\n\n else:\n resource_type = raw_check.get(\"resource_types\", [])\n if (\n not resource_type\n or (isinstance(resource_type, str) and resource_type.lower() == \"all\")\n or (isinstance(resource_type, list) and resource_type[0].lower() == \"all\")\n ):\n check.resource_types = resources_types or []\n else:\n check.resource_types = resource_type\n\n connected_resources_type = raw_check.get(\"connected_resource_types\", [])\n if connected_resources_type == [\"All\"] or connected_resources_type == \"all\":\n check.connected_resources_types = resources_types or []\n else:\n check.connected_resources_types = connected_resources_type\n\n condition_type = raw_check.get(\"cond_type\", \"\")\n check.type = condition_type_to_solver_type.get(condition_type)\n if condition_type == \"\":\n check.operator = \"any\"\n else:\n check.operator = raw_check.get(\"operator\", \"\")\n check.attribute = raw_check.get(\"attribute\")\n check.attribute_value = raw_check.get(\"value\")\n\n return check\n\n @staticmethod\n def get_solver_type_method(check: BaseGraphCheck) -> Optional[BaseAttributeSolver]:\n check.is_jsonpath_check = check.operator.startswith(JSONPATH_PREFIX)\n if check.is_jsonpath_check:\n solver = check.operator.replace(JSONPATH_PREFIX, '')\n else:\n solver = check.operator\n\n return operators_to_attributes_solver_classes.get(solver, lambda *args: None)(\n check.resource_types, check.attribute, check.attribute_value, check.is_jsonpath_check\n )\n\n def get_check_solver(self, check: BaseGraphCheck) -> BaseSolver:\n sub_solvers: List[BaseSolver] = []\n if check.sub_checks:\n sub_solvers = []\n for sub_solver in check.sub_checks:\n sub_solvers.append(self.get_check_solver(sub_solver))\n\n type_to_solver = {\n SolverType.COMPLEX_CONNECTION: operator_to_complex_connection_solver_classes.get(\n check.operator, lambda *args: None\n )(sub_solvers, check.operator),\n SolverType.COMPLEX: operators_to_complex_solver_classes.get(check.operator, lambda *args: None)(\n sub_solvers, check.resource_types\n ),\n SolverType.ATTRIBUTE: self.get_solver_type_method(check),\n SolverType.CONNECTION: operator_to_connection_solver_classes.get(check.operator, lambda *args: None)(\n check.resource_types, check.connected_resources_types\n ),\n SolverType.FILTER: operator_to_filter_solver_classes.get(check.operator, lambda *args: None)(\n check.resource_types, check.attribute, check.attribute_value\n ),\n }\n\n solver = type_to_solver.get(check.type) # type:ignore[arg-type] # if not str will return None\n if not solver:\n raise NotImplementedError(f\"solver type 
{check.type} with operator {check.operator} is not supported\")\n return solver\n\n\ndef get_complex_operator(raw_check: Dict[str, Any]) -> Optional[str]:\n for operator in operators_to_complex_solver_classes.keys():\n if raw_check.get(operator):\n return operator\n return None\n", "path": "checkov/common/checks_infra/checks_parser.py"}]}
| 3,494 | 154 |
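For the Checkov guideline record above, the sketch below mirrors the one-line change in the golden diff, which copies an optional `guideline` value out of the check metadata onto the parsed check so it can be surfaced when the check fails. It uses a plain dict and a stub class rather than Checkov's real `BaseGraphCheck` and parser, and the URL in the demo is a placeholder.

```python
# Illustrative stand-in for the parser change above; the class and function
# names here are not Checkov's actual API.

class ParsedCheck:
    """Tiny stub holding only the metadata fields relevant to the fix."""

    def __init__(self):
        self.id = ""
        self.name = ""
        self.category = ""
        self.frameworks = []
        self.guideline = None  # populated so a failing check can link its guideline URL


def parse_metadata(raw_check: dict) -> ParsedCheck:
    check = ParsedCheck()
    metadata = raw_check.get("metadata", {})
    check.id = metadata.get("id", "")
    check.name = metadata.get("name", "")
    check.category = metadata.get("category", "")
    check.frameworks = metadata.get("frameworks", [])
    # The added line: read "guideline" when the custom policy provides it,
    # falling back to None otherwise.
    check.guideline = metadata.get("guideline")
    return check


if __name__ == "__main__":
    raw = {
        "metadata": {
            "id": "CKV_AWS_TF_EC2_2",
            "category": "IAM",
            "guideline": "https://example.com/EC2/MK_AWS_EC2_2.md",  # placeholder URL
        }
    }
    print(parse_metadata(raw).guideline)
```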
gh_patches_debug_23335
|
rasdani/github-patches
|
git_diff
|
Cog-Creators__Red-DiscordBot-2919
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Mod] Bot doesn't record name changes correctly
# Other bugs
#### What were you trying to do?
Check past names using `[p]names` command
#### What were you expecting to happen?
Get past names
#### What actually happened?
I didn't get any names, because we're checking for it in wrong event (`on_member_update` instead of `on_user_update`).
#### How can we reproduce this issue?
1. Load mod cog
2. Change username
3. Use `[p]names` on yourself.
</issue>
<code>
[start of redbot/cogs/mod/events.py]
1 import logging
2 from datetime import datetime
3 from collections import defaultdict, deque
4
5 import discord
6 from redbot.core import i18n, modlog, commands
7 from redbot.core.utils.mod import is_mod_or_superior
8 from .abc import MixinMeta
9
10 _ = i18n.Translator("Mod", __file__)
11 log = logging.getLogger("red.mod")
12
13
14 class Events(MixinMeta):
15 """
16 This is a mixin for the core mod cog
17 Has a bunch of things split off to here.
18 """
19
20 async def check_duplicates(self, message):
21 guild = message.guild
22 author = message.author
23
24 guild_cache = self.cache.get(guild.id, None)
25 if guild_cache is None:
26 repeats = await self.settings.guild(guild).delete_repeats()
27 if repeats == -1:
28 return False
29 guild_cache = self.cache[guild.id] = defaultdict(lambda: deque(maxlen=repeats))
30
31 if not message.content:
32 return False
33
34 guild_cache[author].append(message.content)
35 msgs = guild_cache[author]
36 if len(msgs) == msgs.maxlen and len(set(msgs)) == 1:
37 try:
38 await message.delete()
39 return True
40 except discord.HTTPException:
41 pass
42 return False
43
44 async def check_mention_spam(self, message):
45 guild = message.guild
46 author = message.author
47
48 max_mentions = await self.settings.guild(guild).ban_mention_spam()
49 if max_mentions:
50 mentions = set(message.mentions)
51 if len(mentions) >= max_mentions:
52 try:
53 await guild.ban(author, reason=_("Mention spam (Autoban)"))
54 except discord.HTTPException:
55 log.info(
56 "Failed to ban member for mention spam in server {}.".format(guild.id)
57 )
58 else:
59 try:
60 await modlog.create_case(
61 self.bot,
62 guild,
63 message.created_at,
64 "ban",
65 author,
66 guild.me,
67 _("Mention spam (Autoban)"),
68 until=None,
69 channel=None,
70 )
71 except RuntimeError as e:
72 print(e)
73 return False
74 return True
75 return False
76
77 @commands.Cog.listener()
78 async def on_message(self, message):
79 author = message.author
80 if message.guild is None or self.bot.user == author:
81 return
82 valid_user = isinstance(author, discord.Member) and not author.bot
83 if not valid_user:
84 return
85
86 # Bots and mods or superior are ignored from the filter
87 mod_or_superior = await is_mod_or_superior(self.bot, obj=author)
88 if mod_or_superior:
89 return
90 # As are anyone configured to be
91 if await self.bot.is_automod_immune(message):
92 return
93 deleted = await self.check_duplicates(message)
94 if not deleted:
95 await self.check_mention_spam(message)
96
97 @commands.Cog.listener()
98 async def on_member_update(self, before: discord.Member, after: discord.Member):
99 if before.name != after.name:
100 async with self.settings.user(before).past_names() as name_list:
101 while None in name_list: # clean out null entries from a bug
102 name_list.remove(None)
103 if after.name in name_list:
104 # Ensure order is maintained without duplicates occuring
105 name_list.remove(after.name)
106 name_list.append(after.name)
107 while len(name_list) > 20:
108 name_list.pop(0)
109
110 if before.nick != after.nick and after.nick is not None:
111 async with self.settings.member(before).past_nicks() as nick_list:
112 while None in nick_list: # clean out null entries from a bug
113 nick_list.remove(None)
114 if after.nick in nick_list:
115 nick_list.remove(after.nick)
116 nick_list.append(after.nick)
117 while len(nick_list) > 20:
118 nick_list.pop(0)
119
[end of redbot/cogs/mod/events.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/redbot/cogs/mod/events.py b/redbot/cogs/mod/events.py
--- a/redbot/cogs/mod/events.py
+++ b/redbot/cogs/mod/events.py
@@ -95,7 +95,7 @@
await self.check_mention_spam(message)
@commands.Cog.listener()
- async def on_member_update(self, before: discord.Member, after: discord.Member):
+ async def on_user_update(self, before: discord.User, after: discord.User):
if before.name != after.name:
async with self.settings.user(before).past_names() as name_list:
while None in name_list: # clean out null entries from a bug
@@ -107,6 +107,8 @@
while len(name_list) > 20:
name_list.pop(0)
+ @commands.Cog.listener()
+ async def on_member_update(self, before: discord.Member, after: discord.Member):
if before.nick != after.nick and after.nick is not None:
async with self.settings.member(before).past_nicks() as nick_list:
while None in nick_list: # clean out null entries from a bug
|
{"golden_diff": "diff --git a/redbot/cogs/mod/events.py b/redbot/cogs/mod/events.py\n--- a/redbot/cogs/mod/events.py\n+++ b/redbot/cogs/mod/events.py\n@@ -95,7 +95,7 @@\n await self.check_mention_spam(message)\n \n @commands.Cog.listener()\n- async def on_member_update(self, before: discord.Member, after: discord.Member):\n+ async def on_user_update(self, before: discord.User, after: discord.User):\n if before.name != after.name:\n async with self.settings.user(before).past_names() as name_list:\n while None in name_list: # clean out null entries from a bug\n@@ -107,6 +107,8 @@\n while len(name_list) > 20:\n name_list.pop(0)\n \n+ @commands.Cog.listener()\n+ async def on_member_update(self, before: discord.Member, after: discord.Member):\n if before.nick != after.nick and after.nick is not None:\n async with self.settings.member(before).past_nicks() as nick_list:\n while None in nick_list: # clean out null entries from a bug\n", "issue": "[Mod] Bot doesn't record name changes correctly\n# Other bugs\r\n\r\n#### What were you trying to do?\r\n\r\nCheck past names using `[p]names` command\r\n\r\n#### What were you expecting to happen?\r\n\r\nGet past names\r\n\r\n#### What actually happened?\r\n\r\nI didn't get any names, because we're checking for it in wrong event (`on_member_update` instead of `on_user_update`).\r\n\r\n#### How can we reproduce this issue?\r\n\r\n1. Load mod cog\r\n2. Change username\r\n3. Use `[p]names` on yourself.\r\n\n", "before_files": [{"content": "import logging\nfrom datetime import datetime\nfrom collections import defaultdict, deque\n\nimport discord\nfrom redbot.core import i18n, modlog, commands\nfrom redbot.core.utils.mod import is_mod_or_superior\nfrom .abc import MixinMeta\n\n_ = i18n.Translator(\"Mod\", __file__)\nlog = logging.getLogger(\"red.mod\")\n\n\nclass Events(MixinMeta):\n \"\"\"\n This is a mixin for the core mod cog\n Has a bunch of things split off to here.\n \"\"\"\n\n async def check_duplicates(self, message):\n guild = message.guild\n author = message.author\n\n guild_cache = self.cache.get(guild.id, None)\n if guild_cache is None:\n repeats = await self.settings.guild(guild).delete_repeats()\n if repeats == -1:\n return False\n guild_cache = self.cache[guild.id] = defaultdict(lambda: deque(maxlen=repeats))\n\n if not message.content:\n return False\n\n guild_cache[author].append(message.content)\n msgs = guild_cache[author]\n if len(msgs) == msgs.maxlen and len(set(msgs)) == 1:\n try:\n await message.delete()\n return True\n except discord.HTTPException:\n pass\n return False\n\n async def check_mention_spam(self, message):\n guild = message.guild\n author = message.author\n\n max_mentions = await self.settings.guild(guild).ban_mention_spam()\n if max_mentions:\n mentions = set(message.mentions)\n if len(mentions) >= max_mentions:\n try:\n await guild.ban(author, reason=_(\"Mention spam (Autoban)\"))\n except discord.HTTPException:\n log.info(\n \"Failed to ban member for mention spam in server {}.\".format(guild.id)\n )\n else:\n try:\n await modlog.create_case(\n self.bot,\n guild,\n message.created_at,\n \"ban\",\n author,\n guild.me,\n _(\"Mention spam (Autoban)\"),\n until=None,\n channel=None,\n )\n except RuntimeError as e:\n print(e)\n return False\n return True\n return False\n\n @commands.Cog.listener()\n async def on_message(self, message):\n author = message.author\n if message.guild is None or self.bot.user == author:\n return\n valid_user = isinstance(author, discord.Member) and not author.bot\n if not valid_user:\n return\n\n 
# Bots and mods or superior are ignored from the filter\n mod_or_superior = await is_mod_or_superior(self.bot, obj=author)\n if mod_or_superior:\n return\n # As are anyone configured to be\n if await self.bot.is_automod_immune(message):\n return\n deleted = await self.check_duplicates(message)\n if not deleted:\n await self.check_mention_spam(message)\n\n @commands.Cog.listener()\n async def on_member_update(self, before: discord.Member, after: discord.Member):\n if before.name != after.name:\n async with self.settings.user(before).past_names() as name_list:\n while None in name_list: # clean out null entries from a bug\n name_list.remove(None)\n if after.name in name_list:\n # Ensure order is maintained without duplicates occuring\n name_list.remove(after.name)\n name_list.append(after.name)\n while len(name_list) > 20:\n name_list.pop(0)\n\n if before.nick != after.nick and after.nick is not None:\n async with self.settings.member(before).past_nicks() as nick_list:\n while None in nick_list: # clean out null entries from a bug\n nick_list.remove(None)\n if after.nick in nick_list:\n nick_list.remove(after.nick)\n nick_list.append(after.nick)\n while len(nick_list) > 20:\n nick_list.pop(0)\n", "path": "redbot/cogs/mod/events.py"}]}
| 1,750 | 255 |
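As a rough illustration of the event split that the Red-DiscordBot patch above settles on, the cog below records usernames in `on_user_update` (the event discord.py fires for account-level profile changes) and keeps nicknames in `on_member_update`. Plain in-memory dicts stand in for Red's Config storage, so this is a sketch of the listener choice, not of the cog's real persistence.

```python
from collections import defaultdict

import discord
from discord.ext import commands


class NameTracker(commands.Cog):
    """Minimal cog sketch: username history per user, nickname history per guild member."""

    def __init__(self, bot: commands.Bot):
        self.bot = bot
        self.past_names = defaultdict(list)   # user id -> past usernames
        self.past_nicks = defaultdict(list)   # (guild id, user id) -> past nicknames

    @commands.Cog.listener()
    async def on_user_update(self, before: discord.User, after: discord.User):
        # Username changes are account-level, so on_member_update never sees them.
        if before.name != after.name:
            names = self.past_names[after.id]
            if after.name in names:
                names.remove(after.name)      # keep order without duplicates
            names.append(after.name)
            del names[:-20]                   # cap the history at 20 entries

    @commands.Cog.listener()
    async def on_member_update(self, before: discord.Member, after: discord.Member):
        # Nicknames are per guild, so they still belong in the member event.
        if before.nick != after.nick and after.nick is not None:
            nicks = self.past_nicks[(after.guild.id, after.id)]
            if after.nick in nicks:
                nicks.remove(after.nick)
            nicks.append(after.nick)
            del nicks[:-20]
```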
gh_patches_debug_38252
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-1261
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No way to restrict text classification labels to exactly one label to assign
Most classification tasks require exactly one label for each instance. This is also true for most text classification tasks, for example with sentiment classification, and the possible labels negative, neutral, positive, each instance should receive one of the three labels, assigning e.g. both neutral and positive would make no sense.
Yet the text classification task in doccano still does not allow to restrict assignment to a single label, annotators are free to assign as many labels as they want, including all of them!
This limits the use of doccano for text classification tasks rather severely. The option to allow for any number of labels (0 to all of them) would still be good to have for multilabel classification tasks (e.g. assigning topics), but that is a much rarer annotation task in general.
</issue>
<code>
[start of app/api/views/annotation.py]
1 from django.shortcuts import get_object_or_404
2 from rest_framework import generics, status
3 from rest_framework.exceptions import ValidationError
4 from rest_framework.permissions import IsAuthenticated
5 from rest_framework.response import Response
6 from rest_framework.views import APIView
7
8 from ..models import Document, Project
9 from ..permissions import (IsAnnotationApprover, IsInProjectOrAdmin,
10 IsOwnAnnotation, IsProjectAdmin)
11 from ..serializers import ApproverSerializer
12
13
14 class AnnotationList(generics.ListCreateAPIView):
15 pagination_class = None
16 permission_classes = [IsAuthenticated & IsInProjectOrAdmin]
17 swagger_schema = None
18
19 def get_serializer_class(self):
20 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
21 self.serializer_class = project.get_annotation_serializer()
22 return self.serializer_class
23
24 def get_queryset(self):
25 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
26 model = project.get_annotation_class()
27 queryset = model.objects.filter(document=self.kwargs['doc_id'])
28 if not project.collaborative_annotation:
29 queryset = queryset.filter(user=self.request.user)
30 return queryset
31
32 def create(self, request, *args, **kwargs):
33 self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)
34 request.data['document'] = self.kwargs['doc_id']
35 return super().create(request, args, kwargs)
36
37 def perform_create(self, serializer):
38 serializer.save(document_id=self.kwargs['doc_id'], user=self.request.user)
39
40 def delete(self, request, *args, **kwargs):
41 queryset = self.get_queryset()
42 queryset.all().delete()
43 return Response(status=status.HTTP_204_NO_CONTENT)
44
45 @staticmethod
46 def check_single_class_classification(project_id, doc_id, user):
47 project = get_object_or_404(Project, pk=project_id)
48 if not project.single_class_classification:
49 return
50
51 model = project.get_annotation_class()
52 annotations = model.objects.filter(document_id=doc_id)
53 if not project.collaborative_annotation:
54 annotations = annotations.filter(user=user)
55
56 if annotations.exists():
57 raise ValidationError('requested to create duplicate annotation for single-class-classification project')
58
59
60 class AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):
61 lookup_url_kwarg = 'annotation_id'
62 swagger_schema = None
63
64 def get_permissions(self):
65 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
66 if project.collaborative_annotation:
67 self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin]
68 else:
69 self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin & IsOwnAnnotation]
70 return super().get_permissions()
71
72 def get_serializer_class(self):
73 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
74 self.serializer_class = project.get_annotation_serializer()
75 return self.serializer_class
76
77 def get_queryset(self):
78 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
79 model = project.get_annotation_class()
80 self.queryset = model.objects.all()
81 return self.queryset
82
83
84 class ApproveLabelsAPI(APIView):
85 permission_classes = [IsAuthenticated & (IsAnnotationApprover | IsProjectAdmin)]
86
87 def post(self, request, *args, **kwargs):
88 approved = self.request.data.get('approved', True)
89 document = get_object_or_404(Document, pk=self.kwargs['doc_id'])
90 document.annotations_approved_by = self.request.user if approved else None
91 document.save()
92 return Response(ApproverSerializer(document).data)
93
[end of app/api/views/annotation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/api/views/annotation.py b/app/api/views/annotation.py
--- a/app/api/views/annotation.py
+++ b/app/api/views/annotation.py
@@ -1,6 +1,5 @@
from django.shortcuts import get_object_or_404
from rest_framework import generics, status
-from rest_framework.exceptions import ValidationError
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView
@@ -16,21 +15,24 @@
permission_classes = [IsAuthenticated & IsInProjectOrAdmin]
swagger_schema = None
+ @property
+ def project(self):
+ return get_object_or_404(Project, pk=self.kwargs['project_id'])
+
def get_serializer_class(self):
- project = get_object_or_404(Project, pk=self.kwargs['project_id'])
- self.serializer_class = project.get_annotation_serializer()
+ self.serializer_class = self.project.get_annotation_serializer()
return self.serializer_class
def get_queryset(self):
- project = get_object_or_404(Project, pk=self.kwargs['project_id'])
- model = project.get_annotation_class()
+ model = self.project.get_annotation_class()
queryset = model.objects.filter(document=self.kwargs['doc_id'])
- if not project.collaborative_annotation:
+ if not self.project.collaborative_annotation:
queryset = queryset.filter(user=self.request.user)
return queryset
def create(self, request, *args, **kwargs):
- self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)
+ if self.project.single_class_classification:
+ self.get_queryset().delete()
request.data['document'] = self.kwargs['doc_id']
return super().create(request, args, kwargs)
@@ -42,20 +44,6 @@
queryset.all().delete()
return Response(status=status.HTTP_204_NO_CONTENT)
- @staticmethod
- def check_single_class_classification(project_id, doc_id, user):
- project = get_object_or_404(Project, pk=project_id)
- if not project.single_class_classification:
- return
-
- model = project.get_annotation_class()
- annotations = model.objects.filter(document_id=doc_id)
- if not project.collaborative_annotation:
- annotations = annotations.filter(user=user)
-
- if annotations.exists():
- raise ValidationError('requested to create duplicate annotation for single-class-classification project')
-
class AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):
lookup_url_kwarg = 'annotation_id'
|
{"golden_diff": "diff --git a/app/api/views/annotation.py b/app/api/views/annotation.py\n--- a/app/api/views/annotation.py\n+++ b/app/api/views/annotation.py\n@@ -1,6 +1,5 @@\n from django.shortcuts import get_object_or_404\n from rest_framework import generics, status\n-from rest_framework.exceptions import ValidationError\n from rest_framework.permissions import IsAuthenticated\n from rest_framework.response import Response\n from rest_framework.views import APIView\n@@ -16,21 +15,24 @@\n permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n swagger_schema = None\n \n+ @property\n+ def project(self):\n+ return get_object_or_404(Project, pk=self.kwargs['project_id'])\n+\n def get_serializer_class(self):\n- project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n- self.serializer_class = project.get_annotation_serializer()\n+ self.serializer_class = self.project.get_annotation_serializer()\n return self.serializer_class\n \n def get_queryset(self):\n- project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n- model = project.get_annotation_class()\n+ model = self.project.get_annotation_class()\n queryset = model.objects.filter(document=self.kwargs['doc_id'])\n- if not project.collaborative_annotation:\n+ if not self.project.collaborative_annotation:\n queryset = queryset.filter(user=self.request.user)\n return queryset\n \n def create(self, request, *args, **kwargs):\n- self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)\n+ if self.project.single_class_classification:\n+ self.get_queryset().delete()\n request.data['document'] = self.kwargs['doc_id']\n return super().create(request, args, kwargs)\n \n@@ -42,20 +44,6 @@\n queryset.all().delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n \n- @staticmethod\n- def check_single_class_classification(project_id, doc_id, user):\n- project = get_object_or_404(Project, pk=project_id)\n- if not project.single_class_classification:\n- return\n-\n- model = project.get_annotation_class()\n- annotations = model.objects.filter(document_id=doc_id)\n- if not project.collaborative_annotation:\n- annotations = annotations.filter(user=user)\n-\n- if annotations.exists():\n- raise ValidationError('requested to create duplicate annotation for single-class-classification project')\n-\n \n class AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):\n lookup_url_kwarg = 'annotation_id'\n", "issue": "No way to restrict text classification labels to exactly one label to assign\nMost classification tasks require exactly one label for each instance. This is also true for most text classification tasks, for example with sentiment classificaiton, and the possible labels negative, neutral, positive, each instance should receive one of the three labels, assigning e.g. both neutral and positive would make not sense.\r\n\r\nYet the text classification task in doccano still does not allow to restrict assignment to a single label, annotators are free to assign as many labels as they want, including all of them!\r\n\r\nThis limits the use of doccano for text classification tasks rather severely. The option to allow for any number of labels (0 to all of them) would still be good to have for multilabel classification tasks (e.g. assigning topics), but that is a much rarer annotation task in general. 
\n", "before_files": [{"content": "from django.shortcuts import get_object_or_404\nfrom rest_framework import generics, status\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom ..models import Document, Project\nfrom ..permissions import (IsAnnotationApprover, IsInProjectOrAdmin,\n IsOwnAnnotation, IsProjectAdmin)\nfrom ..serializers import ApproverSerializer\n\n\nclass AnnotationList(generics.ListCreateAPIView):\n pagination_class = None\n permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n swagger_schema = None\n\n def get_serializer_class(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n self.serializer_class = project.get_annotation_serializer()\n return self.serializer_class\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n model = project.get_annotation_class()\n queryset = model.objects.filter(document=self.kwargs['doc_id'])\n if not project.collaborative_annotation:\n queryset = queryset.filter(user=self.request.user)\n return queryset\n\n def create(self, request, *args, **kwargs):\n self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)\n request.data['document'] = self.kwargs['doc_id']\n return super().create(request, args, kwargs)\n\n def perform_create(self, serializer):\n serializer.save(document_id=self.kwargs['doc_id'], user=self.request.user)\n\n def delete(self, request, *args, **kwargs):\n queryset = self.get_queryset()\n queryset.all().delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n @staticmethod\n def check_single_class_classification(project_id, doc_id, user):\n project = get_object_or_404(Project, pk=project_id)\n if not project.single_class_classification:\n return\n\n model = project.get_annotation_class()\n annotations = model.objects.filter(document_id=doc_id)\n if not project.collaborative_annotation:\n annotations = annotations.filter(user=user)\n\n if annotations.exists():\n raise ValidationError('requested to create duplicate annotation for single-class-classification project')\n\n\nclass AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):\n lookup_url_kwarg = 'annotation_id'\n swagger_schema = None\n\n def get_permissions(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n if project.collaborative_annotation:\n self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n else:\n self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin & IsOwnAnnotation]\n return super().get_permissions()\n\n def get_serializer_class(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n self.serializer_class = project.get_annotation_serializer()\n return self.serializer_class\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n model = project.get_annotation_class()\n self.queryset = model.objects.all()\n return self.queryset\n\n\nclass ApproveLabelsAPI(APIView):\n permission_classes = [IsAuthenticated & (IsAnnotationApprover | IsProjectAdmin)]\n\n def post(self, request, *args, **kwargs):\n approved = self.request.data.get('approved', True)\n document = get_object_or_404(Document, pk=self.kwargs['doc_id'])\n document.annotations_approved_by = self.request.user if approved else None\n document.save()\n return Response(ApproverSerializer(document).data)\n", 
"path": "app/api/views/annotation.py"}]}
| 1,669 | 561 |
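The doccano patch above replaces the ValidationError with a delete-then-create flow when `single_class_classification` is enabled. The snippet below is a framework-free sketch of that behaviour, using an in-memory store instead of doccano's DRF view and models, under the assumption that the desired outcome is for a new label to silently replace the previous one.

```python
from collections import defaultdict


class AnnotationStore:
    """Toy stand-in for per-document, per-user label storage."""

    def __init__(self, single_class_classification: bool):
        self.single_class_classification = single_class_classification
        self._labels = defaultdict(set)  # (doc_id, user) -> set of labels

    def annotate(self, doc_id: int, user: str, label: str) -> set:
        key = (doc_id, user)
        if self.single_class_classification:
            # Counterpart of `self.get_queryset().delete()` in the patched view:
            # clear any existing label instead of rejecting the request.
            self._labels[key].clear()
        self._labels[key].add(label)
        return self._labels[key]


store = AnnotationStore(single_class_classification=True)
store.annotate(1, "alice", "neutral")
print(store.annotate(1, "alice", "positive"))  # {'positive'}: exactly one label remains
```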
gh_patches_debug_380
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-1931
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SSO base template should extend the base template
## What kind of an issue is this?
- [x] Bug report
## What is the expected behaviour?
The template for the SSO app should extend the base template so we don't have to maintain multiple base templates.
## What is the current behaviour?
It's a custom template, which looks copy/pasted from the base template.
</issue>
<code>
[start of apps/sso/views.py]
1 # -*- encoding: utf-8 -*-
2
3 import logging
4
5 from django.contrib.auth.decorators import login_required
6 from django.shortcuts import render
7 from oauth2_provider.views.base import AuthorizationView as DefaultAuthorizationView # flake8: noqa
8 from oauth2_provider.views.base import RevokeTokenView, TokenView
9
10 _log = logging.getLogger('SSO')
11
12
13 @login_required
14 def index(request):
15 """
16 This is the main SSO view
17 """
18
19 context = {}
20
21 return render(request, 'sso/index.html', context)
22
23
24 class AuthorizationView(DefaultAuthorizationView):
25 template_name = 'sso/authorize.html'
26
[end of apps/sso/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/sso/views.py b/apps/sso/views.py
--- a/apps/sso/views.py
+++ b/apps/sso/views.py
@@ -18,7 +18,7 @@
context = {}
- return render(request, 'sso/index.html', context)
+ return render(request, 'sso/authorize.html', context)
class AuthorizationView(DefaultAuthorizationView):
|
{"golden_diff": "diff --git a/apps/sso/views.py b/apps/sso/views.py\n--- a/apps/sso/views.py\n+++ b/apps/sso/views.py\n@@ -18,7 +18,7 @@\n \n context = {}\n \n- return render(request, 'sso/index.html', context)\n+ return render(request, 'sso/authorize.html', context)\n \n \n class AuthorizationView(DefaultAuthorizationView):\n", "issue": "SSO base template should extend the base template\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n\r\n\r\n## What is the expected behaviour?\r\n\r\nThe template for the SSO app should extend the base template so we don't have to maintain multiple base templates.\r\n\r\n\r\n## What is the current behaviour?\r\n\r\nIt's a custom template, which looks copy/pasted from the base template.\r\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport logging\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import render\nfrom oauth2_provider.views.base import AuthorizationView as DefaultAuthorizationView # flake8: noqa\nfrom oauth2_provider.views.base import RevokeTokenView, TokenView\n\n_log = logging.getLogger('SSO')\n\n\n@login_required\ndef index(request):\n \"\"\"\n This is the main SSO view\n \"\"\"\n\n context = {}\n\n return render(request, 'sso/index.html', context)\n\n\nclass AuthorizationView(DefaultAuthorizationView):\n template_name = 'sso/authorize.html'\n", "path": "apps/sso/views.py"}]}
| 799 | 87 |
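A regression test along the lines below could pin down the behaviour the SSO record above asks for. It is only a sketch: the URL name `sso:index`, the test credentials, and the shared template name `base.html` are assumptions for illustration rather than names taken from onlineweb4, and the second assertion relies on Django's test instrumentation recording templates pulled in through `{% extends %}`.

```python
from django.contrib.auth import get_user_model
from django.test import TestCase
from django.urls import reverse


class SSOTemplateTest(TestCase):
    def setUp(self):
        # The index view is behind @login_required, so authenticate a throwaway user.
        user = get_user_model().objects.create_user(username="tester", password="pw")
        self.client.force_login(user)

    def test_index_extends_base_template(self):
        response = self.client.get(reverse("sso:index"))  # assumed URL name
        self.assertTemplateUsed(response, "sso/authorize.html")
        self.assertTemplateUsed(response, "base.html")  # assumed base template name
```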
gh_patches_debug_9208
|
rasdani/github-patches
|
git_diff
|
plotly__dash-1643
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing classifiers for Python 3.8/3.9
Python 3.8 and 3.9 are missing from the classifiers in `setup.py`:
https://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L55-L63
But there is no restriction to 3.7 or below in `python_requires`:
https://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L29
Anecdotally, I've been using Dash on Python 3.9 with no issues. Reporting as an issue as I can't see any mention of Python 3.8 and 3.9 compatibility so far!
</issue>
<code>
[start of setup.py]
1 import io
2 from setuptools import setup, find_packages
3
4 main_ns = {}
5 exec(open("dash/version.py").read(), main_ns) # pylint: disable=exec-used
6
7
8 def read_req_file(req_type):
9 with open("requires-{}.txt".format(req_type)) as fp:
10 requires = (line.strip() for line in fp)
11 return [req for req in requires if req and not req.startswith("#")]
12
13
14 setup(
15 name="dash",
16 version=main_ns["__version__"],
17 author="Chris Parmer",
18 author_email="[email protected]",
19 packages=find_packages(exclude=["tests*"]),
20 include_package_data=True,
21 license="MIT",
22 description=(
23 "A Python framework for building reactive web-apps. "
24 "Developed by Plotly."
25 ),
26 long_description=io.open("README.md", encoding="utf-8").read(),
27 long_description_content_type="text/markdown",
28 install_requires=read_req_file("install"),
29 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*",
30 extras_require={
31 "dev": read_req_file("dev"),
32 "testing": read_req_file("testing"),
33 },
34 entry_points={
35 "console_scripts": [
36 "dash-generate-components = "
37 "dash.development.component_generator:cli",
38 "renderer = dash.development.build_process:renderer",
39 ],
40 "pytest11": ["dash = dash.testing.plugin"],
41 },
42 url="https://plotly.com/dash",
43 classifiers=[
44 "Development Status :: 5 - Production/Stable",
45 "Environment :: Web Environment",
46 "Framework :: Dash",
47 "Framework :: Flask",
48 "Intended Audience :: Developers",
49 "Intended Audience :: Education",
50 "Intended Audience :: Financial and Insurance Industry",
51 "Intended Audience :: Healthcare Industry",
52 "Intended Audience :: Manufacturing",
53 "Intended Audience :: Science/Research",
54 "License :: OSI Approved :: MIT License",
55 "Programming Language :: Python",
56 "Programming Language :: Python :: 2",
57 "Programming Language :: Python :: 2.7",
58 "Programming Language :: Python :: 3",
59 "Programming Language :: Python :: 3.3",
60 "Programming Language :: Python :: 3.4",
61 "Programming Language :: Python :: 3.5",
62 "Programming Language :: Python :: 3.6",
63 "Programming Language :: Python :: 3.7",
64 "Topic :: Database :: Front-Ends",
65 "Topic :: Office/Business :: Financial :: Spreadsheet",
66 "Topic :: Scientific/Engineering :: Visualization",
67 "Topic :: Software Development :: Libraries :: Application Frameworks",
68 "Topic :: Software Development :: Widget Sets",
69 ],
70 )
71
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,6 +61,8 @@
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
"Topic :: Database :: Front-Ends",
"Topic :: Office/Business :: Financial :: Spreadsheet",
"Topic :: Scientific/Engineering :: Visualization",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,6 +61,8 @@\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n+ \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n", "issue": "Missing classifiers for Python 3.8/3.9\nPython 3.8 and 3.9 are missing from the classifiers in `setup.py`:\r\n\r\nhttps://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L55-L63\r\n\r\nBut there is no restriction to 3.7 or below in `python_requires`:\r\nhttps://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L29\r\n\r\nAnecdotally, I've been using Dash on Python 3.9 with no issues. Reporting as an issue as I can't see any mention of Python 3.8 and 3.9 compatibility so far!\n", "before_files": [{"content": "import io\nfrom setuptools import setup, find_packages\n\nmain_ns = {}\nexec(open(\"dash/version.py\").read(), main_ns) # pylint: disable=exec-used\n\n\ndef read_req_file(req_type):\n with open(\"requires-{}.txt\".format(req_type)) as fp:\n requires = (line.strip() for line in fp)\n return [req for req in requires if req and not req.startswith(\"#\")]\n\n\nsetup(\n name=\"dash\",\n version=main_ns[\"__version__\"],\n author=\"Chris Parmer\",\n author_email=\"[email protected]\",\n packages=find_packages(exclude=[\"tests*\"]),\n include_package_data=True,\n license=\"MIT\",\n description=(\n \"A Python framework for building reactive web-apps. \"\n \"Developed by Plotly.\"\n ),\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*\",\n extras_require={\n \"dev\": read_req_file(\"dev\"),\n \"testing\": read_req_file(\"testing\"),\n },\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components = \"\n \"dash.development.component_generator:cli\",\n \"renderer = dash.development.build_process:renderer\",\n ],\n \"pytest11\": [\"dash = dash.testing.plugin\"],\n },\n url=\"https://plotly.com/dash\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Dash\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Manufacturing\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Widget Sets\",\n ],\n)\n", "path": "setup.py"}]}
| 1,459 | 127 |
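Using the constraint and classifier values quoted in the dash record above, a small consistency check makes the gap the patch closes visible: versions that `python_requires` accepts but the classifier list omits. This is a hedged sketch, not part of dash's build, and it assumes the `packaging` library is available.

```python
from packaging.specifiers import SpecifierSet

# Values taken from the setup.py shown above (pre-patch classifiers).
python_requires = SpecifierSet(">=2.7, !=3.0.*, !=3.1.*, !=3.2.*")
declared = {"2.7", "3.3", "3.4", "3.5", "3.6", "3.7"}
candidates = ["2.7", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9"]

missing = [v for v in candidates if python_requires.contains(v) and v not in declared]
print(missing)  # ['3.8', '3.9'], exactly the classifiers the patch adds
```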
gh_patches_debug_10156
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-1075
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove "(line X)" annotations from output
#### What's the problem this feature will solve?
I love the new source annotations in 4.5.0 from #1058. But the line number annotations cause unnecessary diff churn. For example on a project I'm working on, I added a couple requirements that are alphabetically early, and it means *every* requirement afterwards has a new line number:
<img width="395" alt="Screenshot 2020-02-25 at 15 22 19" src="https://user-images.githubusercontent.com/857609/75261303-b6d72580-57e2-11ea-99ef-dcef368bb755.png">
Adding a single requirement with no dependencies can become a 100 line diff 😢
Equally sorting one's `requirements.in` would make diff noise just from changing the line numbers.
#### Describe the solution you'd like
Remove the line numbers from the output.
#### Alternative Solutions
A flag? I prefer sensible defaults though.
</issue>
<code>
[start of piptools/writer.py]
1 from __future__ import unicode_literals
2
3 import os
4 from itertools import chain
5
6 import six
7
8 from .click import unstyle
9 from .logging import log
10 from .utils import (
11 UNSAFE_PACKAGES,
12 comment,
13 dedup,
14 format_requirement,
15 get_compile_command,
16 key_from_ireq,
17 )
18
19 MESSAGE_UNHASHED_PACKAGE = comment(
20 "# WARNING: pip install will require the following package to be hashed."
21 "\n# Consider using a hashable URL like "
22 "https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip"
23 )
24
25 MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
26 "# WARNING: The following packages were not pinned, but pip requires them to be"
27 "\n# pinned when the requirements file includes hashes. "
28 "Consider using the --allow-unsafe flag."
29 )
30
31 MESSAGE_UNSAFE_PACKAGES = comment(
32 "# The following packages are considered to be unsafe in a requirements file:"
33 )
34
35 MESSAGE_UNINSTALLABLE = (
36 "The generated requirements file may be rejected by pip install. "
37 "See # WARNING lines for details."
38 )
39
40
41 def _comes_from_as_string(ireq):
42 if isinstance(ireq.comes_from, six.string_types):
43 return ireq.comes_from
44 return key_from_ireq(ireq.comes_from)
45
46
47 class OutputWriter(object):
48 def __init__(
49 self,
50 src_files,
51 dst_file,
52 click_ctx,
53 dry_run,
54 emit_header,
55 emit_index,
56 emit_trusted_host,
57 annotate,
58 generate_hashes,
59 default_index_url,
60 index_urls,
61 trusted_hosts,
62 format_control,
63 allow_unsafe,
64 find_links,
65 emit_find_links,
66 ):
67 self.src_files = src_files
68 self.dst_file = dst_file
69 self.click_ctx = click_ctx
70 self.dry_run = dry_run
71 self.emit_header = emit_header
72 self.emit_index = emit_index
73 self.emit_trusted_host = emit_trusted_host
74 self.annotate = annotate
75 self.generate_hashes = generate_hashes
76 self.default_index_url = default_index_url
77 self.index_urls = index_urls
78 self.trusted_hosts = trusted_hosts
79 self.format_control = format_control
80 self.allow_unsafe = allow_unsafe
81 self.find_links = find_links
82 self.emit_find_links = emit_find_links
83
84 def _sort_key(self, ireq):
85 return (not ireq.editable, str(ireq.req).lower())
86
87 def write_header(self):
88 if self.emit_header:
89 yield comment("#")
90 yield comment("# This file is autogenerated by pip-compile")
91 yield comment("# To update, run:")
92 yield comment("#")
93 compile_command = os.environ.get(
94 "CUSTOM_COMPILE_COMMAND"
95 ) or get_compile_command(self.click_ctx)
96 yield comment("# {}".format(compile_command))
97 yield comment("#")
98
99 def write_index_options(self):
100 if self.emit_index:
101 for index, index_url in enumerate(dedup(self.index_urls)):
102 if index_url.rstrip("/") == self.default_index_url:
103 continue
104 flag = "--index-url" if index == 0 else "--extra-index-url"
105 yield "{} {}".format(flag, index_url)
106
107 def write_trusted_hosts(self):
108 if self.emit_trusted_host:
109 for trusted_host in dedup(self.trusted_hosts):
110 yield "--trusted-host {}".format(trusted_host)
111
112 def write_format_controls(self):
113 for nb in dedup(self.format_control.no_binary):
114 yield "--no-binary {}".format(nb)
115 for ob in dedup(self.format_control.only_binary):
116 yield "--only-binary {}".format(ob)
117
118 def write_find_links(self):
119 if self.emit_find_links:
120 for find_link in dedup(self.find_links):
121 yield "--find-links {}".format(find_link)
122
123 def write_flags(self):
124 emitted = False
125 for line in chain(
126 self.write_index_options(),
127 self.write_find_links(),
128 self.write_trusted_hosts(),
129 self.write_format_controls(),
130 ):
131 emitted = True
132 yield line
133 if emitted:
134 yield ""
135
136 def _iter_lines(self, results, unsafe_requirements=None, markers=None, hashes=None):
137 # default values
138 unsafe_requirements = unsafe_requirements or []
139 markers = markers or {}
140 hashes = hashes or {}
141
142 # Check for unhashed or unpinned packages if at least one package does have
143 # hashes, which will trigger pip install's --require-hashes mode.
144 warn_uninstallable = False
145 has_hashes = hashes and any(hash for hash in hashes.values())
146
147 yielded = False
148
149 for line in self.write_header():
150 yield line
151 yielded = True
152 for line in self.write_flags():
153 yield line
154 yielded = True
155
156 unsafe_requirements = (
157 {r for r in results if r.name in UNSAFE_PACKAGES}
158 if not unsafe_requirements
159 else unsafe_requirements
160 )
161 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
162
163 if packages:
164 packages = sorted(packages, key=self._sort_key)
165 for ireq in packages:
166 if has_hashes and not hashes.get(ireq):
167 yield MESSAGE_UNHASHED_PACKAGE
168 warn_uninstallable = True
169 line = self._format_requirement(
170 ireq, markers.get(key_from_ireq(ireq)), hashes=hashes
171 )
172 yield line
173 yielded = True
174
175 if unsafe_requirements:
176 unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)
177 yield ""
178 yielded = True
179 if has_hashes and not self.allow_unsafe:
180 yield MESSAGE_UNSAFE_PACKAGES_UNPINNED
181 warn_uninstallable = True
182 else:
183 yield MESSAGE_UNSAFE_PACKAGES
184
185 for ireq in unsafe_requirements:
186 ireq_key = key_from_ireq(ireq)
187 if not self.allow_unsafe:
188 yield comment("# {}".format(ireq_key))
189 else:
190 line = self._format_requirement(
191 ireq, marker=markers.get(ireq_key), hashes=hashes
192 )
193 yield line
194
195 # Yield even when there's no real content, so that blank files are written
196 if not yielded:
197 yield ""
198
199 if warn_uninstallable:
200 log.warning(MESSAGE_UNINSTALLABLE)
201
202 def write(self, results, unsafe_requirements, markers, hashes):
203
204 for line in self._iter_lines(results, unsafe_requirements, markers, hashes):
205 log.info(line)
206 if not self.dry_run:
207 self.dst_file.write(unstyle(line).encode("utf-8"))
208 self.dst_file.write(os.linesep.encode("utf-8"))
209
210 def _format_requirement(self, ireq, marker=None, hashes=None):
211 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
212
213 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
214
215 if not self.annotate:
216 return line
217
218 # Annotate what packages or reqs-ins this package is required by
219 required_by = set()
220 if hasattr(ireq, "_source_ireqs"):
221 required_by |= {
222 _comes_from_as_string(src_ireq)
223 for src_ireq in ireq._source_ireqs
224 if src_ireq.comes_from
225 }
226 elif ireq.comes_from:
227 required_by.add(_comes_from_as_string(ireq))
228 if required_by:
229 annotation = ", ".join(sorted(required_by))
230 line = "{:24}{}{}".format(
231 line,
232 " \\\n " if ireq_hashes else " ",
233 comment("# via " + annotation),
234 )
235 return line
236
[end of piptools/writer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
import os
+import re
from itertools import chain
import six
@@ -38,9 +39,12 @@
)
+strip_comes_from_line_re = re.compile(r" \(line \d+\)$")
+
+
def _comes_from_as_string(ireq):
if isinstance(ireq.comes_from, six.string_types):
- return ireq.comes_from
+ return strip_comes_from_line_re.sub("", ireq.comes_from)
return key_from_ireq(ireq.comes_from)
|
{"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -1,6 +1,7 @@\n from __future__ import unicode_literals\n \n import os\n+import re\n from itertools import chain\n \n import six\n@@ -38,9 +39,12 @@\n )\n \n \n+strip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n+\n+\n def _comes_from_as_string(ireq):\n if isinstance(ireq.comes_from, six.string_types):\n- return ireq.comes_from\n+ return strip_comes_from_line_re.sub(\"\", ireq.comes_from)\n return key_from_ireq(ireq.comes_from)\n", "issue": "Remove \"(line X)\" annotations from output\n#### What's the problem this feature will solve?\r\n\r\nI love the new source annotations in 4.5.0 from #1058. But the line number annotations cause unnecessary diff churn. For example on a project I'm working on, I added a couple requirements that are alphabetically early, and it means *every* requirement afterwards has a new line number:\r\n\r\n<img width=\"395\" alt=\"Screenshot 2020-02-25 at 15 22 19\" src=\"https://user-images.githubusercontent.com/857609/75261303-b6d72580-57e2-11ea-99ef-dcef368bb755.png\">\r\n\r\nAdding a single requirement with no dependencies can become a 100 line diff \ud83d\ude22\r\n\r\nEqually sorting one's `requirements.in` would make diff noise just from changing the line numbers.\r\n \r\n#### Describe the solution you'd like\r\n\r\nRemove the line numbers from the output.\r\n\r\n#### Alternative Solutions\r\n\r\nA flag? I prefer sensible defaults though.\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nfrom itertools import chain\n\nimport six\n\nfrom .click import unstyle\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. 
\"\n \"See # WARNING lines for details.\"\n)\n\n\ndef _comes_from_as_string(ireq):\n if isinstance(ireq.comes_from, six.string_types):\n return ireq.comes_from\n return key_from_ireq(ireq.comes_from)\n\n\nclass OutputWriter(object):\n def __init__(\n self,\n src_files,\n dst_file,\n click_ctx,\n dry_run,\n emit_header,\n emit_index,\n emit_trusted_host,\n annotate,\n generate_hashes,\n default_index_url,\n index_urls,\n trusted_hosts,\n format_control,\n allow_unsafe,\n find_links,\n emit_find_links,\n ):\n self.src_files = src_files\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\"# This file is autogenerated by pip-compile\")\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(\"# {}\".format(compile_command))\n yield comment(\"#\")\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield \"{} {}\".format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield \"--trusted-host {}\".format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield \"--no-binary {}\".format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield \"--only-binary {}\".format(ob)\n\n def write_find_links(self):\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield \"--find-links {}\".format(find_link)\n\n def write_flags(self):\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(self, results, unsafe_requirements=None, markers=None, hashes=None):\n # default values\n unsafe_requirements = unsafe_requirements or []\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n packages = sorted(packages, key=self._sort_key)\n for ireq in packages:\n if has_hashes and not hashes.get(ireq):\n yield 
MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in unsafe_requirements:\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(\"# {}\".format(ireq_key))\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(self, results, unsafe_requirements, markers, hashes):\n\n for line in self._iter_lines(results, unsafe_requirements, markers, hashes):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode(\"utf-8\"))\n self.dst_file.write(os.linesep.encode(\"utf-8\"))\n\n def _format_requirement(self, ireq, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n elif ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq))\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation),\n )\n return line\n", "path": "piptools/writer.py"}]}
| 3,052 | 169 |
gh_patches_debug_56501
|
rasdani/github-patches
|
git_diff
|
canonical__microk8s-2478
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[dashboard] should be exposed via ingress
When running microk8s on the server, rather than doing port forwarding it should be possible to access the dashboard via ingress (similar to kubeflow dashboard)
</issue>
<code>
[start of scripts/wrappers/status.py]
1 #!/usr/bin/python3
2 import os
3 import argparse
4
5 from common.utils import (
6 exit_if_no_permission,
7 exit_if_stopped,
8 is_cluster_locked,
9 is_ha_enabled,
10 get_dqlite_info,
11 wait_for_ready,
12 is_cluster_ready,
13 get_available_addons,
14 get_current_arch,
15 get_addon_by_name,
16 kubectl_get,
17 kubectl_get_clusterroles,
18 )
19
20
21 def is_enabled(addon, item):
22 if addon in item:
23 return True
24 else:
25 filepath = os.path.expandvars(addon)
26 return os.path.isfile(filepath)
27
28 return False
29
30
31 def print_short(isReady, enabled_addons, disabled_addons):
32 if isReady:
33 print("microk8s is running")
34 print("addons:")
35 if enabled_addons and len(enabled_addons) > 0:
36 for enabled in enabled_addons:
37 print("{}: enabled".format(enabled["name"]))
38 if disabled_addons and len(disabled_addons) > 0:
39 for disabled in disabled_addons:
40 print("{}: disabled".format(disabled["name"]))
41 else:
42 print("microk8s is not running. Use microk8s inspect for a deeper inspection.")
43
44
45 def print_pretty(isReady, enabled_addons, disabled_addons):
46 console_formatter = "{:>3} {:<20} # {}"
47 if isReady:
48 print("microk8s is running")
49 if not is_ha_enabled():
50 print("high-availability: no")
51 else:
52 info = get_dqlite_info()
53 if ha_cluster_formed(info):
54 print("high-availability: yes")
55 else:
56 print("high-availability: no")
57
58 masters = "none"
59 standby = "none"
60 for node in info:
61 if node[1] == "voter":
62 if masters == "none":
63 masters = "{}".format(node[0])
64 else:
65 masters = "{} {}".format(masters, node[0])
66 if node[1] == "standby":
67 if standby == "none":
68 standby = "{}".format(node[0])
69 else:
70 standby = "{} {}".format(standby, node[0])
71
72 print("{:>2}{} {}".format("", "datastore master nodes:", masters))
73 print("{:>2}{} {}".format("", "datastore standby nodes:", standby))
74
75 print("addons:")
76 if enabled_addons and len(enabled_addons) > 0:
77 print("{:>2}{}".format("", "enabled:"))
78 for enabled in enabled_addons:
79 print(console_formatter.format("", enabled["name"], enabled["description"]))
80 if disabled_addons and len(disabled_addons) > 0:
81 print("{:>2}{}".format("", "disabled:"))
82 for disabled in disabled_addons:
83 print(console_formatter.format("", disabled["name"], disabled["description"]))
84 else:
85 print("microk8s is not running. Use microk8s inspect for a deeper inspection.")
86
87
88 def print_short_yaml(isReady, enabled_addons, disabled_addons):
89 print("microk8s:")
90 print("{:>2}{} {}".format("", "running:", isReady))
91
92 if isReady:
93 print("addons:")
94 for enabled in enabled_addons:
95 print(" {}: enabled".format(enabled["name"]))
96
97 for disabled in disabled_addons:
98 print(" {}: disabled".format(disabled["name"]))
99 else:
100 print(
101 "{:>2}{} {}".format(
102 "",
103 "message:",
104 "microk8s is not running. Use microk8s inspect for a deeper inspection.",
105 )
106 )
107
108
109 def print_yaml(isReady, enabled_addons, disabled_addons):
110 print("microk8s:")
111 print("{:>2}{} {}".format("", "running:", isReady))
112
113 print("{:>2}".format("high-availability:"))
114 ha_enabled = is_ha_enabled()
115 print("{:>2}{} {}".format("", "enabled:", ha_enabled))
116 if ha_enabled:
117 info = get_dqlite_info()
118 print("{:>2}{}".format("", "nodes:"))
119 for node in info:
120 print("{:>6}address: {:<1}".format("- ", node[0]))
121 print("{:>6}role: {:<1}".format("", node[1]))
122
123 if isReady:
124 print("{:>2}".format("addons:"))
125 for enabled in enabled_addons:
126 print("{:>4}name: {:<1}".format("- ", enabled["name"]))
127 print("{:>4}description: {:<1}".format("", enabled["description"]))
128 print("{:>4}version: {:<1}".format("", enabled["version"]))
129 print("{:>4}status: enabled".format(""))
130
131 for disabled in disabled_addons:
132 print("{:>4}name: {:<1}".format("- ", disabled["name"]))
133 print("{:>4}description: {:<1}".format("", disabled["description"]))
134 print("{:>4}version: {:<1}".format("", disabled["version"]))
135 print("{:>4}status: disabled".format(""))
136 else:
137 print(
138 "{:>2}{} {}".format(
139 "",
140 "message:",
141 "microk8s is not running. Use microk8s inspect for a deeper inspection.",
142 )
143 )
144
145
146 def print_addon_status(enabled):
147 if len(enabled) > 0:
148 print("enabled")
149 else:
150 print("disabled")
151
152
153 def get_status(available_addons, isReady):
154 enabled = []
155 disabled = []
156 if isReady:
157 kube_output = kubectl_get("all")
158 cluster_output = kubectl_get_clusterroles()
159 kube_output = kube_output + cluster_output
160 for addon in available_addons:
161 found = False
162 for row in kube_output.split("\n"):
163 if is_enabled(addon["check_status"], row):
164 enabled.append(addon)
165 found = True
166 break
167 if not found:
168 disabled.append(addon)
169
170 return enabled, disabled
171
172
173 def ha_cluster_formed(info):
174 voters = 0
175 for node in info:
176 if node[1] == "voter":
177 voters += 1
178 ha_formed = False
179 if voters > 2:
180 ha_formed = True
181 return ha_formed
182
183
184 if __name__ == "__main__":
185 exit_if_no_permission()
186 exit_if_stopped()
187 is_cluster_locked()
188
189 # initiate the parser with a description
190 parser = argparse.ArgumentParser(
191 description="Microk8s cluster status check.", prog="microk8s status"
192 )
193 parser.add_argument(
194 "--format",
195 help="print cluster and addon status, output can be in yaml, pretty or short",
196 default="pretty",
197 choices={"pretty", "yaml", "short"},
198 )
199 parser.add_argument(
200 "-w", "--wait-ready", action="store_true", help="wait until the cluster is in ready state"
201 )
202 parser.add_argument(
203 "-t",
204 "--timeout",
205 help="specify a timeout in seconds when waiting for the cluster to be ready.",
206 type=int,
207 default=0,
208 )
209 parser.add_argument("-a", "--addon", help="check the status of an addon.", default="all")
210 parser.add_argument(
211 "--yaml", action="store_true", help="DEPRECATED, use '--format yaml' instead"
212 )
213
214 # read arguments from the command line
215 args = parser.parse_args()
216
217 wait_ready = args.wait_ready
218 timeout = args.timeout
219 yaml_short = args.yaml
220
221 if wait_ready:
222 isReady = wait_for_ready(timeout)
223 else:
224 isReady = is_cluster_ready()
225
226 available_addons = get_available_addons(get_current_arch())
227
228 if args.addon != "all":
229 available_addons = get_addon_by_name(available_addons, args.addon)
230
231 enabled, disabled = get_status(available_addons, isReady)
232
233 if args.addon != "all":
234 print_addon_status(enabled)
235 else:
236 if args.format == "yaml":
237 print_yaml(isReady, enabled, disabled)
238 elif args.format == "short":
239 print_short(isReady, enabled, disabled)
240 else:
241 if yaml_short:
242 print_short_yaml(isReady, enabled, disabled)
243 else:
244 print_pretty(isReady, enabled, disabled)
245
[end of scripts/wrappers/status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/wrappers/status.py b/scripts/wrappers/status.py
--- a/scripts/wrappers/status.py
+++ b/scripts/wrappers/status.py
@@ -154,7 +154,8 @@
enabled = []
disabled = []
if isReady:
- kube_output = kubectl_get("all")
+ # 'all' does not include ingress
+ kube_output = kubectl_get("all,ingress")
cluster_output = kubectl_get_clusterroles()
kube_output = kube_output + cluster_output
for addon in available_addons:
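The comment in the hunk above ("'all' does not include ingress") is the whole story: an addon whose check_status marker only appears on an Ingress object is never matched when the scan is limited to `kubectl get all`. A minimal sketch of that matching logic (the resource names below are illustrative, not taken from the microk8s tree):

# Simplified model of get_status(): an addon counts as enabled when its
# check_status marker appears somewhere in the scanned kubectl output.
def addon_enabled(check_status, kubectl_output):
    return any(check_status in row for row in kubectl_output.splitlines())

if __name__ == "__main__":
    output_all = "service/some-service\n"  # what "kubectl get all" might return
    output_all_ingress = output_all + "ingress.networking.k8s.io/dashboard-ingress\n"

    print(addon_enabled("dashboard-ingress", output_all))          # False: marker missed
    print(addon_enabled("dashboard-ingress", output_all_ingress))  # True once ingress is queried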
|
{"golden_diff": "diff --git a/scripts/wrappers/status.py b/scripts/wrappers/status.py\n--- a/scripts/wrappers/status.py\n+++ b/scripts/wrappers/status.py\n@@ -154,7 +154,8 @@\n enabled = []\n disabled = []\n if isReady:\n- kube_output = kubectl_get(\"all\")\n+ # 'all' does not include ingress\n+ kube_output = kubectl_get(\"all,ingress\")\n cluster_output = kubectl_get_clusterroles()\n kube_output = kube_output + cluster_output\n for addon in available_addons:\n", "issue": "[dashboard] should be exposed via ingress\nWhen running microk8s on the server, rather than doing port forwarding it should be possible to access the dashboard via ingress (similar to kubeflow dashboard)\n", "before_files": [{"content": "#!/usr/bin/python3\nimport os\nimport argparse\n\nfrom common.utils import (\n exit_if_no_permission,\n exit_if_stopped,\n is_cluster_locked,\n is_ha_enabled,\n get_dqlite_info,\n wait_for_ready,\n is_cluster_ready,\n get_available_addons,\n get_current_arch,\n get_addon_by_name,\n kubectl_get,\n kubectl_get_clusterroles,\n)\n\n\ndef is_enabled(addon, item):\n if addon in item:\n return True\n else:\n filepath = os.path.expandvars(addon)\n return os.path.isfile(filepath)\n\n return False\n\n\ndef print_short(isReady, enabled_addons, disabled_addons):\n if isReady:\n print(\"microk8s is running\")\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n for enabled in enabled_addons:\n print(\"{}: enabled\".format(enabled[\"name\"]))\n if disabled_addons and len(disabled_addons) > 0:\n for disabled in disabled_addons:\n print(\"{}: disabled\".format(disabled[\"name\"]))\n else:\n print(\"microk8s is not running. Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_pretty(isReady, enabled_addons, disabled_addons):\n console_formatter = \"{:>3} {:<20} # {}\"\n if isReady:\n print(\"microk8s is running\")\n if not is_ha_enabled():\n print(\"high-availability: no\")\n else:\n info = get_dqlite_info()\n if ha_cluster_formed(info):\n print(\"high-availability: yes\")\n else:\n print(\"high-availability: no\")\n\n masters = \"none\"\n standby = \"none\"\n for node in info:\n if node[1] == \"voter\":\n if masters == \"none\":\n masters = \"{}\".format(node[0])\n else:\n masters = \"{} {}\".format(masters, node[0])\n if node[1] == \"standby\":\n if standby == \"none\":\n standby = \"{}\".format(node[0])\n else:\n standby = \"{} {}\".format(standby, node[0])\n\n print(\"{:>2}{} {}\".format(\"\", \"datastore master nodes:\", masters))\n print(\"{:>2}{} {}\".format(\"\", \"datastore standby nodes:\", standby))\n\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"enabled:\"))\n for enabled in enabled_addons:\n print(console_formatter.format(\"\", enabled[\"name\"], enabled[\"description\"]))\n if disabled_addons and len(disabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"disabled:\"))\n for disabled in disabled_addons:\n print(console_formatter.format(\"\", disabled[\"name\"], disabled[\"description\"]))\n else:\n print(\"microk8s is not running. 
Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_short_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n if isReady:\n print(\"addons:\")\n for enabled in enabled_addons:\n print(\" {}: enabled\".format(enabled[\"name\"]))\n\n for disabled in disabled_addons:\n print(\" {}: disabled\".format(disabled[\"name\"]))\n else:\n print(\n \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n print(\"{:>2}\".format(\"high-availability:\"))\n ha_enabled = is_ha_enabled()\n print(\"{:>2}{} {}\".format(\"\", \"enabled:\", ha_enabled))\n if ha_enabled:\n info = get_dqlite_info()\n print(\"{:>2}{}\".format(\"\", \"nodes:\"))\n for node in info:\n print(\"{:>6}address: {:<1}\".format(\"- \", node[0]))\n print(\"{:>6}role: {:<1}\".format(\"\", node[1]))\n\n if isReady:\n print(\"{:>2}\".format(\"addons:\"))\n for enabled in enabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", enabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", enabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", enabled[\"version\"]))\n print(\"{:>4}status: enabled\".format(\"\"))\n\n for disabled in disabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", disabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", disabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", disabled[\"version\"]))\n print(\"{:>4}status: disabled\".format(\"\"))\n else:\n print(\n \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. 
Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_addon_status(enabled):\n if len(enabled) > 0:\n print(\"enabled\")\n else:\n print(\"disabled\")\n\n\ndef get_status(available_addons, isReady):\n enabled = []\n disabled = []\n if isReady:\n kube_output = kubectl_get(\"all\")\n cluster_output = kubectl_get_clusterroles()\n kube_output = kube_output + cluster_output\n for addon in available_addons:\n found = False\n for row in kube_output.split(\"\\n\"):\n if is_enabled(addon[\"check_status\"], row):\n enabled.append(addon)\n found = True\n break\n if not found:\n disabled.append(addon)\n\n return enabled, disabled\n\n\ndef ha_cluster_formed(info):\n voters = 0\n for node in info:\n if node[1] == \"voter\":\n voters += 1\n ha_formed = False\n if voters > 2:\n ha_formed = True\n return ha_formed\n\n\nif __name__ == \"__main__\":\n exit_if_no_permission()\n exit_if_stopped()\n is_cluster_locked()\n\n # initiate the parser with a description\n parser = argparse.ArgumentParser(\n description=\"Microk8s cluster status check.\", prog=\"microk8s status\"\n )\n parser.add_argument(\n \"--format\",\n help=\"print cluster and addon status, output can be in yaml, pretty or short\",\n default=\"pretty\",\n choices={\"pretty\", \"yaml\", \"short\"},\n )\n parser.add_argument(\n \"-w\", \"--wait-ready\", action=\"store_true\", help=\"wait until the cluster is in ready state\"\n )\n parser.add_argument(\n \"-t\",\n \"--timeout\",\n help=\"specify a timeout in seconds when waiting for the cluster to be ready.\",\n type=int,\n default=0,\n )\n parser.add_argument(\"-a\", \"--addon\", help=\"check the status of an addon.\", default=\"all\")\n parser.add_argument(\n \"--yaml\", action=\"store_true\", help=\"DEPRECATED, use '--format yaml' instead\"\n )\n\n # read arguments from the command line\n args = parser.parse_args()\n\n wait_ready = args.wait_ready\n timeout = args.timeout\n yaml_short = args.yaml\n\n if wait_ready:\n isReady = wait_for_ready(timeout)\n else:\n isReady = is_cluster_ready()\n\n available_addons = get_available_addons(get_current_arch())\n\n if args.addon != \"all\":\n available_addons = get_addon_by_name(available_addons, args.addon)\n\n enabled, disabled = get_status(available_addons, isReady)\n\n if args.addon != \"all\":\n print_addon_status(enabled)\n else:\n if args.format == \"yaml\":\n print_yaml(isReady, enabled, disabled)\n elif args.format == \"short\":\n print_short(isReady, enabled, disabled)\n else:\n if yaml_short:\n print_short_yaml(isReady, enabled, disabled)\n else:\n print_pretty(isReady, enabled, disabled)\n", "path": "scripts/wrappers/status.py"}]}
| 3,025 | 127 |
gh_patches_debug_13492
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2642
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider sheetz is broken
During the global build at 2021-08-11-14-42-19, spider **sheetz** failed with **526 features** and **1 error**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/logs/sheetz.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson))
</issue>
<code>
[start of locations/spiders/sheetz.py]
1 import json
2 import re
3 import scrapy
4 from locations.items import GeojsonPointItem
5
6
7 class SheetzSpider(scrapy.Spider):
8 name = "sheetz"
9 item_attributes = {'brand': "Sheetz"}
10 allowed_domains = ["orderz.sheetz.com"]
11 start_urls = (
12 "https://orderz.sheetz.com/sas/store",
13 )
14
15 def parse(self, response):
16 stores = json.loads(response.body_as_unicode())
17
18 for store in stores:
19 properties = {
20 'addr_full': store['address'],
21 'city': store['city'],
22 'state': store['state'],
23 'postcode': store['zip'],
24 'ref': store['storeNumber'],
25 'phone': store.get('phone'),
26 'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],
27 'lat': float(store['latitude']),
28 'lon': float(store['longitude']),
29 'opening_hours': '24/7' if store['open24x7'] else None,
30 'extras': {
31 'amenity:chargingstation': store['evCharger'],
32 'amenity:fuel': True,
33 'atm': store['atm'],
34 'car_wash': store['carWash'],
35 'fax': store['fax'] if 'fax' in store else None,
36 'fuel:diesel': store['diesel'],
37 'fuel:e15': store['e15'],
38 'fuel:e85': store['e85'],
39 'fuel:kerosene': store['kerosene'],
40 'fuel:propane': store['propane'],
41 }
42 }
43
44 yield GeojsonPointItem(**properties)
45
[end of locations/spiders/sheetz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/sheetz.py b/locations/spiders/sheetz.py
--- a/locations/spiders/sheetz.py
+++ b/locations/spiders/sheetz.py
@@ -24,8 +24,8 @@
'ref': store['storeNumber'],
'phone': store.get('phone'),
'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],
- 'lat': float(store['latitude']),
- 'lon': float(store['longitude']),
+ 'lat': store['latitude'],
+ 'lon': store['longitude'],
'opening_hours': '24/7' if store['open24x7'] else None,
'extras': {
'amenity:chargingstation': store['evCharger'],
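The change above stops forcing the coordinates through float(), which can raise a TypeError if a store record ever reports a null latitude or longitude. If numeric values were still wanted, a defensive conversion along these lines would be one alternative (a sketch under that assumption, not the approach taken here):

def to_float_or_none(value):
    """Convert to float, tolerating missing or malformed coordinates."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

if __name__ == "__main__":
    print(to_float_or_none("40.4406"))  # 40.4406
    print(to_float_or_none(None))       # None instead of a TypeError
    print(to_float_or_none(""))         # None instead of a ValueError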
|
{"golden_diff": "diff --git a/locations/spiders/sheetz.py b/locations/spiders/sheetz.py\n--- a/locations/spiders/sheetz.py\n+++ b/locations/spiders/sheetz.py\n@@ -24,8 +24,8 @@\n 'ref': store['storeNumber'],\n 'phone': store.get('phone'),\n 'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],\n- 'lat': float(store['latitude']),\n- 'lon': float(store['longitude']),\n+ 'lat': store['latitude'],\n+ 'lon': store['longitude'],\n 'opening_hours': '24/7' if store['open24x7'] else None,\n 'extras': {\n 'amenity:chargingstation': store['evCharger'],\n", "issue": "Spider sheetz is broken\nDuring the global build at 2021-08-11-14-42-19, spider **sheetz** failed with **526 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/logs/sheetz.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson))\n", "before_files": [{"content": "import json\nimport re\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\n\nclass SheetzSpider(scrapy.Spider):\n name = \"sheetz\"\n item_attributes = {'brand': \"Sheetz\"}\n allowed_domains = [\"orderz.sheetz.com\"]\n start_urls = (\n \"https://orderz.sheetz.com/sas/store\",\n )\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())\n\n for store in stores:\n properties = {\n 'addr_full': store['address'],\n 'city': store['city'],\n 'state': store['state'],\n 'postcode': store['zip'],\n 'ref': store['storeNumber'],\n 'phone': store.get('phone'),\n 'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],\n 'lat': float(store['latitude']),\n 'lon': float(store['longitude']),\n 'opening_hours': '24/7' if store['open24x7'] else None,\n 'extras': {\n 'amenity:chargingstation': store['evCharger'],\n 'amenity:fuel': True,\n 'atm': store['atm'],\n 'car_wash': store['carWash'],\n 'fax': store['fax'] if 'fax' in store else None,\n 'fuel:diesel': store['diesel'],\n 'fuel:e15': store['e15'],\n 'fuel:e85': store['e85'],\n 'fuel:kerosene': store['kerosene'],\n 'fuel:propane': store['propane'],\n }\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/sheetz.py"}]}
| 1,170 | 177 |
gh_patches_debug_5235
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-618
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
YCbCr JPEG-in-TIFF breaks rio-info
One creates a YCbCr JPEG-in-TIFF with GDAL using `photometric=YCbCr` and `compress=JPEG` options. But reading the TIFF's tags to get the compression method returns "YCbCr JPEG", a value that's not in `rasterio.enums.Compression`.
Reference: http://www.gdal.org/frmt_gtiff.html
Solution: normalize "YCbCr JPEG" to "JPEG" and add source color space to rio-info's output.
</issue>
<code>
[start of rasterio/enums.py]
1
2 from enum import Enum, IntEnum
3
4
5 class ColorInterp(IntEnum):
6 undefined=0
7 grey=1
8 gray=1
9 palette=2
10 red=3
11 green=4
12 blue=5
13 alpha=6
14 hue=7
15 saturation=8
16 lightness=9
17 cyan=10
18 magenta=11
19 yellow=12
20 black=13
21
22
23 class Resampling(Enum):
24 nearest='NEAREST'
25 gauss='GAUSS'
26 cubic='CUBIC'
27 average='AVERAGE'
28 mode='MODE'
29 average_magphase='AVERAGE_MAGPHASE'
30 none='NONE'
31
32
33 class Compression(Enum):
34 jpeg='JPEG'
35 lzw='LZW'
36 packbits='PACKBITS'
37 deflate='DEFLATE'
38 ccittrle='CCITTRLE'
39 ccittfax3='CCITTFAX3'
40 ccittfax4='CCITTFAX4'
41 lzma='LZMA'
42 none='NONE'
43
44
45 class Interleaving(Enum):
46 pixel='PIXEL'
47 line='LINE'
48 band='BAND'
49
50
51 class MaskFlags(IntEnum):
52 all_valid=1
53 per_dataset=2
54 alpha=4
55 nodata=8
56
[end of rasterio/enums.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rasterio/enums.py b/rasterio/enums.py
--- a/rasterio/enums.py
+++ b/rasterio/enums.py
@@ -18,6 +18,9 @@
magenta=11
yellow=12
black=13
+ Y=14
+ Cb=15
+ Cr=16
class Resampling(Enum):
@@ -53,3 +56,14 @@
per_dataset=2
alpha=4
nodata=8
+
+
+class PhotometricInterp(Enum):
+ black='MINISBLACK'
+ white='MINISWHITE'
+ rgb='RGB'
+ cmyk='CMYK'
+ ycbcr='YCbCr'
+ cielab='CIELAB'
+ icclab='ICCLAB'
+ itulab='ITULAB'
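Note that the merged diff extends the enums (Y/Cb/Cr color interpretations plus a new PhotometricInterp enum) rather than literally rewriting the tag value, but the normalization idea described in the issue text is easy to picture; a minimal, illustrative sketch (not rasterio's API):

def normalize_compression(tag_value):
    """Map GDAL's combined "YCbCr JPEG" tag to the plain JPEG method."""
    if tag_value == "YCbCr JPEG":
        return "JPEG"
    return tag_value

if __name__ == "__main__":
    print(normalize_compression("YCbCr JPEG"))  # JPEG
    print(normalize_compression("LZW"))         # LZW, unchanged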
|
{"golden_diff": "diff --git a/rasterio/enums.py b/rasterio/enums.py\n--- a/rasterio/enums.py\n+++ b/rasterio/enums.py\n@@ -18,6 +18,9 @@\n magenta=11\n yellow=12\n black=13\n+ Y=14\n+ Cb=15\n+ Cr=16\n \n \n class Resampling(Enum):\n@@ -53,3 +56,14 @@\n per_dataset=2\n alpha=4\n nodata=8\n+\n+\n+class PhotometricInterp(Enum):\n+ black='MINISBLACK'\n+ white='MINISWHITE'\n+ rgb='RGB'\n+ cmyk='CMYK'\n+ ycbcr='YCbCr'\n+ cielab='CIELAB'\n+ icclab='ICCLAB'\n+ itulab='ITULAB'\n", "issue": "YCbCr JPEG-in-TIFF breaks rio-info\nOne creates a YCbCr JPEG-in-TIFF with GDAL using `photometric=YCbCr` and `compress=JPEG` options. But reading the TIFFs tags to get the compression method returns \"YCbCr JPEG\", a value that's not in `rasterio.enums.Compression`.\n\nReference: http://www.gdal.org/frmt_gtiff.html\n\nSolution: normalize \"YCbCr JPEG\" to \"JPEG\" and add source color space to rio-info's output.\n\n", "before_files": [{"content": "\nfrom enum import Enum, IntEnum\n\n\nclass ColorInterp(IntEnum):\n undefined=0\n grey=1\n gray=1\n palette=2\n red=3\n green=4\n blue=5\n alpha=6\n hue=7\n saturation=8\n lightness=9\n cyan=10\n magenta=11\n yellow=12\n black=13\n\n\nclass Resampling(Enum):\n nearest='NEAREST'\n gauss='GAUSS'\n cubic='CUBIC'\n average='AVERAGE'\n mode='MODE'\n average_magphase='AVERAGE_MAGPHASE'\n none='NONE'\n\n\nclass Compression(Enum):\n jpeg='JPEG'\n lzw='LZW'\n packbits='PACKBITS'\n deflate='DEFLATE'\n ccittrle='CCITTRLE'\n ccittfax3='CCITTFAX3'\n ccittfax4='CCITTFAX4'\n lzma='LZMA'\n none='NONE'\n\n\nclass Interleaving(Enum):\n pixel='PIXEL'\n line='LINE'\n band='BAND'\n\n\nclass MaskFlags(IntEnum):\n all_valid=1\n per_dataset=2\n alpha=4\n nodata=8\n", "path": "rasterio/enums.py"}]}
| 1,037 | 207 |
gh_patches_debug_40210
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-664
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Engineering: Front-end - Eligibility Start page design: Redesign to a single-column design. Add media list.
- For both Mobile and Desktop, redesign the page to a two-column vertical design.
- Redesign the 3-icon content page to be a 2-icon page and add the new bullet points of copy.
- Establish Mobile/Tablet width margins
- Make sure H1 font size is correct
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/3673236/172735521-c7b6760b-832d-448e-b45e-20f217debccb.png">
<img width="453" alt="image" src="https://user-images.githubusercontent.com/3673236/172735548-a9c6ed1f-23fd-4904-8e9f-629f6bf2eba1.png">
</issue>
<code>
[start of benefits/eligibility/views.py]
1 """
2 The eligibility application: view definitions for the eligibility verification flow.
3 """
4 from django.conf import settings
5 from django.contrib import messages
6 from django.shortcuts import redirect
7 from django.template.response import TemplateResponse
8 from django.urls import reverse
9 from django.utils.decorators import decorator_from_middleware
10 from django.utils.translation import pgettext, gettext as _
11
12 from benefits.core import recaptcha, session, viewmodels
13 from benefits.core.middleware import AgencySessionRequired, LoginRequired, RateLimit, VerifierSessionRequired
14 from benefits.core.models import EligibilityVerifier
15 from benefits.core.views import PageTemplateResponse
16 from . import analytics, api, forms
17
18
19 @decorator_from_middleware(AgencySessionRequired)
20 def index(request):
21 """View handler for the eligibility verifier selection form."""
22
23 session.update(request, eligibility_types=[], origin=reverse("eligibility:index"))
24 agency = session.agency(request)
25
26 eligibility_start = reverse("eligibility:start")
27
28 page = viewmodels.Page(
29 title=_("eligibility.pages.index.title"),
30 content_title=_("eligibility.pages.index.content_title"),
31 forms=forms.EligibilityVerifierSelectionForm(agency=agency),
32 )
33
34 if request.method == "POST":
35 form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)
36
37 if form.is_valid():
38 verifier_id = form.cleaned_data.get("verifier")
39 verifier = EligibilityVerifier.objects.get(id=verifier_id)
40 session.update(request, verifier=verifier)
41
42 response = redirect(eligibility_start)
43 else:
44 # form was not valid, allow for correction/resubmission
45 page.forms = [form]
46 response = PageTemplateResponse(request, page)
47 else:
48 if agency.eligibility_verifiers.count() == 1:
49 verifier = agency.eligibility_verifiers.first()
50 session.update(request, verifier=verifier)
51 response = redirect(eligibility_start)
52 else:
53 response = PageTemplateResponse(request, page)
54
55 return response
56
57
58 @decorator_from_middleware(AgencySessionRequired)
59 @decorator_from_middleware(VerifierSessionRequired)
60 def start(request):
61 """View handler for the eligibility verification getting started screen."""
62
63 session.update(request, eligibility_types=[], origin=reverse("eligibility:start"))
64 verifier = session.verifier(request)
65
66 button = viewmodels.Button.primary(text=_("eligibility.buttons.continue"), url=reverse("eligibility:confirm"))
67
68 payment_options_link = f"{reverse('core:help')}#payment-options"
69 media = [
70 dict(
71 icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
72 heading=_(verifier.start_item_name),
73 details=_(verifier.start_item_description),
74 ),
75 dict(
76 icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
77 heading=_("eligibility.pages.start.bankcard.title"),
78 details=_("eligibility.pages.start.bankcard.text"),
79 links=[
80 viewmodels.Button.link(
81 classes="btn-text btn-link",
82 text=_("eligibility.pages.start.bankcard.button[0].link"),
83 url=payment_options_link,
84 ),
85 viewmodels.Button.link(
86 classes="btn-text btn-link",
87 text=_("eligibility.pages.start.bankcard.button[1].link"),
88 url=payment_options_link,
89 ),
90 ],
91 ),
92 ]
93
94 if verifier.requires_authentication:
95 if settings.OAUTH_CLIENT_NAME is None:
96 raise Exception("EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None")
97
98 oauth_help_link = f"{reverse('core:help')}#login-gov"
99
100 media.insert(
101 0,
102 dict(
103 icon=viewmodels.Icon("idscreencheck", pgettext("image alt text", "core.icons.idscreencheck")),
104 heading=_("eligibility.pages.start.oauth.heading"),
105 details=_("eligibility.pages.start.oauth.details"),
106 links=[
107 viewmodels.Button.link(
108 classes="btn-text btn-link",
109 text=_("eligibility.pages.start.oauth.link_text"),
110 url=oauth_help_link,
111 rel="noopener noreferrer",
112 )
113 ],
114 ),
115 )
116
117 if not session.logged_in(request):
118 button = viewmodels.Button.login(
119 label=_(verifier.auth_provider.sign_in_button_label),
120 text="",
121 url=reverse("oauth:login"),
122 )
123
124 page = viewmodels.Page(
125 title=_("eligibility.pages.start.title"),
126 noimage=True,
127 paragraphs=[_(verifier.start_blurb)],
128 button=button,
129 )
130
131 ctx = page.context_dict()
132 ctx["title"] = _(verifier.start_content_title)
133 ctx["media"] = media
134
135 return TemplateResponse(request, "eligibility/start.html", ctx)
136
137
138 @decorator_from_middleware(AgencySessionRequired)
139 @decorator_from_middleware(LoginRequired)
140 @decorator_from_middleware(RateLimit)
141 @decorator_from_middleware(VerifierSessionRequired)
142 def confirm(request):
143 """View handler for the eligibility verification form."""
144
145 template = "eligibility/confirm.html"
146 verifier = session.verifier(request)
147
148 page = viewmodels.Page(
149 title=_(verifier.form_title),
150 content_title=_(verifier.form_content_title),
151 paragraphs=[_(verifier.form_blurb)],
152 form=forms.EligibilityVerificationForm(auto_id=True, label_suffix="", verifier=verifier),
153 classes="text-lg-center",
154 )
155
156 # POST form submission, process form data
157 if request.method == "POST":
158 analytics.started_eligibility(request)
159
160 form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)
161
162 # form was not valid, allow for correction/resubmission
163 if not form.is_valid():
164 if recaptcha.has_error(form):
165 messages.error(request, "Recaptcha failed. Please try again.")
166
167 page.forms = [form]
168 return TemplateResponse(request, template, page.context_dict())
169
170 # form is valid, make Eligibility Verification request to get the verified types
171 verified_types = api.get_verified_types(request, form)
172
173 # form was not valid, allow for correction/resubmission
174 if verified_types is None:
175 analytics.returned_error(request, form.errors)
176 page.forms = [form]
177 return TemplateResponse(request, template, page.context_dict())
178 # no types were verified
179 elif len(verified_types) == 0:
180 return unverified(request)
181 # type(s) were verified
182 else:
183 return verified(request, verified_types)
184
185 # GET from an already verified user, no need to verify again
186 elif session.eligible(request):
187 eligibility = session.eligibility(request)
188 return verified(request, [eligibility.name])
189 # GET from an unverified user, present the form
190 else:
191 return TemplateResponse(request, template, page.context_dict())
192
193
194 @decorator_from_middleware(AgencySessionRequired)
195 @decorator_from_middleware(LoginRequired)
196 def verified(request, verified_types):
197 """View handler for the verified eligibility page."""
198
199 analytics.returned_success(request)
200
201 session.update(request, eligibility_types=verified_types)
202
203 return redirect("enrollment:index")
204
205
206 @decorator_from_middleware(AgencySessionRequired)
207 @decorator_from_middleware(LoginRequired)
208 @decorator_from_middleware(VerifierSessionRequired)
209 def unverified(request):
210 """View handler for the unverified eligibility page."""
211
212 analytics.returned_fail(request)
213
214 # tel: link to agency phone number
215 agency = session.agency(request)
216 buttons = viewmodels.Button.agency_contact_links(agency)
217 buttons.append(viewmodels.Button.home(request, _("core.buttons.retry")))
218
219 verifier = session.verifier(request)
220
221 page = viewmodels.Page(
222 title=_(verifier.unverified_title),
223 classes="with-agency-links",
224 content_title=_(verifier.unverified_content_title),
225 icon=viewmodels.Icon("idcardquestion", pgettext("image alt text", "core.icons.idcardquestion")),
226 paragraphs=[_(verifier.unverified_blurb), _("eligibility.pages.unverified.p[1]")],
227 buttons=buttons,
228 )
229
230 return TemplateResponse(request, "eligibility/unverified.html", page.context_dict())
231
[end of benefits/eligibility/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py
--- a/benefits/eligibility/views.py
+++ b/benefits/eligibility/views.py
@@ -67,11 +67,6 @@
payment_options_link = f"{reverse('core:help')}#payment-options"
media = [
- dict(
- icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
- heading=_(verifier.start_item_name),
- details=_(verifier.start_item_description),
- ),
dict(
icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
heading=_("eligibility.pages.start.bankcard.title"),
@@ -96,11 +91,12 @@
raise Exception("EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None")
oauth_help_link = f"{reverse('core:help')}#login-gov"
+ oauth_help_more_link = f"{reverse('core:help')}#login-gov-verify-items"
media.insert(
0,
dict(
- icon=viewmodels.Icon("idscreencheck", pgettext("image alt text", "core.icons.idscreencheck")),
+ icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
heading=_("eligibility.pages.start.oauth.heading"),
details=_("eligibility.pages.start.oauth.details"),
links=[
@@ -109,18 +105,38 @@
text=_("eligibility.pages.start.oauth.link_text"),
url=oauth_help_link,
rel="noopener noreferrer",
- )
+ ),
+ viewmodels.Button.link(
+ classes="btn-text btn-link",
+ text=_("eligibility.pages.start.oauth.link_text[2]"),
+ url=oauth_help_more_link,
+ rel="noopener noreferrer",
+ ),
+ ],
+ bullets=[
+ _("eligibility.pages.start.oauth.required_items[0]"),
+ _("eligibility.pages.start.oauth.required_items[1]"),
+ _("eligibility.pages.start.oauth.required_items[2]"),
],
),
)
if not session.logged_in(request):
button = viewmodels.Button.login(
- label=_(verifier.auth_provider.sign_in_button_label),
- text="",
+ text=_(verifier.auth_provider.sign_in_button_label),
url=reverse("oauth:login"),
)
+ else:
+ media.insert(
+ 0,
+ dict(
+ icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
+ heading=_(verifier.start_item_name),
+ details=_(verifier.start_item_description),
+ ),
+ )
+
page = viewmodels.Page(
title=_("eligibility.pages.start.title"),
noimage=True,
|
{"golden_diff": "diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py\n--- a/benefits/eligibility/views.py\n+++ b/benefits/eligibility/views.py\n@@ -67,11 +67,6 @@\n \n payment_options_link = f\"{reverse('core:help')}#payment-options\"\n media = [\n- dict(\n- icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n- heading=_(verifier.start_item_name),\n- details=_(verifier.start_item_description),\n- ),\n dict(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.bankcard.title\"),\n@@ -96,11 +91,12 @@\n raise Exception(\"EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None\")\n \n oauth_help_link = f\"{reverse('core:help')}#login-gov\"\n+ oauth_help_more_link = f\"{reverse('core:help')}#login-gov-verify-items\"\n \n media.insert(\n 0,\n dict(\n- icon=viewmodels.Icon(\"idscreencheck\", pgettext(\"image alt text\", \"core.icons.idscreencheck\")),\n+ icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(\"eligibility.pages.start.oauth.heading\"),\n details=_(\"eligibility.pages.start.oauth.details\"),\n links=[\n@@ -109,18 +105,38 @@\n text=_(\"eligibility.pages.start.oauth.link_text\"),\n url=oauth_help_link,\n rel=\"noopener noreferrer\",\n- )\n+ ),\n+ viewmodels.Button.link(\n+ classes=\"btn-text btn-link\",\n+ text=_(\"eligibility.pages.start.oauth.link_text[2]\"),\n+ url=oauth_help_more_link,\n+ rel=\"noopener noreferrer\",\n+ ),\n+ ],\n+ bullets=[\n+ _(\"eligibility.pages.start.oauth.required_items[0]\"),\n+ _(\"eligibility.pages.start.oauth.required_items[1]\"),\n+ _(\"eligibility.pages.start.oauth.required_items[2]\"),\n ],\n ),\n )\n \n if not session.logged_in(request):\n button = viewmodels.Button.login(\n- label=_(verifier.auth_provider.sign_in_button_label),\n- text=\"\",\n+ text=_(verifier.auth_provider.sign_in_button_label),\n url=reverse(\"oauth:login\"),\n )\n \n+ else:\n+ media.insert(\n+ 0,\n+ dict(\n+ icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n+ heading=_(verifier.start_item_name),\n+ details=_(verifier.start_item_description),\n+ ),\n+ )\n+\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n noimage=True,\n", "issue": "Engineering: Front-end - Eligibility Start page design: Redesign to a single-column design. Add media list.\n- For both Mobile and Desktop, redesign the page to a two-column vertical design. 
\r\n- Redesign the 3-icon content page to be a 2-icon page and add the new bullet points of copy.\r\n- Establish Mobile/Tablet width margins\r\n- Make sure H1 font size is correct\r\n<img width=\"1512\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/172735521-c7b6760b-832d-448e-b45e-20f217debccb.png\">\r\n<img width=\"453\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/172735548-a9c6ed1f-23fd-4904-8e9f-629f6bf2eba1.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThe eligibility application: view definitions for the eligibility verification flow.\n\"\"\"\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import recaptcha, session, viewmodels\nfrom benefits.core.middleware import AgencySessionRequired, LoginRequired, RateLimit, VerifierSessionRequired\nfrom benefits.core.models import EligibilityVerifier\nfrom benefits.core.views import PageTemplateResponse\nfrom . import analytics, api, forms\n\n\n@decorator_from_middleware(AgencySessionRequired)\ndef index(request):\n \"\"\"View handler for the eligibility verifier selection form.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:index\"))\n agency = session.agency(request)\n\n eligibility_start = reverse(\"eligibility:start\")\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.index.title\"),\n content_title=_(\"eligibility.pages.index.content_title\"),\n forms=forms.EligibilityVerifierSelectionForm(agency=agency),\n )\n\n if request.method == \"POST\":\n form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)\n\n if form.is_valid():\n verifier_id = form.cleaned_data.get(\"verifier\")\n verifier = EligibilityVerifier.objects.get(id=verifier_id)\n session.update(request, verifier=verifier)\n\n response = redirect(eligibility_start)\n else:\n # form was not valid, allow for correction/resubmission\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n else:\n if agency.eligibility_verifiers.count() == 1:\n verifier = agency.eligibility_verifiers.first()\n session.update(request, verifier=verifier)\n response = redirect(eligibility_start)\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(VerifierSessionRequired)\ndef start(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:start\"))\n verifier = session.verifier(request)\n\n button = viewmodels.Button.primary(text=_(\"eligibility.buttons.continue\"), url=reverse(\"eligibility:confirm\"))\n\n payment_options_link = f\"{reverse('core:help')}#payment-options\"\n media = [\n dict(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(verifier.start_item_name),\n details=_(verifier.start_item_description),\n ),\n dict(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.bankcard.title\"),\n details=_(\"eligibility.pages.start.bankcard.text\"),\n links=[\n viewmodels.Button.link(\n classes=\"btn-text 
btn-link\",\n text=_(\"eligibility.pages.start.bankcard.button[0].link\"),\n url=payment_options_link,\n ),\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.bankcard.button[1].link\"),\n url=payment_options_link,\n ),\n ],\n ),\n ]\n\n if verifier.requires_authentication:\n if settings.OAUTH_CLIENT_NAME is None:\n raise Exception(\"EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None\")\n\n oauth_help_link = f\"{reverse('core:help')}#login-gov\"\n\n media.insert(\n 0,\n dict(\n icon=viewmodels.Icon(\"idscreencheck\", pgettext(\"image alt text\", \"core.icons.idscreencheck\")),\n heading=_(\"eligibility.pages.start.oauth.heading\"),\n details=_(\"eligibility.pages.start.oauth.details\"),\n links=[\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.oauth.link_text\"),\n url=oauth_help_link,\n rel=\"noopener noreferrer\",\n )\n ],\n ),\n )\n\n if not session.logged_in(request):\n button = viewmodels.Button.login(\n label=_(verifier.auth_provider.sign_in_button_label),\n text=\"\",\n url=reverse(\"oauth:login\"),\n )\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n noimage=True,\n paragraphs=[_(verifier.start_blurb)],\n button=button,\n )\n\n ctx = page.context_dict()\n ctx[\"title\"] = _(verifier.start_content_title)\n ctx[\"media\"] = media\n\n return TemplateResponse(request, \"eligibility/start.html\", ctx)\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\n@decorator_from_middleware(RateLimit)\n@decorator_from_middleware(VerifierSessionRequired)\ndef confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n\n template = \"eligibility/confirm.html\"\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.form_title),\n content_title=_(verifier.form_content_title),\n paragraphs=[_(verifier.form_blurb)],\n form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=\"\", verifier=verifier),\n classes=\"text-lg-center\",\n )\n\n # POST form submission, process form data\n if request.method == \"POST\":\n analytics.started_eligibility(request)\n\n form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)\n\n # form was not valid, allow for correction/resubmission\n if not form.is_valid():\n if recaptcha.has_error(form):\n messages.error(request, \"Recaptcha failed. 
Please try again.\")\n\n page.forms = [form]\n return TemplateResponse(request, template, page.context_dict())\n\n # form is valid, make Eligibility Verification request to get the verified types\n verified_types = api.get_verified_types(request, form)\n\n # form was not valid, allow for correction/resubmission\n if verified_types is None:\n analytics.returned_error(request, form.errors)\n page.forms = [form]\n return TemplateResponse(request, template, page.context_dict())\n # no types were verified\n elif len(verified_types) == 0:\n return unverified(request)\n # type(s) were verified\n else:\n return verified(request, verified_types)\n\n # GET from an already verified user, no need to verify again\n elif session.eligible(request):\n eligibility = session.eligibility(request)\n return verified(request, [eligibility.name])\n # GET from an unverified user, present the form\n else:\n return TemplateResponse(request, template, page.context_dict())\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\ndef verified(request, verified_types):\n \"\"\"View handler for the verified eligibility page.\"\"\"\n\n analytics.returned_success(request)\n\n session.update(request, eligibility_types=verified_types)\n\n return redirect(\"enrollment:index\")\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\n@decorator_from_middleware(VerifierSessionRequired)\ndef unverified(request):\n \"\"\"View handler for the unverified eligibility page.\"\"\"\n\n analytics.returned_fail(request)\n\n # tel: link to agency phone number\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.retry\")))\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.unverified_title),\n classes=\"with-agency-links\",\n content_title=_(verifier.unverified_content_title),\n icon=viewmodels.Icon(\"idcardquestion\", pgettext(\"image alt text\", \"core.icons.idcardquestion\")),\n paragraphs=[_(verifier.unverified_blurb), _(\"eligibility.pages.unverified.p[1]\")],\n buttons=buttons,\n )\n\n return TemplateResponse(request, \"eligibility/unverified.html\", page.context_dict())\n", "path": "benefits/eligibility/views.py"}]}
| 3,083 | 637 |
gh_patches_debug_38569
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-600
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Locking pre-commit directory" should only print if waiting for a lock
Otherwise this is just useless console noise
</issue>
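One way to get that behaviour, sketched only as an illustration (the locking callables here are hypothetical stand-ins, and this is not necessarily the fix the project adopted): attempt a non-blocking acquire first, and print the message only when the process actually has to wait.

import contextlib
import logging

logger = logging.getLogger('pre_commit')

@contextlib.contextmanager
def exclusive_lock(acquire_nonblocking, acquire_blocking):
    # Both arguments are assumed to return context managers over the lock;
    # a failed non-blocking attempt is assumed to raise OSError.
    try:
        lock = acquire_nonblocking()
    except OSError:
        logger.info('Locking pre-commit directory')  # only logged when we must wait
        lock = acquire_blocking()
    with lock:
        yield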
<code>
[start of pre_commit/store.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import logging
6 import os.path
7 import sqlite3
8 import tempfile
9
10 from cached_property import cached_property
11
12 import pre_commit.constants as C
13 from pre_commit import file_lock
14 from pre_commit.prefixed_command_runner import PrefixedCommandRunner
15 from pre_commit.util import clean_path_on_failure
16 from pre_commit.util import cmd_output
17 from pre_commit.util import copy_tree_to_path
18 from pre_commit.util import cwd
19 from pre_commit.util import no_git_env
20 from pre_commit.util import resource_filename
21
22
23 logger = logging.getLogger('pre_commit')
24
25
26 def _get_default_directory():
27 """Returns the default directory for the Store. This is intentionally
28 underscored to indicate that `Store.get_default_directory` is the intended
29 way to get this information. This is also done so
30 `Store.get_default_directory` can be mocked in tests and
31 `_get_default_directory` can be tested.
32 """
33 return os.environ.get(
34 'PRE_COMMIT_HOME',
35 os.path.join(os.path.expanduser('~'), '.pre-commit'),
36 )
37
38
39 class Store(object):
40 get_default_directory = staticmethod(_get_default_directory)
41 __created = False
42
43 def __init__(self, directory=None):
44 if directory is None:
45 directory = self.get_default_directory()
46
47 self.directory = directory
48
49 @contextlib.contextmanager
50 def exclusive_lock(self, quiet=False):
51 if not quiet:
52 logger.info('Locking pre-commit directory')
53 with file_lock.lock(os.path.join(self.directory, '.lock')):
54 yield
55
56 def _write_readme(self):
57 with io.open(os.path.join(self.directory, 'README'), 'w') as readme:
58 readme.write(
59 'This directory is maintained by the pre-commit project.\n'
60 'Learn more: https://github.com/pre-commit/pre-commit\n',
61 )
62
63 def _write_sqlite_db(self):
64 # To avoid a race where someone ^Cs between db creation and execution
65 # of the CREATE TABLE statement
66 fd, tmpfile = tempfile.mkstemp(dir=self.directory)
67 # We'll be managing this file ourselves
68 os.close(fd)
69 # sqlite doesn't close its fd with its contextmanager >.<
70 # contextlib.closing fixes this.
71 # See: http://stackoverflow.com/a/28032829/812183
72 with contextlib.closing(sqlite3.connect(tmpfile)) as db:
73 db.executescript(
74 'CREATE TABLE repos ('
75 ' repo CHAR(255) NOT NULL,'
76 ' ref CHAR(255) NOT NULL,'
77 ' path CHAR(255) NOT NULL,'
78 ' PRIMARY KEY (repo, ref)'
79 ');',
80 )
81
82 # Atomic file move
83 os.rename(tmpfile, self.db_path)
84
85 def _create(self):
86 if not os.path.exists(self.directory):
87 os.makedirs(self.directory)
88 self._write_readme()
89
90 if os.path.exists(self.db_path):
91 return
92 with self.exclusive_lock(quiet=True):
93 # Another process may have already completed this work
94 if os.path.exists(self.db_path): # pragma: no cover (race)
95 return
96 self._write_sqlite_db()
97
98 def require_created(self):
99 """Require the pre-commit file store to be created."""
100 if not self.__created:
101 self._create()
102 self.__created = True
103
104 def _new_repo(self, repo, ref, make_strategy):
105 self.require_created()
106
107 def _get_result():
108 # Check if we already exist
109 with sqlite3.connect(self.db_path) as db:
110 result = db.execute(
111 'SELECT path FROM repos WHERE repo = ? AND ref = ?',
112 [repo, ref],
113 ).fetchone()
114 if result:
115 return result[0]
116
117 result = _get_result()
118 if result:
119 return result
120 with self.exclusive_lock():
121 # Another process may have already completed this work
122 result = _get_result()
123 if result: # pragma: no cover (race)
124 return result
125
126 logger.info('Initializing environment for {}.'.format(repo))
127
128 directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)
129 with clean_path_on_failure(directory):
130 make_strategy(directory)
131
132 # Update our db with the created repo
133 with sqlite3.connect(self.db_path) as db:
134 db.execute(
135 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',
136 [repo, ref, directory],
137 )
138 return directory
139
140 def clone(self, repo, ref):
141 """Clone the given url and checkout the specific ref."""
142 def clone_strategy(directory):
143 cmd_output(
144 'git', 'clone', '--no-checkout', repo, directory,
145 env=no_git_env(),
146 )
147 with cwd(directory):
148 cmd_output('git', 'reset', ref, '--hard', env=no_git_env())
149 cmd_output(
150 'git', 'submodule', 'update', '--init', '--recursive',
151 env=no_git_env(),
152 )
153
154 return self._new_repo(repo, ref, clone_strategy)
155
156 def make_local(self, deps):
157 def make_local_strategy(directory):
158 copy_tree_to_path(resource_filename('empty_template'), directory)
159 return self._new_repo(
160 'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,
161 make_local_strategy,
162 )
163
164 @cached_property
165 def cmd_runner(self):
166 return PrefixedCommandRunner(self.directory)
167
168 @cached_property
169 def db_path(self):
170 return os.path.join(self.directory, 'db.db')
171
[end of pre_commit/store.py]
[start of pre_commit/file_lock.py]
1 import contextlib
2 import errno
3
4
5 try: # pragma: no cover (windows)
6 import msvcrt
7
8 # https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/locking
9
10 # on windows we lock "regions" of files, we don't care about the actual
11 # byte region so we'll just pick *some* number here.
12 _region = 0xffff
13
14 @contextlib.contextmanager
15 def _locked(fileno):
16 while True:
17 try:
18 msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
19 except OSError as e:
20 # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
21 # flag is specified and the file cannot be locked after 10
22 # attempts.
23 if e.errno != errno.EDEADLOCK:
24 raise
25 else:
26 break
27
28 try:
29 yield
30 finally:
31 # From cursory testing, it seems to get unlocked when the file is
32 # closed so this may not be necessary.
33 # The documentation however states:
34 # "Regions should be locked only briefly and should be unlocked
35 # before closing a file or exiting the program."
36 msvcrt.locking(fileno, msvcrt.LK_UNLCK, _region)
37 except ImportError: # pragma: no cover (posix)
38 import fcntl
39
40 @contextlib.contextmanager
41 def _locked(fileno):
42 fcntl.flock(fileno, fcntl.LOCK_EX)
43 try:
44 yield
45 finally:
46 fcntl.flock(fileno, fcntl.LOCK_UN)
47
48
49 @contextlib.contextmanager
50 def lock(path):
51 with open(path, 'a+') as f:
52 with _locked(f.fileno()):
53 yield
54
[end of pre_commit/file_lock.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/file_lock.py b/pre_commit/file_lock.py
--- a/pre_commit/file_lock.py
+++ b/pre_commit/file_lock.py
@@ -12,18 +12,22 @@
_region = 0xffff
@contextlib.contextmanager
- def _locked(fileno):
- while True:
- try:
- msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
- except OSError as e:
- # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
- # flag is specified and the file cannot be locked after 10
- # attempts.
- if e.errno != errno.EDEADLOCK:
- raise
- else:
- break
+ def _locked(fileno, blocked_cb):
+ try:
+ msvcrt.locking(fileno, msvcrt.LK_NBLCK, _region)
+ except IOError:
+ blocked_cb()
+ while True:
+ try:
+ msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
+ except IOError as e:
+ # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
+ # flag is specified and the file cannot be locked after 10
+ # attempts.
+ if e.errno != errno.EDEADLOCK:
+ raise
+ else:
+ break
try:
yield
@@ -38,8 +42,12 @@
import fcntl
@contextlib.contextmanager
- def _locked(fileno):
- fcntl.flock(fileno, fcntl.LOCK_EX)
+ def _locked(fileno, blocked_cb):
+ try:
+ fcntl.flock(fileno, fcntl.LOCK_EX | fcntl.LOCK_NB)
+ except IOError:
+ blocked_cb()
+ fcntl.flock(fileno, fcntl.LOCK_EX)
try:
yield
finally:
@@ -47,7 +55,7 @@
@contextlib.contextmanager
-def lock(path):
+def lock(path, blocked_cb):
with open(path, 'a+') as f:
- with _locked(f.fileno()):
+ with _locked(f.fileno(), blocked_cb):
yield
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -47,10 +47,11 @@
self.directory = directory
@contextlib.contextmanager
- def exclusive_lock(self, quiet=False):
- if not quiet:
+ def exclusive_lock(self):
+ def blocked_cb(): # pragma: no cover (tests are single-process)
logger.info('Locking pre-commit directory')
- with file_lock.lock(os.path.join(self.directory, '.lock')):
+
+ with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):
yield
def _write_readme(self):
@@ -89,7 +90,7 @@
if os.path.exists(self.db_path):
return
- with self.exclusive_lock(quiet=True):
+ with self.exclusive_lock():
# Another process may have already completed this work
if os.path.exists(self.db_path): # pragma: no cover (race)
return
|
{"golden_diff": "diff --git a/pre_commit/file_lock.py b/pre_commit/file_lock.py\n--- a/pre_commit/file_lock.py\n+++ b/pre_commit/file_lock.py\n@@ -12,18 +12,22 @@\n _region = 0xffff\n \n @contextlib.contextmanager\n- def _locked(fileno):\n- while True:\n- try:\n- msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)\n- except OSError as e:\n- # Locking violation. Returned when the _LK_LOCK or _LK_RLCK\n- # flag is specified and the file cannot be locked after 10\n- # attempts.\n- if e.errno != errno.EDEADLOCK:\n- raise\n- else:\n- break\n+ def _locked(fileno, blocked_cb):\n+ try:\n+ msvcrt.locking(fileno, msvcrt.LK_NBLCK, _region)\n+ except IOError:\n+ blocked_cb()\n+ while True:\n+ try:\n+ msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)\n+ except IOError as e:\n+ # Locking violation. Returned when the _LK_LOCK or _LK_RLCK\n+ # flag is specified and the file cannot be locked after 10\n+ # attempts.\n+ if e.errno != errno.EDEADLOCK:\n+ raise\n+ else:\n+ break\n \n try:\n yield\n@@ -38,8 +42,12 @@\n import fcntl\n \n @contextlib.contextmanager\n- def _locked(fileno):\n- fcntl.flock(fileno, fcntl.LOCK_EX)\n+ def _locked(fileno, blocked_cb):\n+ try:\n+ fcntl.flock(fileno, fcntl.LOCK_EX | fcntl.LOCK_NB)\n+ except IOError:\n+ blocked_cb()\n+ fcntl.flock(fileno, fcntl.LOCK_EX)\n try:\n yield\n finally:\n@@ -47,7 +55,7 @@\n \n \n @contextlib.contextmanager\n-def lock(path):\n+def lock(path, blocked_cb):\n with open(path, 'a+') as f:\n- with _locked(f.fileno()):\n+ with _locked(f.fileno(), blocked_cb):\n yield\ndiff --git a/pre_commit/store.py b/pre_commit/store.py\n--- a/pre_commit/store.py\n+++ b/pre_commit/store.py\n@@ -47,10 +47,11 @@\n self.directory = directory\n \n @contextlib.contextmanager\n- def exclusive_lock(self, quiet=False):\n- if not quiet:\n+ def exclusive_lock(self):\n+ def blocked_cb(): # pragma: no cover (tests are single-process)\n logger.info('Locking pre-commit directory')\n- with file_lock.lock(os.path.join(self.directory, '.lock')):\n+\n+ with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):\n yield\n \n def _write_readme(self):\n@@ -89,7 +90,7 @@\n \n if os.path.exists(self.db_path):\n return\n- with self.exclusive_lock(quiet=True):\n+ with self.exclusive_lock():\n # Another process may have already completed this work\n if os.path.exists(self.db_path): # pragma: no cover (race)\n return\n", "issue": "\"Locking pre-commit directory\" should only print if waiting for a lock\nOtherwise this is just useless console noise\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport os.path\nimport sqlite3\nimport tempfile\n\nfrom cached_property import cached_property\n\nimport pre_commit.constants as C\nfrom pre_commit import file_lock\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import copy_tree_to_path\nfrom pre_commit.util import cwd\nfrom pre_commit.util import no_git_env\nfrom pre_commit.util import resource_filename\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory():\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. 
This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n return os.environ.get(\n 'PRE_COMMIT_HOME',\n os.path.join(os.path.expanduser('~'), '.pre-commit'),\n )\n\n\nclass Store(object):\n get_default_directory = staticmethod(_get_default_directory)\n __created = False\n\n def __init__(self, directory=None):\n if directory is None:\n directory = self.get_default_directory()\n\n self.directory = directory\n\n @contextlib.contextmanager\n def exclusive_lock(self, quiet=False):\n if not quiet:\n logger.info('Locking pre-commit directory')\n with file_lock.lock(os.path.join(self.directory, '.lock')):\n yield\n\n def _write_readme(self):\n with io.open(os.path.join(self.directory, 'README'), 'w') as readme:\n readme.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n',\n )\n\n def _write_sqlite_db(self):\n # To avoid a race where someone ^Cs between db creation and execution\n # of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n # sqlite doesn't close its fd with its contextmanager >.<\n # contextlib.closing fixes this.\n # See: http://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(tmpfile)) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo CHAR(255) NOT NULL,'\n ' ref CHAR(255) NOT NULL,'\n ' path CHAR(255) NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');',\n )\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n def _create(self):\n if not os.path.exists(self.directory):\n os.makedirs(self.directory)\n self._write_readme()\n\n if os.path.exists(self.db_path):\n return\n with self.exclusive_lock(quiet=True):\n # Another process may have already completed this work\n if os.path.exists(self.db_path): # pragma: no cover (race)\n return\n self._write_sqlite_db()\n\n def require_created(self):\n \"\"\"Require the pre-commit file store to be created.\"\"\"\n if not self.__created:\n self._create()\n self.__created = True\n\n def _new_repo(self, repo, ref, make_strategy):\n self.require_created()\n\n def _get_result():\n # Check if we already exist\n with sqlite3.connect(self.db_path) as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? 
AND ref = ?',\n [repo, ref],\n ).fetchone()\n if result:\n return result[0]\n\n result = _get_result()\n if result:\n return result\n with self.exclusive_lock():\n # Another process may have already completed this work\n result = _get_result()\n if result: # pragma: no cover (race)\n return result\n\n logger.info('Initializing environment for {}.'.format(repo))\n\n directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(directory):\n make_strategy(directory)\n\n # Update our db with the created repo\n with sqlite3.connect(self.db_path) as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [repo, ref, directory],\n )\n return directory\n\n def clone(self, repo, ref):\n \"\"\"Clone the given url and checkout the specific ref.\"\"\"\n def clone_strategy(directory):\n cmd_output(\n 'git', 'clone', '--no-checkout', repo, directory,\n env=no_git_env(),\n )\n with cwd(directory):\n cmd_output('git', 'reset', ref, '--hard', env=no_git_env())\n cmd_output(\n 'git', 'submodule', 'update', '--init', '--recursive',\n env=no_git_env(),\n )\n\n return self._new_repo(repo, ref, clone_strategy)\n\n def make_local(self, deps):\n def make_local_strategy(directory):\n copy_tree_to_path(resource_filename('empty_template'), directory)\n return self._new_repo(\n 'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,\n make_local_strategy,\n )\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.directory)\n\n @cached_property\n def db_path(self):\n return os.path.join(self.directory, 'db.db')\n", "path": "pre_commit/store.py"}, {"content": "import contextlib\nimport errno\n\n\ntry: # pragma: no cover (windows)\n import msvcrt\n\n # https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/locking\n\n # on windows we lock \"regions\" of files, we don't care about the actual\n # byte region so we'll just pick *some* number here.\n _region = 0xffff\n\n @contextlib.contextmanager\n def _locked(fileno):\n while True:\n try:\n msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)\n except OSError as e:\n # Locking violation. Returned when the _LK_LOCK or _LK_RLCK\n # flag is specified and the file cannot be locked after 10\n # attempts.\n if e.errno != errno.EDEADLOCK:\n raise\n else:\n break\n\n try:\n yield\n finally:\n # From cursory testing, it seems to get unlocked when the file is\n # closed so this may not be necessary.\n # The documentation however states:\n # \"Regions should be locked only briefly and should be unlocked\n # before closing a file or exiting the program.\"\n msvcrt.locking(fileno, msvcrt.LK_UNLCK, _region)\nexcept ImportError: # pragma: no cover (posix)\n import fcntl\n\n @contextlib.contextmanager\n def _locked(fileno):\n fcntl.flock(fileno, fcntl.LOCK_EX)\n try:\n yield\n finally:\n fcntl.flock(fileno, fcntl.LOCK_UN)\n\n\[email protected]\ndef lock(path):\n with open(path, 'a+') as f:\n with _locked(f.fileno()):\n yield\n", "path": "pre_commit/file_lock.py"}]}
| 2,692 | 739 |
gh_patches_debug_21465
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2785
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Custom environment metadata in JSON artifacts
### Describe the feature
Include custom environment-sourced metadata in the JSON artifacts.
Inside the `metadata` field (or at the root if #2761 is not done yet), dbt should add a field - tentatively `env`. This field should be a dictionary, populated from the contents of dbt's environment that start with `DBT_ENV_CUSTOM_ENV_`. The keys should be the environment variables without the prefix, and the values the values. For example:
DBT_ENV_CUSTOM_ENV_run_id=1 DBT_ENV_CUSTOM_ENV_run_env=ci dbt run
will result in a manifest with `{"env": {"run_id": "1", "run_env": "ci"}}` in its metadata field.
See [this comment](https://github.com/fishtown-analytics/dbt/issues/2671#issuecomment-685892719) for a tiny bit of context.
Split off from #2671
</issue>
<code>
[start of core/dbt/contracts/util.py]
1 import dataclasses
2 from datetime import datetime
3 from typing import (
4 List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional
5 )
6
7 from dbt.clients.system import write_json, read_json
8 from dbt.exceptions import (
9 IncompatibleSchemaException,
10 InternalException,
11 RuntimeException,
12 )
13 from dbt.version import __version__
14 from dbt.tracking import get_invocation_id
15 from hologram import JsonSchemaMixin
16
17 MacroKey = Tuple[str, str]
18 SourceKey = Tuple[str, str]
19
20
21 def list_str() -> List[str]:
22 """Mypy gets upset about stuff like:
23
24 from dataclasses import dataclass, field
25 from typing import Optional, List
26
27 @dataclass
28 class Foo:
29 x: Optional[List[str]] = field(default_factory=list)
30
31
32 Because `list` could be any kind of list, I guess
33 """
34 return []
35
36
37 class Replaceable:
38 def replace(self, **kwargs):
39 return dataclasses.replace(self, **kwargs)
40
41
42 class Mergeable(Replaceable):
43 def merged(self, *args):
44 """Perform a shallow merge, where the last non-None write wins. This is
45 intended to merge dataclasses that are a collection of optional values.
46 """
47 replacements = {}
48 cls = type(self)
49 for arg in args:
50 for field in dataclasses.fields(cls):
51 value = getattr(arg, field.name)
52 if value is not None:
53 replacements[field.name] = value
54
55 return self.replace(**replacements)
56
57
58 class Writable:
59 def write(self, path: str, omit_none: bool = False):
60 write_json(path, self.to_dict(omit_none=omit_none)) # type: ignore
61
62
63 class AdditionalPropertiesMixin:
64 """Make this class an extensible property.
65
66 The underlying class definition must include a type definition for a field
67 named '_extra' that is of type `Dict[str, Any]`.
68 """
69 ADDITIONAL_PROPERTIES = True
70
71 @classmethod
72 def from_dict(cls, data, validate=True):
73 self = super().from_dict(data=data, validate=validate)
74 keys = self.to_dict(validate=False, omit_none=False)
75 for key, value in data.items():
76 if key not in keys:
77 self.extra[key] = value
78 return self
79
80 def to_dict(self, omit_none=True, validate=False):
81 data = super().to_dict(omit_none=omit_none, validate=validate)
82 data.update(self.extra)
83 return data
84
85 def replace(self, **kwargs):
86 dct = self.to_dict(omit_none=False, validate=False)
87 dct.update(kwargs)
88 return self.from_dict(dct)
89
90 @property
91 def extra(self):
92 return self._extra
93
94
95 class Readable:
96 @classmethod
97 def read(cls, path: str):
98 try:
99 data = read_json(path)
100 except (EnvironmentError, ValueError) as exc:
101 raise RuntimeException(
102 f'Could not read {cls.__name__} at "{path}" as JSON: {exc}'
103 ) from exc
104
105 return cls.from_dict(data) # type: ignore
106
107
108 BASE_SCHEMAS_URL = 'https://schemas.getdbt.com/dbt/{name}/v{version}.json'
109
110
111 @dataclasses.dataclass
112 class SchemaVersion:
113 name: str
114 version: int
115
116 def __str__(self) -> str:
117 return BASE_SCHEMAS_URL.format(
118 name=self.name,
119 version=self.version,
120 )
121
122
123 SCHEMA_VERSION_KEY = 'dbt_schema_version'
124
125
126 @dataclasses.dataclass
127 class BaseArtifactMetadata(JsonSchemaMixin):
128 dbt_schema_version: str
129 dbt_version: str = __version__
130 generated_at: datetime = dataclasses.field(
131 default_factory=datetime.utcnow
132 )
133 invocation_id: Optional[str] = dataclasses.field(
134 default_factory=get_invocation_id
135 )
136
137
138 def schema_version(name: str, version: int):
139 def inner(cls: Type[VersionedSchema]):
140 cls.dbt_schema_version = SchemaVersion(
141 name=name,
142 version=version,
143 )
144 return cls
145 return inner
146
147
148 @dataclasses.dataclass
149 class VersionedSchema(JsonSchemaMixin):
150 dbt_schema_version: ClassVar[SchemaVersion]
151
152 @classmethod
153 def json_schema(cls, embeddable: bool = False) -> Dict[str, Any]:
154 result = super().json_schema(embeddable=embeddable)
155 if not embeddable:
156 result['$id'] = str(cls.dbt_schema_version)
157 return result
158
159
160 T = TypeVar('T', bound='ArtifactMixin')
161
162
163 # metadata should really be a Generic[T_M] where T_M is a TypeVar bound to
164 # BaseArtifactMetadata. Unfortunately this isn't possible due to a mypy issue:
165 # https://github.com/python/mypy/issues/7520
166 @dataclasses.dataclass(init=False)
167 class ArtifactMixin(VersionedSchema, Writable, Readable):
168 metadata: BaseArtifactMetadata
169
170 @classmethod
171 def from_dict(
172 cls: Type[T], data: Dict[str, Any], validate: bool = True
173 ) -> T:
174 if cls.dbt_schema_version is None:
175 raise InternalException(
176 'Cannot call from_dict with no schema version!'
177 )
178
179 if validate:
180 expected = str(cls.dbt_schema_version)
181 found = data.get('metadata', {}).get(SCHEMA_VERSION_KEY)
182 if found != expected:
183 raise IncompatibleSchemaException(expected, found)
184
185 return super().from_dict(data=data, validate=validate)
186
[end of core/dbt/contracts/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/dbt/contracts/util.py b/core/dbt/contracts/util.py
--- a/core/dbt/contracts/util.py
+++ b/core/dbt/contracts/util.py
@@ -1,4 +1,5 @@
import dataclasses
+import os
from datetime import datetime
from typing import (
List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional
@@ -123,6 +124,16 @@
SCHEMA_VERSION_KEY = 'dbt_schema_version'
+METADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'
+
+
+def get_metadata_env() -> Dict[str, str]:
+ return {
+ k[len(METADATA_ENV_PREFIX):]: v for k, v in os.environ.items()
+ if k.startswith(METADATA_ENV_PREFIX)
+ }
+
+
@dataclasses.dataclass
class BaseArtifactMetadata(JsonSchemaMixin):
dbt_schema_version: str
@@ -133,6 +144,7 @@
invocation_id: Optional[str] = dataclasses.field(
default_factory=get_invocation_id
)
+ env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_env)
def schema_version(name: str, version: int):
|
{"golden_diff": "diff --git a/core/dbt/contracts/util.py b/core/dbt/contracts/util.py\n--- a/core/dbt/contracts/util.py\n+++ b/core/dbt/contracts/util.py\n@@ -1,4 +1,5 @@\n import dataclasses\n+import os\n from datetime import datetime\n from typing import (\n List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional\n@@ -123,6 +124,16 @@\n SCHEMA_VERSION_KEY = 'dbt_schema_version'\n \n \n+METADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'\n+\n+\n+def get_metadata_env() -> Dict[str, str]:\n+ return {\n+ k[len(METADATA_ENV_PREFIX):]: v for k, v in os.environ.items()\n+ if k.startswith(METADATA_ENV_PREFIX)\n+ }\n+\n+\n @dataclasses.dataclass\n class BaseArtifactMetadata(JsonSchemaMixin):\n dbt_schema_version: str\n@@ -133,6 +144,7 @@\n invocation_id: Optional[str] = dataclasses.field(\n default_factory=get_invocation_id\n )\n+ env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_env)\n \n \n def schema_version(name: str, version: int):\n", "issue": "Custom environment metadata in JSON artifacts\n### Describe the feature\r\nInclude custom environment-sourced metadata in the JSON artifacts.\r\n\r\nInside the `metadata` field (or at the root if #2761 is not done yet), dbt should add a field - tentatively `env`. This field should be a dictionary, populated from the contents of dbt's environment that start with `DBT_ENV_CUSTOM_ENV_`. The keys should be the environment variables without the prefix, and the values the values. For example:\r\nDBT_ENV_CUSTOM_ENV_run_id=1 DBT_ENV_CUSTOM_ENV_run_env=ci dbt run\r\nwill result in a manifest with `{\"env\": {\"run_id\": \"1\", \"run_env\": \"ci\"}}` in its metadata field.\r\n\r\nSee [this comment](https://github.com/fishtown-analytics/dbt/issues/2671#issuecomment-685892719) for a tiny bit of context.\r\n\r\nSplit off from #2671 \r\n\n", "before_files": [{"content": "import dataclasses\nfrom datetime import datetime\nfrom typing import (\n List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional\n)\n\nfrom dbt.clients.system import write_json, read_json\nfrom dbt.exceptions import (\n IncompatibleSchemaException,\n InternalException,\n RuntimeException,\n)\nfrom dbt.version import __version__\nfrom dbt.tracking import get_invocation_id\nfrom hologram import JsonSchemaMixin\n\nMacroKey = Tuple[str, str]\nSourceKey = Tuple[str, str]\n\n\ndef list_str() -> List[str]:\n \"\"\"Mypy gets upset about stuff like:\n\n from dataclasses import dataclass, field\n from typing import Optional, List\n\n @dataclass\n class Foo:\n x: Optional[List[str]] = field(default_factory=list)\n\n\n Because `list` could be any kind of list, I guess\n \"\"\"\n return []\n\n\nclass Replaceable:\n def replace(self, **kwargs):\n return dataclasses.replace(self, **kwargs)\n\n\nclass Mergeable(Replaceable):\n def merged(self, *args):\n \"\"\"Perform a shallow merge, where the last non-None write wins. 
This is\n intended to merge dataclasses that are a collection of optional values.\n \"\"\"\n replacements = {}\n cls = type(self)\n for arg in args:\n for field in dataclasses.fields(cls):\n value = getattr(arg, field.name)\n if value is not None:\n replacements[field.name] = value\n\n return self.replace(**replacements)\n\n\nclass Writable:\n def write(self, path: str, omit_none: bool = False):\n write_json(path, self.to_dict(omit_none=omit_none)) # type: ignore\n\n\nclass AdditionalPropertiesMixin:\n \"\"\"Make this class an extensible property.\n\n The underlying class definition must include a type definition for a field\n named '_extra' that is of type `Dict[str, Any]`.\n \"\"\"\n ADDITIONAL_PROPERTIES = True\n\n @classmethod\n def from_dict(cls, data, validate=True):\n self = super().from_dict(data=data, validate=validate)\n keys = self.to_dict(validate=False, omit_none=False)\n for key, value in data.items():\n if key not in keys:\n self.extra[key] = value\n return self\n\n def to_dict(self, omit_none=True, validate=False):\n data = super().to_dict(omit_none=omit_none, validate=validate)\n data.update(self.extra)\n return data\n\n def replace(self, **kwargs):\n dct = self.to_dict(omit_none=False, validate=False)\n dct.update(kwargs)\n return self.from_dict(dct)\n\n @property\n def extra(self):\n return self._extra\n\n\nclass Readable:\n @classmethod\n def read(cls, path: str):\n try:\n data = read_json(path)\n except (EnvironmentError, ValueError) as exc:\n raise RuntimeException(\n f'Could not read {cls.__name__} at \"{path}\" as JSON: {exc}'\n ) from exc\n\n return cls.from_dict(data) # type: ignore\n\n\nBASE_SCHEMAS_URL = 'https://schemas.getdbt.com/dbt/{name}/v{version}.json'\n\n\[email protected]\nclass SchemaVersion:\n name: str\n version: int\n\n def __str__(self) -> str:\n return BASE_SCHEMAS_URL.format(\n name=self.name,\n version=self.version,\n )\n\n\nSCHEMA_VERSION_KEY = 'dbt_schema_version'\n\n\[email protected]\nclass BaseArtifactMetadata(JsonSchemaMixin):\n dbt_schema_version: str\n dbt_version: str = __version__\n generated_at: datetime = dataclasses.field(\n default_factory=datetime.utcnow\n )\n invocation_id: Optional[str] = dataclasses.field(\n default_factory=get_invocation_id\n )\n\n\ndef schema_version(name: str, version: int):\n def inner(cls: Type[VersionedSchema]):\n cls.dbt_schema_version = SchemaVersion(\n name=name,\n version=version,\n )\n return cls\n return inner\n\n\[email protected]\nclass VersionedSchema(JsonSchemaMixin):\n dbt_schema_version: ClassVar[SchemaVersion]\n\n @classmethod\n def json_schema(cls, embeddable: bool = False) -> Dict[str, Any]:\n result = super().json_schema(embeddable=embeddable)\n if not embeddable:\n result['$id'] = str(cls.dbt_schema_version)\n return result\n\n\nT = TypeVar('T', bound='ArtifactMixin')\n\n\n# metadata should really be a Generic[T_M] where T_M is a TypeVar bound to\n# BaseArtifactMetadata. 
Unfortunately this isn't possible due to a mypy issue:\n# https://github.com/python/mypy/issues/7520\[email protected](init=False)\nclass ArtifactMixin(VersionedSchema, Writable, Readable):\n metadata: BaseArtifactMetadata\n\n @classmethod\n def from_dict(\n cls: Type[T], data: Dict[str, Any], validate: bool = True\n ) -> T:\n if cls.dbt_schema_version is None:\n raise InternalException(\n 'Cannot call from_dict with no schema version!'\n )\n\n if validate:\n expected = str(cls.dbt_schema_version)\n found = data.get('metadata', {}).get(SCHEMA_VERSION_KEY)\n if found != expected:\n raise IncompatibleSchemaException(expected, found)\n\n return super().from_dict(data=data, validate=validate)\n", "path": "core/dbt/contracts/util.py"}]}
| 2,410 | 271 |
gh_patches_debug_19931
|
rasdani/github-patches
|
git_diff
|
CiviWiki__OpenCiviWiki-1089
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Migration of frontend_views
### Idea summary
Elaboration of issue #1070
### Further details
As mentioned in issue #1070, we need to migrate from
```py
url(r"^")
```
to
```py
path()
```
And for frontend_views the following points need to be kept in mind
- [ ] usage of django.urls module for path and include
- [ ] Including the following views using the include method
- [ ] About_view
- [ ] support_us_view
- [ ] how_it_works_view
- [ ] user_profile
- [ ] issue_thread
- [ ] base_view
- [ ] civi2csv
- [ ] use path for all of the above mentioned urls
- [ ] Use the same name for reverse match.
</issue>
<code>
[start of project/frontend_views/urls.py]
1 from django.conf.urls import url
2 from . import views as v
3
4 urlpatterns = [
5 url(r"^about$", v.about_view, name="about"),
6 url(r"^support_us$", v.support_us_view, name="support us"),
7 url(r"^howitworks$", v.how_it_works_view, name="how it works"),
8 url(r"^profile/(?P<username>[a-zA-Z0-9-_]*)$", v.user_profile, name="profile"),
9 url(r"^profile/rep/(?P<username>\d+)$", v.user_profile, name="profile"),
10 url(r"^thread/(?P<thread_id>\w+)$", v.issue_thread, name="issue thread"),
11 url(r"^profile$", v.user_profile, name="default_profile"),
12 url(r"^$", v.base_view, name="base"),
13 url(r"^thread/(?P<thread_id>\w+)/csv$", v.civi2csv, name="civi2csv"),
14 ]
15
[end of project/frontend_views/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/project/frontend_views/urls.py b/project/frontend_views/urls.py
--- a/project/frontend_views/urls.py
+++ b/project/frontend_views/urls.py
@@ -1,14 +1,13 @@
-from django.conf.urls import url
-from . import views as v
+from django.urls import path
+from frontend_views import views
urlpatterns = [
- url(r"^about$", v.about_view, name="about"),
- url(r"^support_us$", v.support_us_view, name="support us"),
- url(r"^howitworks$", v.how_it_works_view, name="how it works"),
- url(r"^profile/(?P<username>[a-zA-Z0-9-_]*)$", v.user_profile, name="profile"),
- url(r"^profile/rep/(?P<username>\d+)$", v.user_profile, name="profile"),
- url(r"^thread/(?P<thread_id>\w+)$", v.issue_thread, name="issue thread"),
- url(r"^profile$", v.user_profile, name="default_profile"),
- url(r"^$", v.base_view, name="base"),
- url(r"^thread/(?P<thread_id>\w+)/csv$", v.civi2csv, name="civi2csv"),
+ path("about/", views.about_view, name="about"),
+ path("support_us/", views.support_us_view, name="support us"),
+ path("howitworks/", views.how_it_works_view, name="how it works"),
+ path("profile/<str:username>/", views.user_profile, name="profile"),
+ path("thread/<int:thread_id>/", views.issue_thread, name="issue thread"),
+ path("profile/", views.user_profile, name="default_profile"),
+ path("", views.base_view, name="base"),
+ path("thread/<int:thread_id>/csv/", views.civi2csv, name="civi2csv"),
]
|
{"golden_diff": "diff --git a/project/frontend_views/urls.py b/project/frontend_views/urls.py\n--- a/project/frontend_views/urls.py\n+++ b/project/frontend_views/urls.py\n@@ -1,14 +1,13 @@\n-from django.conf.urls import url\n-from . import views as v\n+from django.urls import path\n+from frontend_views import views\n \n urlpatterns = [\n- url(r\"^about$\", v.about_view, name=\"about\"),\n- url(r\"^support_us$\", v.support_us_view, name=\"support us\"),\n- url(r\"^howitworks$\", v.how_it_works_view, name=\"how it works\"),\n- url(r\"^profile/(?P<username>[a-zA-Z0-9-_]*)$\", v.user_profile, name=\"profile\"),\n- url(r\"^profile/rep/(?P<username>\\d+)$\", v.user_profile, name=\"profile\"),\n- url(r\"^thread/(?P<thread_id>\\w+)$\", v.issue_thread, name=\"issue thread\"),\n- url(r\"^profile$\", v.user_profile, name=\"default_profile\"),\n- url(r\"^$\", v.base_view, name=\"base\"),\n- url(r\"^thread/(?P<thread_id>\\w+)/csv$\", v.civi2csv, name=\"civi2csv\"),\n+ path(\"about/\", views.about_view, name=\"about\"),\n+ path(\"support_us/\", views.support_us_view, name=\"support us\"),\n+ path(\"howitworks/\", views.how_it_works_view, name=\"how it works\"),\n+ path(\"profile/<str:username>/\", views.user_profile, name=\"profile\"),\n+ path(\"thread/<int:thread_id>/\", views.issue_thread, name=\"issue thread\"),\n+ path(\"profile/\", views.user_profile, name=\"default_profile\"),\n+ path(\"\", views.base_view, name=\"base\"),\n+ path(\"thread/<int:thread_id>/csv/\", views.civi2csv, name=\"civi2csv\"),\n ]\n", "issue": "Migration of frontend_views \n### Idea summary\n\nElaboration of issue #1070\n\n### Further details\n\nAs mentioned in issue #1070, we need to migrate from\r\n```py\r\nurl(r\"^\")\r\n```\r\nto \r\n```py\r\npath()\r\n```\r\nAnd for frontend_views the following points need to be kept in mind\r\n- [ ] usage of django.urls module for path and include\r\n- [ ] Including the following views using the include method\r\n - [ ] About_view\r\n - [ ] support_us_view\r\n - [ ] how_it_works_view\r\n - [ ] user_profile\r\n - [ ] issue_thread\r\n - [ ] base_view\r\n - [ ] civi2csv\r\n - [ ] use path for all of the above mentioned urls\r\n - [ ] Use the same name for reverse match. \n", "before_files": [{"content": "from django.conf.urls import url\nfrom . import views as v\n\nurlpatterns = [\n url(r\"^about$\", v.about_view, name=\"about\"),\n url(r\"^support_us$\", v.support_us_view, name=\"support us\"),\n url(r\"^howitworks$\", v.how_it_works_view, name=\"how it works\"),\n url(r\"^profile/(?P<username>[a-zA-Z0-9-_]*)$\", v.user_profile, name=\"profile\"),\n url(r\"^profile/rep/(?P<username>\\d+)$\", v.user_profile, name=\"profile\"),\n url(r\"^thread/(?P<thread_id>\\w+)$\", v.issue_thread, name=\"issue thread\"),\n url(r\"^profile$\", v.user_profile, name=\"default_profile\"),\n url(r\"^$\", v.base_view, name=\"base\"),\n url(r\"^thread/(?P<thread_id>\\w+)/csv$\", v.civi2csv, name=\"civi2csv\"),\n]\n", "path": "project/frontend_views/urls.py"}]}
| 940 | 419 |
gh_patches_debug_37451
|
rasdani/github-patches
|
git_diff
|
great-expectations__great_expectations-2142
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in slack_notification Action
Hi Guys,
I have 3 pipelines with great expectations, but one pipeline started to give some errors:
`Error running action with name notify_slack
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 384, in _run_actions
payload=batch_actions_results,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 55, in run
**kwargs,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 170, in _run
validation_result_suite, data_docs_pages, self.notify_with
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 101, in render
report_element = self._get_report_element(docs_link)
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 155, in _get_report_element
if "file:///" in docs_link:
TypeError: argument of type 'NoneType' is not iterable
Traceback (most recent call last):
File "fact_demand_coverage_datalake_datasets_job.py", line 90, in <module>
expectation_suite=expectation_suite_name_for_validations
File "/mnt1/yarn/usercache/hadoop/appcache/application_1603978914184_5458/container_1603978914184_5458_01_000001/data_quality.zip/data_quality/ge_validation.py", line 174, in validate
File "/usr/local/lib/python3.7/site-packages/great_expectations/core/usage_statistics/usage_statistics.py", line 257, in usage_statistics_wrapped_method
result = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/great_expectations/data_context/data_context.py", line 1022, in run_validation_operator
**kwargs,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 334, in run
run_id,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 399, in _run_actions
raise e
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 384, in _run_actions
payload=batch_actions_results,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 55, in run
**kwargs,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 170, in _run
validation_result_suite, data_docs_pages, self.notify_with
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 101, in render
report_element = self._get_report_element(docs_link)
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 155, in _get_report_element
if "file:///" in docs_link:
TypeError: argument of type 'NoneType' is not iterable`
The stranger thing with this error, the error doesn't happen every day, for example yesterday it worked fine.
</issue>
<code>
[start of great_expectations/render/renderer/slack_renderer.py]
1 import logging
2
3 from great_expectations.exceptions import InvalidKeyError
4
5 logger = logging.getLogger(__name__)
6
7 from ...core.id_dict import BatchKwargs
8 from .renderer import Renderer
9
10
11 class SlackRenderer(Renderer):
12 def __init__(self):
13 super().__init__()
14
15 def render(
16 self, validation_result=None, data_docs_pages=None, notify_with=None,
17 ):
18 default_text = (
19 "No validation occurred. Please ensure you passed a validation_result."
20 )
21 status = "Failed :x:"
22
23 title_block = {
24 "type": "section",
25 "text": {"type": "mrkdwn", "text": default_text,},
26 }
27
28 query = {
29 "blocks": [title_block],
30 # this abbreviated root level "text" will show up in the notification and not the message
31 "text": default_text,
32 }
33
34 if validation_result:
35 expectation_suite_name = validation_result.meta.get(
36 "expectation_suite_name", "__no_expectation_suite_name__"
37 )
38
39 if "batch_kwargs" in validation_result.meta:
40 data_asset_name = validation_result.meta["batch_kwargs"].get(
41 "data_asset_name", "__no_data_asset_name__"
42 )
43 else:
44 data_asset_name = "__no_data_asset_name__"
45
46 n_checks_succeeded = validation_result.statistics["successful_expectations"]
47 n_checks = validation_result.statistics["evaluated_expectations"]
48 run_id = validation_result.meta.get("run_id", "__no_run_id__")
49 batch_id = BatchKwargs(
50 validation_result.meta.get("batch_kwargs", {})
51 ).to_id()
52 check_details_text = (
53 f"*{n_checks_succeeded}* of *{n_checks}* expectations were met"
54 )
55
56 if validation_result.success:
57 status = "Success :tada:"
58
59 summary_text = f"""*Batch Validation Status*: {status}
60 *Expectation suite name*: `{expectation_suite_name}`
61 *Data asset name*: `{data_asset_name}`
62 *Run ID*: `{run_id}`
63 *Batch ID*: `{batch_id}`
64 *Summary*: {check_details_text}"""
65 query["blocks"][0]["text"]["text"] = summary_text
66 # this abbreviated root level "text" will show up in the notification and not the message
67 query["text"] = f"{expectation_suite_name}: {status}"
68
69 if data_docs_pages:
70 if notify_with is not None:
71 for docs_link_key in notify_with:
72 if docs_link_key in data_docs_pages.keys():
73 docs_link = data_docs_pages[docs_link_key]
74 report_element = self._get_report_element(docs_link)
75 else:
76 logger.critical(
77 f"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\n"
78 )
79 report_element = {
80 "type": "section",
81 "text": {
82 "type": "mrkdwn",
83 "text": f"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\n",
84 },
85 }
86 if report_element:
87 query["blocks"].append(report_element)
88 else:
89 for docs_link_key in data_docs_pages.keys():
90 if docs_link_key == "class":
91 continue
92 docs_link = data_docs_pages[docs_link_key]
93 report_element = self._get_report_element(docs_link)
94 if report_element:
95 query["blocks"].append(report_element)
96
97 if "result_reference" in validation_result.meta:
98 result_reference = validation_result.meta["result_reference"]
99 report_element = {
100 "type": "section",
101 "text": {
102 "type": "mrkdwn",
103 "text": f"- *Validation Report*: {result_reference}",
104 },
105 }
106 query["blocks"].append(report_element)
107
108 if "dataset_reference" in validation_result.meta:
109 dataset_reference = validation_result.meta["dataset_reference"]
110 dataset_element = {
111 "type": "section",
112 "text": {
113 "type": "mrkdwn",
114 "text": f"- *Validation data asset*: {dataset_reference}",
115 },
116 }
117 query["blocks"].append(dataset_element)
118
119 custom_blocks = self._custom_blocks(evr=validation_result)
120 if custom_blocks:
121 query["blocks"].append(custom_blocks)
122
123 documentation_url = "https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html"
124 footer_section = {
125 "type": "context",
126 "elements": [
127 {
128 "type": "mrkdwn",
129 "text": f"Learn how to review validation results in Data Docs: {documentation_url}",
130 }
131 ],
132 }
133
134 divider_block = {"type": "divider"}
135 query["blocks"].append(divider_block)
136 query["blocks"].append(footer_section)
137 return query
138
139 def _custom_blocks(self, evr):
140 return None
141
142 def _get_report_element(self, docs_link):
143 if docs_link is None:
144 logger.warn("No docs link found. Skipping data docs link in slack message.")
145 return
146
147 if "file://" in docs_link:
148 # handle special case since Slack does not render these links
149 report_element = {
150 "type": "section",
151 "text": {
152 "type": "mrkdwn",
153 "text": f"*DataDocs* can be found here: `{docs_link}` \n (Please copy and paste link into a browser to view)\n",
154 },
155 }
156 else:
157 report_element = {
158 "type": "section",
159 "text": {
160 "type": "mrkdwn",
161 "text": f"*DataDocs* can be found here: <{docs_link}|{docs_link}>",
162 },
163 }
164 return report_element
165
[end of great_expectations/render/renderer/slack_renderer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/great_expectations/render/renderer/slack_renderer.py b/great_expectations/render/renderer/slack_renderer.py
--- a/great_expectations/render/renderer/slack_renderer.py
+++ b/great_expectations/render/renderer/slack_renderer.py
@@ -116,10 +116,6 @@
}
query["blocks"].append(dataset_element)
- custom_blocks = self._custom_blocks(evr=validation_result)
- if custom_blocks:
- query["blocks"].append(custom_blocks)
-
documentation_url = "https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html"
footer_section = {
"type": "context",
@@ -136,29 +132,36 @@
query["blocks"].append(footer_section)
return query
- def _custom_blocks(self, evr):
- return None
-
def _get_report_element(self, docs_link):
- if docs_link is None:
- logger.warn("No docs link found. Skipping data docs link in slack message.")
- return
-
- if "file://" in docs_link:
- # handle special case since Slack does not render these links
- report_element = {
- "type": "section",
- "text": {
- "type": "mrkdwn",
- "text": f"*DataDocs* can be found here: `{docs_link}` \n (Please copy and paste link into a browser to view)\n",
- },
- }
+ report_element = None
+ if docs_link:
+ try:
+ if "file://" in docs_link:
+ # handle special case since Slack does not render these links
+ report_element = {
+ "type": "section",
+ "text": {
+ "type": "mrkdwn",
+ "text": f"*DataDocs* can be found here: `{docs_link}` \n (Please copy and paste link into a browser to view)\n",
+ },
+ }
+ else:
+ report_element = {
+ "type": "section",
+ "text": {
+ "type": "mrkdwn",
+ "text": f"*DataDocs* can be found here: <{docs_link}|{docs_link}>",
+ },
+ }
+ except Exception as e:
+ logger.warning(
+ f"""SlackRenderer had a problem with generating the docs link.
+ link used to generate the docs link is: {docs_link} and is of type: {type(docs_link)}.
+ Error: {e}"""
+ )
+ return
else:
- report_element = {
- "type": "section",
- "text": {
- "type": "mrkdwn",
- "text": f"*DataDocs* can be found here: <{docs_link}|{docs_link}>",
- },
- }
+ logger.warning(
+ "No docs link found. Skipping data docs link in Slack message."
+ )
return report_element
|
{"golden_diff": "diff --git a/great_expectations/render/renderer/slack_renderer.py b/great_expectations/render/renderer/slack_renderer.py\n--- a/great_expectations/render/renderer/slack_renderer.py\n+++ b/great_expectations/render/renderer/slack_renderer.py\n@@ -116,10 +116,6 @@\n }\n query[\"blocks\"].append(dataset_element)\n \n- custom_blocks = self._custom_blocks(evr=validation_result)\n- if custom_blocks:\n- query[\"blocks\"].append(custom_blocks)\n-\n documentation_url = \"https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html\"\n footer_section = {\n \"type\": \"context\",\n@@ -136,29 +132,36 @@\n query[\"blocks\"].append(footer_section)\n return query\n \n- def _custom_blocks(self, evr):\n- return None\n-\n def _get_report_element(self, docs_link):\n- if docs_link is None:\n- logger.warn(\"No docs link found. Skipping data docs link in slack message.\")\n- return\n-\n- if \"file://\" in docs_link:\n- # handle special case since Slack does not render these links\n- report_element = {\n- \"type\": \"section\",\n- \"text\": {\n- \"type\": \"mrkdwn\",\n- \"text\": f\"*DataDocs* can be found here: `{docs_link}` \\n (Please copy and paste link into a browser to view)\\n\",\n- },\n- }\n+ report_element = None\n+ if docs_link:\n+ try:\n+ if \"file://\" in docs_link:\n+ # handle special case since Slack does not render these links\n+ report_element = {\n+ \"type\": \"section\",\n+ \"text\": {\n+ \"type\": \"mrkdwn\",\n+ \"text\": f\"*DataDocs* can be found here: `{docs_link}` \\n (Please copy and paste link into a browser to view)\\n\",\n+ },\n+ }\n+ else:\n+ report_element = {\n+ \"type\": \"section\",\n+ \"text\": {\n+ \"type\": \"mrkdwn\",\n+ \"text\": f\"*DataDocs* can be found here: <{docs_link}|{docs_link}>\",\n+ },\n+ }\n+ except Exception as e:\n+ logger.warning(\n+ f\"\"\"SlackRenderer had a problem with generating the docs link.\n+ link used to generate the docs link is: {docs_link} and is of type: {type(docs_link)}.\n+ Error: {e}\"\"\"\n+ )\n+ return\n else:\n- report_element = {\n- \"type\": \"section\",\n- \"text\": {\n- \"type\": \"mrkdwn\",\n- \"text\": f\"*DataDocs* can be found here: <{docs_link}|{docs_link}>\",\n- },\n- }\n+ logger.warning(\n+ \"No docs link found. 
Skipping data docs link in Slack message.\"\n+ )\n return report_element\n", "issue": "Error in slack_notification Action\nHi Guys,\r\nI have 3 pipelines with great expectations, but one pipeline started to give some errors:\r\n\r\n`Error running action with name notify_slack\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 384, in _run_actions\r\n payload=batch_actions_results,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 55, in run\r\n **kwargs,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 170, in _run\r\n validation_result_suite, data_docs_pages, self.notify_with\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 101, in render\r\n report_element = self._get_report_element(docs_link)\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 155, in _get_report_element\r\n if \"file:///\" in docs_link:\r\nTypeError: argument of type 'NoneType' is not iterable\r\nTraceback (most recent call last):\r\n File \"fact_demand_coverage_datalake_datasets_job.py\", line 90, in <module>\r\n expectation_suite=expectation_suite_name_for_validations\r\n File \"/mnt1/yarn/usercache/hadoop/appcache/application_1603978914184_5458/container_1603978914184_5458_01_000001/data_quality.zip/data_quality/ge_validation.py\", line 174, in validate\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/core/usage_statistics/usage_statistics.py\", line 257, in usage_statistics_wrapped_method\r\n result = func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/data_context/data_context.py\", line 1022, in run_validation_operator\r\n **kwargs,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 334, in run\r\n run_id,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 399, in _run_actions\r\n raise e\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 384, in _run_actions\r\n payload=batch_actions_results,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 55, in run\r\n **kwargs,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 170, in _run\r\n validation_result_suite, data_docs_pages, self.notify_with\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 101, in render\r\n report_element = self._get_report_element(docs_link)\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 155, in _get_report_element\r\n if \"file:///\" in docs_link:\r\nTypeError: argument of type 'NoneType' is not iterable`\r\n\r\nThe stranger thing with this error, the error doesn't happen every day, for example yesterday it worked fine.\n", "before_files": [{"content": "import logging\n\nfrom great_expectations.exceptions import InvalidKeyError\n\nlogger = logging.getLogger(__name__)\n\nfrom ...core.id_dict import BatchKwargs\nfrom .renderer import Renderer\n\n\nclass SlackRenderer(Renderer):\n def 
__init__(self):\n super().__init__()\n\n def render(\n self, validation_result=None, data_docs_pages=None, notify_with=None,\n ):\n default_text = (\n \"No validation occurred. Please ensure you passed a validation_result.\"\n )\n status = \"Failed :x:\"\n\n title_block = {\n \"type\": \"section\",\n \"text\": {\"type\": \"mrkdwn\", \"text\": default_text,},\n }\n\n query = {\n \"blocks\": [title_block],\n # this abbreviated root level \"text\" will show up in the notification and not the message\n \"text\": default_text,\n }\n\n if validation_result:\n expectation_suite_name = validation_result.meta.get(\n \"expectation_suite_name\", \"__no_expectation_suite_name__\"\n )\n\n if \"batch_kwargs\" in validation_result.meta:\n data_asset_name = validation_result.meta[\"batch_kwargs\"].get(\n \"data_asset_name\", \"__no_data_asset_name__\"\n )\n else:\n data_asset_name = \"__no_data_asset_name__\"\n\n n_checks_succeeded = validation_result.statistics[\"successful_expectations\"]\n n_checks = validation_result.statistics[\"evaluated_expectations\"]\n run_id = validation_result.meta.get(\"run_id\", \"__no_run_id__\")\n batch_id = BatchKwargs(\n validation_result.meta.get(\"batch_kwargs\", {})\n ).to_id()\n check_details_text = (\n f\"*{n_checks_succeeded}* of *{n_checks}* expectations were met\"\n )\n\n if validation_result.success:\n status = \"Success :tada:\"\n\n summary_text = f\"\"\"*Batch Validation Status*: {status}\n*Expectation suite name*: `{expectation_suite_name}`\n*Data asset name*: `{data_asset_name}`\n*Run ID*: `{run_id}`\n*Batch ID*: `{batch_id}`\n*Summary*: {check_details_text}\"\"\"\n query[\"blocks\"][0][\"text\"][\"text\"] = summary_text\n # this abbreviated root level \"text\" will show up in the notification and not the message\n query[\"text\"] = f\"{expectation_suite_name}: {status}\"\n\n if data_docs_pages:\n if notify_with is not None:\n for docs_link_key in notify_with:\n if docs_link_key in data_docs_pages.keys():\n docs_link = data_docs_pages[docs_link_key]\n report_element = self._get_report_element(docs_link)\n else:\n logger.critical(\n f\"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\\n\"\n )\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\\n\",\n },\n }\n if report_element:\n query[\"blocks\"].append(report_element)\n else:\n for docs_link_key in data_docs_pages.keys():\n if docs_link_key == \"class\":\n continue\n docs_link = data_docs_pages[docs_link_key]\n report_element = self._get_report_element(docs_link)\n if report_element:\n query[\"blocks\"].append(report_element)\n\n if \"result_reference\" in validation_result.meta:\n result_reference = validation_result.meta[\"result_reference\"]\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"- *Validation Report*: {result_reference}\",\n },\n }\n query[\"blocks\"].append(report_element)\n\n if \"dataset_reference\" in validation_result.meta:\n dataset_reference = validation_result.meta[\"dataset_reference\"]\n dataset_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"- *Validation data asset*: {dataset_reference}\",\n },\n }\n query[\"blocks\"].append(dataset_element)\n\n 
custom_blocks = self._custom_blocks(evr=validation_result)\n if custom_blocks:\n query[\"blocks\"].append(custom_blocks)\n\n documentation_url = \"https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html\"\n footer_section = {\n \"type\": \"context\",\n \"elements\": [\n {\n \"type\": \"mrkdwn\",\n \"text\": f\"Learn how to review validation results in Data Docs: {documentation_url}\",\n }\n ],\n }\n\n divider_block = {\"type\": \"divider\"}\n query[\"blocks\"].append(divider_block)\n query[\"blocks\"].append(footer_section)\n return query\n\n def _custom_blocks(self, evr):\n return None\n\n def _get_report_element(self, docs_link):\n if docs_link is None:\n logger.warn(\"No docs link found. Skipping data docs link in slack message.\")\n return\n\n if \"file://\" in docs_link:\n # handle special case since Slack does not render these links\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*DataDocs* can be found here: `{docs_link}` \\n (Please copy and paste link into a browser to view)\\n\",\n },\n }\n else:\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*DataDocs* can be found here: <{docs_link}|{docs_link}>\",\n },\n }\n return report_element\n", "path": "great_expectations/render/renderer/slack_renderer.py"}]}
| 3,003 | 675 |
gh_patches_debug_8920
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-3378
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
2.3rc1 drops Python 2.7 support in a minor version 
People using `NetworkX~=2.2` as their version specifier for pip, but still on Python 2.7, will get this message now that 2.3rc1 is out:
```
NetworkX requires Python 3.5 or later (2.7 detected).
```
This happens with no changes to their code.
Would dropping Python 2.7 support be enough of a change to necessitate a new major version of NetworkX?
</issue>
<code>
[start of networkx/release.py]
1 """Release data for NetworkX.
2
3 When NetworkX is imported a number of steps are followed to determine
4 the version information.
5
6 1) If the release is not a development release (dev=False), then version
7 information is read from version.py, a file containing statically
8 defined version information. This file should exist on every
9 downloadable release of NetworkX since setup.py creates it during
10 packaging/installation. However, version.py might not exist if one
11 is running NetworkX from the mercurial repository. In the event that
12 version.py does not exist, then no vcs information will be available.
13
14 2) If the release is a development release, then version information
15 is read dynamically, when possible. If no dynamic information can be
16 read, then an attempt is made to read the information from version.py.
17 If version.py does not exist, then no vcs information will be available.
18
19 Clarification:
20 version.py is created only by setup.py
21
22 When setup.py creates version.py, it does so before packaging/installation.
23 So the created file is included in the source distribution. When a user
24 downloads a tar.gz file and extracts the files, the files will not be in a
25 live version control repository. So when the user runs setup.py to install
26 NetworkX, we must make sure write_versionfile() does not overwrite the
27 revision information contained in the version.py that was included in the
28 tar.gz file. This is why write_versionfile() includes an early escape.
29
30 """
31
32 # Copyright (C) 2004-2019 by
33 # Aric Hagberg <[email protected]>
34 # Dan Schult <[email protected]>
35 # Pieter Swart <[email protected]>
36 # All rights reserved.
37 # BSD license.
38
39 from __future__ import absolute_import
40
41 import os
42 import sys
43 import time
44 import datetime
45
46 basedir = os.path.abspath(os.path.split(__file__)[0])
47
48
49 def write_versionfile():
50 """Creates a static file containing version information."""
51 versionfile = os.path.join(basedir, 'version.py')
52
53 text = '''"""
54 Version information for NetworkX, created during installation.
55
56 Do not add this file to the repository.
57
58 """
59
60 import datetime
61
62 version = %(version)r
63 date = %(date)r
64
65 # Was NetworkX built from a development version? If so, remember that the major
66 # and minor versions reference the "target" (rather than "current") release.
67 dev = %(dev)r
68
69 # Format: (name, major, min, revision)
70 version_info = %(version_info)r
71
72 # Format: a 'datetime.datetime' instance
73 date_info = %(date_info)r
74
75 # Format: (vcs, vcs_tuple)
76 vcs_info = %(vcs_info)r
77
78 '''
79
80 # Try to update all information
81 date, date_info, version, version_info, vcs_info = get_info(dynamic=True)
82
83 def writefile():
84 fh = open(versionfile, 'w')
85 subs = {
86 'dev': dev,
87 'version': version,
88 'version_info': version_info,
89 'date': date,
90 'date_info': date_info,
91 'vcs_info': vcs_info
92 }
93 fh.write(text % subs)
94 fh.close()
95
96 if vcs_info[0] == 'mercurial':
97 # Then, we want to update version.py.
98 writefile()
99 else:
100 if os.path.isfile(versionfile):
101 # This is *good*, and the most likely place users will be when
102 # running setup.py. We do not want to overwrite version.py.
103 # Grab the version so that setup can use it.
104 # sys.path.insert(0, basedir)
105 from version import version
106 # del sys.path[0]
107 else:
108 # This is *bad*. It means the user might have a tarball that
109 # does not include version.py. Let this error raise so we can
110 # fix the tarball.
111 # raise Exception('version.py not found!')
112
113 # We no longer require that prepared tarballs include a version.py
114 # So we use the possibly trunctated value from get_info()
115 # Then we write a new file.
116 writefile()
117
118 return version
119
120
121 def get_revision():
122 """Returns revision and vcs information, dynamically obtained."""
123 vcs, revision, tag = None, None, None
124
125 gitdir = os.path.join(basedir, '..', '.git')
126
127 if os.path.isdir(gitdir):
128 vcs = 'git'
129 # For now, we are not bothering with revision and tag.
130
131 vcs_info = (vcs, (revision, tag))
132
133 return revision, vcs_info
134
135
136 def get_info(dynamic=True):
137 # Date information
138 date_info = datetime.datetime.utcfromtimestamp(int(os.environ.get('SOURCE_DATE_EPOCH', time.time())))
139 date = time.asctime(date_info.timetuple())
140
141 revision, version, version_info, vcs_info = None, None, None, None
142
143 import_failed = False
144 dynamic_failed = False
145
146 if dynamic:
147 revision, vcs_info = get_revision()
148 if revision is None:
149 dynamic_failed = True
150
151 if dynamic_failed or not dynamic:
152 # This is where most final releases of NetworkX will be.
153 # All info should come from version.py. If it does not exist, then
154 # no vcs information will be provided.
155 # sys.path.insert(0, basedir)
156 try:
157 from version import date, date_info, version, version_info, vcs_info
158 except ImportError:
159 import_failed = True
160 vcs_info = (None, (None, None))
161 else:
162 revision = vcs_info[1][0]
163 #del sys.path[0]
164
165 if import_failed or (dynamic and not dynamic_failed):
166 # We are here if:
167 # we failed to determine static versioning info, or
168 # we successfully obtained dynamic revision info
169 version = ''.join([str(major), '.', str(minor)])
170 if dev:
171 version += '.dev_' + date_info.strftime("%Y%m%d%H%M%S")
172 version_info = (name, major, minor, revision)
173
174 return date, date_info, version, version_info, vcs_info
175
176
177 # Version information
178 name = 'networkx'
179 major = "2"
180 minor = "3rc2"
181
182
183 # Declare current release as a development release.
184 # Change to False before tagging a release; then change back.
185 dev = True
186
187
188 description = "Python package for creating and manipulating graphs and networks"
189
190 long_description = \
191 """
192 NetworkX is a Python package for the creation, manipulation, and
193 study of the structure, dynamics, and functions of complex networks.
194
195 """
196 license = 'BSD'
197 authors = {'Hagberg': ('Aric Hagberg', '[email protected]'),
198 'Schult': ('Dan Schult', '[email protected]'),
199 'Swart': ('Pieter Swart', '[email protected]')}
200 maintainer = "NetworkX Developers"
201 maintainer_email = "[email protected]"
202 url = 'http://networkx.github.io/'
203 download_url = 'https://pypi.python.org/pypi/networkx/'
204 platforms = ['Linux', 'Mac OSX', 'Windows', 'Unix']
205 keywords = ['Networks', 'Graph Theory', 'Mathematics',
206 'network', 'graph', 'discrete mathematics', 'math']
207 classifiers = [
208 'Development Status :: 5 - Production/Stable',
209 'Intended Audience :: Developers',
210 'Intended Audience :: Science/Research',
211 'License :: OSI Approved :: BSD License',
212 'Operating System :: OS Independent',
213 'Programming Language :: Python :: 3',
214 'Programming Language :: Python :: 3.5',
215 'Programming Language :: Python :: 3.6',
216 'Programming Language :: Python :: 3.7',
217 'Topic :: Software Development :: Libraries :: Python Modules',
218 'Topic :: Scientific/Engineering :: Bio-Informatics',
219 'Topic :: Scientific/Engineering :: Information Analysis',
220 'Topic :: Scientific/Engineering :: Mathematics',
221 'Topic :: Scientific/Engineering :: Physics']
222
223 date, date_info, version, version_info, vcs_info = get_info()
224
225 if __name__ == '__main__':
226 # Write versionfile for nightly snapshots.
227 write_versionfile()
228
[end of networkx/release.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/release.py b/networkx/release.py
--- a/networkx/release.py
+++ b/networkx/release.py
@@ -214,6 +214,7 @@
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3 :: Only',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Scientific/Engineering :: Bio-Informatics',
'Topic :: Scientific/Engineering :: Information Analysis',
|
{"golden_diff": "diff --git a/networkx/release.py b/networkx/release.py\n--- a/networkx/release.py\n+++ b/networkx/release.py\n@@ -214,6 +214,7 @@\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n", "issue": "2.3rc1 drops python 2.7 support in a minor version \nPeople using `NetworkX~=2.2` as their version specifier to pip, but still on Python 2.7 will get this message now that 2.3rc1 is out:\r\n\r\n```\r\nNetworkX requires Python 3.5 or later (2.7 detected).\r\n```\r\n\r\nThis happens with no changes to their code. \r\n\r\nWould dropping Python 2.7 support be enough of a change to necessitate a new major version of NetworkX?\n", "before_files": [{"content": "\"\"\"Release data for NetworkX.\n\nWhen NetworkX is imported a number of steps are followed to determine\nthe version information.\n\n 1) If the release is not a development release (dev=False), then version\n information is read from version.py, a file containing statically\n defined version information. This file should exist on every\n downloadable release of NetworkX since setup.py creates it during\n packaging/installation. However, version.py might not exist if one\n is running NetworkX from the mercurial repository. In the event that\n version.py does not exist, then no vcs information will be available.\n\n 2) If the release is a development release, then version information\n is read dynamically, when possible. If no dynamic information can be\n read, then an attempt is made to read the information from version.py.\n If version.py does not exist, then no vcs information will be available.\n\nClarification:\n version.py is created only by setup.py\n\nWhen setup.py creates version.py, it does so before packaging/installation.\nSo the created file is included in the source distribution. When a user\ndownloads a tar.gz file and extracts the files, the files will not be in a\nlive version control repository. So when the user runs setup.py to install\nNetworkX, we must make sure write_versionfile() does not overwrite the\nrevision information contained in the version.py that was included in the\ntar.gz file. This is why write_versionfile() includes an early escape.\n\n\"\"\"\n\n# Copyright (C) 2004-2019 by\n# Aric Hagberg <[email protected]>\n# Dan Schult <[email protected]>\n# Pieter Swart <[email protected]>\n# All rights reserved.\n# BSD license.\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport time\nimport datetime\n\nbasedir = os.path.abspath(os.path.split(__file__)[0])\n\n\ndef write_versionfile():\n \"\"\"Creates a static file containing version information.\"\"\"\n versionfile = os.path.join(basedir, 'version.py')\n\n text = '''\"\"\"\nVersion information for NetworkX, created during installation.\n\nDo not add this file to the repository.\n\n\"\"\"\n\nimport datetime\n\nversion = %(version)r\ndate = %(date)r\n\n# Was NetworkX built from a development version? 
If so, remember that the major\n# and minor versions reference the \"target\" (rather than \"current\") release.\ndev = %(dev)r\n\n# Format: (name, major, min, revision)\nversion_info = %(version_info)r\n\n# Format: a 'datetime.datetime' instance\ndate_info = %(date_info)r\n\n# Format: (vcs, vcs_tuple)\nvcs_info = %(vcs_info)r\n\n'''\n\n # Try to update all information\n date, date_info, version, version_info, vcs_info = get_info(dynamic=True)\n\n def writefile():\n fh = open(versionfile, 'w')\n subs = {\n 'dev': dev,\n 'version': version,\n 'version_info': version_info,\n 'date': date,\n 'date_info': date_info,\n 'vcs_info': vcs_info\n }\n fh.write(text % subs)\n fh.close()\n\n if vcs_info[0] == 'mercurial':\n # Then, we want to update version.py.\n writefile()\n else:\n if os.path.isfile(versionfile):\n # This is *good*, and the most likely place users will be when\n # running setup.py. We do not want to overwrite version.py.\n # Grab the version so that setup can use it.\n # sys.path.insert(0, basedir)\n from version import version\n # del sys.path[0]\n else:\n # This is *bad*. It means the user might have a tarball that\n # does not include version.py. Let this error raise so we can\n # fix the tarball.\n # raise Exception('version.py not found!')\n\n # We no longer require that prepared tarballs include a version.py\n # So we use the possibly trunctated value from get_info()\n # Then we write a new file.\n writefile()\n\n return version\n\n\ndef get_revision():\n \"\"\"Returns revision and vcs information, dynamically obtained.\"\"\"\n vcs, revision, tag = None, None, None\n\n gitdir = os.path.join(basedir, '..', '.git')\n\n if os.path.isdir(gitdir):\n vcs = 'git'\n # For now, we are not bothering with revision and tag.\n\n vcs_info = (vcs, (revision, tag))\n\n return revision, vcs_info\n\n\ndef get_info(dynamic=True):\n # Date information\n date_info = datetime.datetime.utcfromtimestamp(int(os.environ.get('SOURCE_DATE_EPOCH', time.time())))\n date = time.asctime(date_info.timetuple())\n\n revision, version, version_info, vcs_info = None, None, None, None\n\n import_failed = False\n dynamic_failed = False\n\n if dynamic:\n revision, vcs_info = get_revision()\n if revision is None:\n dynamic_failed = True\n\n if dynamic_failed or not dynamic:\n # This is where most final releases of NetworkX will be.\n # All info should come from version.py. 
If it does not exist, then\n # no vcs information will be provided.\n # sys.path.insert(0, basedir)\n try:\n from version import date, date_info, version, version_info, vcs_info\n except ImportError:\n import_failed = True\n vcs_info = (None, (None, None))\n else:\n revision = vcs_info[1][0]\n #del sys.path[0]\n\n if import_failed or (dynamic and not dynamic_failed):\n # We are here if:\n # we failed to determine static versioning info, or\n # we successfully obtained dynamic revision info\n version = ''.join([str(major), '.', str(minor)])\n if dev:\n version += '.dev_' + date_info.strftime(\"%Y%m%d%H%M%S\")\n version_info = (name, major, minor, revision)\n\n return date, date_info, version, version_info, vcs_info\n\n\n# Version information\nname = 'networkx'\nmajor = \"2\"\nminor = \"3rc2\"\n\n\n# Declare current release as a development release.\n# Change to False before tagging a release; then change back.\ndev = True\n\n\ndescription = \"Python package for creating and manipulating graphs and networks\"\n\nlong_description = \\\n \"\"\"\nNetworkX is a Python package for the creation, manipulation, and\nstudy of the structure, dynamics, and functions of complex networks.\n\n\"\"\"\nlicense = 'BSD'\nauthors = {'Hagberg': ('Aric Hagberg', '[email protected]'),\n 'Schult': ('Dan Schult', '[email protected]'),\n 'Swart': ('Pieter Swart', '[email protected]')}\nmaintainer = \"NetworkX Developers\"\nmaintainer_email = \"[email protected]\"\nurl = 'http://networkx.github.io/'\ndownload_url = 'https://pypi.python.org/pypi/networkx/'\nplatforms = ['Linux', 'Mac OSX', 'Windows', 'Unix']\nkeywords = ['Networks', 'Graph Theory', 'Mathematics',\n 'network', 'graph', 'discrete mathematics', 'math']\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics']\n\ndate, date_info, version, version_info, vcs_info = get_info()\n\nif __name__ == '__main__':\n # Write versionfile for nightly snapshots.\n write_versionfile()\n", "path": "networkx/release.py"}]}
| 3,068 | 127 |
gh_patches_debug_7530
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-3780
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ExponentialHistogram` is missing the `to_json` method
The [`ExponentialHistogram`](https://github.com/open-telemetry/opentelemetry-python/blob/3dfe2249cc4a203bf24578483b192fec7266596b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py#L93-L102) data point type was [somewhat recently added](https://github.com/open-telemetry/opentelemetry-python/pull/2964), and does not work with the `ConsoleMetricExporter` because it is missing a `to_json` method (which seems to be defined on the other data point types).
This is the stack trace I get locally:
```
Exception while exporting metrics 'ExponentialHistogram' object has no attribute 'to_json'
Traceback (most recent call last):
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py", line 523, in _receive_metrics
self._exporter.export(
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py", line 163, in export
self.out.write(self.formatter(metrics_data))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py", line 143, in <lambda>
] = lambda metrics_data: metrics_data.to_json()
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 252, in to_json
"resource_metrics": [
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 253, in <listcomp>
loads(resource_metrics.to_json(indent=indent))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 233, in to_json
"scope_metrics": [
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 234, in <listcomp>
loads(scope_metrics.to_json(indent=indent))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 211, in to_json
"metrics": [
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 212, in <listcomp>
loads(metric.to_json(indent=indent))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 193, in to_json
"data": loads(self.data.to_json(indent=indent)),
AttributeError: 'ExponentialHistogram' object has no attribute 'to_json'
```
</issue>
<code>
[start of opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=unused-import
16
17 from dataclasses import asdict, dataclass
18 from json import dumps, loads
19 from typing import Optional, Sequence, Union
20
21 # This kind of import is needed to avoid Sphinx errors.
22 import opentelemetry.sdk.metrics._internal
23 from opentelemetry.sdk.resources import Resource
24 from opentelemetry.sdk.util.instrumentation import InstrumentationScope
25 from opentelemetry.util.types import Attributes
26
27
28 @dataclass(frozen=True)
29 class NumberDataPoint:
30 """Single data point in a timeseries that describes the time-varying scalar
31 value of a metric.
32 """
33
34 attributes: Attributes
35 start_time_unix_nano: int
36 time_unix_nano: int
37 value: Union[int, float]
38
39 def to_json(self, indent=4) -> str:
40 return dumps(asdict(self), indent=indent)
41
42
43 @dataclass(frozen=True)
44 class HistogramDataPoint:
45 """Single data point in a timeseries that describes the time-varying scalar
46 value of a metric.
47 """
48
49 attributes: Attributes
50 start_time_unix_nano: int
51 time_unix_nano: int
52 count: int
53 sum: Union[int, float]
54 bucket_counts: Sequence[int]
55 explicit_bounds: Sequence[float]
56 min: float
57 max: float
58
59 def to_json(self, indent=4) -> str:
60 return dumps(asdict(self), indent=indent)
61
62
63 @dataclass(frozen=True)
64 class Buckets:
65 offset: int
66 bucket_counts: Sequence[int]
67
68
69 @dataclass(frozen=True)
70 class ExponentialHistogramDataPoint:
71 """Single data point in a timeseries whose boundaries are defined by an
72 exponential function. This timeseries describes the time-varying scalar
73 value of a metric.
74 """
75
76 attributes: Attributes
77 start_time_unix_nano: int
78 time_unix_nano: int
79 count: int
80 sum: Union[int, float]
81 scale: int
82 zero_count: int
83 positive: Buckets
84 negative: Buckets
85 flags: int
86 min: float
87 max: float
88
89 def to_json(self, indent=4) -> str:
90 return dumps(asdict(self), indent=indent)
91
92
93 @dataclass(frozen=True)
94 class ExponentialHistogram:
95 """Represents the type of a metric that is calculated by aggregating as an
96 ExponentialHistogram of all reported measurements over a time interval.
97 """
98
99 data_points: Sequence[ExponentialHistogramDataPoint]
100 aggregation_temporality: (
101 "opentelemetry.sdk.metrics.export.AggregationTemporality"
102 )
103
104
105 @dataclass(frozen=True)
106 class Sum:
107 """Represents the type of a scalar metric that is calculated as a sum of
108 all reported measurements over a time interval."""
109
110 data_points: Sequence[NumberDataPoint]
111 aggregation_temporality: (
112 "opentelemetry.sdk.metrics.export.AggregationTemporality"
113 )
114 is_monotonic: bool
115
116 def to_json(self, indent=4) -> str:
117 return dumps(
118 {
119 "data_points": [
120 loads(data_point.to_json(indent=indent))
121 for data_point in self.data_points
122 ],
123 "aggregation_temporality": self.aggregation_temporality,
124 "is_monotonic": self.is_monotonic,
125 },
126 indent=indent,
127 )
128
129
130 @dataclass(frozen=True)
131 class Gauge:
132 """Represents the type of a scalar metric that always exports the current
133 value for every data point. It should be used for an unknown
134 aggregation."""
135
136 data_points: Sequence[NumberDataPoint]
137
138 def to_json(self, indent=4) -> str:
139 return dumps(
140 {
141 "data_points": [
142 loads(data_point.to_json(indent=indent))
143 for data_point in self.data_points
144 ],
145 },
146 indent=indent,
147 )
148
149
150 @dataclass(frozen=True)
151 class Histogram:
152 """Represents the type of a metric that is calculated by aggregating as a
153 histogram of all reported measurements over a time interval."""
154
155 data_points: Sequence[HistogramDataPoint]
156 aggregation_temporality: (
157 "opentelemetry.sdk.metrics.export.AggregationTemporality"
158 )
159
160 def to_json(self, indent=4) -> str:
161 return dumps(
162 {
163 "data_points": [
164 loads(data_point.to_json(indent=indent))
165 for data_point in self.data_points
166 ],
167 "aggregation_temporality": self.aggregation_temporality,
168 },
169 indent=indent,
170 )
171
172
173 # pylint: disable=invalid-name
174 DataT = Union[Sum, Gauge, Histogram]
175 DataPointT = Union[NumberDataPoint, HistogramDataPoint]
176
177
178 @dataclass(frozen=True)
179 class Metric:
180 """Represents a metric point in the OpenTelemetry data model to be
181 exported."""
182
183 name: str
184 description: Optional[str]
185 unit: Optional[str]
186 data: DataT
187
188 def to_json(self, indent=4) -> str:
189 return dumps(
190 {
191 "name": self.name,
192 "description": self.description or "",
193 "unit": self.unit or "",
194 "data": loads(self.data.to_json(indent=indent)),
195 },
196 indent=indent,
197 )
198
199
200 @dataclass(frozen=True)
201 class ScopeMetrics:
202 """A collection of Metrics produced by a scope"""
203
204 scope: InstrumentationScope
205 metrics: Sequence[Metric]
206 schema_url: str
207
208 def to_json(self, indent=4) -> str:
209 return dumps(
210 {
211 "scope": loads(self.scope.to_json(indent=indent)),
212 "metrics": [
213 loads(metric.to_json(indent=indent))
214 for metric in self.metrics
215 ],
216 "schema_url": self.schema_url,
217 },
218 indent=indent,
219 )
220
221
222 @dataclass(frozen=True)
223 class ResourceMetrics:
224 """A collection of ScopeMetrics from a Resource"""
225
226 resource: Resource
227 scope_metrics: Sequence[ScopeMetrics]
228 schema_url: str
229
230 def to_json(self, indent=4) -> str:
231 return dumps(
232 {
233 "resource": loads(self.resource.to_json(indent=indent)),
234 "scope_metrics": [
235 loads(scope_metrics.to_json(indent=indent))
236 for scope_metrics in self.scope_metrics
237 ],
238 "schema_url": self.schema_url,
239 },
240 indent=indent,
241 )
242
243
244 @dataclass(frozen=True)
245 class MetricsData:
246 """An array of ResourceMetrics"""
247
248 resource_metrics: Sequence[ResourceMetrics]
249
250 def to_json(self, indent=4) -> str:
251 return dumps(
252 {
253 "resource_metrics": [
254 loads(resource_metrics.to_json(indent=indent))
255 for resource_metrics in self.resource_metrics
256 ]
257 },
258 indent=indent,
259 )
260
[end of opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
@@ -101,6 +101,18 @@
"opentelemetry.sdk.metrics.export.AggregationTemporality"
)
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "data_points": [
+ loads(data_point.to_json(indent=indent))
+ for data_point in self.data_points
+ ],
+ "aggregation_temporality": self.aggregation_temporality,
+ },
+ indent=indent,
+ )
+
@dataclass(frozen=True)
class Sum:
|
{"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py\n@@ -101,6 +101,18 @@\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n \n+ def to_json(self, indent=4) -> str:\n+ return dumps(\n+ {\n+ \"data_points\": [\n+ loads(data_point.to_json(indent=indent))\n+ for data_point in self.data_points\n+ ],\n+ \"aggregation_temporality\": self.aggregation_temporality,\n+ },\n+ indent=indent,\n+ )\n+\n \n @dataclass(frozen=True)\n class Sum:\n", "issue": "`ExponentialHistogram` is missing the `to_json` method\nThe [`ExponentialHistorgram`](https://github.com/open-telemetry/opentelemetry-python/blob/3dfe2249cc4a203bf24578483b192fec7266596b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py#L93-L102) data point type was [somewhat recently added](https://github.com/open-telemetry/opentelemetry-python/pull/2964), and does not work with the `ConsoleMetricExporter` because it is missing a `to_json` method (which seems to be defined on the other data point types).\r\n\r\nThis is the stack trace I get locally:\r\n\r\n```\r\nException while exporting metrics 'ExponentialHistogram' object has no attribute 'to_json'\r\nTraceback (most recent call last):\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py\", line 523, in _receive_metrics\r\n self._exporter.export(\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py\", line 163, in export\r\n self.out.write(self.formatter(metrics_data))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py\", line 143, in <lambda>\r\n ] = lambda metrics_data: metrics_data.to_json()\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 252, in to_json\r\n \"resource_metrics\": [\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 253, in <listcomp>\r\n loads(resource_metrics.to_json(indent=indent))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 233, in to_json\r\n \"scope_metrics\": [\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 234, in <listcomp>\r\n loads(scope_metrics.to_json(indent=indent))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 211, in to_json\r\n \"metrics\": [\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 212, in <listcomp>\r\n loads(metric.to_json(indent=indent))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 193, in to_json\r\n \"data\": loads(self.data.to_json(indent=indent)),\r\nAttributeError: 'ExponentialHistogram' object has no attribute 'to_json'\r\n```\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# 
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=unused-import\n\nfrom dataclasses import asdict, dataclass\nfrom json import dumps, loads\nfrom typing import Optional, Sequence, Union\n\n# This kind of import is needed to avoid Sphinx errors.\nimport opentelemetry.sdk.metrics._internal\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationScope\nfrom opentelemetry.util.types import Attributes\n\n\n@dataclass(frozen=True)\nclass NumberDataPoint:\n \"\"\"Single data point in a timeseries that describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n value: Union[int, float]\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass HistogramDataPoint:\n \"\"\"Single data point in a timeseries that describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n count: int\n sum: Union[int, float]\n bucket_counts: Sequence[int]\n explicit_bounds: Sequence[float]\n min: float\n max: float\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass Buckets:\n offset: int\n bucket_counts: Sequence[int]\n\n\n@dataclass(frozen=True)\nclass ExponentialHistogramDataPoint:\n \"\"\"Single data point in a timeseries whose boundaries are defined by an\n exponential function. This timeseries describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n count: int\n sum: Union[int, float]\n scale: int\n zero_count: int\n positive: Buckets\n negative: Buckets\n flags: int\n min: float\n max: float\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass ExponentialHistogram:\n \"\"\"Represents the type of a metric that is calculated by aggregating as an\n ExponentialHistogram of all reported measurements over a time interval.\n \"\"\"\n\n data_points: Sequence[ExponentialHistogramDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n\n\n@dataclass(frozen=True)\nclass Sum:\n \"\"\"Represents the type of a scalar metric that is calculated as a sum of\n all reported measurements over a time interval.\"\"\"\n\n data_points: Sequence[NumberDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n is_monotonic: bool\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n \"aggregation_temporality\": self.aggregation_temporality,\n \"is_monotonic\": self.is_monotonic,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass Gauge:\n \"\"\"Represents the type of a scalar metric that always exports the current\n value for every data point. 
It should be used for an unknown\n aggregation.\"\"\"\n\n data_points: Sequence[NumberDataPoint]\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass Histogram:\n \"\"\"Represents the type of a metric that is calculated by aggregating as a\n histogram of all reported measurements over a time interval.\"\"\"\n\n data_points: Sequence[HistogramDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n \"aggregation_temporality\": self.aggregation_temporality,\n },\n indent=indent,\n )\n\n\n# pylint: disable=invalid-name\nDataT = Union[Sum, Gauge, Histogram]\nDataPointT = Union[NumberDataPoint, HistogramDataPoint]\n\n\n@dataclass(frozen=True)\nclass Metric:\n \"\"\"Represents a metric point in the OpenTelemetry data model to be\n exported.\"\"\"\n\n name: str\n description: Optional[str]\n unit: Optional[str]\n data: DataT\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"name\": self.name,\n \"description\": self.description or \"\",\n \"unit\": self.unit or \"\",\n \"data\": loads(self.data.to_json(indent=indent)),\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass ScopeMetrics:\n \"\"\"A collection of Metrics produced by a scope\"\"\"\n\n scope: InstrumentationScope\n metrics: Sequence[Metric]\n schema_url: str\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"scope\": loads(self.scope.to_json(indent=indent)),\n \"metrics\": [\n loads(metric.to_json(indent=indent))\n for metric in self.metrics\n ],\n \"schema_url\": self.schema_url,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass ResourceMetrics:\n \"\"\"A collection of ScopeMetrics from a Resource\"\"\"\n\n resource: Resource\n scope_metrics: Sequence[ScopeMetrics]\n schema_url: str\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"resource\": loads(self.resource.to_json(indent=indent)),\n \"scope_metrics\": [\n loads(scope_metrics.to_json(indent=indent))\n for scope_metrics in self.scope_metrics\n ],\n \"schema_url\": self.schema_url,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass MetricsData:\n \"\"\"An array of ResourceMetrics\"\"\"\n\n resource_metrics: Sequence[ResourceMetrics]\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"resource_metrics\": [\n loads(resource_metrics.to_json(indent=indent))\n for resource_metrics in self.resource_metrics\n ]\n },\n indent=indent,\n )\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py"}]}
| 3,537 | 201 |
gh_patches_debug_16841
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-761
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] PeriodicKernel returns error with BernoulliLikelihood
# 🐛 Bug
When running the simple GPClassificationModel example (as in https://gpytorch.readthedocs.io/en/latest/examples/02_Simple_GP_Classification/Simple_GP_Classification.html), and changing the RBFKernel to a PeriodicKernel, the `likelihood(model(test_x))` line returns a RuntimeError:
`RuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). Got size torch.Size([1, 101])`
This error does not occur when changing the RBFKernel to a LinearKernel, or when changing the BernoulliLikelihood to GaussianLikelihood.
## To reproduce
```
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
train_x = torch.linspace(0, 1, 10)
train_y = torch.sign(torch.cos(train_x * (4 * math.pi))).add(1).div(2)
from gpytorch.models import AbstractVariationalGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import VariationalStrategy
class GPClassificationModel(AbstractVariationalGP):
def __init__(self, train_x):
variational_distribution = CholeskyVariationalDistribution(train_x.size(0))
variational_strategy = VariationalStrategy(self, train_x, variational_distribution)
super(GPClassificationModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return latent_pred
model = GPClassificationModel(train_x)
likelihood = gpytorch.likelihoods.BernoulliLikelihood()
from gpytorch.mlls.variational_elbo import VariationalELBO
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = VariationalELBO(likelihood, model, train_y.numel())
training_iter = 50
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
optimizer.step()
model.eval()
likelihood.eval()
with torch.no_grad():
test_x = torch.linspace(0, 1, 101)
observed_pred = likelihood(model(test_x))
print('Reached the end of the script, successfully calculated likelihood')
```
** Stack trace/error message **
```
RuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). Got size torch.Size([1, 101])
```
## Expected Behavior
The expected behavior is that the model runs with PeriodicKernel, not only with RBFKernel or LinearKernel.
## System information
**Please complete the following information:**
- <!-- GPyTorch Version (run `print(gpytorch.__version__)` -->0.3.2
- <!-- PyTorch Version (run `print(torch.__version__)` -->1.1.0
- <!-- Computer OS -->macOS
</issue>
<code>
[start of gpytorch/kernels/periodic_kernel.py]
1 #!/usr/bin/env python3
2
3 import math
4 import torch
5 from .kernel import Kernel
6 from ..constraints import Positive
7
8
9 class PeriodicKernel(Kernel):
10 r""" Computes a covariance matrix based on the periodic kernel
11 between inputs :math:`\mathbf{x_1}` and :math:`\mathbf{x_2}`:
12
13 .. math::
14
15 \begin{equation*}
16 k_{\text{Periodic}}(\mathbf{x_1}, \mathbf{x_2}) = \exp \left(
17 \frac{2 \sin^2 \left( \pi \Vert \mathbf{x_1} - \mathbf{x_2} \Vert_1 / p \right) }
18 { \ell^2 } \right)
19 \end{equation*}
20
21 where
22
23 * :math:`p` is the periord length parameter.
24 * :math:`\ell` is a lengthscale parameter.
25
26 .. note::
27
28 This kernel does not have an `outputscale` parameter. To add a scaling parameter,
29 decorate this kernel with a :class:`gpytorch.kernels.ScaleKernel`.
30
31 .. note::
32
33 This kernel does not have an ARD lengthscale option.
34
35 Args:
36 :attr:`batch_shape` (torch.Size, optional):
37 Set this if you want a separate lengthscale for each
38 batch of input data. It should be `b` if :attr:`x1` is a `b x n x d` tensor. Default: `torch.Size([])`.
39 :attr:`active_dims` (tuple of ints, optional):
40 Set this if you want to compute the covariance of only a few input dimensions. The ints
41 corresponds to the indices of the dimensions. Default: `None`.
42 :attr:`period_length_prior` (Prior, optional):
43 Set this if you want to apply a prior to the period length parameter. Default: `None`.
44 :attr:`lengthscale_prior` (Prior, optional):
45 Set this if you want to apply a prior to the lengthscale parameter. Default: `None`.
46 :attr:`lengthscale_constraint` (Constraint, optional):
47 Set this if you want to apply a constraint to the value of the lengthscale. Default: `Positive`.
48 :attr:`period_length_constraint` (Constraint, optional):
49 Set this if you want to apply a constraint to the value of the period length. Default: `Positive`.
50 :attr:`eps` (float):
51 The minimum value that the lengthscale/period length can take
52 (prevents divide by zero errors). Default: `1e-6`.
53
54 Attributes:
55 :attr:`lengthscale` (Tensor):
56 The lengthscale parameter. Size = `*batch_shape x 1 x 1`.
57 :attr:`period_length` (Tensor):
58 The period length parameter. Size = `*batch_shape x 1 x 1`.
59
60 Example:
61 >>> x = torch.randn(10, 5)
62 >>> # Non-batch: Simple option
63 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())
64 >>>
65 >>> batch_x = torch.randn(2, 10, 5)
66 >>> # Batch: Simple option
67 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())
68 >>> # Batch: different lengthscale for each batch
69 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel(batch_size=2))
70 >>> covar = covar_module(x) # Output: LazyVariable of size (2 x 10 x 10)
71 """
72
73 def __init__(self, period_length_prior=None, period_length_constraint=None, **kwargs):
74 super(PeriodicKernel, self).__init__(has_lengthscale=True, **kwargs)
75 if period_length_constraint is None:
76 period_length_constraint = Positive()
77
78 self.register_parameter(
79 name="raw_period_length",
80 parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, 1)))
81
82 if period_length_prior is not None:
83 self.register_prior(
84 "period_length_prior",
85 period_length_prior,
86 lambda: self.period_length,
87 lambda v: self._set_period_length(v),
88 )
89
90 self.register_constraint("raw_period_length", period_length_constraint)
91
92 @property
93 def period_length(self):
94 return self.raw_period_length_constraint.transform(self.raw_period_length)
95
96 @period_length.setter
97 def period_length(self, value):
98 self._set_period_length(value)
99
100 def _set_period_length(self, value):
101 if not torch.is_tensor(value):
102 value = torch.as_tensor(value).to(self.raw_period_length)
103 self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))
104
105 def forward(self, x1, x2, **params):
106 x1_ = x1.div(self.period_length)
107 x2_ = x2.div(self.period_length)
108 diff = self.covar_dist(x1_, x2_, **params)
109 res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()
110 if diff.ndimension() == 2:
111 res = res.squeeze(0)
112 return res
113
[end of gpytorch/kernels/periodic_kernel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gpytorch/kernels/periodic_kernel.py b/gpytorch/kernels/periodic_kernel.py
--- a/gpytorch/kernels/periodic_kernel.py
+++ b/gpytorch/kernels/periodic_kernel.py
@@ -102,11 +102,11 @@
value = torch.as_tensor(value).to(self.raw_period_length)
self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))
- def forward(self, x1, x2, **params):
+ def forward(self, x1, x2, diag=False, **params):
x1_ = x1.div(self.period_length)
x2_ = x2.div(self.period_length)
- diff = self.covar_dist(x1_, x2_, **params)
+ diff = self.covar_dist(x1_, x2_, diag=diag, **params)
res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()
- if diff.ndimension() == 2:
+ if diff.ndimension() == 2 or diag:
res = res.squeeze(0)
return res
|
{"golden_diff": "diff --git a/gpytorch/kernels/periodic_kernel.py b/gpytorch/kernels/periodic_kernel.py\n--- a/gpytorch/kernels/periodic_kernel.py\n+++ b/gpytorch/kernels/periodic_kernel.py\n@@ -102,11 +102,11 @@\n value = torch.as_tensor(value).to(self.raw_period_length)\n self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))\n \n- def forward(self, x1, x2, **params):\n+ def forward(self, x1, x2, diag=False, **params):\n x1_ = x1.div(self.period_length)\n x2_ = x2.div(self.period_length)\n- diff = self.covar_dist(x1_, x2_, **params)\n+ diff = self.covar_dist(x1_, x2_, diag=diag, **params)\n res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()\n- if diff.ndimension() == 2:\n+ if diff.ndimension() == 2 or diag:\n res = res.squeeze(0)\n return res\n", "issue": "[Bug] PeriodicKernel returns error with BernoulliLikelihood \n# \ud83d\udc1b Bug\r\n\r\nWhen running the simple GPClassificationModel example (as in https://gpytorch.readthedocs.io/en/latest/examples/02_Simple_GP_Classification/Simple_GP_Classification.html), and changing the RBFKernel to a PeriodicKernel, the `likelihood(model(test_x))` line returns a RuntimeError:\r\n\r\n`RuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). Got size torch.Size([1, 101])`\r\n\r\nThis error does not occur when changing the RBFKernel to a LinearKernel, or when changing the BernoulliLikelihood to GaussianLikelihood.\r\n\r\n## To reproduce\r\n\r\n```\r\nimport math\r\nimport torch\r\nimport gpytorch\r\nfrom matplotlib import pyplot as plt\r\n\r\ntrain_x = torch.linspace(0, 1, 10)\r\ntrain_y = torch.sign(torch.cos(train_x * (4 * math.pi))).add(1).div(2)\r\n\r\nfrom gpytorch.models import AbstractVariationalGP\r\nfrom gpytorch.variational import CholeskyVariationalDistribution\r\nfrom gpytorch.variational import VariationalStrategy\r\n\r\nclass GPClassificationModel(AbstractVariationalGP):\r\n def __init__(self, train_x):\r\n variational_distribution = CholeskyVariationalDistribution(train_x.size(0))\r\n variational_strategy = VariationalStrategy(self, train_x, variational_distribution)\r\n super(GPClassificationModel, self).__init__(variational_strategy)\r\n self.mean_module = gpytorch.means.ConstantMean()\r\n self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\r\n\r\n def forward(self, x):\r\n mean_x = self.mean_module(x)\r\n covar_x = self.covar_module(x)\r\n latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\r\n return latent_pred\r\n\r\n\r\nmodel = GPClassificationModel(train_x)\r\nlikelihood = gpytorch.likelihoods.BernoulliLikelihood()\r\n\r\nfrom gpytorch.mlls.variational_elbo import VariationalELBO\r\n\r\nmodel.train()\r\nlikelihood.train()\r\n\r\noptimizer = torch.optim.Adam(model.parameters(), lr=0.1)\r\n\r\nmll = VariationalELBO(likelihood, model, train_y.numel())\r\n\r\ntraining_iter = 50\r\nfor i in range(training_iter):\r\n optimizer.zero_grad()\r\n output = model(train_x)\r\n loss = -mll(output, train_y)\r\n loss.backward()\r\n optimizer.step()\r\n\r\nmodel.eval()\r\nlikelihood.eval()\r\n\r\nwith torch.no_grad():\r\n test_x = torch.linspace(0, 1, 101)\r\n observed_pred = likelihood(model(test_x))\r\n\r\nprint('Reached the end of the script, successfully calculated likelihood')\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\nRuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). 
Got size torch.Size([1, 101])\r\n```\r\n\r\n## Expected Behavior\r\n\r\nThe expected behavior is that the model will run with PeriodicKernel, and not only RBFKernel or LinearKernel.\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- <!-- GPyTorch Version (run `print(gpytorch.__version__)` -->0.3.2\r\n- <!-- PyTorch Version (run `print(torch.__version__)` -->1.1.0\r\n- <!-- Computer OS -->macOS\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport math\nimport torch\nfrom .kernel import Kernel\nfrom ..constraints import Positive\n\n\nclass PeriodicKernel(Kernel):\n r\"\"\" Computes a covariance matrix based on the periodic kernel\n between inputs :math:`\\mathbf{x_1}` and :math:`\\mathbf{x_2}`:\n\n .. math::\n\n \\begin{equation*}\n k_{\\text{Periodic}}(\\mathbf{x_1}, \\mathbf{x_2}) = \\exp \\left(\n \\frac{2 \\sin^2 \\left( \\pi \\Vert \\mathbf{x_1} - \\mathbf{x_2} \\Vert_1 / p \\right) }\n { \\ell^2 } \\right)\n \\end{equation*}\n\n where\n\n * :math:`p` is the periord length parameter.\n * :math:`\\ell` is a lengthscale parameter.\n\n .. note::\n\n This kernel does not have an `outputscale` parameter. To add a scaling parameter,\n decorate this kernel with a :class:`gpytorch.kernels.ScaleKernel`.\n\n .. note::\n\n This kernel does not have an ARD lengthscale option.\n\n Args:\n :attr:`batch_shape` (torch.Size, optional):\n Set this if you want a separate lengthscale for each\n batch of input data. It should be `b` if :attr:`x1` is a `b x n x d` tensor. Default: `torch.Size([])`.\n :attr:`active_dims` (tuple of ints, optional):\n Set this if you want to compute the covariance of only a few input dimensions. The ints\n corresponds to the indices of the dimensions. Default: `None`.\n :attr:`period_length_prior` (Prior, optional):\n Set this if you want to apply a prior to the period length parameter. Default: `None`.\n :attr:`lengthscale_prior` (Prior, optional):\n Set this if you want to apply a prior to the lengthscale parameter. Default: `None`.\n :attr:`lengthscale_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the value of the lengthscale. Default: `Positive`.\n :attr:`period_length_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the value of the period length. Default: `Positive`.\n :attr:`eps` (float):\n The minimum value that the lengthscale/period length can take\n (prevents divide by zero errors). Default: `1e-6`.\n\n Attributes:\n :attr:`lengthscale` (Tensor):\n The lengthscale parameter. Size = `*batch_shape x 1 x 1`.\n :attr:`period_length` (Tensor):\n The period length parameter. 
Size = `*batch_shape x 1 x 1`.\n\n Example:\n >>> x = torch.randn(10, 5)\n >>> # Non-batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\n >>>\n >>> batch_x = torch.randn(2, 10, 5)\n >>> # Batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\n >>> # Batch: different lengthscale for each batch\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel(batch_size=2))\n >>> covar = covar_module(x) # Output: LazyVariable of size (2 x 10 x 10)\n \"\"\"\n\n def __init__(self, period_length_prior=None, period_length_constraint=None, **kwargs):\n super(PeriodicKernel, self).__init__(has_lengthscale=True, **kwargs)\n if period_length_constraint is None:\n period_length_constraint = Positive()\n\n self.register_parameter(\n name=\"raw_period_length\",\n parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, 1)))\n\n if period_length_prior is not None:\n self.register_prior(\n \"period_length_prior\",\n period_length_prior,\n lambda: self.period_length,\n lambda v: self._set_period_length(v),\n )\n\n self.register_constraint(\"raw_period_length\", period_length_constraint)\n\n @property\n def period_length(self):\n return self.raw_period_length_constraint.transform(self.raw_period_length)\n\n @period_length.setter\n def period_length(self, value):\n self._set_period_length(value)\n\n def _set_period_length(self, value):\n if not torch.is_tensor(value):\n value = torch.as_tensor(value).to(self.raw_period_length)\n self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))\n\n def forward(self, x1, x2, **params):\n x1_ = x1.div(self.period_length)\n x2_ = x2.div(self.period_length)\n diff = self.covar_dist(x1_, x2_, **params)\n res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()\n if diff.ndimension() == 2:\n res = res.squeeze(0)\n return res\n", "path": "gpytorch/kernels/periodic_kernel.py"}]}
| 2,655 | 255 |
gh_patches_debug_12460
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-1591
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`.dist-info.virtualenv` marker files break importlib.metadata in app-data seeded venvs
Since virtualenv 20.0.0, dist-info directories in virtual environments seeded using the `app-data` method are accompanied by empty marker files with the same name as the directory plus a `.virtualenv` suffix:
https://github.com/pypa/virtualenv/blob/c8ac632cd40bfb014515151a02d1ea55d47ff467/src/virtualenv/seed/via_app_data/pip_install/base.py#L73-L76
This breaks `importlib.metadata`, which picks these files up and attempts to create `importlib.metadata.Distribution` instances for them. As a result, something like `importlib.metadata.version("pip")` will return `None`.
The following session was run in a virtual environment created by virtualenv 20.0.2 inside a `python:3.8.1` Docker image:
```python
Python 3.8.1 (default, Feb 2 2020, 08:37:37)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib.metadata
>>> for d in importlib.metadata.distributions():
... name = d.metadata["name"]
... version = d.metadata["version"]
... path = d._path
... print(f"{path=} {name=} {version=}")
...
path=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info.virtualenv') name=None version=None
path=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info.virtualenv') name=None version=None
path=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info') name='pip' version='20.0.2'
path=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info') name='wheel' version='0.34.2'
path=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info') name='setuptools' version='45.2.0'
path=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info.virtualenv') name=None version=None
>>> d = importlib.metadata.distribution("pip")
>>> d.metadata["name"]
>>> importlib.metadata.version("pip")
>>>
```
This may in fact be an issue in `importlib.metadata`. I'm reporting it here in the assumption that virtualenv may be able to fix or work around this issue faster, and that it would need to work around the issue anyway for Python 3.8.0 and 3.8.1.
The issue also causes Poetry to break when invoked inside such virtual environments, for example in Nox and tox: https://github.com/python-poetry/poetry/issues/1953. This is currently affecting projects that use Poetry in their CI pipelines.
</issue>
<code>
[start of src/virtualenv/seed/via_app_data/pip_install/base.py]
1 from __future__ import absolute_import, unicode_literals
2
3 import logging
4 import os
5 import re
6 import shutil
7 import zipfile
8 from abc import ABCMeta, abstractmethod
9 from tempfile import mkdtemp
10
11 import six
12 from six import PY3
13
14 from virtualenv.util import ConfigParser
15 from virtualenv.util.path import Path
16
17
18 @six.add_metaclass(ABCMeta)
19 class PipInstall(object):
20 def __init__(self, wheel, creator, image_folder):
21 self._wheel = wheel
22 self._creator = creator
23 self._image_dir = image_folder
24 self._extracted = False
25 self.__dist_info = None
26 self._console_entry_points = None
27
28 @abstractmethod
29 def _sync(self, src, dst):
30 raise NotImplementedError
31
32 def install(self):
33 self._extracted = True
34 # sync image
35 for filename in self._image_dir.iterdir():
36 into = self._creator.purelib / filename.name
37 if into.exists():
38 if into.is_dir() and not into.is_symlink():
39 shutil.rmtree(str(into))
40 else:
41 into.unlink()
42 self._sync(filename, into)
43 # generate console executables
44 consoles = set()
45 script_dir = self._creator.script_dir
46 for name, module in self._console_scripts.items():
47 consoles.update(self._create_console_entry_point(name, module, script_dir))
48 logging.debug("generated console scripts %s", " ".join(i.name for i in consoles))
49
50 def build_image(self):
51 # 1. first extract the wheel
52 logging.debug("build install image to %s of %s", self._image_dir, self._wheel.name)
53 with zipfile.ZipFile(str(self._wheel)) as zip_ref:
54 zip_ref.extractall(str(self._image_dir))
55 self._extracted = True
56 # 2. now add additional files not present in the package
57 new_files = self._generate_new_files()
58 # 3. finally fix the records file
59 self._fix_records(new_files)
60
61 def _records_text(self, files):
62 record_data = "\n".join(
63 "{},,".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))
64 for rec in files
65 )
66 return record_data
67
68 def _generate_new_files(self):
69 new_files = set()
70 installer = self._dist_info / "INSTALLER"
71 installer.write_text("pip\n")
72 new_files.add(installer)
73 # inject a no-op root element, as workaround for bug added
74 # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722
75 marker = self._image_dir / "{}.virtualenv".format(self._dist_info.name)
76 marker.write_text("")
77 new_files.add(marker)
78 folder = mkdtemp()
79 try:
80 to_folder = Path(folder)
81 rel = os.path.relpath(
82 six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))
83 )
84 for name, module in self._console_scripts.items():
85 new_files.update(
86 Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))
87 for i in self._create_console_entry_point(name, module, to_folder)
88 )
89 finally:
90 shutil.rmtree(folder, ignore_errors=True)
91 return new_files
92
93 @property
94 def _dist_info(self):
95 if self._extracted is False:
96 return None # pragma: no cover
97 if self.__dist_info is None:
98 for filename in self._image_dir.iterdir():
99 if filename.suffix == ".dist-info":
100 self.__dist_info = filename
101 break
102 else:
103 raise RuntimeError("no dist info") # pragma: no cover
104 return self.__dist_info
105
106 @abstractmethod
107 def _fix_records(self, extra_record_data):
108 raise NotImplementedError
109
110 @property
111 def _console_scripts(self):
112 if self._extracted is False:
113 return None # pragma: no cover
114 if self._console_entry_points is None:
115 self._console_entry_points = {}
116 entry_points = self._dist_info / "entry_points.txt"
117 if entry_points.exists():
118 parser = ConfigParser.ConfigParser()
119 with entry_points.open() as file_handler:
120 reader = getattr(parser, "read_file" if PY3 else "readfp")
121 reader(file_handler)
122 if "console_scripts" in parser.sections():
123 for name, value in parser.items("console_scripts"):
124 match = re.match(r"(.*?)-?\d\.?\d*", name)
125 if match:
126 name = match.groups(1)[0]
127 self._console_entry_points[name] = value
128 return self._console_entry_points
129
130 def _create_console_entry_point(self, name, value, to_folder):
131 result = []
132 from distlib.scripts import ScriptMaker
133
134 maker = ScriptMaker(None, str(to_folder))
135 maker.clobber = True # overwrite
136 maker.variants = {"", "X", "X.Y"} # create all variants
137 maker.set_mode = True # ensure they are executable
138 maker.executable = str(self._creator.exe)
139 specification = "{} = {}".format(name, value)
140 new_files = maker.make(specification)
141 result.extend(Path(i) for i in new_files)
142 return result
143
144 def clear(self):
145 if self._image_dir.exists():
146 shutil.rmtree(six.ensure_text(str(self._image_dir)))
147
148 def has_image(self):
149 return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None
150
[end of src/virtualenv/seed/via_app_data/pip_install/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py
--- a/src/virtualenv/seed/via_app_data/pip_install/base.py
+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py
@@ -70,9 +70,8 @@
installer = self._dist_info / "INSTALLER"
installer.write_text("pip\n")
new_files.add(installer)
- # inject a no-op root element, as workaround for bug added
- # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722
- marker = self._image_dir / "{}.virtualenv".format(self._dist_info.name)
+ # inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226
+ marker = self._image_dir / "{}.virtualenv".format(self._dist_info.stem)
marker.write_text("")
new_files.add(marker)
folder = mkdtemp()
|
{"golden_diff": "diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py\n--- a/src/virtualenv/seed/via_app_data/pip_install/base.py\n+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py\n@@ -70,9 +70,8 @@\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n- # inject a no-op root element, as workaround for bug added\n- # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722\n- marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.name)\n+ # inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226\n+ marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.stem)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n", "issue": "`.dist-info.virtualenv` marker files break importlib.metadata in app-data seeded venvs\nSince virtualenv 20.0.0, dist-info directories in virtual environments seeded using the `app-data` method are accompanied by empty marker files with the same name as the directory plus a `.virtualenv` suffix:\r\n\r\nhttps://github.com/pypa/virtualenv/blob/c8ac632cd40bfb014515151a02d1ea55d47ff467/src/virtualenv/seed/via_app_data/pip_install/base.py#L73-L76\r\n\r\nThis breaks `importlib.metadata`, which picks these files up and attempts to create `importlib.metadata.Distribution` instances for them. As a result, something like `importlib.metadata.version(\"pip\")` will return `None`.\r\n\r\nThe following session was run in a virtual environment created by virtualenv 20.0.2 inside a `python:3.8.1` Docker image:\r\n\r\n```python\r\nPython 3.8.1 (default, Feb 2 2020, 08:37:37)\r\n[GCC 8.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import importlib.metadata\r\n>>> for d in importlib.metadata.distributions():\r\n... name = d.metadata[\"name\"]\r\n... version = d.metadata[\"version\"]\r\n... path = d._path\r\n... print(f\"{path=} {name=} {version=}\")\r\n...\r\npath=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info.virtualenv') name=None version=None\r\npath=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info.virtualenv') name=None version=None\r\npath=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info') name='pip' version='20.0.2'\r\npath=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info') name='wheel' version='0.34.2'\r\npath=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info') name='setuptools' version='45.2.0'\r\npath=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info.virtualenv') name=None version=None\r\n>>> d = importlib.metadata.distribution(\"pip\")\r\n>>> d.metadata[\"name\"]\r\n>>> importlib.metadata.version(\"pip\")\r\n>>>\r\n```\r\n\r\nThis may in fact be an issue in `importlib.metadata`. I'm reporting it here in the assumption that virtualenv may be able to fix or work around this issue faster, and that it would need to work around the issue anyway for Python 3.8.0 and 3.8.1.\r\n\r\nThe issue also causes Poetry to break when invoked inside such virtual environments, for example in Nox and tox: https://github.com/python-poetry/poetry/issues/1953. 
This is currently affecting projects that use Poetry in their CI pipelines.\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nimport re\nimport shutil\nimport zipfile\nfrom abc import ABCMeta, abstractmethod\nfrom tempfile import mkdtemp\n\nimport six\nfrom six import PY3\n\nfrom virtualenv.util import ConfigParser\nfrom virtualenv.util.path import Path\n\n\[email protected]_metaclass(ABCMeta)\nclass PipInstall(object):\n def __init__(self, wheel, creator, image_folder):\n self._wheel = wheel\n self._creator = creator\n self._image_dir = image_folder\n self._extracted = False\n self.__dist_info = None\n self._console_entry_points = None\n\n @abstractmethod\n def _sync(self, src, dst):\n raise NotImplementedError\n\n def install(self):\n self._extracted = True\n # sync image\n for filename in self._image_dir.iterdir():\n into = self._creator.purelib / filename.name\n if into.exists():\n if into.is_dir() and not into.is_symlink():\n shutil.rmtree(str(into))\n else:\n into.unlink()\n self._sync(filename, into)\n # generate console executables\n consoles = set()\n script_dir = self._creator.script_dir\n for name, module in self._console_scripts.items():\n consoles.update(self._create_console_entry_point(name, module, script_dir))\n logging.debug(\"generated console scripts %s\", \" \".join(i.name for i in consoles))\n\n def build_image(self):\n # 1. first extract the wheel\n logging.debug(\"build install image to %s of %s\", self._image_dir, self._wheel.name)\n with zipfile.ZipFile(str(self._wheel)) as zip_ref:\n zip_ref.extractall(str(self._image_dir))\n self._extracted = True\n # 2. now add additional files not present in the package\n new_files = self._generate_new_files()\n # 3. 
finally fix the records file\n self._fix_records(new_files)\n\n def _records_text(self, files):\n record_data = \"\\n\".join(\n \"{},,\".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))\n for rec in files\n )\n return record_data\n\n def _generate_new_files(self):\n new_files = set()\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n # inject a no-op root element, as workaround for bug added\n # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722\n marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.name)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n try:\n to_folder = Path(folder)\n rel = os.path.relpath(\n six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))\n )\n for name, module in self._console_scripts.items():\n new_files.update(\n Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))\n for i in self._create_console_entry_point(name, module, to_folder)\n )\n finally:\n shutil.rmtree(folder, ignore_errors=True)\n return new_files\n\n @property\n def _dist_info(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self.__dist_info is None:\n for filename in self._image_dir.iterdir():\n if filename.suffix == \".dist-info\":\n self.__dist_info = filename\n break\n else:\n raise RuntimeError(\"no dist info\") # pragma: no cover\n return self.__dist_info\n\n @abstractmethod\n def _fix_records(self, extra_record_data):\n raise NotImplementedError\n\n @property\n def _console_scripts(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self._console_entry_points is None:\n self._console_entry_points = {}\n entry_points = self._dist_info / \"entry_points.txt\"\n if entry_points.exists():\n parser = ConfigParser.ConfigParser()\n with entry_points.open() as file_handler:\n reader = getattr(parser, \"read_file\" if PY3 else \"readfp\")\n reader(file_handler)\n if \"console_scripts\" in parser.sections():\n for name, value in parser.items(\"console_scripts\"):\n match = re.match(r\"(.*?)-?\\d\\.?\\d*\", name)\n if match:\n name = match.groups(1)[0]\n self._console_entry_points[name] = value\n return self._console_entry_points\n\n def _create_console_entry_point(self, name, value, to_folder):\n result = []\n from distlib.scripts import ScriptMaker\n\n maker = ScriptMaker(None, str(to_folder))\n maker.clobber = True # overwrite\n maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n maker.set_mode = True # ensure they are executable\n maker.executable = str(self._creator.exe)\n specification = \"{} = {}\".format(name, value)\n new_files = maker.make(specification)\n result.extend(Path(i) for i in new_files)\n return result\n\n def clear(self):\n if self._image_dir.exists():\n shutil.rmtree(six.ensure_text(str(self._image_dir)))\n\n def has_image(self):\n return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None\n", "path": "src/virtualenv/seed/via_app_data/pip_install/base.py"}]}
| 2,830 | 252 |
gh_patches_debug_36414
|
rasdani/github-patches
|
git_diff
|
facebookresearch__hydra-1695
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
strict flag was removed from the compose API without a prior deprecation
Strict config composition functionality has become the default in Hydra 1.0 (See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated).
This flag was completely removed in Hydra 1.1.0.
Unfortunately, the Compose API strict flag was not deprecated and was thus an avoidable breaking change.
A followup PR will re-introduce the strict flag to the Compose API as a deprecated flag. That flag will be removed in the major version of Hydra.
</issue>
<code>
[start of hydra/experimental/compose.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # DEPRECATED: remove in 1.2
3 import warnings
4 from typing import List, Optional
5
6 from omegaconf import DictConfig
7
8
9 def compose(
10 config_name: Optional[str] = None,
11 overrides: List[str] = [],
12 return_hydra_config: bool = False,
13 ) -> DictConfig:
14 from hydra import compose as real_compose
15
16 warnings.warn(
17 category=UserWarning,
18 message="hydra.experimental.compose() is no longer experimental."
19 " Use hydra.compose()",
20 )
21 return real_compose(
22 config_name=config_name,
23 overrides=overrides,
24 return_hydra_config=return_hydra_config,
25 )
26
[end of hydra/experimental/compose.py]
[start of hydra/compose.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from typing import List, Optional
3
4 from omegaconf import DictConfig, open_dict
5
6 from hydra.core.global_hydra import GlobalHydra
7 from hydra.types import RunMode
8
9
10 def compose(
11 config_name: Optional[str] = None,
12 overrides: List[str] = [],
13 return_hydra_config: bool = False,
14 ) -> DictConfig:
15 """
16 :param config_name: the name of the config
17 (usually the file name without the .yaml extension)
18 :param overrides: list of overrides for config file
19 :param return_hydra_config: True to return the hydra config node in the result
20 :return: the composed config
21 """
22 assert (
23 GlobalHydra().is_initialized()
24 ), "GlobalHydra is not initialized, use @hydra.main() or call one of the hydra initialization methods first"
25
26 gh = GlobalHydra.instance()
27 assert gh.hydra is not None
28 cfg = gh.hydra.compose_config(
29 config_name=config_name,
30 overrides=overrides,
31 run_mode=RunMode.RUN,
32 from_shell=False,
33 with_log_configuration=False,
34 )
35 assert isinstance(cfg, DictConfig)
36
37 if not return_hydra_config:
38 if "hydra" in cfg:
39 with open_dict(cfg):
40 del cfg["hydra"]
41 return cfg
42
[end of hydra/compose.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/hydra/compose.py b/hydra/compose.py
--- a/hydra/compose.py
+++ b/hydra/compose.py
@@ -1,7 +1,9 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+import warnings
+from textwrap import dedent
from typing import List, Optional
-from omegaconf import DictConfig, open_dict
+from omegaconf import DictConfig, OmegaConf, open_dict
from hydra.core.global_hydra import GlobalHydra
from hydra.types import RunMode
@@ -11,12 +13,14 @@
config_name: Optional[str] = None,
overrides: List[str] = [],
return_hydra_config: bool = False,
+ strict: Optional[bool] = None,
) -> DictConfig:
"""
:param config_name: the name of the config
(usually the file name without the .yaml extension)
:param overrides: list of overrides for config file
:param return_hydra_config: True to return the hydra config node in the result
+ :param strict: DEPRECATED. If true, returned config has struct mode disabled.
:return: the composed config
"""
assert (
@@ -38,4 +42,18 @@
if "hydra" in cfg:
with open_dict(cfg):
del cfg["hydra"]
+
+ if strict is not None:
+ # DEPRECATED: remove in 1.2
+ warnings.warn(
+ dedent(
+ """\
+
+ The strict flag in the compose API is deprecated and will be removed in the next version of Hydra.
+ See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated for more info.
+ """
+ )
+ )
+ OmegaConf.set_struct(cfg, strict)
+
return cfg
diff --git a/hydra/experimental/compose.py b/hydra/experimental/compose.py
--- a/hydra/experimental/compose.py
+++ b/hydra/experimental/compose.py
@@ -10,6 +10,7 @@
config_name: Optional[str] = None,
overrides: List[str] = [],
return_hydra_config: bool = False,
+ strict: Optional[bool] = None,
) -> DictConfig:
from hydra import compose as real_compose
@@ -22,4 +23,5 @@
config_name=config_name,
overrides=overrides,
return_hydra_config=return_hydra_config,
+ strict=strict,
)
|
{"golden_diff": "diff --git a/hydra/compose.py b/hydra/compose.py\n--- a/hydra/compose.py\n+++ b/hydra/compose.py\n@@ -1,7 +1,9 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n+import warnings\n+from textwrap import dedent\n from typing import List, Optional\n \n-from omegaconf import DictConfig, open_dict\n+from omegaconf import DictConfig, OmegaConf, open_dict\n \n from hydra.core.global_hydra import GlobalHydra\n from hydra.types import RunMode\n@@ -11,12 +13,14 @@\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n+ strict: Optional[bool] = None,\n ) -> DictConfig:\n \"\"\"\n :param config_name: the name of the config\n (usually the file name without the .yaml extension)\n :param overrides: list of overrides for config file\n :param return_hydra_config: True to return the hydra config node in the result\n+ :param strict: DEPRECATED. If true, returned config has struct mode disabled.\n :return: the composed config\n \"\"\"\n assert (\n@@ -38,4 +42,18 @@\n if \"hydra\" in cfg:\n with open_dict(cfg):\n del cfg[\"hydra\"]\n+\n+ if strict is not None:\n+ # DEPRECATED: remove in 1.2\n+ warnings.warn(\n+ dedent(\n+ \"\"\"\\\n+\n+ The strict flag in the compose API is deprecated and will be removed in the next version of Hydra.\n+ See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated for more info.\n+ \"\"\"\n+ )\n+ )\n+ OmegaConf.set_struct(cfg, strict)\n+\n return cfg\ndiff --git a/hydra/experimental/compose.py b/hydra/experimental/compose.py\n--- a/hydra/experimental/compose.py\n+++ b/hydra/experimental/compose.py\n@@ -10,6 +10,7 @@\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n+ strict: Optional[bool] = None,\n ) -> DictConfig:\n from hydra import compose as real_compose\n \n@@ -22,4 +23,5 @@\n config_name=config_name,\n overrides=overrides,\n return_hydra_config=return_hydra_config,\n+ strict=strict,\n )\n", "issue": "strict flag was removed from the compose API without a prior deprecation\nStrict config composition functionality has become the default in Hydra 1.0 (See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated).\r\n\r\nThis flag was completely removed in Hydra 1.1.0.\r\nUnfortunately, the Compose API strict flag was not deprecated and was thus an avoidable breaking change.\r\n\r\nA followup PR will re-introduce the strict flag to the Compose API as a deprecated flag. That flag will be removed in the major version of Hydra.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# DEPRECATED: remove in 1.2\nimport warnings\nfrom typing import List, Optional\n\nfrom omegaconf import DictConfig\n\n\ndef compose(\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n) -> DictConfig:\n from hydra import compose as real_compose\n\n warnings.warn(\n category=UserWarning,\n message=\"hydra.experimental.compose() is no longer experimental.\"\n \" Use hydra.compose()\",\n )\n return real_compose(\n config_name=config_name,\n overrides=overrides,\n return_hydra_config=return_hydra_config,\n )\n", "path": "hydra/experimental/compose.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom typing import List, Optional\n\nfrom omegaconf import DictConfig, open_dict\n\nfrom hydra.core.global_hydra import GlobalHydra\nfrom hydra.types import RunMode\n\n\ndef compose(\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n) -> DictConfig:\n \"\"\"\n :param config_name: the name of the config\n (usually the file name without the .yaml extension)\n :param overrides: list of overrides for config file\n :param return_hydra_config: True to return the hydra config node in the result\n :return: the composed config\n \"\"\"\n assert (\n GlobalHydra().is_initialized()\n ), \"GlobalHydra is not initialized, use @hydra.main() or call one of the hydra initialization methods first\"\n\n gh = GlobalHydra.instance()\n assert gh.hydra is not None\n cfg = gh.hydra.compose_config(\n config_name=config_name,\n overrides=overrides,\n run_mode=RunMode.RUN,\n from_shell=False,\n with_log_configuration=False,\n )\n assert isinstance(cfg, DictConfig)\n\n if not return_hydra_config:\n if \"hydra\" in cfg:\n with open_dict(cfg):\n del cfg[\"hydra\"]\n return cfg\n", "path": "hydra/compose.py"}]}
| 1,264 | 582 |
gh_patches_debug_22393
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-1508
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[SkipRev] broken - cannot import name SkipDownload
SkipRev plugin failed to import with following error:
Errore durante l'importazione SkipRev: cannot import name SkipDownload
I already tried following, without success: http://forum.pyload.org/viewtopic.php?f=7&t=4335
Debian GNU/Linux 7.8 (wheezy) x64
Python 2.7.3
pyLoad 0.4.9
SkipRev 0.30
</issue>
<code>
[start of module/plugins/hooks/SkipRev.py]
1 # -*- coding: utf-8 -*-
2
3 import re
4 import urllib
5 import urlparse
6
7 from types import MethodType
8
9 from module.PyFile import PyFile
10 from module.plugins.internal.Hook import Hook
11 from module.plugins.internal.Plugin import SkipDownload
12
13
14 class SkipRev(Hook):
15 __name__ = "SkipRev"
16 __type__ = "hook"
17 __version__ = "0.30"
18
19 __config__ = [("mode" , "Auto;Manual", "Choose recovery archives to skip" , "Auto"),
20 ("revtokeep", "int" , "Number of recovery archives to keep for package", 0 )]
21
22 __description__ = """Skip recovery archives (.rev)"""
23 __license__ = "GPLv3"
24 __authors__ = [("Walter Purcaro", "[email protected]")]
25
26
27 interval = 0 #@TODO: Remove in 0.4.10
28
29
30 def setup(self):
31 self.info = {} #@TODO: Remove in 0.4.10
32
33
34 @staticmethod
35 def _setup(self):
36 self.pyfile.plugin._setup()
37 if self.pyfile.hasStatus("skipped"):
38 raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)
39
40
41 def _name(self, pyfile):
42 if hasattr(pyfile.pluginmodule, "getInfo"): #@NOTE: getInfo is deprecated in 0.4.10
43 return pyfile.pluginmodule.getInfo([pyfile.url]).next()[0]
44 else:
45 self.logWarning("Unable to grab file name")
46 return urlparse.urlparse(urllib.unquote(pyfile.url)).path.split('/')[-1]
47
48
49 def _pyfile(self, link):
50 return PyFile(self.core.files,
51 link.fid,
52 link.url,
53 link.name,
54 link.size,
55 link.status,
56 link.error,
57 link.plugin,
58 link.packageID,
59 link.order)
60
61
62 def downloadPreparing(self, pyfile):
63 name = self._name(pyfile)
64
65 if pyfile.statusname is _("unskipped") or not name.endswith(".rev") or not ".part" in name:
66 return
67
68 revtokeep = -1 if self.getConfig('mode') == "Auto" else self.getConfig('revtokeep')
69
70 if revtokeep:
71 status_list = (1, 4, 8, 9, 14) if revtokeep < 0 else (1, 3, 4, 8, 9, 14)
72 pyname = re.compile(r'%s\.part\d+\.rev$' % name.rsplit('.', 2)[0].replace('.', '\.'))
73
74 queued = [True for link in self.core.api.getPackageData(pyfile.package().id).links \
75 if link.status not in status_list and pyname.match(link.name)].count(True)
76
77 if not queued or queued < revtokeep: #: keep one rev at least in auto mode
78 return
79
80 pyfile.setCustomStatus("SkipRev", "skipped")
81
82 if not hasattr(pyfile.plugin, "_setup"):
83 # Work-around: inject status checker inside the preprocessing routine of the plugin
84 pyfile.plugin._setup = pyfile.plugin.setup
85 pyfile.plugin.setup = MethodType(self._setup, pyfile.plugin)
86
87
88 def downloadFailed(self, pyfile):
89 #: Check if pyfile is still "failed",
90 # maybe might has been restarted in meantime
91 if pyfile.status != 8 or pyfile.name.rsplit('.', 1)[-1].strip() not in ("rar", "rev"):
92 return
93
94 revtokeep = -1 if self.getConfig('mode') == "Auto" else self.getConfig('revtokeep')
95
96 if not revtokeep:
97 return
98
99 pyname = re.compile(r'%s\.part\d+\.rev$' % pyfile.name.rsplit('.', 2)[0].replace('.', '\.'))
100
101 for link in self.core.api.getPackageData(pyfile.package().id).links:
102 if link.status is 4 and pyname.match(link.name):
103 pylink = self._pyfile(link)
104
105 if revtokeep > -1 or pyfile.name.endswith(".rev"):
106 pylink.setStatus("queued")
107 else:
108 pylink.setCustomStatus(_("unskipped"), "queued")
109
110 self.core.files.save()
111 pylink.release()
112 return
113
[end of module/plugins/hooks/SkipRev.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/module/plugins/hooks/SkipRev.py b/module/plugins/hooks/SkipRev.py
--- a/module/plugins/hooks/SkipRev.py
+++ b/module/plugins/hooks/SkipRev.py
@@ -8,13 +8,13 @@
from module.PyFile import PyFile
from module.plugins.internal.Hook import Hook
-from module.plugins.internal.Plugin import SkipDownload
+from module.plugins.internal.Plugin import Skip
class SkipRev(Hook):
__name__ = "SkipRev"
__type__ = "hook"
- __version__ = "0.30"
+ __version__ = "0.31"
__config__ = [("mode" , "Auto;Manual", "Choose recovery archives to skip" , "Auto"),
("revtokeep", "int" , "Number of recovery archives to keep for package", 0 )]
@@ -35,7 +35,7 @@
def _setup(self):
self.pyfile.plugin._setup()
if self.pyfile.hasStatus("skipped"):
- raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)
+ raise Skip(self.pyfile.statusname or self.pyfile.pluginname)
def _name(self, pyfile):
|
{"golden_diff": "diff --git a/module/plugins/hooks/SkipRev.py b/module/plugins/hooks/SkipRev.py\n--- a/module/plugins/hooks/SkipRev.py\n+++ b/module/plugins/hooks/SkipRev.py\n@@ -8,13 +8,13 @@\n \n from module.PyFile import PyFile\n from module.plugins.internal.Hook import Hook\n-from module.plugins.internal.Plugin import SkipDownload\n+from module.plugins.internal.Plugin import Skip\n \n \n class SkipRev(Hook):\n __name__ = \"SkipRev\"\n __type__ = \"hook\"\n- __version__ = \"0.30\"\n+ __version__ = \"0.31\"\n \n __config__ = [(\"mode\" , \"Auto;Manual\", \"Choose recovery archives to skip\" , \"Auto\"),\n (\"revtokeep\", \"int\" , \"Number of recovery archives to keep for package\", 0 )]\n@@ -35,7 +35,7 @@\n def _setup(self):\n self.pyfile.plugin._setup()\n if self.pyfile.hasStatus(\"skipped\"):\n- raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)\n+ raise Skip(self.pyfile.statusname or self.pyfile.pluginname)\n \n \n def _name(self, pyfile):\n", "issue": "[SkipRev] broken - cannot import name SkipDownload\nSkipRev plugin failed to import with following error:\nErrore durante l'importazione SkipRev: cannot import name SkipDownload\n\nI already tried following, without success: http://forum.pyload.org/viewtopic.php?f=7&t=4335\n\nDebian GNU/Linux 7.8 (wheezy) x64\nPython 2.7.3\npyLoad 0.4.9\nSkipRev 0.30\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\nimport urllib\nimport urlparse\n\nfrom types import MethodType\n\nfrom module.PyFile import PyFile\nfrom module.plugins.internal.Hook import Hook\nfrom module.plugins.internal.Plugin import SkipDownload\n\n\nclass SkipRev(Hook):\n __name__ = \"SkipRev\"\n __type__ = \"hook\"\n __version__ = \"0.30\"\n\n __config__ = [(\"mode\" , \"Auto;Manual\", \"Choose recovery archives to skip\" , \"Auto\"),\n (\"revtokeep\", \"int\" , \"Number of recovery archives to keep for package\", 0 )]\n\n __description__ = \"\"\"Skip recovery archives (.rev)\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n interval = 0 #@TODO: Remove in 0.4.10\n\n\n def setup(self):\n self.info = {} #@TODO: Remove in 0.4.10\n\n\n @staticmethod\n def _setup(self):\n self.pyfile.plugin._setup()\n if self.pyfile.hasStatus(\"skipped\"):\n raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)\n\n\n def _name(self, pyfile):\n if hasattr(pyfile.pluginmodule, \"getInfo\"): #@NOTE: getInfo is deprecated in 0.4.10\n return pyfile.pluginmodule.getInfo([pyfile.url]).next()[0]\n else:\n self.logWarning(\"Unable to grab file name\")\n return urlparse.urlparse(urllib.unquote(pyfile.url)).path.split('/')[-1]\n\n\n def _pyfile(self, link):\n return PyFile(self.core.files,\n link.fid,\n link.url,\n link.name,\n link.size,\n link.status,\n link.error,\n link.plugin,\n link.packageID,\n link.order)\n\n\n def downloadPreparing(self, pyfile):\n name = self._name(pyfile)\n\n if pyfile.statusname is _(\"unskipped\") or not name.endswith(\".rev\") or not \".part\" in name:\n return\n\n revtokeep = -1 if self.getConfig('mode') == \"Auto\" else self.getConfig('revtokeep')\n\n if revtokeep:\n status_list = (1, 4, 8, 9, 14) if revtokeep < 0 else (1, 3, 4, 8, 9, 14)\n pyname = re.compile(r'%s\\.part\\d+\\.rev$' % name.rsplit('.', 2)[0].replace('.', '\\.'))\n\n queued = [True for link in self.core.api.getPackageData(pyfile.package().id).links \\\n if link.status not in status_list and pyname.match(link.name)].count(True)\n\n if not queued or queued < revtokeep: #: keep one rev at least in auto mode\n 
return\n\n pyfile.setCustomStatus(\"SkipRev\", \"skipped\")\n\n if not hasattr(pyfile.plugin, \"_setup\"):\n # Work-around: inject status checker inside the preprocessing routine of the plugin\n pyfile.plugin._setup = pyfile.plugin.setup\n pyfile.plugin.setup = MethodType(self._setup, pyfile.plugin)\n\n\n def downloadFailed(self, pyfile):\n #: Check if pyfile is still \"failed\",\n # maybe might has been restarted in meantime\n if pyfile.status != 8 or pyfile.name.rsplit('.', 1)[-1].strip() not in (\"rar\", \"rev\"):\n return\n\n revtokeep = -1 if self.getConfig('mode') == \"Auto\" else self.getConfig('revtokeep')\n\n if not revtokeep:\n return\n\n pyname = re.compile(r'%s\\.part\\d+\\.rev$' % pyfile.name.rsplit('.', 2)[0].replace('.', '\\.'))\n\n for link in self.core.api.getPackageData(pyfile.package().id).links:\n if link.status is 4 and pyname.match(link.name):\n pylink = self._pyfile(link)\n\n if revtokeep > -1 or pyfile.name.endswith(\".rev\"):\n pylink.setStatus(\"queued\")\n else:\n pylink.setCustomStatus(_(\"unskipped\"), \"queued\")\n\n self.core.files.save()\n pylink.release()\n return\n", "path": "module/plugins/hooks/SkipRev.py"}]}
| 1,844 | 272 |
gh_patches_debug_19087
|
rasdani/github-patches
|
git_diff
|
cloudtools__troposphere-869
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Validation error in CodeCommit Trigger
It appears that the validation for the CodeCommit Trigger class does not currently allow the use of any intrinsic functions for the `Events` property, and only accepts a list of hard coded values.
https://github.com/cloudtools/troposphere/blob/45582eb1d21a6cc9cfa608f626d8acbf0317f37f/troposphere/codecommit.py#L18-L32
We are trying to allow for the dynamic selection of the values with a parameter, but encounter errors when attempting to use a `Ref`. A snippet of our trigger definition, and the error encountered is below:
```
repo_trigger1 = codecommit.Trigger(
Name = Ref(trigger_1_name),
CustomData = Ref(trigger_1_custom_data),
DestinationArn = Ref(trigger_1_destination_arn),
Branches = Ref(trigger_1_branches),
Events = Ref(trigger_1_events),
)
```
We are able to successfully generate the template when changing the `Events` assignment to:
```Events = ["all"],```
I believe we just need to check if the value is one of the Helper functions before iterating through the events. I will try to get a fix pushed up for review.
</issue>
<code>
[start of troposphere/codecommit.py]
1 # Copyright (c) 2016, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty
7
8
9 class Trigger(AWSProperty):
10 props = {
11 'Branches': ([basestring], False),
12 'CustomData': (basestring, False),
13 'DestinationArn': (basestring, False),
14 'Events': ([basestring], False),
15 'Name': (basestring, False),
16 }
17
18 def validate(self):
19 valid = [
20 'all',
21 'createReference',
22 'deleteReference',
23 'updateReference',
24 ]
25 events = self.properties.get('Events')
26 if events:
27 if 'all' in events and len(events) != 1:
28 raise ValueError('Trigger events: all must be used alone')
29 else:
30 for e in events:
31 if e not in valid:
32 raise ValueError('Trigger: invalid event %s' % e)
33
34
35 class Repository(AWSObject):
36 resource_type = "AWS::CodeCommit::Repository"
37
38 props = {
39 'RepositoryDescription': (basestring, False),
40 'RepositoryName': (basestring, True),
41 'Triggers': ([Trigger], False),
42 }
43
[end of troposphere/codecommit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/troposphere/codecommit.py b/troposphere/codecommit.py
--- a/troposphere/codecommit.py
+++ b/troposphere/codecommit.py
@@ -3,7 +3,7 @@
#
# See LICENSE file for full license.
-from . import AWSObject, AWSProperty
+from . import AWSHelperFn, AWSObject, AWSProperty
class Trigger(AWSProperty):
@@ -23,12 +23,12 @@
'updateReference',
]
events = self.properties.get('Events')
- if events:
+ if events and not isinstance(events, AWSHelperFn):
if 'all' in events and len(events) != 1:
raise ValueError('Trigger events: all must be used alone')
else:
for e in events:
- if e not in valid:
+ if e not in valid and not isinstance(e, AWSHelperFn):
raise ValueError('Trigger: invalid event %s' % e)
|
{"golden_diff": "diff --git a/troposphere/codecommit.py b/troposphere/codecommit.py\n--- a/troposphere/codecommit.py\n+++ b/troposphere/codecommit.py\n@@ -3,7 +3,7 @@\n #\n # See LICENSE file for full license.\n \n-from . import AWSObject, AWSProperty\n+from . import AWSHelperFn, AWSObject, AWSProperty\n \n \n class Trigger(AWSProperty):\n@@ -23,12 +23,12 @@\n 'updateReference',\n ]\n events = self.properties.get('Events')\n- if events:\n+ if events and not isinstance(events, AWSHelperFn):\n if 'all' in events and len(events) != 1:\n raise ValueError('Trigger events: all must be used alone')\n else:\n for e in events:\n- if e not in valid:\n+ if e not in valid and not isinstance(e, AWSHelperFn):\n raise ValueError('Trigger: invalid event %s' % e)\n", "issue": "Validation error in CodeCommit Trigger\nIt appears that the validation for the CodeCommit Trigger class does not currently allow the use of any intrinsic functions for the `Events` property, and only accepts a list of hard coded values.\r\n\r\nhttps://github.com/cloudtools/troposphere/blob/45582eb1d21a6cc9cfa608f626d8acbf0317f37f/troposphere/codecommit.py#L18-L32\r\n\r\nWe are trying to allow for the dynamic selection of the values with a parameter, but encounter errors when attempting to use a `Ref`. A snippet of our trigger definition, and the error encountered is below:\r\n\r\n```\r\nrepo_trigger1 = codecommit.Trigger(\r\n Name = Ref(trigger_1_name),\r\n CustomData = Ref(trigger_1_custom_data),\r\n DestinationArn = Ref(trigger_1_destination_arn),\r\n Branches = Ref(trigger_1_branches),\r\n Events = Ref(trigger_1_events),\r\n )\r\n```\r\nWe are able to successfully generate the template when changing the `Events` assignment to:\r\n\r\n```Events = [\"all\"],```\r\n\r\nI believe we just need to check if the value is one of the Helper functions before iterating through the events. I will try to get a fix pushed up for review.\n", "before_files": [{"content": "# Copyright (c) 2016, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty\n\n\nclass Trigger(AWSProperty):\n props = {\n 'Branches': ([basestring], False),\n 'CustomData': (basestring, False),\n 'DestinationArn': (basestring, False),\n 'Events': ([basestring], False),\n 'Name': (basestring, False),\n }\n\n def validate(self):\n valid = [\n 'all',\n 'createReference',\n 'deleteReference',\n 'updateReference',\n ]\n events = self.properties.get('Events')\n if events:\n if 'all' in events and len(events) != 1:\n raise ValueError('Trigger events: all must be used alone')\n else:\n for e in events:\n if e not in valid:\n raise ValueError('Trigger: invalid event %s' % e)\n\n\nclass Repository(AWSObject):\n resource_type = \"AWS::CodeCommit::Repository\"\n\n props = {\n 'RepositoryDescription': (basestring, False),\n 'RepositoryName': (basestring, True),\n 'Triggers': ([Trigger], False),\n }\n", "path": "troposphere/codecommit.py"}]}
| 1,159 | 209 |
gh_patches_debug_6298
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-1389
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Camera API documentation missing
I could not find a list of available cameras in the docs:
http://vispy.org/scene.html?highlight=cameras#module-vispy.scene.cameras
</issue>
<code>
[start of vispy/scene/cameras/__init__.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4 """
5 Cameras are responsible for determining which part of a scene is displayed
6 in a viewbox and for handling user input to change the view.
7
8 Several Camera subclasses are available to customize the projection of the
9 scene such as 3D perspective and orthographic projections, 2D
10 scale/translation, and other specialty cameras. A variety of user interaction
11 styles are available for each camera including arcball, turntable,
12 first-person, and pan/zoom interactions.
13
14 Internally, Cameras work by setting the transform of a SubScene object such
15 that a certain part of the scene is mapped to the bounding rectangle of the
16 ViewBox.
17 """
18 from ._base import make_camera # noqa
19 from .base_camera import BaseCamera # noqa
20 from .panzoom import PanZoomCamera # noqa
21 from .arcball import ArcballCamera # noqa
22 from .turntable import TurntableCamera # noqa
23 from .fly import FlyCamera # noqa
24 from .magnify import MagnifyCamera, Magnify1DCamera # noqa
25
[end of vispy/scene/cameras/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/vispy/scene/cameras/__init__.py b/vispy/scene/cameras/__init__.py
--- a/vispy/scene/cameras/__init__.py
+++ b/vispy/scene/cameras/__init__.py
@@ -15,6 +15,9 @@
that a certain part of the scene is mapped to the bounding rectangle of the
ViewBox.
"""
+__all__ = ['ArcballCamera', 'BaseCamera', 'FlyCamera', 'MagnifyCamera',
+ 'Magnify1DCamera', 'PanZoomCamera', 'TurntableCamera']
+
from ._base import make_camera # noqa
from .base_camera import BaseCamera # noqa
from .panzoom import PanZoomCamera # noqa
|
{"golden_diff": "diff --git a/vispy/scene/cameras/__init__.py b/vispy/scene/cameras/__init__.py\n--- a/vispy/scene/cameras/__init__.py\n+++ b/vispy/scene/cameras/__init__.py\n@@ -15,6 +15,9 @@\n that a certain part of the scene is mapped to the bounding rectangle of the \n ViewBox.\n \"\"\"\n+__all__ = ['ArcballCamera', 'BaseCamera', 'FlyCamera', 'MagnifyCamera',\n+ 'Magnify1DCamera', 'PanZoomCamera', 'TurntableCamera']\n+\n from ._base import make_camera # noqa\n from .base_camera import BaseCamera # noqa\n from .panzoom import PanZoomCamera # noqa\n", "issue": "Camera API documentation missing\nI could not find a list of available cameras in the docs:\n\nhttp://vispy.org/scene.html?highlight=cameras#module-vispy.scene.cameras\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\"\"\"\nCameras are responsible for determining which part of a scene is displayed\nin a viewbox and for handling user input to change the view.\n\nSeveral Camera subclasses are available to customize the projection of the \nscene such as 3D perspective and orthographic projections, 2D \nscale/translation, and other specialty cameras. A variety of user interaction\nstyles are available for each camera including arcball, turntable, \nfirst-person, and pan/zoom interactions.\n\nInternally, Cameras work by setting the transform of a SubScene object such \nthat a certain part of the scene is mapped to the bounding rectangle of the \nViewBox.\n\"\"\"\nfrom ._base import make_camera # noqa\nfrom .base_camera import BaseCamera # noqa\nfrom .panzoom import PanZoomCamera # noqa\nfrom .arcball import ArcballCamera # noqa\nfrom .turntable import TurntableCamera # noqa\nfrom .fly import FlyCamera # noqa\nfrom .magnify import MagnifyCamera, Magnify1DCamera # noqa\n", "path": "vispy/scene/cameras/__init__.py"}]}
| 884 | 168 |
gh_patches_debug_22907
|
rasdani/github-patches
|
git_diff
|
fedora-infra__bodhi-974
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The non-NULL comment text migration needs to be inserted before 4df1fcd59050
This migration was developed against the develop branch, but I had forgotten that there was already a migration in the develop branch that isn't on the 2.2 branch. We'll need to change both migrations so that the new one is inserted before the prior one:
```
[vagrant@localhost vagrant]$ alembic upgrade head
WARNING:fedmsg.crypto.x509:Crypto disabled ImportError('No module named M2Crypto',)
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
/usr/lib/python2.7/site-packages/alembic/util/messaging.py:69: UserWarning: Revision 4df1fcd59050 referenced from 4df1fcd59050 -> 37f38ddc4c8d (head), Do not allow NULL values in the text column of the comments table. is not present
warnings.warn(msg)
Traceback (most recent call last):
File "/usr/bin/alembic", line 9, in <module>
load_entry_point('alembic==0.8.3', 'console_scripts', 'alembic')()
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 450, in main
CommandLine(prog=prog).main(argv=argv)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 444, in main
self.run_cmd(cfg, options)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 427, in run_cmd
**dict((k, getattr(options, k)) for k in kwarg)
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file
module = load_module_py(module_id, path)
File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "alembic/env.py", line 72, in <module>
run_migrations_online()
File "alembic/env.py", line 65, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 797, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 303, in run_migrations
for step in self._migrations_fn(heads, self):
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 163, in upgrade
return script._upgrade_revs(revision, rev)
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 310, in _upgrade_revs
revs = list(revs)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 610, in _iterate_revisions
requested_lowers = self.get_revisions(lower)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 299, in get_revisions
return sum([self.get_revisions(id_elem) for id_elem in id_], ())
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 301, in get_revisions
resolved_id, branch_label = self._resolve_revision_number(id_)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 423, in _resolve_revision_number
self._revision_map
File "/usr/lib/python2.7/site-packages/alembic/util/langhelpers.py", line 241, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 151, in _revision_map
down_revision = map_[downrev]
KeyError: '4df1fcd59050'
```
It needs to be possible to vagrant up on the 2.2 branch and then advance to the develop branch.
</issue>
<code>
[start of alembic/versions/4df1fcd59050_.py]
1 """Add the new pending_signing_tag column to the releases table.
2
3 Revision ID: 4df1fcd59050
4 Revises: 3c72757fa59e
5 Create Date: 2016-09-16 18:51:19.514301
6
7 """
8 from alembic import op
9 import sqlalchemy as sa
10
11
12 # revision identifiers, used by Alembic.
13 revision = '4df1fcd59050'
14 down_revision = '3c72757fa59e'
15
16
17 def upgrade():
18 op.add_column('releases',
19 sa.Column('pending_signing_tag', sa.UnicodeText(), server_default='',
20 nullable=False))
21 # We only used the server_default to stop the new column from being NULL. Let's now drop the
22 # server default.
23 op.alter_column('releases', 'pending_signing_tag', server_default=None)
24
25
26 def downgrade():
27 op.drop_column('releases', 'pending_signing_tag')
28
[end of alembic/versions/4df1fcd59050_.py]
[start of alembic/versions/37f38ddc4c8d_.py]
1 """Do not allow NULL values in the text column of the comments table.
2
3 Revision ID: 37f38ddc4c8d
4 Revises: 4df1fcd59050
5 Create Date: 2016-09-21 19:51:04.946521
6
7 """
8
9 from alembic import op
10 import sqlalchemy as sa
11
12
13 # revision identifiers, used by Alembic.
14 revision = '37f38ddc4c8d'
15 down_revision = '4df1fcd59050'
16
17
18 def upgrade():
19 """
20 We will need to set all existing NULL comments to "", then change the column to disallow NULL comments.
21 """
22 # Build a fake mini version of the comments table so we can form an UPDATE statement.
23 comments = sa.sql.table('comments', sa.sql.column('text', sa.UnicodeText))
24 # Set existing NULL comments to "".
25 op.execute(comments.update().where(comments.c.text==None).values({'text': op.inline_literal('')}))
26
27 # Disallow new NULL comments.
28 op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=False)
29
30
31 def downgrade():
32 op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=True)
33
[end of alembic/versions/37f38ddc4c8d_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/alembic/versions/37f38ddc4c8d_.py b/alembic/versions/37f38ddc4c8d_.py
--- a/alembic/versions/37f38ddc4c8d_.py
+++ b/alembic/versions/37f38ddc4c8d_.py
@@ -1,7 +1,7 @@
"""Do not allow NULL values in the text column of the comments table.
Revision ID: 37f38ddc4c8d
-Revises: 4df1fcd59050
+Revises: 3c72757fa59e
Create Date: 2016-09-21 19:51:04.946521
"""
@@ -12,7 +12,7 @@
# revision identifiers, used by Alembic.
revision = '37f38ddc4c8d'
-down_revision = '4df1fcd59050'
+down_revision = '3c72757fa59e'
def upgrade():
diff --git a/alembic/versions/4df1fcd59050_.py b/alembic/versions/4df1fcd59050_.py
--- a/alembic/versions/4df1fcd59050_.py
+++ b/alembic/versions/4df1fcd59050_.py
@@ -1,7 +1,7 @@
"""Add the new pending_signing_tag column to the releases table.
Revision ID: 4df1fcd59050
-Revises: 3c72757fa59e
+Revises: 37f38ddc4c8d
Create Date: 2016-09-16 18:51:19.514301
"""
@@ -11,7 +11,7 @@
# revision identifiers, used by Alembic.
revision = '4df1fcd59050'
-down_revision = '3c72757fa59e'
+down_revision = '37f38ddc4c8d'
def upgrade():
|
{"golden_diff": "diff --git a/alembic/versions/37f38ddc4c8d_.py b/alembic/versions/37f38ddc4c8d_.py\n--- a/alembic/versions/37f38ddc4c8d_.py\n+++ b/alembic/versions/37f38ddc4c8d_.py\n@@ -1,7 +1,7 @@\n \"\"\"Do not allow NULL values in the text column of the comments table.\n \n Revision ID: 37f38ddc4c8d\n-Revises: 4df1fcd59050\n+Revises: 3c72757fa59e\n Create Date: 2016-09-21 19:51:04.946521\n \n \"\"\"\n@@ -12,7 +12,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '37f38ddc4c8d'\n-down_revision = '4df1fcd59050'\n+down_revision = '3c72757fa59e'\n \n \n def upgrade():\ndiff --git a/alembic/versions/4df1fcd59050_.py b/alembic/versions/4df1fcd59050_.py\n--- a/alembic/versions/4df1fcd59050_.py\n+++ b/alembic/versions/4df1fcd59050_.py\n@@ -1,7 +1,7 @@\n \"\"\"Add the new pending_signing_tag column to the releases table.\n \n Revision ID: 4df1fcd59050\n-Revises: 3c72757fa59e\n+Revises: 37f38ddc4c8d\n Create Date: 2016-09-16 18:51:19.514301\n \n \"\"\"\n@@ -11,7 +11,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '4df1fcd59050'\n-down_revision = '3c72757fa59e'\n+down_revision = '37f38ddc4c8d'\n \n \n def upgrade():\n", "issue": "The non-NULL comment text migration needs to be inserted before 4df1fcd59050\nThis migration was developed against the develop branch, but I had forgotten that there was already a migration in the develop branch that isn't on the 2.2 branch. We'll need to change both migrations so that the new one is inserted before the prior one:\n\n```\n[vagrant@localhost vagrant]$ alembic upgrade head\nWARNING:fedmsg.crypto.x509:Crypto disabled ImportError('No module named M2Crypto',)\nINFO [alembic.runtime.migration] Context impl PostgresqlImpl.\nINFO [alembic.runtime.migration] Will assume transactional DDL.\n/usr/lib/python2.7/site-packages/alembic/util/messaging.py:69: UserWarning: Revision 4df1fcd59050 referenced from 4df1fcd59050 -> 37f38ddc4c8d (head), Do not allow NULL values in the text column of the comments table. 
is not present\n warnings.warn(msg)\nTraceback (most recent call last):\n File \"/usr/bin/alembic\", line 9, in <module>\n load_entry_point('alembic==0.8.3', 'console_scripts', 'alembic')()\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 450, in main\n CommandLine(prog=prog).main(argv=argv)\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 444, in main\n self.run_cmd(cfg, options)\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 427, in run_cmd\n **dict((k, getattr(options, k)) for k in kwarg)\n File \"/usr/lib/python2.7/site-packages/alembic/command.py\", line 174, in upgrade\n script.run_env()\n File \"/usr/lib/python2.7/site-packages/alembic/script/base.py\", line 397, in run_env\n util.load_python_file(self.dir, 'env.py')\n File \"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py\", line 81, in load_python_file\n module = load_module_py(module_id, path)\n File \"/usr/lib/python2.7/site-packages/alembic/util/compat.py\", line 79, in load_module_py\n mod = imp.load_source(module_id, path, fp)\n File \"alembic/env.py\", line 72, in <module>\n run_migrations_online()\n File \"alembic/env.py\", line 65, in run_migrations_online\n context.run_migrations()\n File \"<string>\", line 8, in run_migrations\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/environment.py\", line 797, in run_migrations\n self.get_context().run_migrations(**kw)\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/migration.py\", line 303, in run_migrations\n for step in self._migrations_fn(heads, self):\n File \"/usr/lib/python2.7/site-packages/alembic/command.py\", line 163, in upgrade\n return script._upgrade_revs(revision, rev)\n File \"/usr/lib/python2.7/site-packages/alembic/script/base.py\", line 310, in _upgrade_revs\n revs = list(revs)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 610, in _iterate_revisions\n requested_lowers = self.get_revisions(lower)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 299, in get_revisions\n return sum([self.get_revisions(id_elem) for id_elem in id_], ())\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 301, in get_revisions\n resolved_id, branch_label = self._resolve_revision_number(id_)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 423, in _resolve_revision_number\n self._revision_map\n File \"/usr/lib/python2.7/site-packages/alembic/util/langhelpers.py\", line 241, in __get__\n obj.__dict__[self.__name__] = result = self.fget(obj)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 151, in _revision_map\n down_revision = map_[downrev]\nKeyError: '4df1fcd59050'\n```\n\nIt needs to be possible to vagrant up on the 2.2 branch and then advance to the develop branch.\n\n", "before_files": [{"content": "\"\"\"Add the new pending_signing_tag column to the releases table.\n\nRevision ID: 4df1fcd59050\nRevises: 3c72757fa59e\nCreate Date: 2016-09-16 18:51:19.514301\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '4df1fcd59050'\ndown_revision = '3c72757fa59e'\n\n\ndef upgrade():\n op.add_column('releases',\n sa.Column('pending_signing_tag', sa.UnicodeText(), server_default='',\n nullable=False))\n # We only used the server_default to stop the new column from being NULL. 
Let's now drop the\n # server default.\n op.alter_column('releases', 'pending_signing_tag', server_default=None)\n\n\ndef downgrade():\n op.drop_column('releases', 'pending_signing_tag')\n", "path": "alembic/versions/4df1fcd59050_.py"}, {"content": "\"\"\"Do not allow NULL values in the text column of the comments table.\n\nRevision ID: 37f38ddc4c8d\nRevises: 4df1fcd59050\nCreate Date: 2016-09-21 19:51:04.946521\n\n\"\"\"\n\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '37f38ddc4c8d'\ndown_revision = '4df1fcd59050'\n\n\ndef upgrade():\n \"\"\"\n We will need to set all existing NULL comments to \"\", then change the column to disallow NULL comments.\n \"\"\"\n # Build a fake mini version of the comments table so we can form an UPDATE statement.\n comments = sa.sql.table('comments', sa.sql.column('text', sa.UnicodeText))\n # Set existing NULL comments to \"\".\n op.execute(comments.update().where(comments.c.text==None).values({'text': op.inline_literal('')}))\n\n # Disallow new NULL comments.\n op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=False)\n\n\ndef downgrade():\n op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=True)\n", "path": "alembic/versions/37f38ddc4c8d_.py"}]}
| 2,328 | 543 |
gh_patches_debug_34075
|
rasdani/github-patches
|
git_diff
|
scikit-hep__awkward-2102
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
behaviors with decorated mixin methods cannot be pickled
### Version of Awkward Array
main
### Description and code to reproduce
The mixin machinery introduces a closure, which `pickle` can't serialise.
</issue>
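For background on why the closure breaks pickling, here is a minimal, self-contained sketch (the helper names are illustrative, not Awkward Array's): `pickle` stores functions by qualified name, so a function defined inside another function cannot be re-imported at load time, whereas a `functools.partial` around a module-level helper can.

```
import functools
import pickle


def call_transposed(func, left, right):
    # Module-level helper: picklable because it can be looked up by name.
    return func(right, left)


def make_closure(func):
    def transposed(left, right):  # local function: pickle cannot locate it
        return func(right, left)
    return transposed


def add(a, b):
    return a + b


closure = make_closure(add)
partial = functools.partial(call_transposed, add)

try:
    pickle.dumps(closure)
except (pickle.PicklingError, AttributeError) as exc:
    print("closure fails to pickle:", exc)

print("partial round-trips:", pickle.loads(pickle.dumps(partial))(1, 2))
```

This is essentially why swapping the nested `transposed` function for a partial of a module-level helper, as in the patch further down, makes the registered behavior picklable.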
<code>
[start of src/awkward/behaviors/mixins.py]
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3
4 import sys
5
6 import awkward as ak
7
8
9 def mixin_class(registry, name=None):
10 """
11 Args:
12 registry (dict): The destination behavior mapping registry. Typically,
13 this would be the global registry #ak.behavior, but one may wish
14 to register methods in an alternative way.
15 name (str): The name to assign to the behaviour class.
16
17 This decorator can be used to register a behavior mixin class.
18
19 Any inherited behaviors will automatically be made available to the decorated
20 class.
21
22 See the "Mixin decorators" section of #ak.behavior for further details.
23 """
24
25 def register(cls):
26 cls_name = cls.__name__
27 if name is None:
28 behavior_name = cls_name
29 else:
30 behavior_name = name
31
32 record = type(
33 cls_name + "Record",
34 (cls, ak.highlevel.Record),
35 {"__module__": cls.__module__},
36 )
37 setattr(sys.modules[cls.__module__], cls_name + "Record", record)
38 registry[behavior_name] = record
39 array = type(
40 cls_name + "Array",
41 (cls, ak.highlevel.Array),
42 {"__module__": cls.__module__},
43 )
44 setattr(sys.modules[cls.__module__], cls_name + "Array", array)
45 registry["*", behavior_name] = array
46 for basecls in cls.mro():
47 for method in basecls.__dict__.values():
48 if hasattr(method, "_awkward_mixin"):
49 ufunc, rhs, transpose = method._awkward_mixin
50 if rhs is None:
51 registry.setdefault((ufunc, behavior_name), method)
52 continue
53 for rhs_name in list(rhs) + [behavior_name]:
54 registry.setdefault((ufunc, behavior_name, rhs_name), method)
55 if transpose is not None and rhs_name != behavior_name:
56 registry.setdefault(
57 (ufunc, rhs_name, behavior_name), transpose
58 )
59 if basecls.__name__ in rhs:
60 rhs.add(behavior_name)
61 return cls
62
63 return register
64
65
66 def mixin_class_method(ufunc, rhs=None, *, transpose=True):
67 """
68 Args:
69 ufunc (numpy.ufunc): A universal function (or NEP18 callable) that is
70 hooked in Awkward Array, i.e. it can be the first argument of a behavior.
71 rhs (Set[type] or None): Set of right-hand side argument types, optional
72 if wrapping a unary function. The left-hand side is expected to
73 always be `self` of the parent class.
74 transpose (bool): If true, automatically create a transpose signature
75 (only makes sense for binary ufuncs).
76
77 This decorator can be used to register a mixin class method.
78
79 Using this decorator ensures that derived classes that are declared with the
80 #ak.mixin_class decorator will also have the behaviors that this class has.
81 """
82
83 def register(method):
84 if not isinstance(rhs, (set, type(None))):
85 raise ak._errors.wrap_error(
86 ValueError("expected a set of right-hand-side argument types")
87 )
88 if transpose and rhs is not None:
89
90 def transposed(left, right):
91 return method(right, left)
92
93 # make a copy of rhs, we will edit it later
94 method._awkward_mixin = (ufunc, set(rhs), transposed)
95 else:
96 method._awkward_mixin = (ufunc, rhs, None)
97 return method
98
99 return register
100
[end of src/awkward/behaviors/mixins.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/awkward/behaviors/mixins.py b/src/awkward/behaviors/mixins.py
--- a/src/awkward/behaviors/mixins.py
+++ b/src/awkward/behaviors/mixins.py
@@ -1,6 +1,7 @@
# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
+import functools
import sys
import awkward as ak
@@ -63,6 +64,10 @@
return register
+def _call_transposed(func, left, right):
+ return func(right, left)
+
+
def mixin_class_method(ufunc, rhs=None, *, transpose=True):
"""
Args:
@@ -70,7 +75,8 @@
hooked in Awkward Array, i.e. it can be the first argument of a behavior.
rhs (Set[type] or None): Set of right-hand side argument types, optional
if wrapping a unary function. The left-hand side is expected to
- always be `self` of the parent class.
+ always be `self` of the parent class. The current class is implicitly
+ included in this set.
transpose (bool): If true, automatically create a transpose signature
(only makes sense for binary ufuncs).
@@ -86,12 +92,13 @@
ValueError("expected a set of right-hand-side argument types")
)
if transpose and rhs is not None:
-
- def transposed(left, right):
- return method(right, left)
-
# make a copy of rhs, we will edit it later
- method._awkward_mixin = (ufunc, set(rhs), transposed)
+ # use partial & a module-scoped function so that this is pickleable
+ method._awkward_mixin = (
+ ufunc,
+ set(rhs),
+ functools.partial(_call_transposed, method),
+ )
else:
method._awkward_mixin = (ufunc, rhs, None)
return method
|
{"golden_diff": "diff --git a/src/awkward/behaviors/mixins.py b/src/awkward/behaviors/mixins.py\n--- a/src/awkward/behaviors/mixins.py\n+++ b/src/awkward/behaviors/mixins.py\n@@ -1,6 +1,7 @@\n # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n \n \n+import functools\n import sys\n \n import awkward as ak\n@@ -63,6 +64,10 @@\n return register\n \n \n+def _call_transposed(func, left, right):\n+ return func(right, left)\n+\n+\n def mixin_class_method(ufunc, rhs=None, *, transpose=True):\n \"\"\"\n Args:\n@@ -70,7 +75,8 @@\n hooked in Awkward Array, i.e. it can be the first argument of a behavior.\n rhs (Set[type] or None): Set of right-hand side argument types, optional\n if wrapping a unary function. The left-hand side is expected to\n- always be `self` of the parent class.\n+ always be `self` of the parent class. The current class is implicitly\n+ included in this set.\n transpose (bool): If true, automatically create a transpose signature\n (only makes sense for binary ufuncs).\n \n@@ -86,12 +92,13 @@\n ValueError(\"expected a set of right-hand-side argument types\")\n )\n if transpose and rhs is not None:\n-\n- def transposed(left, right):\n- return method(right, left)\n-\n # make a copy of rhs, we will edit it later\n- method._awkward_mixin = (ufunc, set(rhs), transposed)\n+ # use partial & a module-scoped function so that this is pickleable\n+ method._awkward_mixin = (\n+ ufunc,\n+ set(rhs),\n+ functools.partial(_call_transposed, method),\n+ )\n else:\n method._awkward_mixin = (ufunc, rhs, None)\n return method\n", "issue": "behaviors with decorated mixin methods cannot be pickled\n### Version of Awkward Array\n\nmain\n\n### Description and code to reproduce\n\nThe mixin machinery introduces a closure, which `pickle` can't serialise.\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\n\nimport sys\n\nimport awkward as ak\n\n\ndef mixin_class(registry, name=None):\n \"\"\"\n Args:\n registry (dict): The destination behavior mapping registry. 
Typically,\n this would be the global registry #ak.behavior, but one may wish\n to register methods in an alternative way.\n name (str): The name to assign to the behaviour class.\n\n This decorator can be used to register a behavior mixin class.\n\n Any inherited behaviors will automatically be made available to the decorated\n class.\n\n See the \"Mixin decorators\" section of #ak.behavior for further details.\n \"\"\"\n\n def register(cls):\n cls_name = cls.__name__\n if name is None:\n behavior_name = cls_name\n else:\n behavior_name = name\n\n record = type(\n cls_name + \"Record\",\n (cls, ak.highlevel.Record),\n {\"__module__\": cls.__module__},\n )\n setattr(sys.modules[cls.__module__], cls_name + \"Record\", record)\n registry[behavior_name] = record\n array = type(\n cls_name + \"Array\",\n (cls, ak.highlevel.Array),\n {\"__module__\": cls.__module__},\n )\n setattr(sys.modules[cls.__module__], cls_name + \"Array\", array)\n registry[\"*\", behavior_name] = array\n for basecls in cls.mro():\n for method in basecls.__dict__.values():\n if hasattr(method, \"_awkward_mixin\"):\n ufunc, rhs, transpose = method._awkward_mixin\n if rhs is None:\n registry.setdefault((ufunc, behavior_name), method)\n continue\n for rhs_name in list(rhs) + [behavior_name]:\n registry.setdefault((ufunc, behavior_name, rhs_name), method)\n if transpose is not None and rhs_name != behavior_name:\n registry.setdefault(\n (ufunc, rhs_name, behavior_name), transpose\n )\n if basecls.__name__ in rhs:\n rhs.add(behavior_name)\n return cls\n\n return register\n\n\ndef mixin_class_method(ufunc, rhs=None, *, transpose=True):\n \"\"\"\n Args:\n ufunc (numpy.ufunc): A universal function (or NEP18 callable) that is\n hooked in Awkward Array, i.e. it can be the first argument of a behavior.\n rhs (Set[type] or None): Set of right-hand side argument types, optional\n if wrapping a unary function. The left-hand side is expected to\n always be `self` of the parent class.\n transpose (bool): If true, automatically create a transpose signature\n (only makes sense for binary ufuncs).\n\n This decorator can be used to register a mixin class method.\n\n Using this decorator ensures that derived classes that are declared with the\n #ak.mixin_class decorator will also have the behaviors that this class has.\n \"\"\"\n\n def register(method):\n if not isinstance(rhs, (set, type(None))):\n raise ak._errors.wrap_error(\n ValueError(\"expected a set of right-hand-side argument types\")\n )\n if transpose and rhs is not None:\n\n def transposed(left, right):\n return method(right, left)\n\n # make a copy of rhs, we will edit it later\n method._awkward_mixin = (ufunc, set(rhs), transposed)\n else:\n method._awkward_mixin = (ufunc, rhs, None)\n return method\n\n return register\n", "path": "src/awkward/behaviors/mixins.py"}]}
| 1,562 | 460 |
gh_patches_debug_18294
|
rasdani/github-patches
|
git_diff
|
dask__distributed-6839
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix profile on msgpack 1.0.6/Windows
- Closes #8212
- Iterates on #3455
- Twin of #8214
Make sure that profile doesn't reach the point where sizeof() starts raising (at depth 140).
This is problematic with msgpack 1.0.6 on Windows because its max recursion depth fell from 512 to 256.
The remaining exception in the unit tests is fixed by #8214
</issue>
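As a generic illustration of the approach described above (capping traversal depth so a serializer with a small recursion limit never has to descend past it), and not the actual `distributed.profile` code — the depth value below is an assumption:

```
MAX_DEPTH = 100  # assumed safety margin, kept well under the serializer's limit


def truncate(obj, depth=0):
    """Replace anything nested deeper than MAX_DEPTH with a placeholder."""
    if depth >= MAX_DEPTH:
        return "..."
    if isinstance(obj, dict):
        return {k: truncate(v, depth + 1) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [truncate(v, depth + 1) for v in obj]
    return obj
```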
<code>
[start of distributed/deploy/local.py]
1 from __future__ import annotations
2
3 import logging
4 import math
5 import warnings
6
7 import toolz
8
9 from dask.system import CPU_COUNT
10 from dask.widgets import get_template
11
12 from distributed.deploy.spec import SpecCluster
13 from distributed.deploy.utils import nprocesses_nthreads
14 from distributed.nanny import Nanny
15 from distributed.scheduler import Scheduler
16 from distributed.security import Security
17 from distributed.worker import Worker
18 from distributed.worker_memory import parse_memory_limit
19
20 logger = logging.getLogger(__name__)
21
22
23 class LocalCluster(SpecCluster):
24 """Create local Scheduler and Workers
25
26 This creates a "cluster" of a scheduler and workers running on the local
27 machine.
28
29 Parameters
30 ----------
31 n_workers: int
32 Number of workers to start
33 processes: bool
34 Whether to use processes (True) or threads (False). Defaults to True, unless
35 worker_class=Worker, in which case it defaults to False.
36 threads_per_worker: int
37 Number of threads per each worker
38 scheduler_port: int
39 Port of the scheduler. 8786 by default, use 0 to choose a random port
40 silence_logs: logging level
41 Level of logs to print out to stdout. ``logging.WARN`` by default.
42 Use a falsey value like False or None for no change.
43 host: string
44 Host address on which the scheduler will listen, defaults to only localhost
45 ip: string
46 Deprecated. See ``host`` above.
47 dashboard_address: str
48 Address on which to listen for the Bokeh diagnostics server like
49 'localhost:8787' or '0.0.0.0:8787'. Defaults to ':8787'.
50 Set to ``None`` to disable the dashboard.
51 Use ':0' for a random port.
52 worker_dashboard_address: str
53 Address on which to listen for the Bokeh worker diagnostics server like
54 'localhost:8787' or '0.0.0.0:8787'. Defaults to None which disables the dashboard.
55 Use ':0' for a random port.
56 diagnostics_port: int
57 Deprecated. See dashboard_address.
58 asynchronous: bool (False by default)
59 Set to True if using this cluster within async/await functions or within
60 Tornado gen.coroutines. This should remain False for normal use.
61 blocked_handlers: List[str]
62 A list of strings specifying a blocklist of handlers to disallow on the
63 Scheduler, like ``['feed', 'run_function']``
64 service_kwargs: Dict[str, Dict]
65 Extra keywords to hand to the running services
66 security : Security or bool, optional
67 Configures communication security in this cluster. Can be a security
68 object, or True. If True, temporary self-signed credentials will
69 be created automatically.
70 protocol: str (optional)
71 Protocol to use like ``tcp://``, ``tls://``, ``inproc://``
72 This defaults to sensible choice given other keyword arguments like
73 ``processes`` and ``security``
74 interface: str (optional)
75 Network interface to use. Defaults to lo/localhost
76 worker_class: Worker
77 Worker class used to instantiate workers from. Defaults to Worker if
78 processes=False and Nanny if processes=True or omitted.
79 **worker_kwargs:
80 Extra worker arguments. Any additional keyword arguments will be passed
81 to the ``Worker`` class constructor.
82
83 Examples
84 --------
85 >>> cluster = LocalCluster() # Create a local cluster # doctest: +SKIP
86 >>> cluster # doctest: +SKIP
87 LocalCluster("127.0.0.1:8786", workers=8, threads=8)
88
89 >>> c = Client(cluster) # connect to local cluster # doctest: +SKIP
90
91 Scale the cluster to three workers
92
93 >>> cluster.scale(3) # doctest: +SKIP
94
95 Pass extra keyword arguments to Bokeh
96
97 >>> LocalCluster(service_kwargs={'dashboard': {'prefix': '/foo'}}) # doctest: +SKIP
98 """
99
100 def __init__(
101 self,
102 name=None,
103 n_workers=None,
104 threads_per_worker=None,
105 processes=None,
106 loop=None,
107 start=None,
108 host=None,
109 ip=None,
110 scheduler_port=0,
111 silence_logs=logging.WARN,
112 dashboard_address=":8787",
113 worker_dashboard_address=None,
114 diagnostics_port=None,
115 services=None,
116 worker_services=None,
117 service_kwargs=None,
118 asynchronous=False,
119 security=None,
120 protocol=None,
121 blocked_handlers=None,
122 interface=None,
123 worker_class=None,
124 scheduler_kwargs=None,
125 scheduler_sync_interval=1,
126 **worker_kwargs,
127 ):
128 if ip is not None:
129 # In the future we should warn users about this move
130 # warnings.warn("The ip keyword has been moved to host")
131 host = ip
132
133 if diagnostics_port is not None:
134 warnings.warn(
135 "diagnostics_port has been deprecated. "
136 "Please use `dashboard_address=` instead"
137 )
138 dashboard_address = diagnostics_port
139
140 if threads_per_worker == 0:
141 warnings.warn(
142 "Setting `threads_per_worker` to 0 has been deprecated. "
143 "Please set to None or to a specific int."
144 )
145 threads_per_worker = None
146
147 if "dashboard" in worker_kwargs:
148 warnings.warn(
149 "Setting `dashboard` is discouraged. "
150 "Please set `dashboard_address` to affect the scheduler (more common) "
151 "and `worker_dashboard_address` for the worker (less common)."
152 )
153
154 if processes is None:
155 processes = worker_class is None or issubclass(worker_class, Nanny)
156 if worker_class is None:
157 worker_class = Nanny if processes else Worker
158
159 self.status = None
160 self.processes = processes
161
162 if security is None:
163 # Falsey values load the default configuration
164 security = Security()
165 elif security is True:
166 # True indicates self-signed temporary credentials should be used
167 security = Security.temporary()
168 elif not isinstance(security, Security):
169 raise TypeError("security must be a Security object")
170
171 if protocol is None:
172 if host and "://" in host:
173 protocol = host.split("://")[0]
174 elif security and security.require_encryption:
175 protocol = "tls://"
176 elif not self.processes and not scheduler_port:
177 protocol = "inproc://"
178 else:
179 protocol = "tcp://"
180 if not protocol.endswith("://"):
181 protocol = protocol + "://"
182
183 if host is None and not protocol.startswith("inproc") and not interface:
184 host = "127.0.0.1"
185
186 services = services or {}
187 worker_services = worker_services or {}
188 if n_workers is None and threads_per_worker is None:
189 if processes:
190 n_workers, threads_per_worker = nprocesses_nthreads()
191 else:
192 n_workers = 1
193 threads_per_worker = CPU_COUNT
194 if n_workers is None and threads_per_worker is not None:
195 n_workers = max(1, CPU_COUNT // threads_per_worker) if processes else 1
196 if n_workers and threads_per_worker is None:
197 # Overcommit threads per worker, rather than undercommit
198 threads_per_worker = max(1, int(math.ceil(CPU_COUNT / n_workers)))
199 if n_workers and "memory_limit" not in worker_kwargs:
200 worker_kwargs["memory_limit"] = parse_memory_limit("auto", 1, n_workers)
201
202 worker_kwargs.update(
203 {
204 "host": host,
205 "nthreads": threads_per_worker,
206 "services": worker_services,
207 "dashboard_address": worker_dashboard_address,
208 "dashboard": worker_dashboard_address is not None,
209 "interface": interface,
210 "protocol": protocol,
211 "security": security,
212 "silence_logs": silence_logs,
213 }
214 )
215
216 scheduler = {
217 "cls": Scheduler,
218 "options": toolz.merge(
219 dict(
220 host=host,
221 services=services,
222 service_kwargs=service_kwargs,
223 security=security,
224 port=scheduler_port,
225 interface=interface,
226 protocol=protocol,
227 dashboard=dashboard_address is not None,
228 dashboard_address=dashboard_address,
229 blocked_handlers=blocked_handlers,
230 ),
231 scheduler_kwargs or {},
232 ),
233 }
234
235 worker = {"cls": worker_class, "options": worker_kwargs}
236 workers = {i: worker for i in range(n_workers)}
237
238 super().__init__(
239 name=name,
240 scheduler=scheduler,
241 workers=workers,
242 worker=worker,
243 loop=loop,
244 asynchronous=asynchronous,
245 silence_logs=silence_logs,
246 security=security,
247 scheduler_sync_interval=scheduler_sync_interval,
248 )
249
250 def start_worker(self, *args, **kwargs):
251 raise NotImplementedError(
252 "The `cluster.start_worker` function has been removed. "
253 "Please see the `cluster.scale` method instead."
254 )
255
256 def _repr_html_(self, cluster_status=None):
257 cluster_status = get_template("local_cluster.html.j2").render(
258 status=self.status.name,
259 processes=self.processes,
260 cluster_status=cluster_status,
261 )
262 return super()._repr_html_(cluster_status=cluster_status)
263
[end of distributed/deploy/local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/deploy/local.py b/distributed/deploy/local.py
--- a/distributed/deploy/local.py
+++ b/distributed/deploy/local.py
@@ -30,6 +30,19 @@
----------
n_workers: int
Number of workers to start
+ memory_limit: str, float, int, or None, default "auto"
+ Sets the memory limit *per worker*.
+
+ Notes regarding argument data type:
+
+ * If None or 0, no limit is applied.
+ * If "auto", the total system memory is split evenly between the workers.
+ * If a float, that fraction of the system memory is used *per worker*.
+ * If a string giving a number of bytes (like ``"1GiB"``), that amount is used *per worker*.
+ * If an int, that number of bytes is used *per worker*.
+
+ Note that the limit will only be enforced when ``processes=True``, and the limit is only
+ enforced on a best-effort basis — it's still possible for workers to exceed this limit.
processes: bool
Whether to use processes (True) or threads (False). Defaults to True, unless
worker_class=Worker, in which case it defaults to False.
|
{"golden_diff": "diff --git a/distributed/deploy/local.py b/distributed/deploy/local.py\n--- a/distributed/deploy/local.py\n+++ b/distributed/deploy/local.py\n@@ -30,6 +30,19 @@\n ----------\n n_workers: int\n Number of workers to start\n+ memory_limit: str, float, int, or None, default \"auto\"\n+ Sets the memory limit *per worker*.\n+\n+ Notes regarding argument data type:\n+\n+ * If None or 0, no limit is applied.\n+ * If \"auto\", the total system memory is split evenly between the workers.\n+ * If a float, that fraction of the system memory is used *per worker*.\n+ * If a string giving a number of bytes (like ``\"1GiB\"``), that amount is used *per worker*.\n+ * If an int, that number of bytes is used *per worker*.\n+\n+ Note that the limit will only be enforced when ``processes=True``, and the limit is only\n+ enforced on a best-effort basis \u2014 it's still possible for workers to exceed this limit.\n processes: bool\n Whether to use processes (True) or threads (False). Defaults to True, unless\n worker_class=Worker, in which case it defaults to False.\n", "issue": "Fix profile on msgpack 1.0.6/Windows\n- Closes #8212\r\n- Iterates on #3455\r\n- Twin of #8214\r\n\r\nMake sure that profile doesn't reach the point where sizeof() starts raising (at depth 140).\r\nThis is problematic with msgpack 1.0.6 on Windows because its max recursion depth fell from 512 to 256.\r\n\r\nThe remaining exception in the unit tests is fixed by #8214\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport math\nimport warnings\n\nimport toolz\n\nfrom dask.system import CPU_COUNT\nfrom dask.widgets import get_template\n\nfrom distributed.deploy.spec import SpecCluster\nfrom distributed.deploy.utils import nprocesses_nthreads\nfrom distributed.nanny import Nanny\nfrom distributed.scheduler import Scheduler\nfrom distributed.security import Security\nfrom distributed.worker import Worker\nfrom distributed.worker_memory import parse_memory_limit\n\nlogger = logging.getLogger(__name__)\n\n\nclass LocalCluster(SpecCluster):\n \"\"\"Create local Scheduler and Workers\n\n This creates a \"cluster\" of a scheduler and workers running on the local\n machine.\n\n Parameters\n ----------\n n_workers: int\n Number of workers to start\n processes: bool\n Whether to use processes (True) or threads (False). Defaults to True, unless\n worker_class=Worker, in which case it defaults to False.\n threads_per_worker: int\n Number of threads per each worker\n scheduler_port: int\n Port of the scheduler. 8786 by default, use 0 to choose a random port\n silence_logs: logging level\n Level of logs to print out to stdout. ``logging.WARN`` by default.\n Use a falsey value like False or None for no change.\n host: string\n Host address on which the scheduler will listen, defaults to only localhost\n ip: string\n Deprecated. See ``host`` above.\n dashboard_address: str\n Address on which to listen for the Bokeh diagnostics server like\n 'localhost:8787' or '0.0.0.0:8787'. Defaults to ':8787'.\n Set to ``None`` to disable the dashboard.\n Use ':0' for a random port.\n worker_dashboard_address: str\n Address on which to listen for the Bokeh worker diagnostics server like\n 'localhost:8787' or '0.0.0.0:8787'. Defaults to None which disables the dashboard.\n Use ':0' for a random port.\n diagnostics_port: int\n Deprecated. See dashboard_address.\n asynchronous: bool (False by default)\n Set to True if using this cluster within async/await functions or within\n Tornado gen.coroutines. 
This should remain False for normal use.\n blocked_handlers: List[str]\n A list of strings specifying a blocklist of handlers to disallow on the\n Scheduler, like ``['feed', 'run_function']``\n service_kwargs: Dict[str, Dict]\n Extra keywords to hand to the running services\n security : Security or bool, optional\n Configures communication security in this cluster. Can be a security\n object, or True. If True, temporary self-signed credentials will\n be created automatically.\n protocol: str (optional)\n Protocol to use like ``tcp://``, ``tls://``, ``inproc://``\n This defaults to sensible choice given other keyword arguments like\n ``processes`` and ``security``\n interface: str (optional)\n Network interface to use. Defaults to lo/localhost\n worker_class: Worker\n Worker class used to instantiate workers from. Defaults to Worker if\n processes=False and Nanny if processes=True or omitted.\n **worker_kwargs:\n Extra worker arguments. Any additional keyword arguments will be passed\n to the ``Worker`` class constructor.\n\n Examples\n --------\n >>> cluster = LocalCluster() # Create a local cluster # doctest: +SKIP\n >>> cluster # doctest: +SKIP\n LocalCluster(\"127.0.0.1:8786\", workers=8, threads=8)\n\n >>> c = Client(cluster) # connect to local cluster # doctest: +SKIP\n\n Scale the cluster to three workers\n\n >>> cluster.scale(3) # doctest: +SKIP\n\n Pass extra keyword arguments to Bokeh\n\n >>> LocalCluster(service_kwargs={'dashboard': {'prefix': '/foo'}}) # doctest: +SKIP\n \"\"\"\n\n def __init__(\n self,\n name=None,\n n_workers=None,\n threads_per_worker=None,\n processes=None,\n loop=None,\n start=None,\n host=None,\n ip=None,\n scheduler_port=0,\n silence_logs=logging.WARN,\n dashboard_address=\":8787\",\n worker_dashboard_address=None,\n diagnostics_port=None,\n services=None,\n worker_services=None,\n service_kwargs=None,\n asynchronous=False,\n security=None,\n protocol=None,\n blocked_handlers=None,\n interface=None,\n worker_class=None,\n scheduler_kwargs=None,\n scheduler_sync_interval=1,\n **worker_kwargs,\n ):\n if ip is not None:\n # In the future we should warn users about this move\n # warnings.warn(\"The ip keyword has been moved to host\")\n host = ip\n\n if diagnostics_port is not None:\n warnings.warn(\n \"diagnostics_port has been deprecated. \"\n \"Please use `dashboard_address=` instead\"\n )\n dashboard_address = diagnostics_port\n\n if threads_per_worker == 0:\n warnings.warn(\n \"Setting `threads_per_worker` to 0 has been deprecated. \"\n \"Please set to None or to a specific int.\"\n )\n threads_per_worker = None\n\n if \"dashboard\" in worker_kwargs:\n warnings.warn(\n \"Setting `dashboard` is discouraged. 
\"\n \"Please set `dashboard_address` to affect the scheduler (more common) \"\n \"and `worker_dashboard_address` for the worker (less common).\"\n )\n\n if processes is None:\n processes = worker_class is None or issubclass(worker_class, Nanny)\n if worker_class is None:\n worker_class = Nanny if processes else Worker\n\n self.status = None\n self.processes = processes\n\n if security is None:\n # Falsey values load the default configuration\n security = Security()\n elif security is True:\n # True indicates self-signed temporary credentials should be used\n security = Security.temporary()\n elif not isinstance(security, Security):\n raise TypeError(\"security must be a Security object\")\n\n if protocol is None:\n if host and \"://\" in host:\n protocol = host.split(\"://\")[0]\n elif security and security.require_encryption:\n protocol = \"tls://\"\n elif not self.processes and not scheduler_port:\n protocol = \"inproc://\"\n else:\n protocol = \"tcp://\"\n if not protocol.endswith(\"://\"):\n protocol = protocol + \"://\"\n\n if host is None and not protocol.startswith(\"inproc\") and not interface:\n host = \"127.0.0.1\"\n\n services = services or {}\n worker_services = worker_services or {}\n if n_workers is None and threads_per_worker is None:\n if processes:\n n_workers, threads_per_worker = nprocesses_nthreads()\n else:\n n_workers = 1\n threads_per_worker = CPU_COUNT\n if n_workers is None and threads_per_worker is not None:\n n_workers = max(1, CPU_COUNT // threads_per_worker) if processes else 1\n if n_workers and threads_per_worker is None:\n # Overcommit threads per worker, rather than undercommit\n threads_per_worker = max(1, int(math.ceil(CPU_COUNT / n_workers)))\n if n_workers and \"memory_limit\" not in worker_kwargs:\n worker_kwargs[\"memory_limit\"] = parse_memory_limit(\"auto\", 1, n_workers)\n\n worker_kwargs.update(\n {\n \"host\": host,\n \"nthreads\": threads_per_worker,\n \"services\": worker_services,\n \"dashboard_address\": worker_dashboard_address,\n \"dashboard\": worker_dashboard_address is not None,\n \"interface\": interface,\n \"protocol\": protocol,\n \"security\": security,\n \"silence_logs\": silence_logs,\n }\n )\n\n scheduler = {\n \"cls\": Scheduler,\n \"options\": toolz.merge(\n dict(\n host=host,\n services=services,\n service_kwargs=service_kwargs,\n security=security,\n port=scheduler_port,\n interface=interface,\n protocol=protocol,\n dashboard=dashboard_address is not None,\n dashboard_address=dashboard_address,\n blocked_handlers=blocked_handlers,\n ),\n scheduler_kwargs or {},\n ),\n }\n\n worker = {\"cls\": worker_class, \"options\": worker_kwargs}\n workers = {i: worker for i in range(n_workers)}\n\n super().__init__(\n name=name,\n scheduler=scheduler,\n workers=workers,\n worker=worker,\n loop=loop,\n asynchronous=asynchronous,\n silence_logs=silence_logs,\n security=security,\n scheduler_sync_interval=scheduler_sync_interval,\n )\n\n def start_worker(self, *args, **kwargs):\n raise NotImplementedError(\n \"The `cluster.start_worker` function has been removed. \"\n \"Please see the `cluster.scale` method instead.\"\n )\n\n def _repr_html_(self, cluster_status=None):\n cluster_status = get_template(\"local_cluster.html.j2\").render(\n status=self.status.name,\n processes=self.processes,\n cluster_status=cluster_status,\n )\n return super()._repr_html_(cluster_status=cluster_status)\n", "path": "distributed/deploy/local.py"}]}
| 3,328 | 286 |
gh_patches_debug_14398
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-3803
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
/related_project/ issue
Attempted to POST the following data:
{"relation":"1","project":"8012","related_iati_id":"23232"}
The response is 400:
`related_project: ["This field is required."]`
However, we want to allow selecting an external project:
<img width="1440" alt="Screenshot 2019-08-16 10 16 11" src="https://user-images.githubusercontent.com/1336477/63150230-effe3400-c00e-11e9-8aff-dc162686751a.png">
</issue>
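For reference, the usual Django REST Framework way to make the relation optional — so that a payload carrying only `related_iati_id` validates — is to declare the field explicitly with `required=False` and `allow_null=True`. The sketch below is simplified relative to the project's real serializer classes:

```
from rest_framework import serializers

from akvo.rsr.models import Project, RelatedProject


class RelatedProjectExampleSerializer(serializers.ModelSerializer):
    # Optional: may be omitted or null when only an external IATI id is given.
    related_project = serializers.PrimaryKeyRelatedField(
        queryset=Project.objects.all(), required=False, allow_null=True, default=None
    )

    class Meta:
        model = RelatedProject
        fields = "__all__"
```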
<code>
[start of akvo/rest/serializers/related_project.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from akvo.rsr.models import RelatedProject
9
10 from .rsr_serializer import BaseRSRSerializer
11
12 from rest_framework import serializers
13
14
15 class RelatedProjectRawSerializer(BaseRSRSerializer):
16
17 class Meta:
18 model = RelatedProject
19 fields = '__all__'
20
21
22 class RelatedProjectSerializer(RelatedProjectRawSerializer):
23
24 related_project_name = serializers.ReadOnlyField(source='related_project.title')
25 related_project_show_link = serializers.ReadOnlyField()
26 relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')
27
[end of akvo/rest/serializers/related_project.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rest/serializers/related_project.py b/akvo/rest/serializers/related_project.py
--- a/akvo/rest/serializers/related_project.py
+++ b/akvo/rest/serializers/related_project.py
@@ -5,7 +5,7 @@
# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
-from akvo.rsr.models import RelatedProject
+from akvo.rsr.models import RelatedProject, Project
from .rsr_serializer import BaseRSRSerializer
@@ -14,6 +14,9 @@
class RelatedProjectRawSerializer(BaseRSRSerializer):
+ related_project = serializers.PrimaryKeyRelatedField(
+ allow_null=True, queryset=Project.objects.all(), required=False, default=None)
+
class Meta:
model = RelatedProject
fields = '__all__'
|
{"golden_diff": "diff --git a/akvo/rest/serializers/related_project.py b/akvo/rest/serializers/related_project.py\n--- a/akvo/rest/serializers/related_project.py\n+++ b/akvo/rest/serializers/related_project.py\n@@ -5,7 +5,7 @@\n # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n \n \n-from akvo.rsr.models import RelatedProject\n+from akvo.rsr.models import RelatedProject, Project\n \n from .rsr_serializer import BaseRSRSerializer\n \n@@ -14,6 +14,9 @@\n \n class RelatedProjectRawSerializer(BaseRSRSerializer):\n \n+ related_project = serializers.PrimaryKeyRelatedField(\n+ allow_null=True, queryset=Project.objects.all(), required=False, default=None)\n+\n class Meta:\n model = RelatedProject\n fields = '__all__'\n", "issue": "/related_project/ issue\nAttempted to POST the following data:\r\n{\"relation\":\"1\",\"project\":\"8012\",\"related_iati_id\":\"23232\"}\r\n\r\nThe response is 400:\r\n`related_project: [\"This field is required.\"]`\r\n\r\nHowever, we want to allow a selection of external project:\r\n<img width=\"1440\" alt=\"Screenshot 2019-08-16 10 16 11\" src=\"https://user-images.githubusercontent.com/1336477/63150230-effe3400-c00e-11e9-8aff-dc162686751a.png\">\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import RelatedProject\n\nfrom .rsr_serializer import BaseRSRSerializer\n\nfrom rest_framework import serializers\n\n\nclass RelatedProjectRawSerializer(BaseRSRSerializer):\n\n class Meta:\n model = RelatedProject\n fields = '__all__'\n\n\nclass RelatedProjectSerializer(RelatedProjectRawSerializer):\n\n related_project_name = serializers.ReadOnlyField(source='related_project.title')\n related_project_show_link = serializers.ReadOnlyField()\n relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')\n", "path": "akvo/rest/serializers/related_project.py"}]}
| 932 | 193 |
gh_patches_debug_15838
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-4730
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AppRegistryNotReady("Apps aren't loaded yet.")
### After updating wagtail from 2.1 to 2.2 I get an AppRegistryNotReady error when migrating.
Removing **from .sitemap_generator import Sitemap**
from **wagtail/contrib/sitemaps/__init__.py**
fixed the problem.
Maybe it has something to do with the User model; I use a custom one.
</issue>
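The usual remedy for `AppRegistryNotReady` is to avoid touching models at module import time and to defer the import into the function that needs it. A generic sketch of the pattern (not the exact Wagtail code):

```
# This module may be imported while Django is still loading apps, so it makes
# no model imports at module level.


def get_default_site():
    # Deferred import: runs only when called, after django.setup() has
    # populated the app registry.
    from wagtail.core.models import Site
    return Site.objects.select_related("root_page").get(is_default_site=True)
```

Moving the `Site` import inside `get_wagtail_site()` keeps `wagtail.contrib.sitemaps` importable before the app registry is ready, which is what the patch below does.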
<code>
[start of wagtail/contrib/sitemaps/sitemap_generator.py]
1 import warnings
2
3 from django.contrib.sitemaps import Sitemap as DjangoSitemap
4
5 from wagtail.core.models import Site
6 from wagtail.core.utils import accepts_kwarg
7 from wagtail.utils.deprecation import RemovedInWagtail24Warning
8
9
10 class Sitemap(DjangoSitemap):
11
12 def __init__(self, request=None):
13 self.request = request
14
15 def location(self, obj):
16 return obj.get_full_url(self.request)
17
18 def lastmod(self, obj):
19 # fall back on latest_revision_created_at if last_published_at is null
20 # (for backwards compatibility from before last_published_at was added)
21 return (obj.last_published_at or obj.latest_revision_created_at)
22
23 def get_wagtail_site(self):
24 site = getattr(self.request, 'site', None)
25 if site is None:
26 return Site.objects.select_related(
27 'root_page'
28 ).get(is_default_site=True)
29 return site
30
31 def items(self):
32 return (
33 self.get_wagtail_site()
34 .root_page
35 .get_descendants(inclusive=True)
36 .live()
37 .public()
38 .order_by('path')
39 .specific())
40
41 def _urls(self, page, protocol, domain):
42 urls = []
43 last_mods = set()
44
45 for item in self.paginator.page(page).object_list:
46
47 if not accepts_kwarg(item.get_sitemap_urls, 'request'):
48 warnings.warn(
49 "%s.get_sitemap_urls() must be updated to accept an optional "
50 "'request' keyword argument" % type(item).__name__,
51 category=RemovedInWagtail24Warning)
52
53 url_info_items = item.get_sitemap_urls()
54 else:
55 url_info_items = item.get_sitemap_urls(self.request)
56
57 for url_info in url_info_items:
58 urls.append(url_info)
59 last_mods.add(url_info.get('lastmod'))
60
61 # last_mods might be empty if the whole site is private
62 if last_mods and None not in last_mods:
63 self.latest_lastmod = max(last_mods)
64 return urls
65
[end of wagtail/contrib/sitemaps/sitemap_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/contrib/sitemaps/sitemap_generator.py b/wagtail/contrib/sitemaps/sitemap_generator.py
--- a/wagtail/contrib/sitemaps/sitemap_generator.py
+++ b/wagtail/contrib/sitemaps/sitemap_generator.py
@@ -2,7 +2,6 @@
from django.contrib.sitemaps import Sitemap as DjangoSitemap
-from wagtail.core.models import Site
from wagtail.core.utils import accepts_kwarg
from wagtail.utils.deprecation import RemovedInWagtail24Warning
@@ -23,6 +22,7 @@
def get_wagtail_site(self):
site = getattr(self.request, 'site', None)
if site is None:
+ from wagtail.core.models import Site
return Site.objects.select_related(
'root_page'
).get(is_default_site=True)
|
{"golden_diff": "diff --git a/wagtail/contrib/sitemaps/sitemap_generator.py b/wagtail/contrib/sitemaps/sitemap_generator.py\n--- a/wagtail/contrib/sitemaps/sitemap_generator.py\n+++ b/wagtail/contrib/sitemaps/sitemap_generator.py\n@@ -2,7 +2,6 @@\n \n from django.contrib.sitemaps import Sitemap as DjangoSitemap\n \n-from wagtail.core.models import Site\n from wagtail.core.utils import accepts_kwarg\n from wagtail.utils.deprecation import RemovedInWagtail24Warning\n \n@@ -23,6 +22,7 @@\n def get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n+ from wagtail.core.models import Site\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n", "issue": "AppRegistryNotReady(\"Apps aren't loaded yet.\")\n### After updating wagtail from 2.1 to 2.2 I get an AppRegistryNotReady error when migrating.\r\n\r\nRemoving **from .sitemap_generator import Sitemap**\r\nfrom **wagtail/contrib/sitemaps/__init__.py**\r\nfixed the problem.\r\n\r\nMaybe it has something to do with the User model; I use a custom one.\n", "before_files": [{"content": "import warnings\n\nfrom django.contrib.sitemaps import Sitemap as DjangoSitemap\n\nfrom wagtail.core.models import Site\nfrom wagtail.core.utils import accepts_kwarg\nfrom wagtail.utils.deprecation import RemovedInWagtail24Warning\n\n\nclass Sitemap(DjangoSitemap):\n\n def __init__(self, request=None):\n self.request = request\n\n def location(self, obj):\n return obj.get_full_url(self.request)\n\n def lastmod(self, obj):\n # fall back on latest_revision_created_at if last_published_at is null\n # (for backwards compatibility from before last_published_at was added)\n return (obj.last_published_at or obj.latest_revision_created_at)\n\n def get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n return site\n\n def items(self):\n return (\n self.get_wagtail_site()\n .root_page\n .get_descendants(inclusive=True)\n .live()\n .public()\n .order_by('path')\n .specific())\n\n def _urls(self, page, protocol, domain):\n urls = []\n last_mods = set()\n\n for item in self.paginator.page(page).object_list:\n\n if not accepts_kwarg(item.get_sitemap_urls, 'request'):\n warnings.warn(\n \"%s.get_sitemap_urls() must be updated to accept an optional \"\n \"'request' keyword argument\" % type(item).__name__,\n category=RemovedInWagtail24Warning)\n\n url_info_items = item.get_sitemap_urls()\n else:\n url_info_items = item.get_sitemap_urls(self.request)\n\n for url_info in url_info_items:\n urls.append(url_info)\n last_mods.add(url_info.get('lastmod'))\n\n # last_mods might be empty if the whole site is private\n if last_mods and None not in last_mods:\n self.latest_lastmod = max(last_mods)\n return urls\n", "path": "wagtail/contrib/sitemaps/sitemap_generator.py"}]}
| 1,201 | 188 |
gh_patches_debug_14105
|
rasdani/github-patches
|
git_diff
|
interactions-py__interactions.py-1169
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Cannot pop from Cache during dispatch.
### Describe the bug.
When running functions that lead to delete events, for example `channel.purge`, the cache encounters a KeyError and kills the whole process.
This is caused by the implementation of #482
### List the steps.
n/A
### What you expected.
n/A
### What you saw.
```
Websocket have raised an exception, closing.
Traceback (most recent call last):
File "C:\Users\\Desktop\PycharmProjects\library\interactions\client\bot.py", line 440, in _login
await self._websocket.run()
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 279, in run
await self._handle_stream(msg)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 332, in _handle_stream
self._dispatch_event(event, data)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 562, in _dispatch_event
_message_cache.pop(message_id)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\cache.py", line 131, in pop
return self.values.pop(key, default)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\utils\dict_caches.py", line 39, in __getitem__
self.move_to_end(key)
KeyError: Snowflake(1040316644695756912)
Process finished with exit code 0
```
### What version of the library did you use?
unstable
### Version specification
The unstable version
### Code of Conduct
- [X] I agree to follow the contribution requirements.
</issue>
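As a rough illustration of the failure mode described above (a trimmed-down stand-in for the `LRUDict` in the listing below, not the shipped class; whether `OrderedDict.pop` routes through an overridden `__getitem__` depends on the interpreter, but the traceback shows that it does here):

```python
from collections import OrderedDict


class LRUDict(OrderedDict):
    """Minimal stand-in for the LRUDict shown in the code listing below."""

    def __getitem__(self, key):
        self.move_to_end(key)  # raises KeyError when `key` is absent
        return super().__getitem__(key)


cache = LRUDict(a=1)
cache["a"]  # fine: an existing key is simply marked as most recently used

try:
    cache["missing"]  # the lookup that the traceback shows pop() reaching
except KeyError:
    print("move_to_end raised before any default could be returned")
```

Because `move_to_end` itself raises on a missing key, any `pop(key, default)` call that gets dispatched through this `__getitem__` surfaces the `KeyError` instead of returning the default, which is what the gateway hit while dispatching the delete events from `channel.purge`. The accompanying patch sidesteps this by giving `LRUDict` its own `pop` that checks membership before touching the recency order.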
<code>
[start of interactions/utils/dict_caches.py]
1 from collections import OrderedDict
2 from typing import Generic, TypeVar
3
4 __all__ = ("FIFODict", "LRUDict")
5
6 _KT = TypeVar("_KT")
7 _VT = TypeVar("_VT")
8
9
10 class FIFODict(OrderedDict, Generic[_KT, _VT]):
11 """A dictionary that removes the old keys if over the item limit"""
12
13 def __init__(self, *args, max_items: int = float("inf"), **kwargs):
14 if max_items < 0:
15 raise RuntimeError("You cannot set max_items to negative numbers.")
16
17 super().__init__(*args, **kwargs)
18 self._max_items = max_items
19
20 def __setitem__(self, key: _KT, value: _VT):
21 super().__setitem__(key, value)
22
23 # Prevent buildup over time
24 while len(self) > self._max_items:
25 del self[next(iter(self))]
26
27
28 class LRUDict(OrderedDict, Generic[_KT, _VT]):
29 """A dictionary that removes the value that was the least recently used if over the item limit"""
30
31 def __init__(self, *args, max_items: int = float("inf"), **kwargs):
32 if max_items < 0:
33 raise RuntimeError("You cannot set max_items to negative numbers.")
34
35 super().__init__(*args, **kwargs)
36 self._max_items = max_items
37
38 def __getitem__(self, key: _KT) -> _VT:
39 self.move_to_end(key)
40 return super().__getitem__(key)
41
42 def __setitem__(self, key: _KT, value: _VT):
43 super().__setitem__(key, value)
44
45 # Prevent buildup over time
46 while len(self) > self._max_items:
47 del self[next(iter(self))]
48
[end of interactions/utils/dict_caches.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/interactions/utils/dict_caches.py b/interactions/utils/dict_caches.py
--- a/interactions/utils/dict_caches.py
+++ b/interactions/utils/dict_caches.py
@@ -1,6 +1,8 @@
from collections import OrderedDict
from typing import Generic, TypeVar
+from .missing import MISSING
+
__all__ = ("FIFODict", "LRUDict")
_KT = TypeVar("_KT")
@@ -45,3 +47,14 @@
# Prevent buildup over time
while len(self) > self._max_items:
del self[next(iter(self))]
+
+ __marker = object()
+
+ def pop(self, key: _KT, default: _VT = __marker) -> _VT:
+ if key in self:
+ result = self[key]
+ del self[key]
+ return result
+ if default is MISSING:
+ raise KeyError(key)
+ return default
|
{"golden_diff": "diff --git a/interactions/utils/dict_caches.py b/interactions/utils/dict_caches.py\n--- a/interactions/utils/dict_caches.py\n+++ b/interactions/utils/dict_caches.py\n@@ -1,6 +1,8 @@\n from collections import OrderedDict\n from typing import Generic, TypeVar\n \n+from .missing import MISSING\n+\n __all__ = (\"FIFODict\", \"LRUDict\")\n \n _KT = TypeVar(\"_KT\")\n@@ -45,3 +47,14 @@\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n+\n+ __marker = object()\n+\n+ def pop(self, key: _KT, default: _VT = __marker) -> _VT:\n+ if key in self:\n+ result = self[key]\n+ del self[key]\n+ return result\n+ if default is MISSING:\n+ raise KeyError(key)\n+ return default\n", "issue": "[BUG] Cannot pop from Cache during dispatch.\n### Describe the bug.\r\n\r\nWhen running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.\r\n\r\nThis is caused by the implementation of #482\r\n\r\n### List the steps.\r\n\r\nn/A\r\n\r\n### What you expected.\r\n\r\nn/A\r\n\r\n### What you saw.\r\n```\r\nWebsocket have raised an exception, closing.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\client\\bot.py\", line 440, in _login\r\n await self._websocket.run()\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 279, in run\r\n await self._handle_stream(msg)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 332, in _handle_stream\r\n self._dispatch_event(event, data)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 562, in _dispatch_event\r\n _message_cache.pop(message_id)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\cache.py\", line 131, in pop\r\n return self.values.pop(key, default)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\utils\\dict_caches.py\", line 39, in __getitem__\r\n self.move_to_end(key)\r\nKeyError: Snowflake(1040316644695756912)\r\n\r\nProcess finished with exit code 0\r\n```\r\n\r\n### What version of the library did you use?\r\n\r\nunstable\r\n\r\n### Version specification\r\n\r\nThe unstable unstable version\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the contribution requirements.\n[BUG] Cannot pop from Cache during dispatch.\n### Describe the bug.\r\n\r\nWhen running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.\r\n\r\nThis is caused by the implementation of #482\r\n\r\n### List the steps.\r\n\r\nn/A\r\n\r\n### What you expected.\r\n\r\nn/A\r\n\r\n### What you saw.\r\n```\r\nWebsocket have raised an exception, closing.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\client\\bot.py\", line 440, in _login\r\n await self._websocket.run()\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 279, in run\r\n await self._handle_stream(msg)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 332, in _handle_stream\r\n self._dispatch_event(event, data)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 562, in _dispatch_event\r\n _message_cache.pop(message_id)\r\n File 
\"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\cache.py\", line 131, in pop\r\n return self.values.pop(key, default)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\utils\\dict_caches.py\", line 39, in __getitem__\r\n self.move_to_end(key)\r\nKeyError: Snowflake(1040316644695756912)\r\n\r\nProcess finished with exit code 0\r\n```\r\n\r\n### What version of the library did you use?\r\n\r\nunstable\r\n\r\n### Version specification\r\n\r\nThe unstable unstable version\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the contribution requirements.\n", "before_files": [{"content": "from collections import OrderedDict\nfrom typing import Generic, TypeVar\n\n__all__ = (\"FIFODict\", \"LRUDict\")\n\n_KT = TypeVar(\"_KT\")\n_VT = TypeVar(\"_VT\")\n\n\nclass FIFODict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the old keys if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n\n\nclass LRUDict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the value that was the least recently used if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __getitem__(self, key: _KT) -> _VT:\n self.move_to_end(key)\n return super().__getitem__(key)\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n", "path": "interactions/utils/dict_caches.py"}]}
| 1,856 | 216 |
gh_patches_debug_9077
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-nlp-760
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Simplify docstring example of `TokenAndPositionEmbedding`
The current [docstring example](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/layers/token_and_position_embedding.py#L45) of `TokenAndPositionEmbedding` is a bit more complicated than needed and should better match newer examples like those in [`keras_nlp.models.BertBackbone`](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/models/bert/bert_backbone.py#L67).
Make the following changes to the docstring example:
* Create a real input of the correct shape using `tf.ones`
* Inline constant values like `"seq_length"` directly into the function calls
Be sure to check that your example runs on colab or your local machine!
</issue>
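Concretely, the two bullet points above amount to a docstring example along these lines (a sketch only; the sizes 10_000, 50, and 128 are illustrative placeholder values, and `tensorflow`/`keras_nlp` are assumed to be importable):

```python
import tensorflow as tf

import keras_nlp

# A real dummy input of token ids: shape (batch_size=1, sequence_length=50).
inputs = tf.ones(shape=(1, 50), dtype=tf.int64)
embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
    vocabulary_size=10_000,
    sequence_length=50,
    embedding_dim=128,
)
outputs = embedding_layer(inputs)  # summed token + position embeddings, shape (1, 50, 128)
```

Inlining the constants keeps the snippet copy-pasteable, and `tf.ones` gives readers something they can run without first constructing a `keras.Input` graph.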
<code>
[start of keras_nlp/layers/token_and_position_embedding.py]
1 # Copyright 2023 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Creates an Embedding Layer and adds Positional Embeddings"""
16
17 from tensorflow import keras
18
19 import keras_nlp.layers
20 from keras_nlp.utils.keras_utils import clone_initializer
21
22
23 @keras.utils.register_keras_serializable(package="keras_nlp")
24 class TokenAndPositionEmbedding(keras.layers.Layer):
25 """A layer which sums a token and position embedding.
26
27 Token and position embeddings are ways of representing words and their order
28 in a sentence. This layer creates a `keras.layers.Embedding` token embedding
29 and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their
30 output when called. This layer assumes that the last dimension in the input
31 corresponds to the sequence dimension.
32
33 Args:
34 vocabulary_size: The size of the vocabulary.
35 sequence_length: The maximum length of input sequence
36 embedding_dim: The output dimension of the embedding layer
37 embeddings_initializer: The initializer to use for the Embedding
38 Layers
39 mask_zero: Boolean, whether or not the input value 0 is a special
40 "padding" value that should be masked out.
41 This is useful when using recurrent layers which may take variable
42 length input. If this is True, then all subsequent layers in the
43 model need to support masking or an exception will be raised.
44 If mask_zero` is set to True, as a consequence, index 0 cannot be
45 used in the vocabulary
46 (input_dim should equal size of vocabulary + 1).
47
48 Examples:
49 ```python
50 seq_length = 50
51 vocab_size = 5000
52 embed_dim = 128
53 inputs = keras.Input(shape=(seq_length,))
54 embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
55 vocabulary_size=vocab_size,
56 sequence_length=seq_length,
57 embedding_dim=embed_dim,
58 )
59 outputs = embedding_layer(inputs)
60 ```
61 """
62
63 def __init__(
64 self,
65 vocabulary_size,
66 sequence_length,
67 embedding_dim,
68 embeddings_initializer="glorot_uniform",
69 mask_zero=False,
70 **kwargs
71 ):
72 super().__init__(**kwargs)
73 if vocabulary_size is None:
74 raise ValueError(
75 "`vocabulary_size` must be an Integer, received `None`."
76 )
77 if sequence_length is None:
78 raise ValueError(
79 "`sequence_length` must be an Integer, received `None`."
80 )
81 if embedding_dim is None:
82 raise ValueError(
83 "`embedding_dim` must be an Integer, received `None`."
84 )
85 self.vocabulary_size = int(vocabulary_size)
86 self.sequence_length = int(sequence_length)
87 self.embedding_dim = int(embedding_dim)
88 self.embeddings_initializer = keras.initializers.get(
89 embeddings_initializer
90 )
91 self.token_embedding = keras.layers.Embedding(
92 vocabulary_size,
93 embedding_dim,
94 embeddings_initializer=clone_initializer(
95 self.embeddings_initializer
96 ),
97 mask_zero=mask_zero,
98 name="token_embedding"
99 + str(keras.backend.get_uid("token_embedding")),
100 )
101 self.position_embedding = keras_nlp.layers.PositionEmbedding(
102 sequence_length=sequence_length,
103 initializer=clone_initializer(self.embeddings_initializer),
104 name="position_embedding"
105 + str(keras.backend.get_uid("position_embedding")),
106 )
107 self.supports_masking = self.token_embedding.supports_masking
108
109 def get_config(self):
110 config = super().get_config()
111 config.update(
112 {
113 "vocabulary_size": self.vocabulary_size,
114 "sequence_length": self.sequence_length,
115 "embedding_dim": self.embedding_dim,
116 "embeddings_initializer": keras.initializers.serialize(
117 self.embeddings_initializer
118 ),
119 "mask_zero": self.token_embedding.mask_zero,
120 },
121 )
122 return config
123
124 def call(self, inputs):
125 embedded_tokens = self.token_embedding(inputs)
126 embedded_positions = self.position_embedding(embedded_tokens)
127 outputs = embedded_tokens + embedded_positions
128 return outputs
129
130 def compute_mask(self, inputs, mask=None):
131 return self.token_embedding.compute_mask(inputs, mask=mask)
132
[end of keras_nlp/layers/token_and_position_embedding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py
--- a/keras_nlp/layers/token_and_position_embedding.py
+++ b/keras_nlp/layers/token_and_position_embedding.py
@@ -47,14 +47,11 @@
Examples:
```python
- seq_length = 50
- vocab_size = 5000
- embed_dim = 128
- inputs = keras.Input(shape=(seq_length,))
+ inputs = tf.ones(shape=(1, 50), dtype=tf.int64)
embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
- vocabulary_size=vocab_size,
- sequence_length=seq_length,
- embedding_dim=embed_dim,
+ vocabulary_size=10_000,
+ sequence_length=50,
+ embedding_dim=128,
)
outputs = embedding_layer(inputs)
```
|
{"golden_diff": "diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py\n--- a/keras_nlp/layers/token_and_position_embedding.py\n+++ b/keras_nlp/layers/token_and_position_embedding.py\n@@ -47,14 +47,11 @@\n \n Examples:\n ```python\n- seq_length = 50\n- vocab_size = 5000\n- embed_dim = 128\n- inputs = keras.Input(shape=(seq_length,))\n+ inputs = tf.ones(shape=(1, 50), dtype=tf.int64)\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n- vocabulary_size=vocab_size,\n- sequence_length=seq_length,\n- embedding_dim=embed_dim,\n+ vocabulary_size=10_000,\n+ sequence_length=50,\n+ embedding_dim=128,\n )\n outputs = embedding_layer(inputs)\n ```\n", "issue": "Simplify docstring example of `TokenAndPositionEmbedding`\nThe current [docstring example](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/layers/token_and_position_embedding.py#L45) of `TokenAndPositionEmbedding` is a bit more complicated than needed and should better match newer examples like those in [`keras_nlp.models.BertBackbone`](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/models/bert/bert_backbone.py#L67).\r\n\r\nMake the following changes to the docstring example:\r\n* Create a real input of the correct shape using `tf.ones`\r\n* Inline constant values like `\"seq_length\"` directly into the function calls\r\n\r\nBe sure to check that your example runs on colab or your local machine!\n", "before_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Creates an Embedding Layer and adds Positional Embeddings\"\"\"\n\nfrom tensorflow import keras\n\nimport keras_nlp.layers\nfrom keras_nlp.utils.keras_utils import clone_initializer\n\n\[email protected]_keras_serializable(package=\"keras_nlp\")\nclass TokenAndPositionEmbedding(keras.layers.Layer):\n \"\"\"A layer which sums a token and position embedding.\n\n Token and position embeddings are ways of representing words and their order\n in a sentence. This layer creates a `keras.layers.Embedding` token embedding\n and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their\n output when called. This layer assumes that the last dimension in the input\n corresponds to the sequence dimension.\n\n Args:\n vocabulary_size: The size of the vocabulary.\n sequence_length: The maximum length of input sequence\n embedding_dim: The output dimension of the embedding layer\n embeddings_initializer: The initializer to use for the Embedding\n Layers\n mask_zero: Boolean, whether or not the input value 0 is a special\n \"padding\" value that should be masked out.\n This is useful when using recurrent layers which may take variable\n length input. 
If this is True, then all subsequent layers in the\n model need to support masking or an exception will be raised.\n If mask_zero` is set to True, as a consequence, index 0 cannot be\n used in the vocabulary\n (input_dim should equal size of vocabulary + 1).\n\n Examples:\n ```python\n seq_length = 50\n vocab_size = 5000\n embed_dim = 128\n inputs = keras.Input(shape=(seq_length,))\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n vocabulary_size=vocab_size,\n sequence_length=seq_length,\n embedding_dim=embed_dim,\n )\n outputs = embedding_layer(inputs)\n ```\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n sequence_length,\n embedding_dim,\n embeddings_initializer=\"glorot_uniform\",\n mask_zero=False,\n **kwargs\n ):\n super().__init__(**kwargs)\n if vocabulary_size is None:\n raise ValueError(\n \"`vocabulary_size` must be an Integer, received `None`.\"\n )\n if sequence_length is None:\n raise ValueError(\n \"`sequence_length` must be an Integer, received `None`.\"\n )\n if embedding_dim is None:\n raise ValueError(\n \"`embedding_dim` must be an Integer, received `None`.\"\n )\n self.vocabulary_size = int(vocabulary_size)\n self.sequence_length = int(sequence_length)\n self.embedding_dim = int(embedding_dim)\n self.embeddings_initializer = keras.initializers.get(\n embeddings_initializer\n )\n self.token_embedding = keras.layers.Embedding(\n vocabulary_size,\n embedding_dim,\n embeddings_initializer=clone_initializer(\n self.embeddings_initializer\n ),\n mask_zero=mask_zero,\n name=\"token_embedding\"\n + str(keras.backend.get_uid(\"token_embedding\")),\n )\n self.position_embedding = keras_nlp.layers.PositionEmbedding(\n sequence_length=sequence_length,\n initializer=clone_initializer(self.embeddings_initializer),\n name=\"position_embedding\"\n + str(keras.backend.get_uid(\"position_embedding\")),\n )\n self.supports_masking = self.token_embedding.supports_masking\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"sequence_length\": self.sequence_length,\n \"embedding_dim\": self.embedding_dim,\n \"embeddings_initializer\": keras.initializers.serialize(\n self.embeddings_initializer\n ),\n \"mask_zero\": self.token_embedding.mask_zero,\n },\n )\n return config\n\n def call(self, inputs):\n embedded_tokens = self.token_embedding(inputs)\n embedded_positions = self.position_embedding(embedded_tokens)\n outputs = embedded_tokens + embedded_positions\n return outputs\n\n def compute_mask(self, inputs, mask=None):\n return self.token_embedding.compute_mask(inputs, mask=mask)\n", "path": "keras_nlp/layers/token_and_position_embedding.py"}]}
| 2,090 | 221 |
gh_patches_debug_41055
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-7897
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow Chat Generators to connect to Answer Builder
**Is your feature request related to a problem? Please describe.**
There is no current way for a `ChatGenerator` to directly connect to `AnswerBuilder()` despite similar functionality to a regular Generator.
**Describe the solution you'd like**
We can extract the text from a `ChatMessage` and use it as a `str` as expected in the code for AnswerBuilder.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
ChatGenerators that rely on non-OpenAI models require a specific chat format to be applied to properly differentiate between system/user/assistant messages. This is done manually (and is prone to errors) in regular generators. This is an example I pulled from a Haystack notebook:
```
prompt_template = """
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Using the information contained in the context, give a comprehensive answer to the question.
If the answer cannot be deduced from the context, do not give an answer.
Context:
{% for doc in documents %}
{{ doc.content }} URL:{{ doc.meta['url'] }}
{% endfor %};
Question: {{query}}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""
prompt_builder = PromptBuilder(template=prompt_template)
```
This can be accomplished by using a `ChatGenerator` as follows:
```
system_message = ChatMessage.from_system(
"""
Read the context provided and answer the question if possible.If you can not form an answer from the context, reply with "Nah".
Context:
{% for doc in documents %}
{{ doc.content }}
{% endfor %};
"""
)
user_message = ChatMessage.from_user("query: {{query}}")
assistent_message = ChatMessage.from_assistant("Answer: ")
```
We can then add the output to an `AnswerBuilder` to complete a pipeline ensuring that the proper template is applied.
</issue>
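A minimal sketch of the extraction step being proposed here, normalising `ChatGenerator` replies to plain strings before the existing `AnswerBuilder` logic runs (the `content` attribute and the import path follow the `ChatMessage` dataclass used by this codebase; the helper function itself is hypothetical):

```python
from typing import List, Union

from haystack.dataclasses.chat_message import ChatMessage


def to_reply_strings(replies: Union[List[str], List[ChatMessage]]) -> List[str]:
    """Return the text of each reply, whether it is a str or a ChatMessage."""
    return [r.content if isinstance(r, ChatMessage) else r for r in replies]


replies = [ChatMessage.from_assistant("Answer: Paris")]
print(to_reply_strings(replies))  # ['Answer: Paris']
```

With that in place, `AnswerBuilder.run` can accept either kind of reply and keep its regex-based answer and reference extraction unchanged.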
<code>
[start of haystack/components/builders/answer_builder.py]
1 # SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>
2 #
3 # SPDX-License-Identifier: Apache-2.0
4
5 import re
6 from typing import Any, Dict, List, Optional
7
8 from haystack import Document, GeneratedAnswer, component, logging
9
10 logger = logging.getLogger(__name__)
11
12
13 @component
14 class AnswerBuilder:
15 """
16 Takes a query and the replies a Generator returns as input and parses them into GeneratedAnswer objects.
17
18 Optionally, it also takes Documents and metadata from the Generator as inputs to enrich the GeneratedAnswer objects.
19
20 Usage example:
21 ```python
22 from haystack.components.builders import AnswerBuilder
23
24 builder = AnswerBuilder(pattern="Answer: (.*)")
25 builder.run(query="What's the answer?", replies=["This is an argument. Answer: This is the answer."])
26 ```
27 """
28
29 def __init__(self, pattern: Optional[str] = None, reference_pattern: Optional[str] = None):
30 """
31 Creates an instance of the AnswerBuilder component.
32
33 :param pattern:
34 The regular expression pattern to use to extract the answer text from the generator output.
35 If not specified, the whole string is used as the answer. The regular expression can have at
36 most one capture group. If a capture group is present, the text matched by the capture group
37 is used as the answer. If no capture group is present, the whole match is used as the answer.
38 Examples:
39 `[^\\n]+$` finds "this is an answer" in a string "this is an argument.\\nthis is an answer".
40 `Answer: (.*)` finds "this is an answer" in a string "this is an argument. Answer: this is an answer".
41
42 :param reference_pattern:
43 The regular expression pattern to use for parsing the document references.
44 We assume that references are specified as indices of the input documents and that
45 indices start at 1.
46 Example: `\\[(\\d+)\\]` finds "1" in a string "this is an answer[1]".
47 If not specified, no parsing is done, and all documents are referenced.
48 """
49 if pattern:
50 AnswerBuilder._check_num_groups_in_regex(pattern)
51
52 self.pattern = pattern
53 self.reference_pattern = reference_pattern
54
55 @component.output_types(answers=List[GeneratedAnswer])
56 def run(
57 self,
58 query: str,
59 replies: List[str],
60 meta: Optional[List[Dict[str, Any]]] = None,
61 documents: Optional[List[Document]] = None,
62 pattern: Optional[str] = None,
63 reference_pattern: Optional[str] = None,
64 ):
65 """
66 Turns the output of a Generator into `Answer` objects using regular expressions.
67
68 :param query:
69 The query used in the prompts for the Generator.
70 :param replies:
71 The output of the Generator.
72 :param meta:
73 The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.
74 :param documents:
75 The documents used as input to the Generator. If `documents` are specified, they are added to the `Answer`
76 objects. If both `documents` and `reference_pattern` are specified, the documents referenced in the
77 Generator output are extracted from the input documents and added to the `Answer` objects.
78 :param pattern:
79 The regular expression pattern to use to extract the answer text from the generator output.
80 If not specified, the whole string is used as the answer. The regular expression can have at
81 most one capture group. If a capture group is present, the text matched by the capture group
82 is used as the answer. If no capture group is present, the whole match is used as the answer.
83 Examples:
84 `[^\\n]+$` finds "this is an answer" in a string "this is an argument.\\nthis is an answer".
85 `Answer: (.*)` finds "this is an answer" in a string
86 "this is an argument. Answer: this is an answer".
87 :param reference_pattern:
88 The regular expression pattern to use for parsing the document references.
89 We assume that references are specified as indices of the input documents and that indices start at 1.
90 Example: `\\[(\\d+)\\]` finds "1" in a string "this is an answer[1]".
91 If not specified, no parsing is done, and all documents are referenced.
92
93 :returns: A dictionary with the following keys:
94 - `answers`: The answers obtained from the output of the generator
95 """
96 if not meta:
97 meta = [{}] * len(replies)
98 elif len(replies) != len(meta):
99 raise ValueError(f"Number of replies ({len(replies)}), and metadata ({len(meta)}) must match.")
100
101 if pattern:
102 AnswerBuilder._check_num_groups_in_regex(pattern)
103
104 pattern = pattern or self.pattern
105 reference_pattern = reference_pattern or self.reference_pattern
106
107 all_answers = []
108 for reply, metadata in zip(replies, meta):
109 referenced_docs = []
110 if documents:
111 reference_idxs = []
112 if reference_pattern:
113 reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)
114 else:
115 reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]
116
117 for idx in reference_idxs:
118 try:
119 referenced_docs.append(documents[idx])
120 except IndexError:
121 logger.warning(
122 "Document index '{index}' referenced in Generator output is out of range. ", index=idx + 1
123 )
124
125 answer_string = AnswerBuilder._extract_answer_string(reply, pattern)
126 answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)
127 all_answers.append(answer)
128
129 return {"answers": all_answers}
130
131 @staticmethod
132 def _extract_answer_string(reply: str, pattern: Optional[str] = None) -> str:
133 """
134 Extract the answer string from the generator output using the specified pattern.
135
136 If no pattern is specified, the whole string is used as the answer.
137
138 :param reply:
139 The output of the Generator. A string.
140 :param pattern:
141 The regular expression pattern to use to extract the answer text from the generator output.
142 """
143 if pattern is None:
144 return reply
145
146 if match := re.search(pattern, reply):
147 # No capture group in pattern -> use the whole match as answer
148 if not match.lastindex:
149 return match.group(0)
150 # One capture group in pattern -> use the capture group as answer
151 return match.group(1)
152 return ""
153
154 @staticmethod
155 def _extract_reference_idxs(reply: str, reference_pattern: str) -> List[int]:
156 document_idxs = re.findall(reference_pattern, reply)
157 return [int(idx) - 1 for idx in document_idxs]
158
159 @staticmethod
160 def _check_num_groups_in_regex(pattern: str):
161 num_groups = re.compile(pattern).groups
162 if num_groups > 1:
163 raise ValueError(
164 f"Pattern '{pattern}' contains multiple capture groups. "
165 f"Please specify a pattern with at most one capture group."
166 )
167
[end of haystack/components/builders/answer_builder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/haystack/components/builders/answer_builder.py b/haystack/components/builders/answer_builder.py
--- a/haystack/components/builders/answer_builder.py
+++ b/haystack/components/builders/answer_builder.py
@@ -3,9 +3,10 @@
# SPDX-License-Identifier: Apache-2.0
import re
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, Union
from haystack import Document, GeneratedAnswer, component, logging
+from haystack.dataclasses.chat_message import ChatMessage
logger = logging.getLogger(__name__)
@@ -56,7 +57,7 @@
def run(
self,
query: str,
- replies: List[str],
+ replies: Union[List[str], List[ChatMessage]],
meta: Optional[List[Dict[str, Any]]] = None,
documents: Optional[List[Document]] = None,
pattern: Optional[str] = None,
@@ -68,7 +69,7 @@
:param query:
The query used in the prompts for the Generator.
:param replies:
- The output of the Generator.
+ The output of the Generator. Can be a list of strings or a list of ChatMessage objects.
:param meta:
The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.
:param documents:
@@ -103,14 +104,15 @@
pattern = pattern or self.pattern
reference_pattern = reference_pattern or self.reference_pattern
-
all_answers = []
for reply, metadata in zip(replies, meta):
+ # Extract content from ChatMessage objects if reply is a ChatMessages, else use the string as is
+ extracted_reply: str = reply.content if isinstance(reply, ChatMessage) else reply # type: ignore
+ extracted_metadata = reply.meta if isinstance(reply, ChatMessage) else metadata
referenced_docs = []
if documents:
- reference_idxs = []
if reference_pattern:
- reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)
+ reference_idxs = AnswerBuilder._extract_reference_idxs(extracted_reply, reference_pattern)
else:
reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]
@@ -122,8 +124,10 @@
"Document index '{index}' referenced in Generator output is out of range. ", index=idx + 1
)
- answer_string = AnswerBuilder._extract_answer_string(reply, pattern)
- answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)
+ answer_string = AnswerBuilder._extract_answer_string(extracted_reply, pattern)
+ answer = GeneratedAnswer(
+ data=answer_string, query=query, documents=referenced_docs, meta=extracted_metadata
+ )
all_answers.append(answer)
return {"answers": all_answers}
|
{"golden_diff": "diff --git a/haystack/components/builders/answer_builder.py b/haystack/components/builders/answer_builder.py\n--- a/haystack/components/builders/answer_builder.py\n+++ b/haystack/components/builders/answer_builder.py\n@@ -3,9 +3,10 @@\n # SPDX-License-Identifier: Apache-2.0\n \n import re\n-from typing import Any, Dict, List, Optional\n+from typing import Any, Dict, List, Optional, Union\n \n from haystack import Document, GeneratedAnswer, component, logging\n+from haystack.dataclasses.chat_message import ChatMessage\n \n logger = logging.getLogger(__name__)\n \n@@ -56,7 +57,7 @@\n def run(\n self,\n query: str,\n- replies: List[str],\n+ replies: Union[List[str], List[ChatMessage]],\n meta: Optional[List[Dict[str, Any]]] = None,\n documents: Optional[List[Document]] = None,\n pattern: Optional[str] = None,\n@@ -68,7 +69,7 @@\n :param query:\n The query used in the prompts for the Generator.\n :param replies:\n- The output of the Generator.\n+ The output of the Generator. Can be a list of strings or a list of ChatMessage objects.\n :param meta:\n The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.\n :param documents:\n@@ -103,14 +104,15 @@\n \n pattern = pattern or self.pattern\n reference_pattern = reference_pattern or self.reference_pattern\n-\n all_answers = []\n for reply, metadata in zip(replies, meta):\n+ # Extract content from ChatMessage objects if reply is a ChatMessages, else use the string as is\n+ extracted_reply: str = reply.content if isinstance(reply, ChatMessage) else reply # type: ignore\n+ extracted_metadata = reply.meta if isinstance(reply, ChatMessage) else metadata\n referenced_docs = []\n if documents:\n- reference_idxs = []\n if reference_pattern:\n- reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)\n+ reference_idxs = AnswerBuilder._extract_reference_idxs(extracted_reply, reference_pattern)\n else:\n reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]\n \n@@ -122,8 +124,10 @@\n \"Document index '{index}' referenced in Generator output is out of range. \", index=idx + 1\n )\n \n- answer_string = AnswerBuilder._extract_answer_string(reply, pattern)\n- answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)\n+ answer_string = AnswerBuilder._extract_answer_string(extracted_reply, pattern)\n+ answer = GeneratedAnswer(\n+ data=answer_string, query=query, documents=referenced_docs, meta=extracted_metadata\n+ )\n all_answers.append(answer)\n \n return {\"answers\": all_answers}\n", "issue": "Allow Chat Generators to connect to Answer Builder\n**Is your feature request related to a problem? Please describe.**\r\nThere is no current way for a `ChatGenerator` to directly connect to `AnswerBuilder()` despite similar functionality to a regular Generator.\r\n\r\n**Describe the solution you'd like**\r\nWe can extract the text from a `ChatMessage` and use it as a `str` as expected in the code for AnswerBuilder.\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n**Additional context**\r\nChatGenerators that rely on non-open ai models require a specific chat format to be applied to properly differentiate between system/user/assistant messages. This is done manually (and prone to errors) in regular generators. 
This is an example I pulled from a Haystack notebook:\r\n\r\n```\r\nprompt_template = \"\"\"\r\n<|begin_of_text|><|start_header_id|>user<|end_header_id|>\r\n\r\n\r\nUsing the information contained in the context, give a comprehensive answer to the question.\r\nIf the answer cannot be deduced from the context, do not give an answer.\r\n\r\nContext:\r\n {% for doc in documents %}\r\n {{ doc.content }} URL:{{ doc.meta['url'] }}\r\n {% endfor %};\r\n Question: {{query}}<|eot_id|>\r\n\r\n<|start_header_id|>assistant<|end_header_id|>\r\n\r\n\r\n\"\"\"\r\nprompt_builder = PromptBuilder(template=prompt_template)\r\n```\r\n\r\nThis can be accomplished by using a `ChatGenerator` as follows:\r\n\r\n```\r\nsystem_message = ChatMessage.from_system(\r\n \"\"\"\r\n Read the context provided and answer the question if possible.If you can not form an answer from the context, reply with \"Nah\".\r\n Context:\r\n {% for doc in documents %}\r\n {{ doc.content }}\r\n {% endfor %};\r\n \"\"\"\r\n)\r\nuser_message = ChatMessage.from_user(\"query: {{query}}\")\r\nassistent_message = ChatMessage.from_assistant(\"Answer: \")\r\n```\r\n\r\nWe can then add the output to an `AnswerBuilder` to complete a pipeline ensuring that the proper template is applied.\n", "before_files": [{"content": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nimport re\nfrom typing import Any, Dict, List, Optional\n\nfrom haystack import Document, GeneratedAnswer, component, logging\n\nlogger = logging.getLogger(__name__)\n\n\n@component\nclass AnswerBuilder:\n \"\"\"\n Takes a query and the replies a Generator returns as input and parses them into GeneratedAnswer objects.\n\n Optionally, it also takes Documents and metadata from the Generator as inputs to enrich the GeneratedAnswer objects.\n\n Usage example:\n ```python\n from haystack.components.builders import AnswerBuilder\n\n builder = AnswerBuilder(pattern=\"Answer: (.*)\")\n builder.run(query=\"What's the answer?\", replies=[\"This is an argument. Answer: This is the answer.\"])\n ```\n \"\"\"\n\n def __init__(self, pattern: Optional[str] = None, reference_pattern: Optional[str] = None):\n \"\"\"\n Creates an instance of the AnswerBuilder component.\n\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n If not specified, the whole string is used as the answer. The regular expression can have at\n most one capture group. If a capture group is present, the text matched by the capture group\n is used as the answer. If no capture group is present, the whole match is used as the answer.\n Examples:\n `[^\\\\n]+$` finds \"this is an answer\" in a string \"this is an argument.\\\\nthis is an answer\".\n `Answer: (.*)` finds \"this is an answer\" in a string \"this is an argument. 
Answer: this is an answer\".\n\n :param reference_pattern:\n The regular expression pattern to use for parsing the document references.\n We assume that references are specified as indices of the input documents and that\n indices start at 1.\n Example: `\\\\[(\\\\d+)\\\\]` finds \"1\" in a string \"this is an answer[1]\".\n If not specified, no parsing is done, and all documents are referenced.\n \"\"\"\n if pattern:\n AnswerBuilder._check_num_groups_in_regex(pattern)\n\n self.pattern = pattern\n self.reference_pattern = reference_pattern\n\n @component.output_types(answers=List[GeneratedAnswer])\n def run(\n self,\n query: str,\n replies: List[str],\n meta: Optional[List[Dict[str, Any]]] = None,\n documents: Optional[List[Document]] = None,\n pattern: Optional[str] = None,\n reference_pattern: Optional[str] = None,\n ):\n \"\"\"\n Turns the output of a Generator into `Answer` objects using regular expressions.\n\n :param query:\n The query used in the prompts for the Generator.\n :param replies:\n The output of the Generator.\n :param meta:\n The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.\n :param documents:\n The documents used as input to the Generator. If `documents` are specified, they are added to the `Answer`\n objects. If both `documents` and `reference_pattern` are specified, the documents referenced in the\n Generator output are extracted from the input documents and added to the `Answer` objects.\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n If not specified, the whole string is used as the answer. The regular expression can have at\n most one capture group. If a capture group is present, the text matched by the capture group\n is used as the answer. If no capture group is present, the whole match is used as the answer.\n Examples:\n `[^\\\\n]+$` finds \"this is an answer\" in a string \"this is an argument.\\\\nthis is an answer\".\n `Answer: (.*)` finds \"this is an answer\" in a string\n \"this is an argument. Answer: this is an answer\".\n :param reference_pattern:\n The regular expression pattern to use for parsing the document references.\n We assume that references are specified as indices of the input documents and that indices start at 1.\n Example: `\\\\[(\\\\d+)\\\\]` finds \"1\" in a string \"this is an answer[1]\".\n If not specified, no parsing is done, and all documents are referenced.\n\n :returns: A dictionary with the following keys:\n - `answers`: The answers obtained from the output of the generator\n \"\"\"\n if not meta:\n meta = [{}] * len(replies)\n elif len(replies) != len(meta):\n raise ValueError(f\"Number of replies ({len(replies)}), and metadata ({len(meta)}) must match.\")\n\n if pattern:\n AnswerBuilder._check_num_groups_in_regex(pattern)\n\n pattern = pattern or self.pattern\n reference_pattern = reference_pattern or self.reference_pattern\n\n all_answers = []\n for reply, metadata in zip(replies, meta):\n referenced_docs = []\n if documents:\n reference_idxs = []\n if reference_pattern:\n reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)\n else:\n reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]\n\n for idx in reference_idxs:\n try:\n referenced_docs.append(documents[idx])\n except IndexError:\n logger.warning(\n \"Document index '{index}' referenced in Generator output is out of range. 
\", index=idx + 1\n )\n\n answer_string = AnswerBuilder._extract_answer_string(reply, pattern)\n answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)\n all_answers.append(answer)\n\n return {\"answers\": all_answers}\n\n @staticmethod\n def _extract_answer_string(reply: str, pattern: Optional[str] = None) -> str:\n \"\"\"\n Extract the answer string from the generator output using the specified pattern.\n\n If no pattern is specified, the whole string is used as the answer.\n\n :param reply:\n The output of the Generator. A string.\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n \"\"\"\n if pattern is None:\n return reply\n\n if match := re.search(pattern, reply):\n # No capture group in pattern -> use the whole match as answer\n if not match.lastindex:\n return match.group(0)\n # One capture group in pattern -> use the capture group as answer\n return match.group(1)\n return \"\"\n\n @staticmethod\n def _extract_reference_idxs(reply: str, reference_pattern: str) -> List[int]:\n document_idxs = re.findall(reference_pattern, reply)\n return [int(idx) - 1 for idx in document_idxs]\n\n @staticmethod\n def _check_num_groups_in_regex(pattern: str):\n num_groups = re.compile(pattern).groups\n if num_groups > 1:\n raise ValueError(\n f\"Pattern '{pattern}' contains multiple capture groups. \"\n f\"Please specify a pattern with at most one capture group.\"\n )\n", "path": "haystack/components/builders/answer_builder.py"}]}
| 2,896 | 643 |
gh_patches_debug_12713
|
rasdani/github-patches
|
git_diff
|
pypa__pip-3037
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
installing from a wheel puts platform files into purelib, ignoring WHEEL Root-Is-Pure: False
When installing a package with C extensions, the platform specific shared libraries should be placed in the 'platform library' install path install_platlib. On 64 bit linux, this would be /usr/lib64/python2.7/<...>
pip 7.0.3 incorrectly installs these platform specific shared binaries to the 'pure library' install path install_purelib.
This only happens when installing from a wheel file. The wheel file contains a WHEEL configuration file which indicates the package is not pure (Root-Is-Pure: False), but pip appears to ignore that.
For example, if I were to:
pip install python-qpid-proton
pip builds the extensions, generates a wheel in the cache, and installs from that wheel.
The shared libraries generated by the extension are placed here:
## $ pip show python-qpid-proton
Metadata-Version: 2.0
Name: python-qpid-proton
Version: 0.9.1.1
Summary: An AMQP based messaging library.
Home-page: http://qpid.apache.org/proton/
Author: Apache Qpid
Author-email: [email protected]
License: Apache Software License
Location: /usr/lib/python2.7/site-packages <----------------------------
However, if I disable the cache (which skips the wheel building/install) the shared objects are correctly installed:
$ sudo pip --no-cache-dir install python-qpid-proton
## $ pip show python-qpid-proton
Metadata-Version: 1.1
Name: python-qpid-proton
Version: 0.9.1.1
Summary: An AMQP based messaging library.
Home-page: http://qpid.apache.org/proton/
Author: Apache Qpid
Author-email: [email protected]
License: Apache Software License
Location: /usr/lib64/python2.7/site-packages <-------------------------------------------
Requires:
It appears that the install_platlib value in the distribution scheme is being incorrectly overridden here
https://github.com/pypa/pip/blob/develop/pip/locations.py#L196
The Distribution instance created here:
https://github.com/pypa/pip/blob/develop/pip/locations.py#L181
is a dummy distribution in the case of a wheel. It is not the actual distribution as defined by the package's setup.py file. As such, it will not have any extensions configured.
But when finalize_options is run on the dummy install instance, distutils checks to see if 'install_lib' is unset, and _if there are no extensions_ it sets install_lib to the value of install_purelib.
Which means install_lib will never be set to platlib, even if the package contains binary extensions.
So when the installation path for a platform library is picked here:
https://github.com/pypa/pip/blob/develop/pip/wheel.py#L248
it has been incorrectly set to the pure lib directory.
https://github.com/pypa/pip/blob/develop/pip/wheel.py#L193
</issue>
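The distinction the report is drawing can be reproduced with a few lines of distutils (a sketch against the same APIs used in `pip/locations.py` below, not the actual fix; it assumes no explicit `install_lib` entry in any distutils config file):

```python
from distutils.dist import Distribution

d = Distribution({'name': 'python-qpid-proton'})
d.parse_config_files()
i = d.get_command_obj('install', create=True)
i.finalize_options()

# finalize_options() always fills in install_lib, and because this dummy
# Distribution has no ext_modules it falls back to install_purelib, so an
# `install_lib is not None` test cannot tell an explicit setting apart from
# that default.
print(i.install_lib)

# Only an install_lib that was actually written in a config file shows up here:
print('install_lib' in d.get_option_dict('install'))
```

An override of purelib and platlib keyed on the second check only fires when the user genuinely configured `install_lib`, which leaves platform wheels free to land in `install_platlib`.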
<code>
[start of pip/locations.py]
1 """Locations where we look for configs, install stuff, etc"""
2 from __future__ import absolute_import
3
4 import getpass
5 import os
6 import os.path
7 import site
8 import sys
9
10 from distutils import sysconfig
11 from distutils.command.install import install, SCHEME_KEYS # noqa
12
13 from pip.compat import WINDOWS
14 from pip.utils import appdirs
15
16
17 # CA Bundle Locations
18 CA_BUNDLE_PATHS = [
19 # Debian/Ubuntu/Gentoo etc.
20 "/etc/ssl/certs/ca-certificates.crt",
21
22 # Fedora/RHEL
23 "/etc/pki/tls/certs/ca-bundle.crt",
24
25 # OpenSUSE
26 "/etc/ssl/ca-bundle.pem",
27
28 # OpenBSD
29 "/etc/ssl/cert.pem",
30
31 # FreeBSD/DragonFly
32 "/usr/local/share/certs/ca-root-nss.crt",
33
34 # Homebrew on OSX
35 "/usr/local/etc/openssl/cert.pem",
36 ]
37
38 # Attempt to locate a CA Bundle that we can pass into requests, we have a list
39 # of possible ones from various systems. If we cannot find one then we'll set
40 # this to None so that we default to whatever requests is setup to handle.
41 #
42 # Note to Downstream: If you wish to disable this autodetection and simply use
43 # whatever requests does (likely you've already patched
44 # requests.certs.where()) then simply edit this line so
45 # that it reads ``CA_BUNDLE_PATH = None``.
46 CA_BUNDLE_PATH = next((x for x in CA_BUNDLE_PATHS if os.path.exists(x)), None)
47
48
49 # Application Directories
50 USER_CACHE_DIR = appdirs.user_cache_dir("pip")
51
52
53 DELETE_MARKER_MESSAGE = '''\
54 This file is placed here by pip to indicate the source was put
55 here by pip.
56
57 Once this package is successfully installed this source code will be
58 deleted (unless you remove this file).
59 '''
60 PIP_DELETE_MARKER_FILENAME = 'pip-delete-this-directory.txt'
61
62
63 def write_delete_marker_file(directory):
64 """
65 Write the pip delete marker file into this directory.
66 """
67 filepath = os.path.join(directory, PIP_DELETE_MARKER_FILENAME)
68 with open(filepath, 'w') as marker_fp:
69 marker_fp.write(DELETE_MARKER_MESSAGE)
70
71
72 def running_under_virtualenv():
73 """
74 Return True if we're running inside a virtualenv, False otherwise.
75
76 """
77 if hasattr(sys, 'real_prefix'):
78 return True
79 elif sys.prefix != getattr(sys, "base_prefix", sys.prefix):
80 return True
81
82 return False
83
84
85 def virtualenv_no_global():
86 """
87 Return True if in a venv and no system site packages.
88 """
89 # this mirrors the logic in virtualenv.py for locating the
90 # no-global-site-packages.txt file
91 site_mod_dir = os.path.dirname(os.path.abspath(site.__file__))
92 no_global_file = os.path.join(site_mod_dir, 'no-global-site-packages.txt')
93 if running_under_virtualenv() and os.path.isfile(no_global_file):
94 return True
95
96
97 def __get_username():
98 """ Returns the effective username of the current process. """
99 if WINDOWS:
100 return getpass.getuser()
101 import pwd
102 return pwd.getpwuid(os.geteuid()).pw_name
103
104
105 if running_under_virtualenv():
106 src_prefix = os.path.join(sys.prefix, 'src')
107 else:
108 # FIXME: keep src in cwd for now (it is not a temporary folder)
109 try:
110 src_prefix = os.path.join(os.getcwd(), 'src')
111 except OSError:
112 # In case the current working directory has been renamed or deleted
113 sys.exit(
114 "The folder you are executing pip from can no longer be found."
115 )
116
117 # under Mac OS X + virtualenv sys.prefix is not properly resolved
118 # it is something like /path/to/python/bin/..
119 # Note: using realpath due to tmp dirs on OSX being symlinks
120 src_prefix = os.path.abspath(src_prefix)
121
122 # FIXME doesn't account for venv linked to global site-packages
123
124 site_packages = sysconfig.get_python_lib()
125 user_site = site.USER_SITE
126 user_dir = os.path.expanduser('~')
127 if WINDOWS:
128 bin_py = os.path.join(sys.prefix, 'Scripts')
129 bin_user = os.path.join(user_site, 'Scripts')
130 # buildout uses 'bin' on Windows too?
131 if not os.path.exists(bin_py):
132 bin_py = os.path.join(sys.prefix, 'bin')
133 bin_user = os.path.join(user_site, 'bin')
134
135 config_basename = 'pip.ini'
136
137 legacy_storage_dir = os.path.join(user_dir, 'pip')
138 legacy_config_file = os.path.join(
139 legacy_storage_dir,
140 config_basename,
141 )
142 else:
143 bin_py = os.path.join(sys.prefix, 'bin')
144 bin_user = os.path.join(user_site, 'bin')
145
146 config_basename = 'pip.conf'
147
148 legacy_storage_dir = os.path.join(user_dir, '.pip')
149 legacy_config_file = os.path.join(
150 legacy_storage_dir,
151 config_basename,
152 )
153
154 # Forcing to use /usr/local/bin for standard Mac OS X framework installs
155 # Also log to ~/Library/Logs/ for use with the Console.app log viewer
156 if sys.platform[:6] == 'darwin' and sys.prefix[:16] == '/System/Library/':
157 bin_py = '/usr/local/bin'
158
159 site_config_files = [
160 os.path.join(path, config_basename)
161 for path in appdirs.site_config_dirs('pip')
162 ]
163
164
165 def distutils_scheme(dist_name, user=False, home=None, root=None,
166 isolated=False):
167 """
168 Return a distutils install scheme
169 """
170 from distutils.dist import Distribution
171
172 scheme = {}
173
174 if isolated:
175 extra_dist_args = {"script_args": ["--no-user-cfg"]}
176 else:
177 extra_dist_args = {}
178 dist_args = {'name': dist_name}
179 dist_args.update(extra_dist_args)
180
181 d = Distribution(dist_args)
182 d.parse_config_files()
183 i = d.get_command_obj('install', create=True)
184 # NOTE: setting user or home has the side-effect of creating the home dir
185 # or user base for installations during finalize_options()
186 # ideally, we'd prefer a scheme class that has no side-effects.
187 i.user = user or i.user
188 if user:
189 i.prefix = ""
190 i.home = home or i.home
191 i.root = root or i.root
192 i.finalize_options()
193 for key in SCHEME_KEYS:
194 scheme[key] = getattr(i, 'install_' + key)
195
196 if i.install_lib is not None:
197 # install_lib takes precedence over purelib and platlib
198 scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))
199
200 if running_under_virtualenv():
201 scheme['headers'] = os.path.join(
202 sys.prefix,
203 'include',
204 'site',
205 'python' + sys.version[:3],
206 dist_name,
207 )
208
209 if root is not None:
210 scheme["headers"] = os.path.join(
211 root,
212 os.path.abspath(scheme["headers"])[1:],
213 )
214
215 return scheme
216
[end of pip/locations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pip/locations.py b/pip/locations.py
--- a/pip/locations.py
+++ b/pip/locations.py
@@ -237,8 +237,12 @@
for key in SCHEME_KEYS:
scheme[key] = getattr(i, 'install_' + key)
- if i.install_lib is not None:
- # install_lib takes precedence over purelib and platlib
+ # install_lib specified in setup.cfg should install *everything*
+ # into there (i.e. it takes precedence over both purelib and
+ # platlib). Note, i.install_lib is *always* set after
+ # finalize_options(); we only want to override here if the user
+ # has explicitly requested it hence going back to the config
+ if 'install_lib' in d.get_option_dict('install'):
scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))
if running_under_virtualenv():
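
The hunk above replaces the `i.install_lib is not None` test with a lookup in the raw option dict. A minimal sketch of why that distinction matters — assuming a Python version that still ships distutils; the project name is arbitrary:

```python
# Sketch only: assumes a Python version that still ships distutils.
from distutils.dist import Distribution

d = Distribution({"name": "example"})         # arbitrary project name
d.parse_config_files()                        # picks up setup.cfg, if any
i = d.get_command_obj("install", create=True)
i.finalize_options()                          # side effect: install_lib gets a value

# install_lib is populated even when nobody configured it explicitly...
print(i.install_lib)
# ...so only the raw option dict reveals an explicit user setting:
print("install_lib" in d.get_option_dict("install"))
```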
|
{"golden_diff": "diff --git a/pip/locations.py b/pip/locations.py\n--- a/pip/locations.py\n+++ b/pip/locations.py\n@@ -237,8 +237,12 @@\n for key in SCHEME_KEYS:\n scheme[key] = getattr(i, 'install_' + key)\n \n- if i.install_lib is not None:\n- # install_lib takes precedence over purelib and platlib\n+ # install_lib specified in setup.cfg should install *everything*\n+ # into there (i.e. it takes precedence over both purelib and\n+ # platlib). Note, i.install_lib is *always* set after\n+ # finalize_options(); we only want to override here if the user\n+ # has explicitly requested it hence going back to the config\n+ if 'install_lib' in d.get_option_dict('install'):\n scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))\n \n if running_under_virtualenv():\n", "issue": "installing from a wheel puts platform files into purelib, ignoring WHEEL Root-Is-Pure: False\nWhen installing a package with C extensions, the platform specific shared libraries should be placed in the 'platform library' install path install_platlib. On 64 bit linux, this would be /usr/lib64/python2.7/<...>\n\npip 7.0.3 incorrectly installs these platform specific shared binaries to the 'pure library' install path install_purelib.\n\nThis only happens when installing from a wheel file. The wheel file contains a WHEEL configuration file which indicates the package is not pure (Root-Is-Pure: False), but pip appears to ignore that.\n\nFor example, if I were to:\n\n pip install python-qpid-proton\n\npip builds the extensions, generates a wheel in the cache, and installs from that wheel.\nthe shared libraries generated by the extension are placed here:\n## $ pip show python-qpid-proton\n\nMetadata-Version: 2.0\nName: python-qpid-proton\nVersion: 0.9.1.1\nSummary: An AMQP based messaging library.\nHome-page: http://qpid.apache.org/proton/\nAuthor: Apache Qpid\nAuthor-email: [email protected]\nLicense: Apache Software License\nLocation: /usr/lib/python2.7/site-packages <----------------------------\n\nHowever, if I disable the cache (which skips the wheel building/install) the shared objects are correctly installed:\n\n$ sudo pip --no-cache-dir install python-qpid-proton\n## $ pip show python-qpid-proton\n\nMetadata-Version: 1.1\nName: python-qpid-proton\nVersion: 0.9.1.1\nSummary: An AMQP based messaging library.\nHome-page: http://qpid.apache.org/proton/\nAuthor: Apache Qpid\nAuthor-email: [email protected]\nLicense: Apache Software License\nLocation: /usr/lib64/python2.7/site-packages <-------------------------------------------\nRequires: \n\nIt appears that the install_platlib value in the distribution scheme is being incorrectly overridden here\n\nhttps://github.com/pypa/pip/blob/develop/pip/locations.py#L196\n\nThe Distribution instance created here:\nhttps://github.com/pypa/pip/blob/develop/pip/locations.py#L181\n\nis a dummy distribution in the case of a wheel. It is not the actual distribution as defined by the packages setup.py file. As such, it will not have any extensions configured. 
\n\nBut when finalize_options is run on the dummy install instance, distutils checks to see if 'install_lib' is unset, and _if there are no extensions_ it set install_lib to the value of install_purelib.\n\nWhich means install_lib will never be set to platlib, even if the package contains binary extensions.\nSo when the installation path for a platform library is picked here:\n\nhttps://github.com/pypa/pip/blob/develop/pip/wheel.py#L248\n\nit has been incorrectly set to the pure lib directory.\n\nhttps://github.com/pypa/pip/blob/develop/pip/wheel.py#L193\n\n", "before_files": [{"content": "\"\"\"Locations where we look for configs, install stuff, etc\"\"\"\nfrom __future__ import absolute_import\n\nimport getpass\nimport os\nimport os.path\nimport site\nimport sys\n\nfrom distutils import sysconfig\nfrom distutils.command.install import install, SCHEME_KEYS # noqa\n\nfrom pip.compat import WINDOWS\nfrom pip.utils import appdirs\n\n\n# CA Bundle Locations\nCA_BUNDLE_PATHS = [\n # Debian/Ubuntu/Gentoo etc.\n \"/etc/ssl/certs/ca-certificates.crt\",\n\n # Fedora/RHEL\n \"/etc/pki/tls/certs/ca-bundle.crt\",\n\n # OpenSUSE\n \"/etc/ssl/ca-bundle.pem\",\n\n # OpenBSD\n \"/etc/ssl/cert.pem\",\n\n # FreeBSD/DragonFly\n \"/usr/local/share/certs/ca-root-nss.crt\",\n\n # Homebrew on OSX\n \"/usr/local/etc/openssl/cert.pem\",\n]\n\n# Attempt to locate a CA Bundle that we can pass into requests, we have a list\n# of possible ones from various systems. If we cannot find one then we'll set\n# this to None so that we default to whatever requests is setup to handle.\n#\n# Note to Downstream: If you wish to disable this autodetection and simply use\n# whatever requests does (likely you've already patched\n# requests.certs.where()) then simply edit this line so\n# that it reads ``CA_BUNDLE_PATH = None``.\nCA_BUNDLE_PATH = next((x for x in CA_BUNDLE_PATHS if os.path.exists(x)), None)\n\n\n# Application Directories\nUSER_CACHE_DIR = appdirs.user_cache_dir(\"pip\")\n\n\nDELETE_MARKER_MESSAGE = '''\\\nThis file is placed here by pip to indicate the source was put\nhere by pip.\n\nOnce this package is successfully installed this source code will be\ndeleted (unless you remove this file).\n'''\nPIP_DELETE_MARKER_FILENAME = 'pip-delete-this-directory.txt'\n\n\ndef write_delete_marker_file(directory):\n \"\"\"\n Write the pip delete marker file into this directory.\n \"\"\"\n filepath = os.path.join(directory, PIP_DELETE_MARKER_FILENAME)\n with open(filepath, 'w') as marker_fp:\n marker_fp.write(DELETE_MARKER_MESSAGE)\n\n\ndef running_under_virtualenv():\n \"\"\"\n Return True if we're running inside a virtualenv, False otherwise.\n\n \"\"\"\n if hasattr(sys, 'real_prefix'):\n return True\n elif sys.prefix != getattr(sys, \"base_prefix\", sys.prefix):\n return True\n\n return False\n\n\ndef virtualenv_no_global():\n \"\"\"\n Return True if in a venv and no system site packages.\n \"\"\"\n # this mirrors the logic in virtualenv.py for locating the\n # no-global-site-packages.txt file\n site_mod_dir = os.path.dirname(os.path.abspath(site.__file__))\n no_global_file = os.path.join(site_mod_dir, 'no-global-site-packages.txt')\n if running_under_virtualenv() and os.path.isfile(no_global_file):\n return True\n\n\ndef __get_username():\n \"\"\" Returns the effective username of the current process. 
\"\"\"\n if WINDOWS:\n return getpass.getuser()\n import pwd\n return pwd.getpwuid(os.geteuid()).pw_name\n\n\nif running_under_virtualenv():\n src_prefix = os.path.join(sys.prefix, 'src')\nelse:\n # FIXME: keep src in cwd for now (it is not a temporary folder)\n try:\n src_prefix = os.path.join(os.getcwd(), 'src')\n except OSError:\n # In case the current working directory has been renamed or deleted\n sys.exit(\n \"The folder you are executing pip from can no longer be found.\"\n )\n\n# under Mac OS X + virtualenv sys.prefix is not properly resolved\n# it is something like /path/to/python/bin/..\n# Note: using realpath due to tmp dirs on OSX being symlinks\nsrc_prefix = os.path.abspath(src_prefix)\n\n# FIXME doesn't account for venv linked to global site-packages\n\nsite_packages = sysconfig.get_python_lib()\nuser_site = site.USER_SITE\nuser_dir = os.path.expanduser('~')\nif WINDOWS:\n bin_py = os.path.join(sys.prefix, 'Scripts')\n bin_user = os.path.join(user_site, 'Scripts')\n # buildout uses 'bin' on Windows too?\n if not os.path.exists(bin_py):\n bin_py = os.path.join(sys.prefix, 'bin')\n bin_user = os.path.join(user_site, 'bin')\n\n config_basename = 'pip.ini'\n\n legacy_storage_dir = os.path.join(user_dir, 'pip')\n legacy_config_file = os.path.join(\n legacy_storage_dir,\n config_basename,\n )\nelse:\n bin_py = os.path.join(sys.prefix, 'bin')\n bin_user = os.path.join(user_site, 'bin')\n\n config_basename = 'pip.conf'\n\n legacy_storage_dir = os.path.join(user_dir, '.pip')\n legacy_config_file = os.path.join(\n legacy_storage_dir,\n config_basename,\n )\n\n # Forcing to use /usr/local/bin for standard Mac OS X framework installs\n # Also log to ~/Library/Logs/ for use with the Console.app log viewer\n if sys.platform[:6] == 'darwin' and sys.prefix[:16] == '/System/Library/':\n bin_py = '/usr/local/bin'\n\nsite_config_files = [\n os.path.join(path, config_basename)\n for path in appdirs.site_config_dirs('pip')\n]\n\n\ndef distutils_scheme(dist_name, user=False, home=None, root=None,\n isolated=False):\n \"\"\"\n Return a distutils install scheme\n \"\"\"\n from distutils.dist import Distribution\n\n scheme = {}\n\n if isolated:\n extra_dist_args = {\"script_args\": [\"--no-user-cfg\"]}\n else:\n extra_dist_args = {}\n dist_args = {'name': dist_name}\n dist_args.update(extra_dist_args)\n\n d = Distribution(dist_args)\n d.parse_config_files()\n i = d.get_command_obj('install', create=True)\n # NOTE: setting user or home has the side-effect of creating the home dir\n # or user base for installations during finalize_options()\n # ideally, we'd prefer a scheme class that has no side-effects.\n i.user = user or i.user\n if user:\n i.prefix = \"\"\n i.home = home or i.home\n i.root = root or i.root\n i.finalize_options()\n for key in SCHEME_KEYS:\n scheme[key] = getattr(i, 'install_' + key)\n\n if i.install_lib is not None:\n # install_lib takes precedence over purelib and platlib\n scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))\n\n if running_under_virtualenv():\n scheme['headers'] = os.path.join(\n sys.prefix,\n 'include',\n 'site',\n 'python' + sys.version[:3],\n dist_name,\n )\n\n if root is not None:\n scheme[\"headers\"] = os.path.join(\n root,\n os.path.abspath(scheme[\"headers\"])[1:],\n )\n\n return scheme\n", "path": "pip/locations.py"}]}
| 3,279 | 215 |
gh_patches_debug_21734
|
rasdani/github-patches
|
git_diff
|
marshmallow-code__webargs-99
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor tests
The tests can be DRY'd up significantly. I suggest the following:
- [ ] Implement the same HTTP API across all the supported frameworks
- [ ] Use webtest to test endpoints; create a base test class that tests common functionality
</issue>
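A rough sketch of the base-class idea suggested in the checklist above — the `/echo` route, the `create_app` hook, and the expected JSON payloads are placeholders assumed for illustration, not existing webargs test code:

```python
# Hypothetical sketch: "/echo" and the JSON payloads are assumed routes/behaviour.
import webtest


class CommonTestCase(object):
    """Tests every supported framework app is expected to pass."""

    def create_app(self):
        # Each framework-specific subclass returns its WSGI app here.
        raise NotImplementedError

    def setup_method(self, method):
        self.testapp = webtest.TestApp(self.create_app())

    def test_parse_querystring(self):
        res = self.testapp.get("/echo?name=Fred")
        assert res.json == {"name": "Fred"}

    def test_parse_json_body(self):
        res = self.testapp.post_json("/echo", {"name": "Fred"})
        assert res.json == {"name": "Fred"}
```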
<code>
[start of tasks.py]
1 # -*- coding: utf-8 -*-
2 import os
3 import sys
4 import webbrowser
5
6 from invoke import task, run
7
8 docs_dir = 'docs'
9 build_dir = os.path.join(docs_dir, '_build')
10
11 @task
12 def test(coverage=False, browse=False):
13 flake()
14 import pytest
15 args = []
16 if coverage:
17 args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])
18
19 if sys.version_info < (3, 4, 1):
20 args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))
21 retcode = pytest.main(args)
22 if coverage and browse:
23 webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))
24 sys.exit(retcode)
25
26 @task
27 def flake():
28 """Run flake8 on codebase."""
29 cmd = 'flake8 .'
30 if sys.version_info < (3, 4, 1):
31 excludes = [
32 os.path.join('tests', 'test_aiohttp'),
33 os.path.join('webargs', 'async.py'),
34 os.path.join('webargs', 'aiohttpparser.py'),
35 os.path.join('examples', 'annotations_example.py'),
36 'build',
37 ]
38 cmd += ' --exclude={0}'.format(','.join(excludes))
39 run(cmd, echo=True)
40
41 @task
42 def clean():
43 run("rm -rf build")
44 run("rm -rf dist")
45 run("rm -rf webargs.egg-info")
46 clean_docs()
47 print("Cleaned up.")
48
49 @task
50 def readme(browse=False):
51 run('rst2html.py README.rst > README.html')
52 if browse:
53 webbrowser.open_new_tab('README.html')
54
55 @task
56 def clean_docs():
57 run("rm -rf %s" % build_dir)
58
59 @task
60 def browse_docs():
61 path = os.path.join(build_dir, 'index.html')
62 webbrowser.open_new_tab(path)
63
64 @task
65 def docs(clean=False, browse=False, watch=False):
66 """Build the docs."""
67 if clean:
68 clean_docs()
69 run("sphinx-build %s %s" % (docs_dir, build_dir), echo=True)
70 if browse:
71 browse_docs()
72 if watch:
73 watch_docs()
74
75 @task
76 def watch_docs():
77 """Run build the docs when a file changes."""
78 try:
79 import sphinx_autobuild # noqa
80 except ImportError:
81 print('ERROR: watch task requires the sphinx_autobuild package.')
82 print('Install it with:')
83 print(' pip install sphinx-autobuild')
84 sys.exit(1)
85 run('sphinx-autobuild {0} {1} --watch {2}'.format(
86 docs_dir, build_dir, 'webargs'), echo=True, pty=True)
87
88 @task
89 def publish(test=False):
90 """Publish to the cheeseshop."""
91 clean()
92 if test:
93 run('python setup.py register -r test sdist bdist_wheel', echo=True)
94 run('twine upload dist/* -r test', echo=True)
95 else:
96 run('python setup.py register sdist bdist_wheel', echo=True)
97 run('twine upload dist/*', echo=True)
98
[end of tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -17,7 +17,7 @@
args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])
if sys.version_info < (3, 4, 1):
- args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))
+ args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))
retcode = pytest.main(args)
if coverage and browse:
webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))
@@ -29,7 +29,8 @@
cmd = 'flake8 .'
if sys.version_info < (3, 4, 1):
excludes = [
- os.path.join('tests', 'test_aiohttp'),
+ os.path.join('tests', 'apps', 'aiohttp_app.py'),
+ os.path.join('tests', 'test_aiohttparser.py'),
os.path.join('webargs', 'async.py'),
os.path.join('webargs', 'aiohttpparser.py'),
os.path.join('examples', 'annotations_example.py'),
|
{"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -17,7 +17,7 @@\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n \n if sys.version_info < (3, 4, 1):\n- args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))\n+ args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n@@ -29,7 +29,8 @@\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n- os.path.join('tests', 'test_aiohttp'),\n+ os.path.join('tests', 'apps', 'aiohttp_app.py'),\n+ os.path.join('tests', 'test_aiohttparser.py'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n", "issue": "Refactor tests\nThe tests can be DRY'd up significantly. I suggest the following:\n- [ ] Implement the same HTTP API across all the supported frameworks\n- [ ] Use webtest to test endpoints; create a base test class that tests common functionality\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nimport webbrowser\n\nfrom invoke import task, run\n\ndocs_dir = 'docs'\nbuild_dir = os.path.join(docs_dir, '_build')\n\n@task\ndef test(coverage=False, browse=False):\n flake()\n import pytest\n args = []\n if coverage:\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n\n if sys.version_info < (3, 4, 1):\n args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n sys.exit(retcode)\n\n@task\ndef flake():\n \"\"\"Run flake8 on codebase.\"\"\"\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n os.path.join('tests', 'test_aiohttp'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n 'build',\n ]\n cmd += ' --exclude={0}'.format(','.join(excludes))\n run(cmd, echo=True)\n\n@task\ndef clean():\n run(\"rm -rf build\")\n run(\"rm -rf dist\")\n run(\"rm -rf webargs.egg-info\")\n clean_docs()\n print(\"Cleaned up.\")\n\n@task\ndef readme(browse=False):\n run('rst2html.py README.rst > README.html')\n if browse:\n webbrowser.open_new_tab('README.html')\n\n@task\ndef clean_docs():\n run(\"rm -rf %s\" % build_dir)\n\n@task\ndef browse_docs():\n path = os.path.join(build_dir, 'index.html')\n webbrowser.open_new_tab(path)\n\n@task\ndef docs(clean=False, browse=False, watch=False):\n \"\"\"Build the docs.\"\"\"\n if clean:\n clean_docs()\n run(\"sphinx-build %s %s\" % (docs_dir, build_dir), echo=True)\n if browse:\n browse_docs()\n if watch:\n watch_docs()\n\n@task\ndef watch_docs():\n \"\"\"Run build the docs when a file changes.\"\"\"\n try:\n import sphinx_autobuild # noqa\n except ImportError:\n print('ERROR: watch task requires the sphinx_autobuild package.')\n print('Install it with:')\n print(' pip install sphinx-autobuild')\n sys.exit(1)\n run('sphinx-autobuild {0} {1} --watch {2}'.format(\n docs_dir, build_dir, 'webargs'), echo=True, pty=True)\n\n@task\ndef publish(test=False):\n \"\"\"Publish to the cheeseshop.\"\"\"\n clean()\n if test:\n run('python setup.py register -r test sdist bdist_wheel', echo=True)\n run('twine upload dist/* -r test', echo=True)\n else:\n run('python setup.py register sdist bdist_wheel', echo=True)\n 
run('twine upload dist/*', echo=True)\n", "path": "tasks.py"}]}
| 1,474 | 275 |
gh_patches_debug_26599
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-347
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Flask integration wrong User IP behind proxy
When Sentry reports bugs, I get the wrong user IP.
I'm using a uWSGI and NGINX stack, and I forward X-REAL-IP from NGINX to uWSGI. flask.request.remote_addr shows the correct user IP address, but sentry-sdk version 0.7.10 shows my server's IP address as the user IP.
Would you please assist?
</issue>
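For context, here is an illustrative WSGI environ as an app behind NGINX + uWSGI might see it when only X-Real-IP is forwarded (addresses made up); `REMOTE_ADDR` holds the proxy, which is the value sentry-sdk 0.7.10 falls back to:

```python
# Invented values; mirrors an NGINX "proxy_set_header X-Real-IP $remote_addr;" setup.
environ = {
    "REMOTE_ADDR": "10.0.0.5",        # the proxy, not the end user
    "HTTP_X_REAL_IP": "203.0.113.7",  # real client address forwarded by NGINX
}

# get_client_ip() in the listing below only consults HTTP_X_FORWARDED_FOR and
# REMOTE_ADDR, so with this environ it reports the proxy address:
print(environ.get("HTTP_X_FORWARDED_FOR", environ["REMOTE_ADDR"]))  # 10.0.0.5
```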
<code>
[start of sentry_sdk/integrations/wsgi.py]
1 import sys
2
3 from sentry_sdk.hub import Hub, _should_send_default_pii
4 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
5 from sentry_sdk._compat import PY2, reraise
6 from sentry_sdk.integrations._wsgi_common import _filter_headers
7
8 if False:
9 from typing import Callable
10 from typing import Dict
11 from typing import List
12 from typing import Iterator
13 from typing import Any
14 from typing import Tuple
15 from typing import Optional
16
17 from sentry_sdk.utils import ExcInfo
18
19
20 if PY2:
21
22 def wsgi_decoding_dance(s, charset="utf-8", errors="replace"):
23 # type: (str, str, str) -> str
24 return s.decode(charset, errors)
25
26
27 else:
28
29 def wsgi_decoding_dance(s, charset="utf-8", errors="replace"):
30 # type: (str, str, str) -> str
31 return s.encode("latin1").decode(charset, errors)
32
33
34 def get_host(environ):
35 # type: (Dict[str, str]) -> str
36 """Return the host for the given WSGI environment. Yanked from Werkzeug."""
37 if environ.get("HTTP_HOST"):
38 rv = environ["HTTP_HOST"]
39 if environ["wsgi.url_scheme"] == "http" and rv.endswith(":80"):
40 rv = rv[:-3]
41 elif environ["wsgi.url_scheme"] == "https" and rv.endswith(":443"):
42 rv = rv[:-4]
43 elif environ.get("SERVER_NAME"):
44 rv = environ["SERVER_NAME"]
45 if (environ["wsgi.url_scheme"], environ["SERVER_PORT"]) not in (
46 ("https", "443"),
47 ("http", "80"),
48 ):
49 rv += ":" + environ["SERVER_PORT"]
50 else:
51 # In spite of the WSGI spec, SERVER_NAME might not be present.
52 rv = "unknown"
53
54 return rv
55
56
57 def get_request_url(environ):
58 # type: (Dict[str, str]) -> str
59 """Return the absolute URL without query string for the given WSGI
60 environment."""
61 return "%s://%s/%s" % (
62 environ.get("wsgi.url_scheme"),
63 get_host(environ),
64 wsgi_decoding_dance(environ.get("PATH_INFO") or "").lstrip("/"),
65 )
66
67
68 class SentryWsgiMiddleware(object):
69 __slots__ = ("app",)
70
71 def __init__(self, app):
72 # type: (Callable) -> None
73 self.app = app
74
75 def __call__(self, environ, start_response):
76 # type: (Dict[str, str], Callable) -> _ScopedResponse
77 hub = Hub(Hub.current)
78
79 with hub:
80 with capture_internal_exceptions():
81 with hub.configure_scope() as scope:
82 scope.clear_breadcrumbs()
83 scope._name = "wsgi"
84 scope.add_event_processor(_make_wsgi_event_processor(environ))
85
86 try:
87 rv = self.app(environ, start_response)
88 except Exception:
89 reraise(*_capture_exception(hub))
90
91 return _ScopedResponse(hub, rv)
92
93
94 def _get_environ(environ):
95 # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]
96 """
97 Returns our whitelisted environment variables.
98 """
99 keys = ("SERVER_NAME", "SERVER_PORT")
100 if _should_send_default_pii():
101 keys += ("REMOTE_ADDR",) # type: ignore
102
103 for key in keys:
104 if key in environ:
105 yield key, environ[key]
106
107
108 # `get_headers` comes from `werkzeug.datastructures.EnvironHeaders`
109 #
110 # We need this function because Django does not give us a "pure" http header
111 # dict. So we might as well use it for all WSGI integrations.
112 def _get_headers(environ):
113 # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]
114 """
115 Returns only proper HTTP headers.
116
117 """
118 for key, value in environ.items():
119 key = str(key)
120 if key.startswith("HTTP_") and key not in (
121 "HTTP_CONTENT_TYPE",
122 "HTTP_CONTENT_LENGTH",
123 ):
124 yield key[5:].replace("_", "-").title(), value
125 elif key in ("CONTENT_TYPE", "CONTENT_LENGTH"):
126 yield key.replace("_", "-").title(), value
127
128
129 def get_client_ip(environ):
130 # type: (Dict[str, str]) -> Optional[Any]
131 """
132 Naively yank the first IP address in an X-Forwarded-For header
133 and assume this is correct.
134
135 Note: Don't use this in security sensitive situations since this
136 value may be forged from a client.
137 """
138 try:
139 return environ["HTTP_X_FORWARDED_FOR"].split(",")[0].strip()
140 except (KeyError, IndexError):
141 return environ.get("REMOTE_ADDR")
142
143
144 def _capture_exception(hub):
145 # type: (Hub) -> ExcInfo
146 # Check client here as it might have been unset while streaming response
147 if hub.client is not None:
148 exc_info = sys.exc_info()
149 event, hint = event_from_exception(
150 exc_info,
151 client_options=hub.client.options,
152 mechanism={"type": "wsgi", "handled": False},
153 )
154 hub.capture_event(event, hint=hint)
155 return exc_info
156
157
158 class _ScopedResponse(object):
159 __slots__ = ("_response", "_hub")
160
161 def __init__(self, hub, response):
162 # type: (Hub, List[bytes]) -> None
163 self._hub = hub
164 self._response = response
165
166 def __iter__(self):
167 # type: () -> Iterator[bytes]
168 iterator = iter(self._response)
169
170 while True:
171 with self._hub:
172 try:
173 chunk = next(iterator)
174 except StopIteration:
175 break
176 except Exception:
177 reraise(*_capture_exception(self._hub))
178
179 yield chunk
180
181 def close(self):
182 with self._hub:
183 try:
184 self._response.close()
185 except AttributeError:
186 pass
187 except Exception:
188 reraise(*_capture_exception(self._hub))
189
190
191 def _make_wsgi_event_processor(environ):
192 # type: (Dict[str, str]) -> Callable
193 # It's a bit unfortunate that we have to extract and parse the request data
194 # from the environ so eagerly, but there are a few good reasons for this.
195 #
196 # We might be in a situation where the scope/hub never gets torn down
197 # properly. In that case we will have an unnecessary strong reference to
198 # all objects in the environ (some of which may take a lot of memory) when
199 # we're really just interested in a few of them.
200 #
201 # Keeping the environment around for longer than the request lifecycle is
202 # also not necessarily something uWSGI can deal with:
203 # https://github.com/unbit/uwsgi/issues/1950
204
205 client_ip = get_client_ip(environ)
206 request_url = get_request_url(environ)
207 query_string = environ.get("QUERY_STRING")
208 method = environ.get("REQUEST_METHOD")
209 env = dict(_get_environ(environ))
210 headers = _filter_headers(dict(_get_headers(environ)))
211
212 def event_processor(event, hint):
213 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
214 with capture_internal_exceptions():
215 # if the code below fails halfway through we at least have some data
216 request_info = event.setdefault("request", {})
217
218 if _should_send_default_pii():
219 user_info = event.setdefault("user", {})
220 user_info["ip_address"] = client_ip
221
222 request_info["url"] = request_url
223 request_info["query_string"] = query_string
224 request_info["method"] = method
225 request_info["env"] = env
226 request_info["headers"] = headers
227
228 return event
229
230 return event_processor
231
[end of sentry_sdk/integrations/wsgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/wsgi.py b/sentry_sdk/integrations/wsgi.py
--- a/sentry_sdk/integrations/wsgi.py
+++ b/sentry_sdk/integrations/wsgi.py
@@ -96,9 +96,10 @@
"""
Returns our whitelisted environment variables.
"""
- keys = ("SERVER_NAME", "SERVER_PORT")
+ keys = ["SERVER_NAME", "SERVER_PORT"]
if _should_send_default_pii():
- keys += ("REMOTE_ADDR",) # type: ignore
+ # Add all three headers here to make debugging of proxy setup easier.
+ keys += ["REMOTE_ADDR", "HTTP_X_FORWARDED_FOR", "HTTP_X_REAL_IP"]
for key in keys:
if key in environ:
@@ -129,16 +130,21 @@
def get_client_ip(environ):
# type: (Dict[str, str]) -> Optional[Any]
"""
- Naively yank the first IP address in an X-Forwarded-For header
- and assume this is correct.
-
- Note: Don't use this in security sensitive situations since this
- value may be forged from a client.
+ Infer the user IP address from various headers. This cannot be used in
+ security sensitive situations since the value may be forged from a client,
+ but it's good enough for the event payload.
"""
try:
return environ["HTTP_X_FORWARDED_FOR"].split(",")[0].strip()
except (KeyError, IndexError):
- return environ.get("REMOTE_ADDR")
+ pass
+
+ try:
+ return environ["HTTP_X_REAL_IP"]
+ except KeyError:
+ pass
+
+ return environ.get("REMOTE_ADDR")
def _capture_exception(hub):
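
A quick, self-contained usage sketch of the precedence the patched `get_client_ip` implements — X-Forwarded-For first, then X-Real-IP, then `REMOTE_ADDR`; the function body mirrors the hunk above while the environ dicts are invented:

```python
# The function body is copied from the patch; the environ dicts are invented.
def get_client_ip(environ):
    try:
        return environ["HTTP_X_FORWARDED_FOR"].split(",")[0].strip()
    except (KeyError, IndexError):
        pass
    try:
        return environ["HTTP_X_REAL_IP"]
    except KeyError:
        pass
    return environ.get("REMOTE_ADDR")


print(get_client_ip({"HTTP_X_FORWARDED_FOR": "203.0.113.7, 10.0.0.5"}))  # 203.0.113.7
print(get_client_ip({"HTTP_X_REAL_IP": "203.0.113.7"}))                  # 203.0.113.7
print(get_client_ip({"REMOTE_ADDR": "10.0.0.5"}))                        # 10.0.0.5
```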
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/wsgi.py b/sentry_sdk/integrations/wsgi.py\n--- a/sentry_sdk/integrations/wsgi.py\n+++ b/sentry_sdk/integrations/wsgi.py\n@@ -96,9 +96,10 @@\n \"\"\"\n Returns our whitelisted environment variables.\n \"\"\"\n- keys = (\"SERVER_NAME\", \"SERVER_PORT\")\n+ keys = [\"SERVER_NAME\", \"SERVER_PORT\"]\n if _should_send_default_pii():\n- keys += (\"REMOTE_ADDR\",) # type: ignore\n+ # Add all three headers here to make debugging of proxy setup easier.\n+ keys += [\"REMOTE_ADDR\", \"HTTP_X_FORWARDED_FOR\", \"HTTP_X_REAL_IP\"]\n \n for key in keys:\n if key in environ:\n@@ -129,16 +130,21 @@\n def get_client_ip(environ):\n # type: (Dict[str, str]) -> Optional[Any]\n \"\"\"\n- Naively yank the first IP address in an X-Forwarded-For header\n- and assume this is correct.\n-\n- Note: Don't use this in security sensitive situations since this\n- value may be forged from a client.\n+ Infer the user IP address from various headers. This cannot be used in\n+ security sensitive situations since the value may be forged from a client,\n+ but it's good enough for the event payload.\n \"\"\"\n try:\n return environ[\"HTTP_X_FORWARDED_FOR\"].split(\",\")[0].strip()\n except (KeyError, IndexError):\n- return environ.get(\"REMOTE_ADDR\")\n+ pass\n+\n+ try:\n+ return environ[\"HTTP_X_REAL_IP\"]\n+ except KeyError:\n+ pass\n+\n+ return environ.get(\"REMOTE_ADDR\")\n \n \n def _capture_exception(hub):\n", "issue": "Flask integration wrong User IP behind proxy\nWhen sentry report bugs I get wrong users IP. \r\n\r\nI'm using UWSGI and NGINX stack. I forward X-REAL-IP from NGINX to UWSGI. Also flask.request.remote_addr shows correct user IP address but sentry-sdk version 0.7.10 shows my server IP address as user IP.\r\n\r\nWould you please assist?\n", "before_files": [{"content": "import sys\n\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import PY2, reraise\nfrom sentry_sdk.integrations._wsgi_common import _filter_headers\n\nif False:\n from typing import Callable\n from typing import Dict\n from typing import List\n from typing import Iterator\n from typing import Any\n from typing import Tuple\n from typing import Optional\n\n from sentry_sdk.utils import ExcInfo\n\n\nif PY2:\n\n def wsgi_decoding_dance(s, charset=\"utf-8\", errors=\"replace\"):\n # type: (str, str, str) -> str\n return s.decode(charset, errors)\n\n\nelse:\n\n def wsgi_decoding_dance(s, charset=\"utf-8\", errors=\"replace\"):\n # type: (str, str, str) -> str\n return s.encode(\"latin1\").decode(charset, errors)\n\n\ndef get_host(environ):\n # type: (Dict[str, str]) -> str\n \"\"\"Return the host for the given WSGI environment. 
Yanked from Werkzeug.\"\"\"\n if environ.get(\"HTTP_HOST\"):\n rv = environ[\"HTTP_HOST\"]\n if environ[\"wsgi.url_scheme\"] == \"http\" and rv.endswith(\":80\"):\n rv = rv[:-3]\n elif environ[\"wsgi.url_scheme\"] == \"https\" and rv.endswith(\":443\"):\n rv = rv[:-4]\n elif environ.get(\"SERVER_NAME\"):\n rv = environ[\"SERVER_NAME\"]\n if (environ[\"wsgi.url_scheme\"], environ[\"SERVER_PORT\"]) not in (\n (\"https\", \"443\"),\n (\"http\", \"80\"),\n ):\n rv += \":\" + environ[\"SERVER_PORT\"]\n else:\n # In spite of the WSGI spec, SERVER_NAME might not be present.\n rv = \"unknown\"\n\n return rv\n\n\ndef get_request_url(environ):\n # type: (Dict[str, str]) -> str\n \"\"\"Return the absolute URL without query string for the given WSGI\n environment.\"\"\"\n return \"%s://%s/%s\" % (\n environ.get(\"wsgi.url_scheme\"),\n get_host(environ),\n wsgi_decoding_dance(environ.get(\"PATH_INFO\") or \"\").lstrip(\"/\"),\n )\n\n\nclass SentryWsgiMiddleware(object):\n __slots__ = (\"app\",)\n\n def __init__(self, app):\n # type: (Callable) -> None\n self.app = app\n\n def __call__(self, environ, start_response):\n # type: (Dict[str, str], Callable) -> _ScopedResponse\n hub = Hub(Hub.current)\n\n with hub:\n with capture_internal_exceptions():\n with hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope._name = \"wsgi\"\n scope.add_event_processor(_make_wsgi_event_processor(environ))\n\n try:\n rv = self.app(environ, start_response)\n except Exception:\n reraise(*_capture_exception(hub))\n\n return _ScopedResponse(hub, rv)\n\n\ndef _get_environ(environ):\n # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]\n \"\"\"\n Returns our whitelisted environment variables.\n \"\"\"\n keys = (\"SERVER_NAME\", \"SERVER_PORT\")\n if _should_send_default_pii():\n keys += (\"REMOTE_ADDR\",) # type: ignore\n\n for key in keys:\n if key in environ:\n yield key, environ[key]\n\n\n# `get_headers` comes from `werkzeug.datastructures.EnvironHeaders`\n#\n# We need this function because Django does not give us a \"pure\" http header\n# dict. 
So we might as well use it for all WSGI integrations.\ndef _get_headers(environ):\n # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]\n \"\"\"\n Returns only proper HTTP headers.\n\n \"\"\"\n for key, value in environ.items():\n key = str(key)\n if key.startswith(\"HTTP_\") and key not in (\n \"HTTP_CONTENT_TYPE\",\n \"HTTP_CONTENT_LENGTH\",\n ):\n yield key[5:].replace(\"_\", \"-\").title(), value\n elif key in (\"CONTENT_TYPE\", \"CONTENT_LENGTH\"):\n yield key.replace(\"_\", \"-\").title(), value\n\n\ndef get_client_ip(environ):\n # type: (Dict[str, str]) -> Optional[Any]\n \"\"\"\n Naively yank the first IP address in an X-Forwarded-For header\n and assume this is correct.\n\n Note: Don't use this in security sensitive situations since this\n value may be forged from a client.\n \"\"\"\n try:\n return environ[\"HTTP_X_FORWARDED_FOR\"].split(\",\")[0].strip()\n except (KeyError, IndexError):\n return environ.get(\"REMOTE_ADDR\")\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n # Check client here as it might have been unset while streaming response\n if hub.client is not None:\n exc_info = sys.exc_info()\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"wsgi\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n return exc_info\n\n\nclass _ScopedResponse(object):\n __slots__ = (\"_response\", \"_hub\")\n\n def __init__(self, hub, response):\n # type: (Hub, List[bytes]) -> None\n self._hub = hub\n self._response = response\n\n def __iter__(self):\n # type: () -> Iterator[bytes]\n iterator = iter(self._response)\n\n while True:\n with self._hub:\n try:\n chunk = next(iterator)\n except StopIteration:\n break\n except Exception:\n reraise(*_capture_exception(self._hub))\n\n yield chunk\n\n def close(self):\n with self._hub:\n try:\n self._response.close()\n except AttributeError:\n pass\n except Exception:\n reraise(*_capture_exception(self._hub))\n\n\ndef _make_wsgi_event_processor(environ):\n # type: (Dict[str, str]) -> Callable\n # It's a bit unfortunate that we have to extract and parse the request data\n # from the environ so eagerly, but there are a few good reasons for this.\n #\n # We might be in a situation where the scope/hub never gets torn down\n # properly. 
In that case we will have an unnecessary strong reference to\n # all objects in the environ (some of which may take a lot of memory) when\n # we're really just interested in a few of them.\n #\n # Keeping the environment around for longer than the request lifecycle is\n # also not necessarily something uWSGI can deal with:\n # https://github.com/unbit/uwsgi/issues/1950\n\n client_ip = get_client_ip(environ)\n request_url = get_request_url(environ)\n query_string = environ.get(\"QUERY_STRING\")\n method = environ.get(\"REQUEST_METHOD\")\n env = dict(_get_environ(environ))\n headers = _filter_headers(dict(_get_headers(environ)))\n\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n with capture_internal_exceptions():\n # if the code below fails halfway through we at least have some data\n request_info = event.setdefault(\"request\", {})\n\n if _should_send_default_pii():\n user_info = event.setdefault(\"user\", {})\n user_info[\"ip_address\"] = client_ip\n\n request_info[\"url\"] = request_url\n request_info[\"query_string\"] = query_string\n request_info[\"method\"] = method\n request_info[\"env\"] = env\n request_info[\"headers\"] = headers\n\n return event\n\n return event_processor\n", "path": "sentry_sdk/integrations/wsgi.py"}]}
| 2,961 | 394 |
gh_patches_debug_12492
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-9161
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
issue with permute_recursive
### What would you like to share?
Your code looks mostly correct, but there's one issue in the `permute_recursive` function due to the modification of the `nums` list. Lists in Python are mutable, and when you use `nums.pop(0)`, it modifies the original `nums` list. This can lead to incorrect results and even an infinite loop.
To fix this, you should pass a copy of the `nums` list to the recursive function. Here's the corrected `permute_recursive` function:
```
def permute_recursive(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
>>> permute_recursive([1, 2, 3])
[[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
"""
result: list[list[int]] = []
if len(nums) == 0:
return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
permutations = permute_recursive(nums[:]) # Make a copy of nums
for perm in permutations:
perm.append(n)
result.extend(permutations)
nums.append(n)
return result
```
With this modification, your code should work correctly for both `permute_recursive` and `permute_backtrack`.
### Additional information
_No response_
</issue>
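A small, standalone demonstration of the aliasing problem described above — passing the list object itself lets the callee mutate the caller's data, while rebinding to a copy does not; the helper names are made up:

```python
# Standalone illustration; the helper names are made up.
def takes_a_reference(nums):
    nums.pop(0)            # mutates the caller's list in place


def takes_a_copy(nums):
    nums = nums.copy()     # rebinds to a private copy
    nums.pop(0)            # the caller's list is untouched


data = [1, 2, 3]
takes_a_reference(data)
print(data)                # [2, 3]  -- caller's data changed

data = [1, 2, 3]
takes_a_copy(data)
print(data)                # [1, 2, 3] -- caller's data intact
```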
<code>
[start of data_structures/arrays/permutations.py]
1 def permute_recursive(nums: list[int]) -> list[list[int]]:
2 """
3 Return all permutations.
4
5 >>> permute_recursive([1, 2, 3])
6 [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
7 """
8 result: list[list[int]] = []
9 if len(nums) == 0:
10 return [[]]
11 for _ in range(len(nums)):
12 n = nums.pop(0)
13 permutations = permute_recursive(nums)
14 for perm in permutations:
15 perm.append(n)
16 result.extend(permutations)
17 nums.append(n)
18 return result
19
20
21 def permute_backtrack(nums: list[int]) -> list[list[int]]:
22 """
23 Return all permutations of the given list.
24
25 >>> permute_backtrack([1, 2, 3])
26 [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
27 """
28
29 def backtrack(start: int) -> None:
30 if start == len(nums) - 1:
31 output.append(nums[:])
32 else:
33 for i in range(start, len(nums)):
34 nums[start], nums[i] = nums[i], nums[start]
35 backtrack(start + 1)
36 nums[start], nums[i] = nums[i], nums[start] # backtrack
37
38 output: list[list[int]] = []
39 backtrack(0)
40 return output
41
42
43 if __name__ == "__main__":
44 import doctest
45
46 res = permute_backtrack([1, 2, 3])
47 print(res)
48 doctest.testmod()
49
[end of data_structures/arrays/permutations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py
--- a/data_structures/arrays/permutations.py
+++ b/data_structures/arrays/permutations.py
@@ -10,7 +10,7 @@
return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
- permutations = permute_recursive(nums)
+ permutations = permute_recursive(nums.copy())
for perm in permutations:
perm.append(n)
result.extend(permutations)
@@ -43,6 +43,6 @@
if __name__ == "__main__":
import doctest
- res = permute_backtrack([1, 2, 3])
- print(res)
+ result = permute_backtrack([1, 2, 3])
+ print(result)
doctest.testmod()
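
For reference, the expected output set for `[1, 2, 3]` can be generated independently with `itertools` — a patched `permute_recursive` should return exactly these six orderings, in some order:

```python
from itertools import permutations

# The six orderings a fixed permute_recursive([1, 2, 3]) should produce (in any order).
expected = sorted(permutations([1, 2, 3]))
print(expected)
# [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]
```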
|
{"golden_diff": "diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py\n--- a/data_structures/arrays/permutations.py\n+++ b/data_structures/arrays/permutations.py\n@@ -10,7 +10,7 @@\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n- permutations = permute_recursive(nums)\n+ permutations = permute_recursive(nums.copy())\n for perm in permutations:\n perm.append(n)\n result.extend(permutations)\n@@ -43,6 +43,6 @@\n if __name__ == \"__main__\":\n import doctest\n \n- res = permute_backtrack([1, 2, 3])\n- print(res)\n+ result = permute_backtrack([1, 2, 3])\n+ print(result)\n doctest.testmod()\n", "issue": "issue with permute_recursive\n### What would you like to share?\n\nYour code looks mostly correct, but there's one issue in the `permute_recursive` function due to the modification of the `nums` list. Lists in Python are mutable, and when you use `nums.pop(0)`, it modifies the original `nums` list. This can lead to incorrect results and even an infinite loop.\r\n\r\nTo fix this, you should pass a copy of the `nums` list to the recursive function. Here's the corrected `permute_recursive` function:\r\n\r\n\r\ndef permute_recursive(nums: list[int]) -> list[list[int]]:\r\n \"\"\"\r\n Return all permutations.\r\n\r\n >>> permute_recursive([1, 2, 3])\r\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\r\n \"\"\"\r\n result: list[list[int]] = []\r\n if len(nums) == 0:\r\n return [[]]\r\n for _ in range(len(nums)):\r\n n = nums.pop(0)\r\n permutations = permute_recursive(nums[:]) # Make a copy of nums\r\n for perm in permutations:\r\n perm.append(n)\r\n result.extend(permutations)\r\n nums.append(n)\r\n return result\r\n```\r\n\r\nWith this modification, your code should work correctly for both `permute_recursive` and `permute_backtrack`.\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "def permute_recursive(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations.\n\n >>> permute_recursive([1, 2, 3])\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\n \"\"\"\n result: list[list[int]] = []\n if len(nums) == 0:\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n permutations = permute_recursive(nums)\n for perm in permutations:\n perm.append(n)\n result.extend(permutations)\n nums.append(n)\n return result\n\n\ndef permute_backtrack(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations of the given list.\n\n >>> permute_backtrack([1, 2, 3])\n [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]\n \"\"\"\n\n def backtrack(start: int) -> None:\n if start == len(nums) - 1:\n output.append(nums[:])\n else:\n for i in range(start, len(nums)):\n nums[start], nums[i] = nums[i], nums[start]\n backtrack(start + 1)\n nums[start], nums[i] = nums[i], nums[start] # backtrack\n\n output: list[list[int]] = []\n backtrack(0)\n return output\n\n\nif __name__ == \"__main__\":\n import doctest\n\n res = permute_backtrack([1, 2, 3])\n print(res)\n doctest.testmod()\n", "path": "data_structures/arrays/permutations.py"}]}
| 1,357 | 187 |
gh_patches_debug_39293
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-451
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Choices in Context
Add support for choices in a context as suggested in #441.
I noticed that we are using custom methods for retrieving the user input and decided to let `click.prompt` handle the Python2/Python3 compatibility matters. This PR requires thorough reviews and testing before merging it as I touched some core functions and I want to be certain I didn't break anything :exclamation:
The CI might not detect these errors as most of the `read_response` calls are still `monkeypatched`. I'm afraid we need to do some manual testing to be 100% positive that everything is working as expected :sweat:
I implemented several tests for the new functionality. :smiley:
Please let me know your thoughts and I'll be happy to update this PR.
</issue>
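To make the `click.prompt` idea concrete, a brief sketch of both plain prompting and choice-restricted prompting — the prompt texts, default, and choice values are invented for illustration and are not taken from the PR:

```python
# Illustrative only: the prompt texts, default, and choice values are invented.
import click

# Free-form input with a default, replacing a hand-rolled raw_input/input shim:
project_name = click.prompt('project_name (default is "demo")', default="demo")

# Input restricted to a fixed set of options, e.g. for a "choices" context entry:
licenses = ["MIT", "BSD-3", "GPL-3.0"]
chosen_license = click.prompt(
    "Select license",
    type=click.Choice(licenses),
    default=licenses[0],
)
```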
<code>
[start of cookiecutter/compat.py]
1 import os
2 import sys
3
4 PY3 = sys.version_info[0] == 3
5 OLD_PY2 = sys.version_info[:2] < (2, 7)
6
7 if PY3: # pragma: no cover
8 input_str = 'builtins.input'
9 iteritems = lambda d: iter(d.items())
10 from unittest.mock import patch
11 from io import StringIO
12
13 def read_response(prompt=''):
14 """
15 Prompt the user for a response.
16
17 Prints the given prompt (which should be a Unicode string),
18 and returns the text entered by the user as a Unicode string.
19
20 :param prompt: A Unicode string that is presented to the user.
21 """
22 # The Python 3 input function does exactly what we want
23 return input(prompt)
24
25 else: # pragma: no cover
26 from __builtin__ import raw_input
27 input = raw_input
28 input_str = '__builtin__.raw_input'
29 iteritems = lambda d: d.iteritems()
30 from mock import patch
31 from cStringIO import StringIO
32
33 def read_response(prompt=''):
34 """
35 Prompt the user for a response.
36
37 Prints the given prompt (which should be a Unicode string),
38 and returns the text entered by the user as a Unicode string.
39
40 :param prompt: A Unicode string that is presented to the user.
41 """
42 # For Python 2, raw_input takes a byte string argument for the prompt.
43 # This must be encoded using the encoding used by sys.stdout.
44 # The result is a byte string encoding using sys.stdin.encoding.
45 # However, if the program is not being run interactively, sys.stdout
46 # and sys.stdin may not have encoding attributes.
47 # In that case we don't print a prompt (stdin/out isn't interactive,
48 # so prompting is pointless), and we assume the returned data is
49 # encoded using sys.getdefaultencoding(). This may not be right,
50 # but it's likely the best we can do.
51 # Isn't Python 2 encoding support wonderful? :-)
52 if sys.stdout.encoding:
53 prompt = prompt.encode(sys.stdout.encoding)
54 else:
55 prompt = ''
56 enc = sys.stdin.encoding or sys.getdefaultencoding()
57 return raw_input(prompt).decode(enc)
58
59
60 if PY3: # Forced testing
61
62 from shutil import which
63
64 else: # Forced testing
65
66 def is_exe(program):
67 """
68 Returns whether or not a file is an executable.
69 """
70 return os.path.isfile(program) and os.access(program, os.X_OK)
71
72 def which(cmd, mode=os.F_OK | os.X_OK, path=None):
73 """Given a command, mode, and a PATH string, return the path which
74 conforms to the given mode on the PATH, or None if there is no such
75 file.
76 `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
77 of os.environ.get("PATH"), or can be overridden with a custom search
78 path.
79
80 Note: This function was backported from the Python 3 source code.
81 """
82 # Check that a given file can be accessed with the correct mode.
83 # Additionally check that `file` is not a directory, as on Windows
84 # directories pass the os.access check.
85 def _access_check(fn, mode):
86 return (os.path.exists(fn) and os.access(fn, mode) and
87 not os.path.isdir(fn))
88
89 # If we're given a path with a directory part, look it up directly
90 # rather than referring to PATH directories. This includes checking
91 # relative to the current directory, e.g. ./script
92 if os.path.dirname(cmd):
93 if _access_check(cmd, mode):
94 return cmd
95 return None
96
97 if path is None:
98 path = os.environ.get("PATH", os.defpath)
99 if not path:
100 return None
101 path = path.split(os.pathsep)
102
103 if sys.platform == "win32":
104 # The current directory takes precedence on Windows.
105 if os.curdir not in path:
106 path.insert(0, os.curdir)
107
108 # PATHEXT is necessary to check on Windows.
109 pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
110 # See if the given file matches any of the expected path
111 # extensions. This will allow us to short circuit when given
112 # "python.exe". If it does match, only test that one, otherwise we
113 # have to try others.
114 if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
115 files = [cmd]
116 else:
117 files = [cmd + ext for ext in pathext]
118 else:
119 # On other platforms you don't have things like PATHEXT to tell you
120 # what file suffixes are executable, so just pass on cmd as-is.
121 files = [cmd]
122
123 seen = set()
124 for dir in path:
125 normdir = os.path.normcase(dir)
126 if normdir not in seen:
127 seen.add(normdir)
128 for thefile in files:
129 name = os.path.join(dir, thefile)
130 if _access_check(name, mode):
131 return name
132 return None
133
134
135 def is_string(obj):
136 """Determine if an object is a string."""
137 return isinstance(obj, str if PY3 else basestring)
138
139
140 _hush_pyflakes = (patch, StringIO, which)
141
[end of cookiecutter/compat.py]
[start of cookiecutter/prompt.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.prompt
6 ---------------------
7
8 Functions for prompting the user for project info.
9 """
10
11 from __future__ import unicode_literals
12 import sys
13
14 from .compat import iteritems, read_response, is_string
15 from jinja2.environment import Environment
16
17
18 def prompt_for_config(context, no_input=False):
19 """
20 Prompts the user to enter new config, using context as a source for the
21 field names and sample values.
22
23 :param no_input: Prompt the user at command line for manual configuration?
24 """
25 cookiecutter_dict = {}
26 env = Environment()
27
28 for key, raw in iteritems(context['cookiecutter']):
29 if key.startswith('_'):
30 cookiecutter_dict[key] = raw
31 continue
32
33 raw = raw if is_string(raw) else str(raw)
34 val = env.from_string(raw).render(cookiecutter=cookiecutter_dict)
35
36 if not no_input:
37 prompt = '{0} (default is "{1}")? '.format(key, val)
38
39 new_val = read_response(prompt).strip()
40
41 if new_val != '':
42 val = new_val
43
44 cookiecutter_dict[key] = val
45 return cookiecutter_dict
46
47
48 def query_yes_no(question, default='yes'):
49 """
50 Ask a yes/no question via `read_response()` and return their answer.
51
52 :param question: A string that is presented to the user.
53 :param default: The presumed answer if the user just hits <Enter>.
54 It must be "yes" (the default), "no" or None (meaning
55 an answer is required of the user).
56
57 The "answer" return value is one of "yes" or "no".
58
59 Adapted from
60 http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input
61 http://code.activestate.com/recipes/577058/
62
63 """
64 valid = {'yes': True, 'y': True, 'ye': True, 'no': False, 'n': False}
65 if default is None:
66 prompt = ' [y/n] '
67 elif default == 'yes':
68 prompt = ' [Y/n] '
69 elif default == 'no':
70 prompt = ' [y/N] '
71 else:
72 raise ValueError('Invalid default answer: "{0}"'.format(default))
73
74 while True:
75 sys.stdout.write(question + prompt)
76 choice = read_response().lower()
77
78 if default is not None and choice == '':
79 return valid[default]
80 elif choice in valid:
81 return valid[choice]
82 else:
83 sys.stdout.write('Please respond with "yes" or "no" '
84 '(or "y" or "n").\n')
85
[end of cookiecutter/prompt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cookiecutter/compat.py b/cookiecutter/compat.py
--- a/cookiecutter/compat.py
+++ b/cookiecutter/compat.py
@@ -4,23 +4,13 @@
PY3 = sys.version_info[0] == 3
OLD_PY2 = sys.version_info[:2] < (2, 7)
+
if PY3: # pragma: no cover
input_str = 'builtins.input'
iteritems = lambda d: iter(d.items())
from unittest.mock import patch
from io import StringIO
- def read_response(prompt=''):
- """
- Prompt the user for a response.
-
- Prints the given prompt (which should be a Unicode string),
- and returns the text entered by the user as a Unicode string.
-
- :param prompt: A Unicode string that is presented to the user.
- """
- # The Python 3 input function does exactly what we want
- return input(prompt)
else: # pragma: no cover
from __builtin__ import raw_input
@@ -30,32 +20,6 @@
from mock import patch
from cStringIO import StringIO
- def read_response(prompt=''):
- """
- Prompt the user for a response.
-
- Prints the given prompt (which should be a Unicode string),
- and returns the text entered by the user as a Unicode string.
-
- :param prompt: A Unicode string that is presented to the user.
- """
- # For Python 2, raw_input takes a byte string argument for the prompt.
- # This must be encoded using the encoding used by sys.stdout.
- # The result is a byte string encoding using sys.stdin.encoding.
- # However, if the program is not being run interactively, sys.stdout
- # and sys.stdin may not have encoding attributes.
- # In that case we don't print a prompt (stdin/out isn't interactive,
- # so prompting is pointless), and we assume the returned data is
- # encoded using sys.getdefaultencoding(). This may not be right,
- # but it's likely the best we can do.
- # Isn't Python 2 encoding support wonderful? :-)
- if sys.stdout.encoding:
- prompt = prompt.encode(sys.stdout.encoding)
- else:
- prompt = ''
- enc = sys.stdin.encoding or sys.getdefaultencoding()
- return raw_input(prompt).decode(enc)
-
if PY3: # Forced testing
diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -11,10 +11,29 @@
from __future__ import unicode_literals
import sys
-from .compat import iteritems, read_response, is_string
+import click
+
+from .compat import iteritems, is_string
from jinja2.environment import Environment
+def read_response(prompt=''):
+ """Prompt the user and return the entered value or an empty string.
+
+ :param str prompt: Text to display to the user
+ """
+ # Please see http://click.pocoo.org/4/api/#click.prompt
+ # default: use an empty string if no input happens
+ # prompt_suffix: do not add a suffix to the prompt
+ # show_default: the prompt is expected to mention the default itself
+ return click.prompt(
+ prompt,
+ default='',
+ prompt_suffix='',
+ show_default=False,
+ )
+
+
def prompt_for_config(context, no_input=False):
"""
Prompts the user to enter new config, using context as a source for the
|
{"golden_diff": "diff --git a/cookiecutter/compat.py b/cookiecutter/compat.py\n--- a/cookiecutter/compat.py\n+++ b/cookiecutter/compat.py\n@@ -4,23 +4,13 @@\n PY3 = sys.version_info[0] == 3\n OLD_PY2 = sys.version_info[:2] < (2, 7)\n \n+\n if PY3: # pragma: no cover\n input_str = 'builtins.input'\n iteritems = lambda d: iter(d.items())\n from unittest.mock import patch\n from io import StringIO\n \n- def read_response(prompt=''):\n- \"\"\"\n- Prompt the user for a response.\n-\n- Prints the given prompt (which should be a Unicode string),\n- and returns the text entered by the user as a Unicode string.\n-\n- :param prompt: A Unicode string that is presented to the user.\n- \"\"\"\n- # The Python 3 input function does exactly what we want\n- return input(prompt)\n \n else: # pragma: no cover\n from __builtin__ import raw_input\n@@ -30,32 +20,6 @@\n from mock import patch\n from cStringIO import StringIO\n \n- def read_response(prompt=''):\n- \"\"\"\n- Prompt the user for a response.\n-\n- Prints the given prompt (which should be a Unicode string),\n- and returns the text entered by the user as a Unicode string.\n-\n- :param prompt: A Unicode string that is presented to the user.\n- \"\"\"\n- # For Python 2, raw_input takes a byte string argument for the prompt.\n- # This must be encoded using the encoding used by sys.stdout.\n- # The result is a byte string encoding using sys.stdin.encoding.\n- # However, if the program is not being run interactively, sys.stdout\n- # and sys.stdin may not have encoding attributes.\n- # In that case we don't print a prompt (stdin/out isn't interactive,\n- # so prompting is pointless), and we assume the returned data is\n- # encoded using sys.getdefaultencoding(). This may not be right,\n- # but it's likely the best we can do.\n- # Isn't Python 2 encoding support wonderful? :-)\n- if sys.stdout.encoding:\n- prompt = prompt.encode(sys.stdout.encoding)\n- else:\n- prompt = ''\n- enc = sys.stdin.encoding or sys.getdefaultencoding()\n- return raw_input(prompt).decode(enc)\n-\n \n if PY3: # Forced testing\n \ndiff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -11,10 +11,29 @@\n from __future__ import unicode_literals\n import sys\n \n-from .compat import iteritems, read_response, is_string\n+import click\n+\n+from .compat import iteritems, is_string\n from jinja2.environment import Environment\n \n \n+def read_response(prompt=''):\n+ \"\"\"Prompt the user and return the entered value or an empty string.\n+\n+ :param str prompt: Text to display to the user\n+ \"\"\"\n+ # Please see http://click.pocoo.org/4/api/#click.prompt\n+ # default: use an empty string if no input happens\n+ # prompt_suffix: do not add a suffix to the prompt\n+ # show_default: the prompt is expected to mention the default itself\n+ return click.prompt(\n+ prompt,\n+ default='',\n+ prompt_suffix='',\n+ show_default=False,\n+ )\n+\n+\n def prompt_for_config(context, no_input=False):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n", "issue": "Choices in Context\nAdd support for choices in a context as suggested in #441.\n\nI noticed that we are using custom methods for retrieving the user input and decided to let `click.prompt` handle the Python2/Python3 compatibility matters. 
This PR requires thorough reviews and testing before merging it as I touched some core functions and I want to be certain I didn't break anything :exclamation: \n\nThe CI might not detect these errors as most of the `read_response` calls are still `monkeypatched`. I'm afraid we need to do some manual testing to be 100% positive that everyting is working as expected :sweat: \n\nI implemented several tests for the new functionality. :smiley: \n\nPlease let me know your thoughts and I'll be happy to update this PR.\n\n", "before_files": [{"content": "import os\nimport sys\n\nPY3 = sys.version_info[0] == 3\nOLD_PY2 = sys.version_info[:2] < (2, 7)\n\nif PY3: # pragma: no cover\n input_str = 'builtins.input'\n iteritems = lambda d: iter(d.items())\n from unittest.mock import patch\n from io import StringIO\n\n def read_response(prompt=''):\n \"\"\"\n Prompt the user for a response.\n\n Prints the given prompt (which should be a Unicode string),\n and returns the text entered by the user as a Unicode string.\n\n :param prompt: A Unicode string that is presented to the user.\n \"\"\"\n # The Python 3 input function does exactly what we want\n return input(prompt)\n\nelse: # pragma: no cover\n from __builtin__ import raw_input\n input = raw_input\n input_str = '__builtin__.raw_input'\n iteritems = lambda d: d.iteritems()\n from mock import patch\n from cStringIO import StringIO\n\n def read_response(prompt=''):\n \"\"\"\n Prompt the user for a response.\n\n Prints the given prompt (which should be a Unicode string),\n and returns the text entered by the user as a Unicode string.\n\n :param prompt: A Unicode string that is presented to the user.\n \"\"\"\n # For Python 2, raw_input takes a byte string argument for the prompt.\n # This must be encoded using the encoding used by sys.stdout.\n # The result is a byte string encoding using sys.stdin.encoding.\n # However, if the program is not being run interactively, sys.stdout\n # and sys.stdin may not have encoding attributes.\n # In that case we don't print a prompt (stdin/out isn't interactive,\n # so prompting is pointless), and we assume the returned data is\n # encoded using sys.getdefaultencoding(). This may not be right,\n # but it's likely the best we can do.\n # Isn't Python 2 encoding support wonderful? :-)\n if sys.stdout.encoding:\n prompt = prompt.encode(sys.stdout.encoding)\n else:\n prompt = ''\n enc = sys.stdin.encoding or sys.getdefaultencoding()\n return raw_input(prompt).decode(enc)\n\n\nif PY3: # Forced testing\n\n from shutil import which\n\nelse: # Forced testing\n\n def is_exe(program):\n \"\"\"\n Returns whether or not a file is an executable.\n \"\"\"\n return os.path.isfile(program) and os.access(program, os.X_OK)\n\n def which(cmd, mode=os.F_OK | os.X_OK, path=None):\n \"\"\"Given a command, mode, and a PATH string, return the path which\n conforms to the given mode on the PATH, or None if there is no such\n file.\n `mode` defaults to os.F_OK | os.X_OK. 
`path` defaults to the result\n of os.environ.get(\"PATH\"), or can be overridden with a custom search\n path.\n\n Note: This function was backported from the Python 3 source code.\n \"\"\"\n # Check that a given file can be accessed with the correct mode.\n # Additionally check that `file` is not a directory, as on Windows\n # directories pass the os.access check.\n def _access_check(fn, mode):\n return (os.path.exists(fn) and os.access(fn, mode) and\n not os.path.isdir(fn))\n\n # If we're given a path with a directory part, look it up directly\n # rather than referring to PATH directories. This includes checking\n # relative to the current directory, e.g. ./script\n if os.path.dirname(cmd):\n if _access_check(cmd, mode):\n return cmd\n return None\n\n if path is None:\n path = os.environ.get(\"PATH\", os.defpath)\n if not path:\n return None\n path = path.split(os.pathsep)\n\n if sys.platform == \"win32\":\n # The current directory takes precedence on Windows.\n if os.curdir not in path:\n path.insert(0, os.curdir)\n\n # PATHEXT is necessary to check on Windows.\n pathext = os.environ.get(\"PATHEXT\", \"\").split(os.pathsep)\n # See if the given file matches any of the expected path\n # extensions. This will allow us to short circuit when given\n # \"python.exe\". If it does match, only test that one, otherwise we\n # have to try others.\n if any(cmd.lower().endswith(ext.lower()) for ext in pathext):\n files = [cmd]\n else:\n files = [cmd + ext for ext in pathext]\n else:\n # On other platforms you don't have things like PATHEXT to tell you\n # what file suffixes are executable, so just pass on cmd as-is.\n files = [cmd]\n\n seen = set()\n for dir in path:\n normdir = os.path.normcase(dir)\n if normdir not in seen:\n seen.add(normdir)\n for thefile in files:\n name = os.path.join(dir, thefile)\n if _access_check(name, mode):\n return name\n return None\n\n\ndef is_string(obj):\n \"\"\"Determine if an object is a string.\"\"\"\n return isinstance(obj, str if PY3 else basestring)\n\n\n_hush_pyflakes = (patch, StringIO, which)\n", "path": "cookiecutter/compat.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .compat import iteritems, read_response, is_string\nfrom jinja2.environment import Environment\n\n\ndef prompt_for_config(context, no_input=False):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n\n :param no_input: Prompt the user at command line for manual configuration?\n \"\"\"\n cookiecutter_dict = {}\n env = Environment()\n\n for key, raw in iteritems(context['cookiecutter']):\n if key.startswith('_'):\n cookiecutter_dict[key] = raw\n continue\n\n raw = raw if is_string(raw) else str(raw)\n val = env.from_string(raw).render(cookiecutter=cookiecutter_dict)\n\n if not no_input:\n prompt = '{0} (default is \"{1}\")? 
'.format(key, val)\n\n new_val = read_response(prompt).strip()\n\n if new_val != '':\n val = new_val\n\n cookiecutter_dict[key] = val\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default='yes'):\n \"\"\"\n Ask a yes/no question via `read_response()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {'yes': True, 'y': True, 'ye': True, 'no': False, 'n': False}\n if default is None:\n prompt = ' [y/n] '\n elif default == 'yes':\n prompt = ' [Y/n] '\n elif default == 'no':\n prompt = ' [y/N] '\n else:\n raise ValueError('Invalid default answer: \"{0}\"'.format(default))\n\n while True:\n sys.stdout.write(question + prompt)\n choice = read_response().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write('Please respond with \"yes\" or \"no\" '\n '(or \"y\" or \"n\").\\n')\n", "path": "cookiecutter/prompt.py"}]}
| 2,967 | 817 |